QEMU/KVM Network Setup with nftables

On my Gentoo machine I use QEMU, libvirt and virt-manager to host virtual guest systems. libvirt's default network setup relies on firewalld and iptables, neither of which is installed on my system: I had already set up my host's firewall with nftables instead of the deprecated iptables.
Because I didn't want to be forced back to iptables or to install yet another firewall management tool (besides nft) just for a handful of VMs, I set up the network manually.
Some VMs should be able to reach the internet, while others should only be able to talk to each other. Additionally, I wanted to be able to monitor traffic with Wireshark and to apply separate nftables filtering rules per network.
To achieve this, I created separate virtual bridges, each with its own DNS and DHCP server, and used nftables NAT to share my host's network interface.
Setup script
#!/bin/bash
# Check if root
if [[ "$EUID" -gt 0 ]]; then
    echo "You must be root." >&2
    exit 1
fi
# Enable IPv4 forwarding so the host can route VM traffic
sysctl net.ipv4.ip_forward=1
VM_BRIDGE_NAME=vm-bridge
echo "Creating $VM_BRIDGE_NAME..."
# Create bridge device
ip link add name $VM_BRIDGE_NAME type bridge
# Set state up
ip link set $VM_BRIDGE_NAME up
# Associate IP address to bridge device
ip addr add 192.168.2.1/24 brd + dev $VM_BRIDGE_NAME
# Delete the subnet route created automatically when the address was assigned
ip route delete 192.168.2.0/24
# Re-add it, marked as a DHCP route
ip route add 192.168.2.0/24 dev $VM_BRIDGE_NAME proto dhcp
echo "Creating isolated net..."
# Create bridge device
ip link add name isolated-bridge type bridge
# Set state up
ip link set isolated-bridge up
# Associate IP address to bridge device
ip addr add 192.168.3.1/24 brd + dev isolated-bridge
# Delete the subnet route created automatically when the address was assigned
ip route delete 192.168.3.0/24
# Re-add it, marked as a DHCP route
ip route add 192.168.3.0/24 dev isolated-bridge proto dhcp
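With the bridges in place, a guest can be attached to one of them through its libvirt interface definition. A minimal sketch of the relevant domain XML fragment (the virtio model is my assumption; use isolated-bridge instead for isolated guests):

```xml
<!-- Guest NIC attached to the shared bridge -->
<interface type='bridge'>
  <source bridge='vm-bridge'/>
  <model type='virtio'/>
</interface>
```

In virt-manager, the same thing can be selected in the NIC settings by choosing the bridge device as the network source.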
nftables configuration
flush ruleset
table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100;
        policy accept;
        # Masquerade traffic from the VM subnet behind the host's address
        ip saddr 192.168.2.0/24 masquerade
    }
}
table inet filter {
    chain input {
        ...
        iifname "vm-bridge" udp dport 53 accept # DNS
        iifname "vm-bridge" udp dport 67 accept # DHCP
        iifname "vm-bridge" udp dport 68 accept # DHCP
        iifname "vm-bridge" udp dport 5353 accept # mDNS
        ...
    }
    chain forward {
        type filter hook forward priority filter;
        policy drop;
        ...
        iifname "vm-bridge" tcp dport { http, https } accept
        iifname "vm-bridge" udp dport 53 accept
        iifname "vm-bridge" udp dport 67 accept
        iifname "vm-bridge" udp dport 68 accept
        iifname "vm-bridge" udp dport 5353 accept
        oifname "vm-bridge" tcp sport { http, https } accept
        oifname "vm-bridge" udp sport 53 accept
        oifname "vm-bridge" udp sport 67 accept
        oifname "vm-bridge" udp sport 68 accept
        oifname "vm-bridge" udp sport 5353 accept
        ...
    }
    chain output {
        ...
    }
}
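The elided parts of the input chain would also need to accept DHCP (and, if DNS is wanted there, port 53) from the isolated bridge, so that its dnsmasq instance can hand out leases. Because the forward chain's policy is drop and no forward rules match isolated-bridge, those guests can still only reach the host and each other. A sketch of what those input rules could look like:

```
iifname "isolated-bridge" udp dport 53 accept # DNS
iifname "isolated-bridge" udp dport 67 accept # DHCP
```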
After the routing is set up, the DNS and DHCP servers can be started. I use the following dnsmasq configurations:
dnsmasq configuration for vm-bridge
interface=vm-bridge
bind-dynamic
no-poll
no-resolv
# End the range at .254; .255 is the broadcast address
dhcp-range=192.168.2.2,192.168.2.254
server=127.0.0.1#8934
dnsmasq configuration for isolated-bridge
interface=isolated-bridge
bind-dynamic
no-poll
no-resolv
# End the range at .254; .255 is the broadcast address
dhcp-range=192.168.3.2,192.168.3.254
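With both configurations saved to files (the paths below are my assumption, not fixed by dnsmasq), one dnsmasq instance can be started per bridge:

```shell
# One dnsmasq instance per bridge, each with its own config file
dnsmasq --conf-file=/etc/dnsmasq-vm-bridge.conf
dnsmasq --conf-file=/etc/dnsmasq-isolated-bridge.conf
```

Running separate instances keeps the per-bridge settings independent, so changing the shared network never touches the isolated one.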