Proxmox with OPNsense as Firewall/GW - routing issue

Some direly needed rough basics first:

  • There's routing, which is IPs and packets on layer 3.
  • There's switching, which is MACs and frames on layer 2.

Further, you speak of vmbr0/1/30, but only 0 and 30 are shown in your config.
Shorewall does not matter for your VM connectivity (iptables is layer 3; ebtables would be layer 2, for contrast). Your frames should just fly past the shorewall, not reaching the HV but going to the VMs directly. Shorewall is just a frontend using iptables in the background.
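
If you want to verify that bridged frames really bypass iptables on your host, you can check the br_netfilter sysctl (a quick sanity check, not from the original post; if the sysctl does not exist, the br_netfilter module is not loaded and frames bypass iptables anyway):

# returns 1 if bridged frames are pushed through iptables, 0 if they fly past it
sysctl net.bridge.bridge-nf-call-iptables

# set to 0 so layer-2 frames cross the bridge without hitting the HV firewall
sysctl -w net.bridge.bridge-nf-call-iptables=0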

With that out of the way:

Usually you don't need any routing on the Proxmox BRIDGES. A bridge is a switch, as far as you are concerned. vmbr0 is a virtual external bridge which you linked with eth0 (thus creating an in-kernel link between a physical NIC and your virtual interface, to get packets to flow at all). The bridge could also run without any IP attached to it. But to keep the HV accessible, usually an external IP is attached to it. Otherwise you'd have to set up your firewall gateway plus a VPN tunnel, give vmbr30 an internal IP, and then you could access the internal IP of the HV from the internet after establishing a tunnel connection - but that's just for illustration purposes for now.
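
For illustration, a minimal /etc/network/interfaces sketch of such a pair of bridges (the placeholders <IP1>, <NETMASK> and <GW> are assumptions, not taken from your config):

# external bridge: eth0 is enslaved so frames can flow between NIC and VMs;
# the IP is optional and only there to keep the HV itself reachable
auto vmbr0
iface vmbr0 inet static
address <IP1>
netmask <NETMASK>
gateway <GW>
bridge_ports eth0
bridge_stp off
bridge_fd 0

# internal bridge: no physical port, and it would work without any IP at all
auto vmbr30
iface vmbr30 inet manual
bridge_ports none
bridge_stp off
bridge_fd 0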

Your IPsec connectivity issue sounds an awful lot like a misconfigured VPN, but mobile IPsec is also often a pain in the butt to work with due to protocol implementation differences. OpenVPN works a LOT better, but you should know your basics about PKI and certificates to implement it. Plus, if OPNsense is as counter-intuitive as pfSense when it comes to OpenVPN, you are easily in for a week of stabbing in the dark. For pfSense there's an installable OpenVPN config export package which makes life quite a bit easier; I don't know whether it is available for OPNsense, too.

Concerning the first picture: it does not look so much like what you call asynchronous routing, but rather like a firewall issue.
For your tunnel firewall (interface IPsec or interface OpenVPN on OPNsense, depending on the tunnel you happen to use), just leave it at IPv4 any:any to any:any. By the definition of the tunnel itself you should only get into the LAN net anyway, and OPNsense will automatically send the packets out from the LAN interface only (second picture).

net.ipv4.ip_forward = 1 enables routing in the kernel on the Linux interfaces where you activated it. You can then do NAT-ting via iptables, which in theory makes it possible to get into your LAN by using your external HV IP on vmbr0 - but that's not something you should achieve by accident, so you may be able to disable forwarding again without losing connectivity. At least to the HV; I am unsure about your extra routes for the other external IPs. But these should be configurable the same way from within OPNsense directly (create the point-to-point links there; the frames will transparently flow through vmbr0 and eth0 to the Hetzner gateway), which would be cleaner.
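
To make that concrete, a hedged sketch of both knobs (the DNAT rule is purely illustrative, with a made-up port and the LAN IP from below; the cleaner way is to configure this inside OPNsense):

# check whether the kernel routes at all, and switch it off again if unneeded
sysctl net.ipv4.ip_forward
sysctl -w net.ipv4.ip_forward=0

# what "NAT-ting into the LAN via the external HV IP" would look like in iptables
iptables -t nat -A PREROUTING -d <IP1> -p tcp --dport 443 -j DNAT --to-destination 10.1.7.11:443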

Also, you should not make the rancher VM directly accessible from the outside, bypassing your firewall - I doubt this is what you want to achieve. Rather, put the external IP onto the OPNsense (as a virtual IP of type "ip alias"), set up 1:1 NAT from IP3 to the internal IP of the rancher VM, and do the firewalling via OPNsense.

Some ASCII art of how things possibly should look, from what I can discern from your information so far. For the sake of brevity only interfaces are shown, no distinction is made between physical/virtual servers, and no point-to-point links are shown.

[hetzner gw]
      |
      |
      |
   [eth0]  (everything below happens inside your server)
     ||
     ||  (double lines here to hint at the physical-virtual linking linux does)
     ||  (which is achieved by linking eth0 to vmbr0)
     ||
     ||   +-- HV access via IP1 -- shorewall/iptables for hv firewalling
     ||   |
  [vmbr0]                                                      [vmbr30]
   IP1                                                          | | |
      |                                                         | | |
      |                                                         | | |
 [wan nic opn]                                                  | | |
  IP2 on wan directly,                                          | | |
  IP3 as virtual IP of type ip alias                            | | |
      x                                                         | | |
      x (here opn does routing/NAT/tunnel things)               | | |
      x                                                         | | |
      x set up 1:1 NAT IP3-to-10.1.7.11 in opn for full access  | | |
      x set up single port forwardings for the 2nd vm if needed | | |
      x                                                         | | |
 [lan nic opn]--------------------------------------------------+ | |
  10.1.7.1                                                        | |
                                                                  | |
                                                       +----------+ |
                                                       |            |
                                                  [vm1 eth0]   [vm2 eth0]
                                                   10.1.7.11    10.1.7.151

If you want to firewall the HV via OPNsense, too, these would be the steps to do so while maintaining connectivity (a minimal sketch of the resulting Proxmox config follows after the list):

  1. remove IP1 from [vmbr0]
  2. put it on [wan nic opn]
  3. put an internal IP (IP_INT) from the opn LAN onto [vmbr30]
  4. set up 1:1 NAT from IP1 to IP_INT
  5. set all firewall rules
  6. swear like hell when you break the firewall and cannot reach the HV anymore
  7. see whether you can access the HV via an iKVM solution, hoping to get a public IP onto it, so you can use the console window in Proxmox and hopefully fix or reinstall the firewall.
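
A sketch of how the Proxmox /etc/network/interfaces could look after steps 1-3 (IP_INT and the netmask are placeholders; this is an assumption, not your actual config):

# step 1: vmbr0 keeps the physical link but carries no IP anymore
auto vmbr0
iface vmbr0 inet manual
bridge_ports eth0
bridge_stp off
bridge_fd 0

# step 3: vmbr30 now carries the HV's internal IP from the opn LAN
auto vmbr30
iface vmbr30 inet static
address <IP_INT>
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0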

Proxmox with OPNsense as pci-passthrough setup used as Firewall/Router/IPsec/PrivateLAN/MultipleExtIPs

General high level perspective

[image: general high-level overview of the setup]

Adding the pci-passthrough

A bit out of scope, but what you will need is

  • a serial console/LARA to the proxmox host.
  • a working LAN connection from OPNsense (in my case vmbr30) to the Proxmox private IP (10.1.7.2) and vice versa. You will need this when you only have the tty console and need to reconfigure the OPNsense interfaces to add em0 as the new WAN device
  • you might want a working IPsec connection beforehand, or WAN ssh/GUI access opened, for further configuration of OPNsense after the passthrough

In general it's this guide - in short:

vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

update-grub

vi /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Then reboot and ensure you have an IOMMU table:

find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0

Now find your network card

lspci -nn

in my case

00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)

After this command you detach eth0 from Proxmox and lose the network connection. Ensure you have a tty! Please replace "8086 15b7" and 00:1f.6 with your PCI ID and slot (see above).

echo "8086 15b7" > /sys/bus/pci/drivers/pci-stub/new_id && echo 0000:00:1f.6 > /sys/bus/pci/devices/0000:00:1f.6/driver/unbind && echo 0000:00:1f.6 > /sys/bus/pci/drivers/pci-stub/bind

Now edit your VM and add the PCI network card:

vim /etc/pve/qemu-server/100.conf

and add (replace 00:1f.6):

machine: q35
hostpci0: 00:1f.6

Boot OPNsense, connect using ssh root@10.1.7.1 from your Proxmox tty, edit the interfaces, add em0 as your WAN interface and set it to DHCP. Reboot your OPNsense instance and it should be up again.

add a serial console to your opnsense

In case you need fast disaster recovery or your OPNsense instance is borked, a CLI-based serial console is very handy, especially if you connect using LARA/iLO or the like.

To get this done, edit

vim /etc/pve/qemu-server/100.conf

and add

serial0: socket

Now in your opnsense instance

vim /conf/config.xml

and add/change this:

<secondaryconsole>serial</secondaryconsole>
<serialspeed>9600</serialspeed>

Be sure you replace the current serialspeed with 9600. Now reboot your OPNsense VM and then

qm terminal 100

Press Enter again and you should see the login prompt

Hint: you can also set your primaryconsole to serial; that helps you get into boot prompts and the like and debug them.

more on this under https://pve.proxmox.com/wiki/Serial_Terminal

Network interfaces on Proxmox

# private bridge towards the OPNsense LAN (10.1.7.1); the HV itself is 10.1.7.2
auto vmbr30
iface vmbr30 inet static
address 10.1.7.2
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
pre-up sleep 2
metric 1

OPNsense

  1. WAN is External-IP1, attached to em0 (eth0 via PCI passthrough), DHCP
  2. LAN is 10.1.7.1, attached to vmbr30

Multi IP Setup

Here I only cover the extra-IP part, not the extra-subnet part. To be able to use the extra IPs, you have to disable separate MACs for each IP in the Hetzner Robot, so all extra IPs have the same MAC (IP1, IP2, IP3).

Then, in OPN, for each external IP you add a Virtual IP under Firewall->Virtual IPs (for every extra IP, not the main IP you bound WAN to). Give each Virtual IP a good description, since it will appear in a select box later.

Now you can go to Firewall->NAT->Forward and set up, for each port:

  • Destination: the external IP you want to forward from (IP2/IP3)
  • Destination port range: your ports to forward, like ssh
  • Redirect target IP: your LAN VM/IP to map onto, like 10.1.7.52
  • Redirect target port: like ssh

Now you have two options; the first is considered the better one, but could mean more maintenance.

For every domain you access the IP2/IP3 services with, you should define local DNS "overrides" mapping to the actual private IP. This ensures that you can reach your services from the inside and avoids the issues you would otherwise have due to the NATing.

Otherwise you need to care about NAT reflection - without it, your LAN boxes will not be able to access the external IP2/IP3, which can lead to issues in web applications at least. Do this setup and activate outbound rules and NAT reflection:

[image: outbound NAT rules and NAT reflection settings]

What is working:

  • OPN can route/access the internet and has the right IP on WAN
  • OPN can access any client in the LAN (VMPRIV.151, VMEXT.11 and PROXMOX.2)
  • I can connect with an IPsec mobile client to OPNsense, getting access to the LAN (10.1.7.0/24) from a virtual IP range 172.16.0.0/24
  • I can access 10.1.7.1 (opnsense) while connected via IPsec
  • I can access VMEXT using the IPsec client
  • I can forward ports or 1:1-NAT from the extra IP2/IP3 to specific private VMs

Bottom Line

This setup works out a lot better than the alternative with the bridged mode I described. There is no more async routing, no need for a shorewall on Proxmox, no need for a complex bridge setup on Proxmox, and it performs a lot better since we can use checksum offloading again.
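
For context, the checksum offloading remark refers to the usual bridged-mode workaround with virtio NICs, which is no longer needed here; a hedged sketch of what that workaround typically looked like on the host (my assumption of the common fix, not part of the setup above):

# bridged mode often required disabling checksum offloading on the bridge/VM NICs
ethtool -K vmbr0 tx off rx off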

Downsides

Disaster recovery

For disaster recovery you need some more skills and tools. You need a LARA/iLO serial console to the Proxmox HV (since you have no internet connection), and you will need to configure your OPNsense instance to allow serial consoles as mentioned above, so you can access OPNsense while you have no VNC connection at all and no SSH connection either (even from the local LAN, since the network could be broken). It works fairly well, but it needs to be trained once to be as fast as the alternatives.

Cluster

As far as I can see, this setup cannot be used in a clustered Proxmox environment. You can set up a cluster initially; I did, using a tinc-switch setup locally on the Proxmox HV with a separate cluster network. Setting up the first node is easy, no interruption. The second join already needs LARA/iLO at hand, since you have to shut down and remove the VMs for the join (so the gateway will be down); you can work around that by temporarily using the eth0 NIC for internet. But after you have joined and moved your VMs back in, you will not be able to start the VMs (and thus the gateway will not be started): you cannot start the VMs since you have no quorum, and you have no quorum since you have no internet to join the cluster. So, in the end, a hen-and-egg issue I cannot see a way around. If it can be handled at all, then only with a KVM that is not part of the Proxmox VMs but standalone qemu - not desired by me right now.

proxmox KVM routed network with multiple public IPs

OK, this is how I solved the problem.
You need to specify the IPs you want to use with Windows Server using "up route add -host"; the other IPs can be used directly with Linux via "create container" (installing Linux manually did not work for me).
This is my /etc/network/interfaces file:

auto lo
iface lo inet loopback

# host NIC: /32 address with a point-to-point default route to the provider gateway
auto eth0
iface eth0 inet static
address 88.198.60.125
netmask 255.255.255.255
pointopoint 88.198.60.97
gateway 88.198.60.97
# let the host answer ARP for the routed guest IPs
post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

# guest bridge: no ports, same address as eth0; guests route via this IP
auto vmbr0
iface vmbr0 inet static
address 88.198.60.125
netmask 255.255.255.255
bridge_ports none
bridge_stp off
bridge_fd 0
bridge_maxwait 0

# subnet IPs used with Windows Server only: one /32 host route per guest IP
up route add -host 46.4.214.80 dev vmbr0
up route add -host 46.4.214.81 dev vmbr0
up route add -host 46.4.214.82 dev vmbr0
up route add -host 46.4.214.87 dev vmbr0
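
For completeness, a hedged sketch of the matching guest-side config for one of those routed IPs, assuming a Linux guest for illustration (the poster used Windows Server, where the equivalent is a static IP with the host address as gateway):

# inside the guest: the routed IP as /32, with the host bridge IP as gateway
auto eth0
iface eth0 inet static
address 46.4.214.80
netmask 255.255.255.255
pointopoint 88.198.60.125
gateway 88.198.60.125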

Setup proxmox internal network

Hello, I recommend running 2 NICs, eth0 and eth1:
eth0: public
eth1: private, manual IP config, like 192.168.1.xxx
Use the public one to log in to the Proxmox console from the outside world and the private one for the developer environment; a minimal sketch follows below.
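
A minimal /etc/network/interfaces sketch of that two-NIC layout (the addresses and the DHCP choice on eth0 are assumptions for illustration):

# eth0: public NIC, used to log in to the Proxmox console from the outside
auto eth0
iface eth0 inet dhcp

# eth1: private NIC for the developer environment
auto eth1
iface eth1 inet static
address 192.168.1.10
netmask 255.255.255.0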


