Comment on virtualizing pfSense. What else works besides ESXi for virtual networking?
tofubl@discuss.tchncs.de 10 months ago
Incus looks cool. Have you virtualised a firewall on it? Is it as flexible as Proxmox in terms of hardware passthrough options?
I find zero mentions online of OPNsense on Incus. 🤔
TCB13@lemmy.world 10 months ago
Yes, it does run, but BSD-based VMs on a Linux host come with the usual quirks. This might be what you’re looking for: discuss.linuxcontainers.org/t/…/15799
Since you want to run a firewall/router, you can ignore LXD’s networking configuration and use your OPNsense to assign addresses and whatnot to your other containers. You can create whatever bridges and VLANs you want on the base system and then assign them to profiles/containers/VMs. For example, you manually create a `cbr0` network bridge using systemd-networkd and then run (see the sketch below):

`lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0`

This will use `cbr0` as the default bridge for all machines with the `default` profile, and LXD won’t provide any addressing or touch the network; it will just create an `eth0` interface on those machines, attached to the bridge. Your OPNsense can then sit on the same bridge and do DHCP, routing, etc.

When you’re searching around for help, search for “LXD” instead of “Incus”, as it tends to give better results. Not sure if you’re aware, but LXD was the original project, run by Canonical; it was recently forked into Incus (maintained by the same people who created LXD at Canonical) to keep the project open under the Linux Containers initiative.
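A minimal sketch of that setup (`cbr0` and the `default` profile are from the text above; `enp1s0` is an illustrative stand-in for your actual NIC):

```sh
# As root: create the bridge via systemd-networkd
cat > /etc/systemd/network/cbr0.netdev <<'EOF'
[NetDev]
Name=cbr0
Kind=bridge
EOF

# A matching .network so the host brings cbr0 up without addressing it
cat > /etc/systemd/network/cbr0.network <<'EOF'
[Match]
Name=cbr0

[Network]
LinkLocalAddressing=no
ConfigureWithoutCarrier=yes
EOF

# Optionally enslave a physical NIC to the bridge (enp1s0 is illustrative)
cat > /etc/systemd/network/enp1s0.network <<'EOF'
[Match]
Name=enp1s0

[Network]
Bridge=cbr0
EOF

systemctl restart systemd-networkd

# Hand the bridge to LXD: every instance using the "default" profile
# gets an eth0 attached to cbr0, with no addressing done by LXD
lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0
```

OPNsense then attaches to the same bridge and serves DHCP for everything on it.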
tofubl@discuss.tchncs.de 10 months ago
With Incus only officially supported in Debian 13, and LXD on the way out, should I get going with LXD and migrate to Incus later? Or use the Zabbly repo and switch over to official Debian repos when they become available? What’s the recommended trajectory, would you say?
TCB13@lemmy.world 10 months ago
It depends on how fast you want updates. I’m sure you know how Debian works: if you install LXD from the Debian 12 repositories, you’ll most likely be on 5.0.2 LTS forever. If you install from Zabbly, you’ll get the latest and greatest right now.
My company’s machines are all running LXD from the Debian repositories, except for two that run from Zabbly for testing and whatnot. At home I’m running from the Debian repo. Migration from LXD 5.0.2 to a future version of Incus with Debian 13 won’t be a problem: Incus is just a fork, and stgraber and other members of the Incus/LXC projects work closely with Debian (or are Debian contributors themselves).
Debian users will be fine one way or the other. I specifically asked stgraber about what’s going to happen in the future and this was his answer:
I hope this helps you decide.
tofubl@discuss.tchncs.de 10 months ago
Absolutely. Great intel; thank you!
tofubl@discuss.tchncs.de 10 months ago
OPNsense running in the Incus live demo. Fun!
TCB13@lemmy.world 10 months ago
Enjoy your 30 min of Incus :P
tofubl@discuss.tchncs.de 10 months ago
I have another question, if you don’t mind: I have a Debian/Incus + OPNsense setup now, created bridges for my NICs with systemd-networkd, and attached the bridges to the VM like you described. The host is configured with DHCP on the LAN bridge, and ideally (correct me if I’m wrong, please) I’d like the host not to touch the WAN bridge at all, other than creating it and hooking it up to the NIC.
Here’s the problem: if I don’t configure the bridge on the host with either DHCP or a static IP, the OPNsense VM doesn’t receive an IP on that interface either. I have a `br0.netdev` to set up the bridge, a `br0.network` to connect the bridge to the NIC, and a `wan.network` to assign a static IP on `br0`; otherwise nothing works. (While I’m working on this, I have the WAN port connected to my old LAN, if that makes a difference.)
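Concretely, the files look roughly like this (`enp1s0` stands in for my WAN NIC, and the address is just an example from the old LAN):

```
# br0.netdev — create the bridge
[NetDev]
Name=br0
Kind=bridge

# br0.network — hook the physical NIC to the bridge
[Match]
Name=enp1s0

[Network]
Bridge=br0

# wan.network — static IP on br0; without this, nothing works
[Match]
Name=br0

[Network]
Address=192.168.1.50/24
```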
My question is: is my expectation wrong, or my setup? Am I mistaken that the host shouldn’t be configured on the WAN interface? Could I solve this by passing the PCI device through to the VM, and what’s best practice here?
Thank you for taking a look! 😊
TCB13@lemmy.world 10 months ago
I think there’s something wrong with your setup. One of my machines has a `br0` and a setup like yours: `10-enp5s0.network` is the physical “WAN” interface, and I have a profile for “bridged” containers.
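Something along these lines (illustrative; only `br0`, `10-enp5s0.network`, and the profile name “bridged” are fixed, the rest depends on your system):

```
# /etc/systemd/network/br0.netdev
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/10-enp5s0.network — physical WAN NIC, enslaved to br0
[Match]
Name=enp5s0

[Network]
Bridge=br0

# /etc/systemd/network/br0.network — bring the bridge up, no host addressing
[Match]
Name=br0

[Network]
LinkLocalAddressing=no
ConfigureWithoutCarrier=yes
```

The profile is created with the same kind of command as before:

```sh
lxc profile create bridged
lxc profile device add bridged eth0 nic nictype=bridged parent=br0 name=eth0
```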
And one of my VMs with this profile:
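For example (the VM name is illustrative):

```sh
lxc profile add my-vm bridged       # attach the profile to the VM
lxc config show my-vm --expanded    # eth0 should appear under devices with parent: br0
```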
Inside the VM the network is configured like this:
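For instance, with systemd-networkd in the guest (illustrative; any DHCP client does the same job):

```
# /etc/systemd/network/eth0.network — take an address from opnsense's DHCP
[Match]
Name=eth0

[Network]
DHCP=ipv4
```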
Can you check if your config is done like this? If so it should work.
tofubl@discuss.tchncs.de 10 months ago
My config was more or less identical to yours, which removed some doubt and let me focus on the right part: without a `wan0.network`, the host wasn’t bringing up `br0` on boot.
I thought it had something to do with the interface having an IP, but it turns out the following works as well:
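Presumably something along these lines — a `wan0.network` that matches `br0` but switches all host addressing off (reconstructed, since the original snippet was lost):

```
# wan0.network — networkd brings br0 up, but the host gets no address
[Match]
Name=br0

[Network]
LinkLocalAddressing=no
ConfigureWithoutCarrier=yes
```

Thank you once again!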
tofubl@discuss.tchncs.de 10 months ago
Very informative, thank you.
I am generally very comfortable with Linux, but somehow this seems intimidating.
Although I guess I’m not using Proxmox for anything other than managing VMs, network bridges, and backups. Well, that and the feeling of using something set up by people who know what they’re doing, rather than hacked together by me until it worked…
TCB13@lemmy.world 10 months ago
And LXD/Incus can do that for you as well. Install it, run `incus admin init` (`lxd init` on LXD), answer a few questions, and you get an automated setup with networking, storage, etc., all running and ready for you to create VMs/containers.

What I was saying is that you can also ignore the default/automated setup and configure things manually if you have other requirements.
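For example (assuming a fresh Debian host with Incus already installed):

```sh
# interactive wizard: storage backend, network bridge, etc.
incus admin init

# or accept sane defaults non-interactively
incus admin init --auto

# then create a first container to check everything works
incus launch images:debian/12 first-test
incus list
```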
tofubl@discuss.tchncs.de 10 months ago
Okay, I think I found a bit of a catch with Incus/LXD. I want a solution with a web UI, and while Incus has one, access control seems to be either browser-certificate based or via a central auth server. Neither is a good fit for me; I’d much prefer regular user auth, with the option to add an auth server at some point (I don’t want to take all of this on at once).
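For reference, the certificate flow looks roughly like this (the trust name is illustrative):

```sh
# on the host: expose the API/UI over HTTPS
incus config set core.https_address :8443

# issue a trust token, then import the generated certificate in the browser
incus config trust add my-browser
```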
I hope it’s okay that I keep coming back to you with these questions. You seem to be a strong Incus evangelist. :)