KaninchenSpeed
@KaninchenSpeed@lemmy.blahaj.zone
- Comment on virtualizing OPNsense is....not going great 2 days ago:
The performance drop from virtualizing NICs shouldn't be nearly as big. How are you passing the VLANs to the VM? Are you passing them all over one virtio NIC, or one virtio NIC for each?
The setup I ran for multiple years was basically a bridge interface on the host for each VLAN and a separate virtio NIC to the OPNsense VM for each. I got almost 10 Gbit/s like that with 8 GB of RAM for OPNsense and 4 or 8 cores (I can't remember) with hyperthreading on a 2nd gen Epyc.
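A minimal sketch of that kind of host setup, assuming a Linux host with libvirt, a physical NIC called enp1s0, VLAN 10 and a VM named opnsense (all of these names are placeholders, not from the original post):

# VLAN subinterface on the host NIC, one per VLAN
ip link add link enp1s0 name enp1s0.10 type vlan id 10
# one bridge per VLAN
ip link add br-vlan10 type bridge
ip link set enp1s0.10 master br-vlan10
ip link set enp1s0.10 up
ip link set br-vlan10 up
# give the OPNsense VM its own virtio NIC on that bridge (libvirt)
virsh attach-interface opnsense bridge br-vlan10 --model virtio --config

Repeat per VLAN, so each VLAN shows up inside OPNsense as its own virtio interface instead of being trunked over a single NIC.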
- Comment on [deleted] 1 week ago:
If you already have/can run a local server, then maybe store the LUKS passphrase there and run a script on it which SSHes into the remote server and enters the stored passphrase on command. Maybe a simple HTTP server triggers it, which you could auth using forward auth of your reverse proxy, so you wouldn't need to implement auth in your script.
Of course the passphrase is stored in plain text, but that will be the case in any case not using a TPM.
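A rough sketch of the unlock script, assuming the remote server boots into an initramfs with dropbear-initramfs and the standard cryptroot-unlock helper listening on port 2222 (the host, port and file path are placeholders):

#!/bin/sh
# passphrase stored in plain text on the local server, as noted above
PASSFILE=/root/remote-luks-passphrase
REMOTE=root@remote.example.com
# pipe the stored passphrase into cryptroot-unlock in the remote initramfs
ssh -p 2222 "$REMOTE" cryptroot-unlock < "$PASSFILE"

The HTTP trigger could then just be a tiny webhook handler behind the reverse proxy's forward auth that runs this script.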
- Comment on Route outgoing traffic of a docker bridge network through VPN 3 weeks ago:
I've never used NetworkManager on a server and don't understand your routing configuration; I'm assuming you have wg0 configured to have a default route (ip route list).
You should be able to connect a Docker network to the VPN by using a macvlan instead of a bridge type network and setting its parent interface to the wg0 interface.
docker network create -d macvlan \
  --subnet=<internal vpn network>/24 \
  --gateway=<gateway ip> \
  -o parent=wg0 vpn-net

(modified from the Docker documentation)
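To check that a container actually egresses through the VPN, you could attach a throwaway container to that network (the address and the test service are placeholders, not part of the original suggestion):

# pick a free address inside the VPN subnet for the container
docker run --rm --network vpn-net --ip <free ip in the vpn subnet> \
  alpine wget -qO- https://ifconfig.me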
Make sure the allowed IPs in the WireGuard configs are set correctly.
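On the other end of the tunnel that means the peer entry for this host has to cover the macvlan subnet, roughly like this (addresses and key are placeholders):

# peer entry for this host in the remote endpoint's wg0.conf
[Peer]
PublicKey = <this host's public key>
AllowedIPs = <internal vpn network>/24   # must include the docker macvlan subnet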
You can also do IPv6 like this; see the end of the linked documentation page.
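The IPv6 variant would presumably just add an IPv6 subnet to the same network, something like (prefix and gateways are placeholders):

docker network create -d macvlan --ipv6 \
  --subnet=<internal vpn network>/24 --gateway=<gateway ip> \
  --subnet=<vpn ipv6 prefix>/64 --gateway=<ipv6 gateway> \
  -o parent=wg0 vpn-net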