Comment on Need help routing Wireguard container traffic through Gluetun container
CumBroth@discuss.tchncs.de 5 months ago

I think you already have a kill switch (of sorts) in place with the two-Wireguard-container setup, since you lose client connectivity (except to the local network, which has a separate route) if any of the following happens:
- The “client” container is spun down
- The Wireguard interface inside the “client” container is brought down (you can try this by running `wg-quick down wg0` inside the container)
- The interface is up but the VPN connection itself is down (try changing the endpoint IP to a random address instead of the one provided by your VPN service provider)
I can’t be 100% sure, because I’m not a networking expert, but this seems like enough of a “kill-switch” to me. I’m not sure what you mean by leveraging the restart. One of the things that I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack if Gluetun itself gets restarted/updated.
But anyway, I went ahead and messed around on a VPS with the Wireguard+Gluetun approach and got it working. I’m using the latest (at the time of writing) linuxserver.io Wireguard container and Gluetun. Two things are missing from the Gluetun firewall configuration you posted:
- A MASQUERADE rule on the tunnel, meaning the tun0 interface.
- Gluetun drops all FORWARD packets (filter table) by default. You’ll have to change that chain’s policy to ACCEPT. Again, I’m not a network expert, so I can’t say whether this compromises the kill switch in any way relevant to the desired setup.
First, here’s the docker compose setup I used:
```yaml
networks:
  wghomenet:
    name: wghomenet
    ipam:
      config:
        - subnet: 172.22.0.0/24
          gateway: 172.22.0.1

services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
    volumes:
      - ./config:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=<your stuff here>
      - VPN_TYPE=wireguard
      # - WIREGUARD_PRIVATE_KEY=<your stuff here>
      # - WIREGUARD_PRESHARED_KEY=<your stuff here>
      # - WIREGUARD_ADDRESSES=<your stuff here>
      # - SERVER_COUNTRIES=<your stuff here>
      # Timezone for accurate log times
      - TZ=<your stuff here>
      # Server list updater
      # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
      - UPDATER_PERIOD=24h
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    networks:
      wghomenet:
        ipv4_address: 172.22.0.101

  wireguard-server:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard-server
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1001
      - TZ=<your stuff here>
      - INTERNAL_SUBNET=10.13.13.0
      - PEERS=chromebook
    volumes:
      - ./config/wg-server:/config
      - /lib/modules:/lib/modules # optional
    restart: always
    ports:
      - 51820:51820/udp
    networks:
      wghomenet:
        ipv4_address: 172.22.0.5
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
```
You already have your “server” container properly configured. Now for Gluetun:
I exec into the container:

```shell
docker exec -it gluetun sh
```

Then I set the MASQUERADE rule on the tunnel:

```shell
iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
```

And finally, I change the FORWARD chain policy in the filter table to ACCEPT:

```shell
iptables -t filter -P FORWARD ACCEPT
```

Note on the last commands: in my case I used iptables-legacy, because all the rules were already defined there (iptables gives you a warning if that’s the case), but your container’s version may vary. I saw different behavior/setup on the testing container I spun up on the VPS compared to the one I have running on my homelab.
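Putting those two rules together, here’s a rough one-shot sketch you could run inside the Gluetun container. The backend detection is my own guess at handling the iptables vs. iptables-legacy case described above; verify it against your own container before relying on it:

```shell
#!/bin/sh
# Sketch: apply both missing rules inside the Gluetun container
# (e.g. via: docker exec -i gluetun sh < fix-forwarding.sh).
# Gluetun's image ships both iptables and iptables-legacy; prefer
# whichever backend already holds appended (-A) rules.
IPT=iptables
if command -v iptables-legacy >/dev/null 2>&1 \
    && iptables-legacy -S 2>/dev/null | grep -q -- '-A'; then
  IPT=iptables-legacy # existing rules live in the legacy backend
fi
echo "using backend: $IPT"

# NAT traffic leaving through the VPN tunnel (tun+ matches tun0, tun1, ...)
"$IPT" -t nat -A POSTROUTING -o tun+ -j MASQUERADE

# Allow forwarded packets (Gluetun's default FORWARD policy is DROP)
"$IPT" -t filter -P FORWARD ACCEPT
```

The `tun+` wildcard just saves you from hardcoding `tun0` if Gluetun ever numbers the interface differently.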
Good luck, and let me know if you run into any issues!
scapeg0at@midwest.social 5 months ago
I tried out your solution and it worked! I thought it was an iptables / firewall issue on the gluetun end but I didn’t know which table the packets were going through.
There’s also a way to set persistent iptables rules via Gluetun; the docs are here
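If it’s the feature I’m thinking of, Gluetun replays custom rules from a post-rules.txt file in its mounted config directory on every startup, so the fix would survive restarts without exec-ing in by hand. A sketch, assuming the `./config:/gluetun` volume from the compose file above (double-check the exact filename and path against the docs):

```shell
# ./config/post-rules.txt  (ends up at /gluetun/post-rules.txt in the container)
# Each line is an iptables command Gluetun runs after its own firewall setup.
iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
iptables -t filter -P FORWARD ACCEPT
```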
Thank you for your help! I’ll clean up my configs, and post my working configs and setup process!