Comment on Need help routing Wireguard container traffic through Gluetun container
scapeg0at@midwest.social 5 months ago
Thank you for the reply! I've been busy the last couple of days, so I just got around to looking back at this.
I tested out your advice and set up a wireguard container with the MASQUERADE NAT rule, and it worked! However, when I tried it again with the gluetun container, I'm still running into issues, but there is progress!
With my setup before, when I connected my client to the wireguard network I would get a "no network" error. Now, when I try to access the internet, the connection times out. Still not ideal, but at least it's a different error than before!
With the MASQUERADE NAT rule in place, running tcpdump on the docker network shows that at least the two containers are talking to each other:

17:04:29.927415 IP 172.22.0.2 > 172.22.0.100: ICMP echo request, id 4, seq 9823, length 64
17:04:29.927466 IP 172.22.0.100 > 172.22.0.2: ICMP echo reply, id 4, seq 9823, length 64

but I still cannot get any internet access through the wireguard tunnel.
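For reference, a capture like the one above can be taken from the docker host on the bridge interface backing the compose network. The br- interface name below is a placeholder (not from this thread); find yours with `ip link` or `docker network inspect`:

```shell
# Capture ICMP on the bridge behind the compose network to confirm
# the wireguard and gluetun containers can reach each other.
# -n: don't resolve names, -i: interface to listen on.
tcpdump -ni br-1a2b3c4d icmp
```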
When exploring around the gluetun config, I confirmed that the MASQUERADE rule was actually set:

Chain PREROUTING (policy ACCEPT 2933 packets, 316K bytes)
 pkts bytes target      prot opt in    out   source      destination

Chain INPUT (policy ACCEPT 839 packets, 86643 bytes)
 pkts bytes target      prot opt in    out   source      destination

Chain OUTPUT (policy ACCEPT 12235 packets, 741K bytes)
 pkts bytes target      prot opt in    out   source      destination

Chain POSTROUTING (policy ACCEPT 11408 packets, 687K bytes)
 pkts bytes target      prot opt in    out   source      destination
 2921  284K MASQUERADE  0    --  *     eth+  0.0.0.0/0   0.0.0.0/0
I think the issue may be that gluetun's default firewall rules block all traffic besides the VPN traffic via the main iptables filter table:

Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target  prot opt in    out   source         destination
 2236  164K ACCEPT  0    --  lo    *     0.0.0.0/0      0.0.0.0/0
11914   12M ACCEPT  0    --  *     *     0.0.0.0/0      0.0.0.0/0        ctstate RELATED,ESTABLISHED
   87 15792 ACCEPT  0    --  eth0  *     0.0.0.0/0      172.22.0.0/24

Chain FORWARD (policy DROP 381 packets, 22780 bytes)
 pkts bytes target  prot opt in    out   source         destination

Chain OUTPUT (policy DROP 76 packets, 5396 bytes)
 pkts bytes target  prot opt in    out   source         destination
 2236  164K ACCEPT  0    --  *     lo    0.0.0.0/0      0.0.0.0/0
 8152  872K ACCEPT  0    --  *     *     0.0.0.0/0      0.0.0.0/0        ctstate RELATED,ESTABLISHED
    0     0 ACCEPT  0    --  *     eth0  172.22.0.100   172.22.0.0/24
    1   176 ACCEPT  17   --  *     eth0  0.0.0.0/0      213.152.187.229  udp dpt:1637
  212 12843 ACCEPT  0    --  *     tun0  0.0.0.0/0      0.0.0.0/0
I tried adding simple iptables rules such as iptables -A FORWARD -i tun+ -j ACCEPT (and the same with eth+ as the interface), but with no luck.
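One way to narrow down where packets are dying is to watch the rule and chain-policy counters while a client generates traffic. A diagnostic sketch, run from the docker host (the container name gluetun matches this thread; nothing here changes any rules):

```shell
# Show the filter table's FORWARD chain counters, numerically.
# If "policy DROP ... packets" climbs while a client pings, the
# forwarded packets are being dropped by the chain policy itself.
docker exec gluetun iptables -L FORWARD -v -n

# Likewise, check whether the MASQUERADE rule's counters increment,
# which tells you packets are at least reaching POSTROUTING.
docker exec gluetun iptables -t nat -L POSTROUTING -v -n
```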
If you think you can help, I'd be down to try other solutions, or if you need more information I can post it when I have time. If you don't think this will be an easy fix, I can revert to the wireguard-to-wireguard container setup, since that worked. I tried to get this setup working so I could leverage the gluetun kill-switch/restart.
CumBroth@discuss.tchncs.de 5 months ago
I think you already have a kill-switch (of sorts) in place with the two Wireguard container setup, since you lose client connectivity (except to the local network, since there’s a separate route for that) if any of the following happens:
I can’t be 100% sure, because I’m not a networking expert, but this seems like enough of a “kill-switch” to me. I’m not sure what you mean by leveraging the restart. One of the things that I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack if Gluetun itself gets restarted/updated.
But anyway, I went ahead and messed around on a VPS with the Wireguard+Gluetun approach, and I got it working. I am using the latest versions of the Linuxserver.io Wireguard container and Gluetun at the time of writing. There are two things missing in the Gluetun firewall configuration you posted:
First, here’s the docker compose setup I used:
You already have your "server" container properly configured. Now for Gluetun: I exec into the container with docker exec -it gluetun sh. Then I set the MASQUERADE rule on the tunnel: iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE. And finally, I change the FORWARD chain policy in the filter table to ACCEPT: iptables -t filter -P FORWARD ACCEPT.
Note on the last command: in my case I used iptables-legacy, because all the rules were defined there already (iptables gives you a warning if that's the case), but your container's version may vary. I saw different behavior/setup on the testing container I spun up on the VPS compared to the one I have running on my homelab.
Good luck, and let me know if you run into any issues!
scapeg0at@midwest.social 5 months ago
I tried out your solution and it worked! I thought it was an iptables/firewall issue on the gluetun end, but I didn't know which table the packets were going through.
There's also a way to set persistent iptables rules via gluetun; the docs are here.
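As a sketch of what those persistent rules could look like: put the two commands in a post-rules file that Gluetun applies after its own firewall rules on every start. The file name and mount path below are my reading of the Gluetun docs, so double-check them there:

```shell
# post-rules.txt — applied by Gluetun after its own firewall rules.
# Mounted into the container, e.g. in docker-compose:
#   volumes:
#     - ./post-rules.txt:/iptables/post-rules.txt:ro
iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
iptables -t filter -P FORWARD ACCEPT
```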
Thank you for your help! I’ll clean up my configs, and post my working configs and setup process!