Yes, that’s how it’s supposed to work if they’re all on the same Docker network (same yaml). In practice, it can be flaky and you’re much better off using ip:port.
Comment on Need help: accessing all my containers by name
jrbaconcheese@yall.theatl.social 11 months ago [deleted]
CalicoJack@lemmy.dbzer0.com 11 months ago
i_am_not_a_robot@discuss.tchncs.de 11 months ago
It might work if you put them on the same Docker network? I use Kubernetes and it definitely has this feature.
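For illustration, a minimal compose sketch of that idea; the image names and the radarr port (7878) are assumptions for the example, not details from this thread:

```yaml
# Two services defined in the same compose file share the default network,
# so Docker's embedded DNS lets each one resolve the other by service name.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
  radarr:
    image: lscr.io/linuxserver/radarr

# From inside the sonarr container, radarr is reachable at http://radarr:7878
# without publishing any ports to the host.
```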
emax_gomax@lemmy.world 11 months ago
In general, yes. You can think of each container on a Docker network as its own host, and Docker makes those hosts discoverable to each other by name. Docker also supports other network modes that don’t follow this model if you configure them that way: for example, if you force all containers to share the networking stack of one container (I do this with gluetun so everything runs through a VPN), then all services are reachable only through the gluetun host instead of through their individual service hostnames.
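A rough sketch of that gluetun setup; the qmcgaw/gluetun image and sonarr’s default 8989 port are assumptions here, not something stated in the thread:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN          # gluetun needs this to manage the VPN tunnel
    ports:
      - "8989:8989"        # sonarr's web UI is published through gluetun
  sonarr:
    image: lscr.io/linuxserver/sonarr
    network_mode: "service:gluetun"   # share gluetun's network stack
```

With network_mode set like that, sonarr has no network identity of its own, so the host and other containers reach it through gluetun rather than through a sonarr hostname.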
Furthermore, services in a container are not exposed to your host by default. You must explicitly state which container ports should be reachable from the host (the ports: option).
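A minimal example of that ports: mapping, again assuming sonarr’s default 8989 port:

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    ports:
      - "8989:8989"   # host port : container port
```

Without this mapping, other containers on the same network can still reach sonarr on 8989, but your host can’t.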
But getting back to the question at hand, what you’re looking for is a reverse proxy. It’s a program that accepts requests from multiple clients and forwards them somewhere else. You connect to the proxy, and it can tell based on how you connect (the URL) whether to send the request to sonarr or radarr. sonarr.localhost and radarr.localhost will both route to your proxy, and the proxy will pass them to the respective services based on how you configure it. For this you can use nginx, but I’d recommend caddy as it’s what I’m using and it makes setting up things like this such a breeze.
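As a sketch of what that Caddy config could look like, assuming caddy sits on the same Docker network as sonarr and radarr and that they listen on their default ports (8989 and 7878):

```
sonarr.localhost {
	reverse_proxy sonarr:8989
}

radarr.localhost {
	reverse_proxy radarr:7878
}
```

Caddy matches the hostname of the incoming request against those site blocks and forwards it to the corresponding container by name.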