Thank you! I’ll have a look into it
Comment on known proxy for jellyfin in container?
chiisana@lemmy.chiisana.net 7 months ago
Last time this was asked, I voiced the concern that tying a fixed IP address to a container definition is an anti-pattern, and I'll voice it again. You shouldn't assign fixed IP addresses to individual services, as that prevents future scaling.
Instead, you should leverage service discovery mechanisms to help your services identify each other and wire up that way.
It seems like NPM has no fitting mechanism for this out of the box, which may suggest your use case is outgrowing what it can do for you. However, docker compose stacks can rescue the current implementation with DNS resolution. Try simplifying your npm's docker compose networking to just this:
# in the npm service definition:
networks:
  - npm

# at the top level of the compose file:
networks:
  npm:
    name: npm_default
    external: true
And your jellyfin compose with something like:
# in the jellyfin service definition:
networks:
  - npm
  - jellyfin_net

# at the top level of the compose file:
networks:
  npm:
    name: npm_default
    external: true
  jellyfin_net:
    name: jellyfin_net
    internal: true
Have the other services in your Jellyfin stack stay only on jellyfin_net (or whatever you name it) so they're not exposed to npm or other services. Then, in NPM's config, point it at your Jellyfin service by hostname, e.g. jellyfin or whatever you've set as the service name; you may need to include the compose stack name as a prefix, too. NPM can then talk to your Jellyfin directly via the docker compose networks' built-in DNS.
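Putting the fragments above together, the jellyfin side might look something like the following sketch (the `jellyfin/jellyfin` image and the service name are assumptions; substitute whatever you actually run):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin   # assumed image; use your own
    networks:
      - npm           # shared with NPM so the proxy can reach this service
      - jellyfin_net  # private network for the rest of the stack

networks:
  npm:
    name: npm_default
    external: true    # created by the npm stack, not this one
  jellyfin_net:
    name: jellyfin_net
    internal: true    # no outbound access; containers on it see each other only
```

In NPM's proxy host settings you would then forward to hostname `jellyfin` on port 8096 (Jellyfin's default HTTP port), with no fixed IP anywhere.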
Good luck!
barbara@lemmy.ml 7 months ago
steersman2484@sh.itjust.works 7 months ago
I agree with your take, but I don't think "future scaling" is a concern for most home users.
chiisana@lemmy.chiisana.net 7 months ago
It may not affect this particular use case for a home media server, but people should still be aware of it so that, as they learn and grow, they don't paint themselves into a corner by knowing only the anti-patterns as the path forward.
jake_jake_jake_@lemmy.world 7 months ago
as someone who does stuff in my lab that can translate to a work context, i absolutely second this opinion.
if i am labbing to learn, then learning the best way to do it is always the main focus, even if it means restarting what I was doing to change how some prerequisite is set up or functions.
today, OP is working with jellyfin, but as an example, what happens if later they add security cameras and want some sort of local ML to analyze events, without burning a lot of cpu on that task during lulls in activity? a solution might be to dynamically create and destroy containers for the analysis tasks, and having the groundwork of a network setup that allows scaling, even in an unrelated container stack, means one less problem to solve later.
Lifebandit666@feddit.uk 7 months ago
I'm glad you commented, as I didn't know I could define 2 networks in Docker. At the moment I'm trying to get the Arr stack working in docker, and it was going well until I realised my containers can't communicate with Plex. I believe it's because I'm using Gluetun and I haven't enabled LAN networking on my VPN. Theoretically the apps that need to see Plex don't need to be behind the VPN, but when they weren't, they couldn't talk to Prowlarr.
So theoretically I could just slap "bridge" into my networks as well, and then they'd be inside Gluetun and outside of it at the same time.
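One caveat worth checking first: this dual-network approach only works if the container joins networks directly via a `networks:` list. If a service runs with `network_mode: "service:gluetun"` (the common Gluetun pattern), it shares Gluetun's network namespace and compose won't let it join additional networks; splitting traffic then has to happen through Gluetun's own LAN settings instead. A hedged sketch of the direct-attachment variant (service and network names here are hypothetical):

```yaml
services:
  prowlarr:   # hypothetical service; any container you want on both sides
    image: lscr.io/linuxserver/prowlarr   # assumed image
    networks:
      - vpn_net     # network shared with the Gluetun stack
      - bridge_net  # plain bridge for LAN-facing traffic (e.g. reaching Plex)

networks:
  vpn_net:
    name: gluetun_default   # assumed name of the Gluetun stack's network
    external: true
  bridge_net:
    driver: bridge
```

Note that being on `vpn_net` only lets the container talk to the Gluetun container; it does not by itself route the container's outbound traffic through the VPN.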
I may try it tomorrow. Thanks for your comment