irmadlad@lemmy.world 1 week ago
> but that’s been my experience after ~25 years of using Linux daily.
Certainly, 25 years of experience speaks for itself. If I may ask a follow-up question:
I run Portainer, and in Portainer you can adjust Runtime & Resources per container. I am apparently too incompetent to grasp Dockge. Currently everything in Runtime & Resources is left at the defaults. Is there any benefit to tweaking those settings, or should I just let 'em eat when hungry?
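(For reference, from what I can tell those Runtime & Resources fields are just Docker's normal per-container memory/CPU limits under the hood. Roughly this, in Docker-SDK-for-Python terms — the image name and numbers here are placeholders, not a recommendation:)

```python
# Rough equivalent of Portainer's Runtime & Resources fields via the
# Docker SDK for Python (pip install docker). Image and limits are
# placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:latest",           # placeholder image
    detach=True,
    mem_limit="512m",         # hard memory cap
    memswap_limit="512m",     # memory + swap cap (equal values = no extra swap)
    nano_cpus=1_000_000_000,  # 1.0 CPU, in units of 1e-9 CPUs
)

# The same knobs can be adjusted on a running container:
container.update(mem_limit="1g", memswap_limit="1g")
```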
suicidaleggroll@lemm.ee 1 week ago
I run all of my Docker containers in VMs (well, 4 different VMs, split according to the network/firewall needs of the containers each one runs). Each VM is given about double the RAM needed for everything it runs, and enough cores that it never (or very, very rarely) hits its CPU ceiling. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself. If I find a VM creeping up on its CPU or memory limits, I’ll investigate which container is driving the usage and then either bump the VM’s allocation up or modify that service’s settings to bring its usage back down.
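If you ever want to script the "which container is driving the usage" step instead of eyeballing docker stats, it boils down to pulling per-container memory numbers — a quick sketch with the Docker SDK for Python (not what I actually run, just the idea):

```python
# Sketch: rank running containers by current memory usage to spot the one
# creeping up. Uses the Docker SDK for Python; output format is arbitrary.
import docker

client = docker.from_env()
usage = []

for c in client.containers.list():
    stats = c.stats(stream=False)             # one-shot stats snapshot
    mem = stats["memory_stats"].get("usage", 0)
    usage.append((mem, c.name))

for mem, name in sorted(usage, reverse=True):
    print(f"{mem / 2**20:9.1f} MiB  {name}")
```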
Theoretically I could implement per-container resource limits, but I’ve never found the need. I have heard some people complain about certain containers leaking memory and creeping up over time, but I have an automated backup script that stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers runs for longer than 24 hours continuously anyway.
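If it helps, the nightly flow is basically "stop everything, rsync the mapped volumes, start everything back up". Not my actual script, but the logic is roughly this — the paths and the --link-dest style of incremental are illustrative:

```python
# Sketch of the nightly backup flow: stop containers, rsync mapped volumes
# to a dated incremental snapshot, restart. Paths and the --link-dest
# incremental scheme are illustrative.
import subprocess
from datetime import date

VOLUME_ROOT = "/srv/docker-volumes"   # hypothetical location of mapped volumes
BACKUP_ROOT = "/mnt/backup/docker"    # hypothetical backup destination
today = date.today().isoformat()

# Stop every running container so the volumes are quiescent during the copy.
names = subprocess.run(
    ["docker", "ps", "--format", "{{.Names}}"],
    capture_output=True, text=True, check=True,
).stdout.split()
if names:
    subprocess.run(["docker", "stop", *names], check=True)

try:
    # Hard-link unchanged files against the previous snapshot (incremental).
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"--link-dest={BACKUP_ROOT}/latest",
         f"{VOLUME_ROOT}/", f"{BACKUP_ROOT}/{today}/"],
        check=True,
    )
    subprocess.run(
        ["ln", "-sfn", f"{BACKUP_ROOT}/{today}", f"{BACKUP_ROOT}/latest"],
        check=True,
    )
finally:
    # Bring everything back up even if the rsync failed.
    if names:
        subprocess.run(["docker", "start", *names], check=True)
```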