suicidaleggroll@lemm.ee 1 week ago
People always say to let the system manage memory and not interfere with it, since it'll make the best decisions, but personally, on my systems, whenever it starts moving significant data into swap the machine gets laggy, jittery, and slow to respond. Every time a system that's been sitting idle for a bit feels sluggish, I check the stats and find that, sure enough, it's moved some of its memory into swap, and responsiveness doesn't pick up until I manually empty the swap so it's operating fully out of RAM again.
So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I inevitably find the system running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again.
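For reference, a minimal sketch of the commands involved (the sysctl.d filename is arbitrary, and note that on recent kernels swappiness=0 strongly discourages swapping rather than disabling it outright):

```sh
# Check current swap usage and the swappiness setting
free -h
cat /proc/sys/vm/swappiness

# Set swappiness to 0 for the running system (lost on reboot)
sudo sysctl vm.swappiness=0

# Persist it across reboots
echo 'vm.swappiness=0' | sudo tee /etc/sysctl.d/99-swappiness.conf

# Force swapped pages back into RAM by cycling swap off and on
# (only safe when free RAM exceeds current swap usage)
sudo swapoff -a && sudo swapon -a
```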
irmadlad@lemmy.world 1 week ago
Certainly, 25 years of experience speaks for itself. If I may ask a follow-up question:
I run Portainer, and in Portainer you can adjust Runtime & Resources per container. I am apparently too incompetent to grasp Dockge. Currently everything in Runtime & Resources is unchanged. Is there any benefit to tweaking those settings, or just let 'em eat when hungry?
suicidaleggroll@lemm.ee 1 week ago
I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers they run). Each VM gets about double the RAM needed for everything it runs, and enough cores that it's never (or only very rarely) CPU-bound. I then let the containers use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself. If I find the VM creeping up on its CPU or memory limits, I investigate which container is driving the usage and then either bump the VM's limits up or go into that service's settings and bring its usage back down.
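If you want to see which container is driving the usage, a quick way to check (assuming the Docker CLI is available inside the VM):

```sh
# Overall memory picture of the VM
free -h

# One-off snapshot of per-container CPU and memory usage
docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'
```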
Theoretically I could implement per-container resource limits, but I've never found the need. I've heard some people complain about containers leaking memory and creeping up over time, but I have an automated backup script that stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers run for more than 24 hours straight anyway.
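For what it's worth, per-container limits don't require Portainer's Runtime & Resources panel; something like the following would cap a running container (the container name is a placeholder):

```sh
# Cap a running container at 1 GiB of RAM and 2 CPUs.
# Setting --memory-swap equal to --memory disallows swap use for it.
docker update --memory=1g --memory-swap=1g --cpus=2 my-container
```

The same limits can be set at creation time with docker run, or declared in a compose file.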
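And a rough sketch of that kind of nightly stop/rsync/start cycle; the paths, backup host, and volume layout here are all assumptions:

```sh
#!/bin/sh
# Stop containers, take an incremental snapshot of their mapped volumes,
# then start them again. --link-dest hard-links unchanged files against
# the previous night's copy, so each snapshot only stores what changed.
set -e

SRC=/srv/docker/volumes              # assumed location of mapped volumes
DEST=backup-host:/backups/docker     # assumed incremental backup target
TODAY=$(date +%F)

RUNNING=$(docker ps -q)              # remember what was running
[ -n "$RUNNING" ] && docker stop $RUNNING

rsync -a --delete --link-dest=../latest "$SRC/" "$DEST/$TODAY/"
ssh backup-host "ln -sfn $TODAY /backups/docker/latest"

[ -n "$RUNNING" ] && docker start $RUNNING
```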