This is a job for the OS.
You can run most Linux systems with stupid amounts of swap and the only thing you’ll notice is that stuff starts slowing down.
In my experience, only in extremely rare cases are you smarter than the OS. In 25+ years of using Linux daily I’ve seen it exactly once: the OOM killer killed running mysqld processes, which would have been fine if the developer had used transactions. Suffice it to say, they did not.
I used a one-minute cron job to reprioritize the process. Problem “solved” … for a system that hadn’t been updated in 12 years but was still live while we documented what it was doing and what was required to upgrade it.
tal@lemmy.today 1 day ago
I am guessing that htop is in the wrong.

If you look in /proc/meminfo, you’ll see a MemAvailable field and a MemTotal field.
It looks to me like this is approximately (I didn’t check top’s source) what top is using: MemTotal - MemAvailable.

stackoverflow.com/…/difference-between-memfree-an…
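If you want to sanity-check the numbers yourself, here’s a minimal sketch (not top’s or free’s actual code, just the same arithmetic) that reads those two fields out of /proc/meminfo and does the MemTotal - MemAvailable subtraction:

```c
/* Minimal sketch: read MemTotal and MemAvailable from /proc/meminfo
 * and print "used" as MemTotal - MemAvailable.
 * Not top's or free's actual code, just the same arithmetic. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("/proc/meminfo");
        return 1;
    }

    char line[256];
    unsigned long long total_kb = 0, avail_kb = 0;

    while (fgets(line, sizeof line, f)) {
        /* Lines look like "MemTotal:       16384256 kB" */
        sscanf(line, "MemTotal: %llu kB", &total_kb);
        sscanf(line, "MemAvailable: %llu kB", &avail_kb);
    }
    fclose(f);

    printf("MemTotal:     %llu kB\n", total_kb);
    printf("MemAvailable: %llu kB\n", avail_kb);
    printf("\"Used\" (MemTotal - MemAvailable): %llu kB\n",
           total_kb - avail_kb);
    return 0;
}
```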
Looking at the htop source:
github.com/htop-dev/htop/blob/main/MemoryMeter.c
It’s adding used, shared, and compressed memory to get the amount actually tied up, but disregarding cached memory, which, based on the Stack Overflow discussion above, is problematic.
top, on the other hand, is using the kernel’s MemAvailable directly:

gitlab.com/procps-ng/procps/-/blob/…/free.c

`printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));`
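And if you want to see how far apart the two accountings get on your own machine, here’s a rough sketch that computes both from /proc/meminfo. To be clear, this is not htop’s actual formula (MemoryMeter.c above is the authoritative source); it just shows how much “used” memory is really reclaimable cache and slab that MemAvailable already discounts:

```c
/* Rough comparison of two ways of counting "used" memory from
 * /proc/meminfo. Illustration only -- NOT htop's actual formula. */
#include <stdio.h>
#include <string.h>

/* Return the value (in kB) of a /proc/meminfo field, or 0 if not found. */
static unsigned long long meminfo_kb(const char *field)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f)
        return 0;

    char line[256];
    unsigned long long val = 0;
    size_t len = strlen(field);

    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, field, len) == 0 && line[len] == ':') {
            sscanf(line + len + 1, "%llu", &val);
            break;
        }
    }
    fclose(f);
    return val;
}

int main(void)
{
    unsigned long long total   = meminfo_kb("MemTotal");
    unsigned long long free_kb = meminfo_kb("MemFree");
    unsigned long long avail   = meminfo_kb("MemAvailable");

    /* "Everything that isn't literally free" vs. the kernel's estimate of
     * what is actually tied up once reclaimable cache/slab is discounted. */
    printf("Naive used  (MemTotal - MemFree):      %llu kB\n", total - free_kb);
    printf("Kernel used (MemTotal - MemAvailable): %llu kB\n", total - avail);
    printf("Gap (mostly reclaimable cache/slab):   %lld kB\n",
           (long long)avail - (long long)free_kb);

    /* A few of the fields behind that difference. */
    printf("Cached=%llu kB  SReclaimable=%llu kB  Shmem=%llu kB\n",
           meminfo_kb("Cached"), meminfo_kb("SReclaimable"),
           meminfo_kb("Shmem"));
    return 0;
}
```

On a machine with a lot of page cache, the naive figure can be dramatically higher than the MemAvailable-based one, which is exactly the kind of gap being discussed here.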
a_fancy_kiwi@lemmy.world 1 day ago
Thank you for the detailed explanation
tal@lemmy.today 1 day ago
No problem. It was an interesting question that made me curious too.
a_fancy_kiwi@lemmy.world 1 day ago
Came across some more info that you might find interesting. If true, htop is ignoring the cache used by ZFS but accounting for everything else.
link