I recently noticed that htop displays a much lower ‘memory in use’ number than free -h, top, or fastfetch on my server.
I am using ZFS on this server, and I’ve read that ZFS will use a lot of RAM for its ARC. I also read a forum comment saying that htop doesn’t show caching done by the kernel, but I’m not sure how to confirm that ZFS is what’s causing the discrepancy (see the sketch after the numbers below).
I’m also running a bunch of Docker containers and am concerned about stability, since I don’t know which number I should be looking at. I don’t want apps getting killed or other issues popping up. Depending on which tool I use, I have either ~22GB, ~4GB, or ~1GB of usable memory left. Is htop the better metric, or should I trust the other tools?
Server Memory Usage:
- htop = 8.35G / 30.6G
- free -h =
                total        used        free      shared  buff/cache   available
  Mem:           30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi
- top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
- fastfetch = 26.54GiB / 30.6GiB
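To confirm the ZFS theory, the ARC’s current size can be read straight out of the kernel. Here is a minimal sketch in C, assuming ZFS on Linux, which exposes its counters at /proc/spl/kstat/zfs/arcstats (the “size” row is the ARC footprint in bytes):

    /* Minimal sketch: print the current ZFS ARC size on Linux,
     * assuming the ZFS module is loaded and exposes
     * /proc/spl/kstat/zfs/arcstats. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/spl/kstat/zfs/arcstats", "r");
        char line[256];
        char name[64];
        unsigned long long bytes;

        if (!f) {
            perror("arcstats");  /* most likely: ZFS module not loaded */
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            /* Data rows look like "size  4  <bytes>"; header lines
             * won't match this pattern. */
            if (sscanf(line, "%63s %*u %llu", name, &bytes) == 2 &&
                strcmp(name, "size") == 0) {
                printf("ZFS ARC size: %.2f GiB\n",
                       bytes / (1024.0 * 1024.0 * 1024.0));
                break;
            }
        }
        fclose(f);
        return 0;
    }

If that prints a figure in the mid-to-high teens of GiB, it would account for most of the gap between htop’s 8.35G and free’s 26Gi.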
vk6flab@lemmy.radio 1 day ago
Linux aggressively caches things.
Having 4 GB of RAM available is not running out of memory.
If you start using swap, you’re running into a situation where you might run out of memory.
If the OOM killer starts killing processes, then you’re running out of memory.
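For a scripted version of that swap check, here is a minimal sketch using the sysinfo(2) syscall, which reads the same counters free does:

    /* Minimal sketch: report swap usage via sysinfo(2), as a quick
     * "am I touching swap yet?" check. */
    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void)
    {
        struct sysinfo si;

        if (sysinfo(&si) != 0) {
            perror("sysinfo");
            return 1;
        }
        /* All sizes are reported in units of si.mem_unit bytes. */
        unsigned long long unit  = si.mem_unit;
        unsigned long long total = si.totalswap * unit;
        unsigned long long used  = (si.totalswap - si.freeswap) * unit;

        printf("swap used: %.1f MiB of %.1f MiB\n",
               used / (1024.0 * 1024.0), total / (1024.0 * 1024.0));
        return 0;
    }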
tal@lemmy.today 1 day ago
Well, you might want to avoid digging into swap at all.
a_fancy_kiwi@lemmy.world 1 day ago
That’s pretty much where I’m at on this. As far as I’m concerned, if my system touches swap at all, it has run out of memory. At this point, I’m hoping to figure out what percentage of the memory in use is unimportant cache that can be dropped vs. important data that processes need to function.
a_fancy_kiwi@lemmy.world 1 day ago
Is there a good way to tell what percentage of the RAM in use is less-important file caching that could be dropped without any adverse effects, vs. data that, if dropped, would stop the whole app from functioning?
Basically, I’m hoping htop isn’t broken and is correctly reporting that I have ~8GB of important, showstopping data in use, with everything else being cache that can be dropped without needing to touch swap.
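The kernel already publishes an estimate that splits things roughly this way. A minimal sketch using only /proc/meminfo: MemAvailable is the kernel’s guess at how much memory new workloads could get without swapping, so (MemAvailable - MemFree) approximates droppable cache, and (MemTotal - MemAvailable) approximates memory that is genuinely pinned:

    /* Minimal sketch: split "memory in use" into roughly-droppable
     * cache vs. memory processes actually need, via /proc/meminfo. */
    #include <stdio.h>
    #include <string.h>

    /* Return the value (in kB) of one /proc/meminfo field, 0 if absent. */
    static unsigned long long kib(const char *key)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        size_t n = strlen(key);
        unsigned long long v = 0;

        if (!f)
            return 0;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, key, n) == 0 && line[n] == ':') {
                sscanf(line + n + 1, "%llu", &v);
                break;
            }
        }
        fclose(f);
        return v;
    }

    int main(void)
    {
        const double GIB = 1024.0 * 1024.0;  /* kB per GiB */
        unsigned long long total = kib("MemTotal");
        unsigned long long avail = kib("MemAvailable");
        unsigned long long freek = kib("MemFree");

        printf("droppable cache (approx): %.2f GiB\n", (avail - freek) / GIB);
        printf("genuinely pinned (approx): %.2f GiB\n", (total - avail) / GIB);
        return 0;
    }

One caveat: as far as I know, the ZFS ARC is not reflected in MemAvailable on Linux, so on this box the “pinned” figure will overstate what’s truly pinned by however much the ARC could shrink under pressure.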
tal@lemmy.today 1 day ago
I am guessing that htop is in the wrong. If you look in /proc/meminfo, you’ll see a MemAvailable field and a MemTotal field.
It looks to me like this is approximately (I didn’t check top’s source) what top is using: MemTotal - MemAvailable.
stackoverflow.com/…/difference-between-memfree-an…
Looking at the htop source:
github.com/htop-dev/htop/blob/main/MemoryMeter.c
It’s adding used, shared, and compressed memory to get the amount actually tied up, but disregarding cached memory, which, based on the above comment, is problematic.
top, on the other hand, is using the kernel’s MemAvailable directly, and free does the same:
gitlab.com/procps-ng/procps/-/blob/…/free.c
    printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));
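To see the disagreement directly, here is a minimal sketch that recomputes both figures from one pass over /proc/meminfo. The top/free number is MemTotal - MemAvailable, per the linked source; the htop-style number is a deliberate simplification (real htop’s accounting in MemoryMeter.c is more involved, and as far as I can tell newer versions also treat the shrinkable part of the ZFS ARC as cache, which this sketch ignores and which likely explains most of the 8.35G-vs-26Gi gap):

    /* Minimal sketch: compute "used" the way top/free do, next to a
     * simplified htop-style figure. Not a port of MemoryMeter.c. */
    #include <stdio.h>
    #include <string.h>

    /* Return the value (in kB) of one /proc/meminfo field, 0 if absent. */
    static unsigned long long kib(const char *key)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        size_t n = strlen(key);
        unsigned long long v = 0;

        if (!f)
            return 0;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, key, n) == 0 && line[n] == ':') {
                sscanf(line + n + 1, "%llu", &v);
                break;
            }
        }
        fclose(f);
        return v;
    }

    int main(void)
    {
        const double GIB = 1024.0 * 1024.0;  /* kB per GiB */
        unsigned long long total = kib("MemTotal");

        /* top/free: trust the kernel's own MemAvailable estimate. */
        printf("top/free used:   %.2f GiB\n",
               (total - kib("MemAvailable")) / GIB);

        /* htop-ish: total minus free minus the caches it subtracts. */
        unsigned long long cache = kib("Buffers") + kib("Cached")
                                 + kib("SReclaimable");
        printf("htop-style used: %.2f GiB\n",
               (total - kib("MemFree") - cache) / GIB);
        return 0;
    }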
vk6flab@lemmy.radio 1 day ago
This is a job for the OS.
You can run most Linux systems with stupid amounts of swap and the only thing you’ll notice is that stuff starts slowing down.
In my experience, only in extremely rare cases are you smarter than the OS, and in 25+ years of using Linux daily I’ve seen it exactly once, where the OOM killer killed running mysqld processes, which would have been fine if the developer had used transactions. Suffice to say, they did not.
I used a 1-minute cron job to reprioritize the process, problem “solved” … for a system that hadn’t been updated for 12 years but was still live while we documented what it was doing and what was required to upgrade it.