Eh. That 8GB is unified memory, meaning it also has to carry the graphics load. You're making it sound like it's all working memory. macOS is also more graphics-heavy than a typical PC, especially compared to Linux-based OSes, so whatever efficiency you gain from the OS in terms of memory compression and management, you give back to smooth Exposé, Mission Control, and all the frosted-glass translucent garbage they force on users.
8GB is shit low. Email and browsing, OK. But as soon as you have 40 tabs open in Chrome, it will be email *or* browsing. GarageBand, sure, but again, don't run anything else in the background. And I doubt you'd even be able to edit a 1080p project in iMovie without stutter on battery power. The biggest issue is that you can't upgrade it, so whatever software updates come, 8GB is all you'll ever get.
djdarren@piefed.social 5 hours ago
I have an 8GB M1 mini in service as my Home Assistant server. 4GB to UTM to run HAOS, the rest for macOS and Ollama running a small LLM for speech to text. I’m genuinely amazed that it hasn’t fallen over. Tried the same thing in Asahi but without macOS’ memory management and access to GPU acceleration, it just wasn’t feasible.
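For anyone curious how the Ollama half of a setup like this can be driven from scripts, here's a minimal sketch against Ollama's local HTTP API. It assumes a default Ollama install listening on port 11434; the model name and prompt are placeholders, not necessarily what djdarren runs:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,   # e.g. a small model that fits in whatever RAM the VM leaves free
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response instead of a stream
    }

def generate(model: str, prompt: str) -> str:
    """POST a prompt to the locally running Ollama server and return its reply."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # "llama3.2:1b" is just an example of a small model; swap in whatever you actually run
    print(generate("llama3.2:1b", "Turn this transcript into a Home Assistant intent: kitchen lights on"))
```

On an 8GB machine the "stream": False / small-model combination matters: a 1B-class model plus the VM's 4GB allocation is about all the headroom there is before macOS starts compressing and swapping.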
partial_accumen@lemmy.world 14 minutes ago
Thank you for sharing this result. I knew Asahi’s memory management wasn’t as robust (so I got a 24GB RAM M2 unit to overcome this).
For your macOS Ollama implementation, are you able to leverage the NPU in the hardware (which I know is also unavailable so far in Asahi)?