Comment on Opera is testing letting you download LLMs for local use, a first for a major browser
Bandicoot_Academic@lemmy.one 7 months ago
Most people probably don’t have a dedicated GPU, and an iGPU probably isn’t powerful enough to run an LLM at a decent speed. Also, a decent model requires something like 20 GB of RAM, which most people don’t have.
douglasg14b@lemmy.world 7 months ago
It doesn’t just require 20 GB of RAM, it requires that much in VRAM, which is a much higher barrier to entry.
Hamartiogonic@sopuli.xyz 7 months ago
But what if you have an AMD APU? Doesn’t that use your normal RAM as VRAM?
T156@lemmy.world 7 months ago
Not exactly. Most integrated chips have a small pool of dedicated VRAM, plus a bit more that they share with system memory. As far as I’m aware, it’s only Apple’s unified memory, and maybe some other mobile chips, that have both share a single memory pool, for better or worse.
But it is worth noting that if you don’t have enough VRAM and have to spill part of the model into system RAM, the rule of thumb is that you need twice the spilled-over amount in RAM. So if your GPU has 4 GB of VRAM and the model needs 20 GB, the remaining 16 GB doesn’t mean 16 GB of system RAM, it means 32 GB.
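The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is only a rough illustration: the function names are made up for this sketch, the figures ignore KV-cache and runtime overhead, and the "twice the spill-over" factor is the rule of thumb stated in the comment above, not a hard requirement of any particular runtime.

```python
# Rough memory estimate for running an LLM locally.
# These are weight-storage approximations only; real usage adds
# KV cache, activations, and framework overhead.

def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage: parameter count x bytes per parameter."""
    return params_billions * 1e9 * (bits_per_weight / 8) / 1e9

def system_ram_needed_gb(model_gb: float, vram_gb: float) -> float:
    """Commenter's rule of thumb: whatever spills out of VRAM into
    system RAM needs roughly 2x that amount of RAM headroom."""
    spill = max(model_gb - vram_gb, 0.0)
    return 2 * spill

# Example from the thread: a ~20 GB model on a GPU with 4 GB of VRAM.
model = 20.0
vram = 4.0
print(system_ram_needed_gb(model, vram))  # 16 GB spills over -> 32 GB of RAM

# Quantization changes the picture: a 7B-parameter model in fp16 is
# ~14 GB, but quantized to 4 bits per weight it shrinks to ~3.5 GB,
# which fits in many consumer GPUs with no spill-over at all.
print(model_size_gb(7, 16))
print(model_size_gb(7, 4))
```

This is also why heavily quantized models are the usual choice for in-browser or consumer-hardware inference: shrinking the weights below the VRAM ceiling avoids the RAM-offload penalty entirely.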