Comment on Opera is testing letting you download LLMs for local use, a first for a major browser

T156@lemmy.world ⁨1⁩ ⁨month⁩ ago

Not exactly. Most integrated chips have a small pool of dedicated VRAM, plus a bit more that they share with system memory. As far as I’m aware, it’s only Apple’s unified memory, and maybe some other mobile chips, that have the CPU and GPU share a single memory pool, for better or worse.

But it is worth noting that if you don’t have enough VRAM and have to spill the model into system RAM, the rule of thumb is that you need twice that much RAM free. So if you have a GPU with 4 GB of VRAM and need to offload the rest of the model to the system, you don’t need 16 GB of RAM, you need 32 GB.
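The sizing rule above can be sketched as a quick calculation. This is just the comment’s rule of thumb written out, not an official formula, and the 20 GB model size in the example is an assumption chosen so that 16 GB spills past a 4 GB card:

```python
def required_system_ram_gb(model_size_gb: float, vram_gb: float) -> float:
    """Estimate system RAM needed when a model spills out of VRAM.

    Rule of thumb from the comment: budget roughly twice the spilled
    portion in system RAM.
    """
    spill = max(model_size_gb - vram_gb, 0.0)  # part that doesn't fit in VRAM
    return spill * 2  # double the spilled amount


# Hypothetical example: a 20 GB model on a 4 GB GPU leaves 16 GB to
# offload, so you'd want about 32 GB of system RAM, not 16 GB.
print(required_system_ram_gb(20, 4))  # 32.0
```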
