The lines used to produce VRAM also produce SSD NAND flash, so manufacturers make fewer SSDs in order to make more VRAM
brucethemoose@lemmy.world 1 day ago
Aside: WTF are they using SSDs for?
LLM inference in the cloud is done almost entirely in VRAM. Occasionally stale K/V cache gets offloaded to system RAM, but newer attention architectures should minimize even that. Large-scale training, contrary to popular belief, is a pretty rare event that most data centers and businesses are incapable of running.
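(For anyone curious what that K/V offloading looks like, here's a minimal sketch in Python/PyTorch: hot cache entries stay in VRAM, idle sessions get moved to system RAM and promoted back on reuse. The class, field names, and threshold are made up for illustration, not any serving stack's actual API.)

```python
import time
import torch

class OffloadingKVCache:
    """Minimal sketch: keep hot K/V tensors on the GPU, move stale ones to RAM."""

    def __init__(self, device: str = "cuda", stale_after_s: float = 30.0):
        self.device = device
        self.stale_after_s = stale_after_s
        self.hot = {}        # session_id -> (k, v) tensors in VRAM
        self.cold = {}       # session_id -> (k, v) tensors offloaded to system RAM
        self.last_used = {}  # session_id -> last access time

    def put(self, session_id: str, k: torch.Tensor, v: torch.Tensor) -> None:
        self.hot[session_id] = (k.to(self.device), v.to(self.device))
        self.last_used[session_id] = time.monotonic()

    def get(self, session_id: str):
        if session_id in self.cold:  # promote back into VRAM on reuse
            k, v = self.cold.pop(session_id)
            self.hot[session_id] = (k.to(self.device), v.to(self.device))
        self.last_used[session_id] = time.monotonic()
        return self.hot.get(session_id)

    def evict_stale(self) -> None:
        # Sessions idle longer than the threshold get moved out of VRAM.
        now = time.monotonic()
        for sid in list(self.hot):
            if now - self.last_used[sid] > self.stale_after_s:
                k, v = self.hot.pop(sid)
                self.cold[sid] = (k.cpu(), v.cpu())
```

Either way, none of that touches flash storage, which is the point.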
…So what do they do with so much flash storage!? Is it literally just FOMO server buying?
T156@lemmy.world 1 day ago
Storage. There aren’t enough hard drives, so datacentres are also buying up SSDs.
brucethemoose@lemmy.world 1 day ago
Again, I don’t buy this. The training data isn’t actually that big, nor is training done on such a huge scale so frequently.
finitebanjo@lemmy.world 1 day ago
As we approach the theoretical error-rate floor for LLMs, as laid out in OpenAI's 2020 scaling-laws paper and corrected by DeepMind's 2022 Chinchilla paper, the required training and power costs grow toward infinity.
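To make that concrete: the Chinchilla paper fits loss as L(N, D) = E + A/N^α + B/D^β, where E is an irreducible floor. The quick Python sketch below plugs in the published constants and the paper's roughly 20-tokens-per-parameter compute-optimal ratio; treat it as a rough illustration, not a reproduction of their fit.

```python
# Parametric loss fit from Hoffmann et al. (2022): L(N, D) = E + A/N^a + B/D^b.
# Constants are the published fits; the 20 tokens/parameter ratio is the
# paper's compute-optimal rule of thumb.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params: loss ~ {loss(n, 20 * n):.3f} (floor ~ {E})")
```

Each 10x jump in model and data scale buys a smaller and smaller drop in loss, which is what "costs rise toward infinity" means in practice.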
In addition to that, companies might keep many different, nearly identical datasets to try to achieve different outcomes.
Text like books and Wikipedia pages isn't that bad; maybe a few hundred petabytes could store most of it. But images and videos are also valid training data, and those are much larger, and then there is readable code on top of that. And all user inputs have to be stored so they can be referenced again later if the chatbot offers that feature.
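A back-of-the-envelope sketch of why the mix matters; every count and per-item size below is an assumption I'm plugging in for illustration, not a measured figure from any real corpus.

```python
# Rough storage arithmetic: images, video, and retained chat logs dwarf text
# and code. All numbers are illustrative assumptions.
PB = 1e15

corpora = {
    # name: (item_count, avg_bytes_per_item)
    "images":      (5e9, 100e3),    # billions of images at ~100 KB each
    "video_clips": (100e6, 100e6),  # ~100M clips at ~100 MB each
    "code_files":  (1e9, 10e3),     # ~1B source files at ~10 KB each
}

for name, (count, size) in corpora.items():
    print(f"{name:12s} ~ {count * size / PB:6.2f} PB")

# Chat logs accumulate: assumed 1B retained turns per day at ~2 KB each.
turns_per_day, bytes_per_turn = 1e9, 2e3
print(f"chat logs/yr ~ {turns_per_day * bytes_per_turn * 365 / PB:6.2f} PB")
```

Under those assumptions video alone lands around 10 PB and the chat logs keep growing every year, which is the kind of thing that actually fills SSDs.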