Comment on AI Is Starting to Look Like the Dot Com Bubble
pexavc@lemmy.world 1 year ago
Oh wow, that’s good to know. I always assumed visual graphics would be way more intensive. Wouldn’t have thought a text-generation model would take up that much VRAM.
ramblinguy@sh.itjust.works 1 year ago
Sorry, just seeing this now. I think with 24 GB of VRAM, the most you can run is a 4-bit quantized 30B model, and even then you’d probably have to limit it to 2-3k tokens of context. Here’s a chart for size comparisons: postimg.cc/4mxcM3kX
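The numbers above work out as rough back-of-envelope arithmetic: weights take roughly (parameters × bits per weight) / 8 bytes, and the KV cache grows linearly with context length, which is why context has to be capped. This is an illustrative sketch (the 10% overhead factor and the LLaMA-30B-like dimensions of 60 layers / 6656 hidden size are assumptions, not exact figures for any specific runtime):

```python
def weights_vram_gb(n_params_billion: float, bits_per_weight: int,
                    overhead: float = 1.1) -> float:
    """Approximate VRAM (GB) for the weights alone, with ~10% assumed
    overhead for quantization scales and runtime buffers."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

def kv_cache_gb(n_layers: int, d_model: int, context_len: int,
                bytes_per_elem: int = 2) -> float:
    """Approximate fp16 KV-cache size: 2 tensors (K and V) per layer,
    each of shape (context_len, d_model)."""
    return 2 * n_layers * d_model * context_len * bytes_per_elem / 1e9

# 30B at 4-bit fits in 24 GB with a little room left; at fp16 it doesn't.
print(f"30B @ 4-bit : ~{weights_vram_gb(30, 4):.1f} GB")
print(f"30B @ fp16  : ~{weights_vram_gb(30, 16):.1f} GB")

# Assuming LLaMA-30B-ish dimensions (60 layers, d_model 6656),
# 2k tokens of context adds roughly another 3 GB on top of the weights.
print(f"KV cache @ 2k ctx: ~{kv_cache_gb(60, 6656, 2048):.1f} GB")
```

With ~16-17 GB of weights plus ~3 GB of KV cache at 2k context, a 24 GB card is nearly full, which matches the 2-3k context limit mentioned above.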
By comparison, with 24 GB of VRAM I only use half of that to generate a batch of 8 768×576 images. I also subscribe to mage.space, and I’m pretty sure they’re able to handle all of their volume on an A100 and an A10G.
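The image tensors themselves are a tiny fraction of that memory; almost all of the VRAM goes to model weights and intermediate activations. A quick sketch of the arithmetic (assuming fp16 and a Stable-Diffusion-style VAE with 8× spatial downsampling to a 4-channel latent, which is an assumption about the setup, not a stated fact from the comment):

```python
def tensor_mb(batch: int, channels: int, h: int, w: int,
              bytes_per_elem: int = 2) -> float:
    """Size in MB of a single fp16 tensor of shape (batch, channels, h, w)."""
    return batch * channels * h * w * bytes_per_elem / 1e6

# Batch of 8 RGB images at 768x576, fp16.
pixels = tensor_mb(8, 3, 768, 576)
# Same batch in latent space: 8x downsampled, 4 channels.
latents = tensor_mb(8, 4, 768 // 8, 576 // 8)

print(f"pixel batch : ~{pixels:.1f} MB")   # ~21 MB
print(f"latent batch: ~{latents:.2f} MB")  # well under 1 MB
```

So the ~12 GB used per batch is dominated by the diffusion model's weights and UNet activations, not the images being produced, which is why image generation can be less VRAM-hungry than hosting a large language model.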