Comment on Consumer hardware is no longer a priority for manufacturers
melfie@lemy.lol 2 days ago
Appreciate all the info! I did find this calculator the other day, and it's pretty clear the RTX 4060 in my server isn't going to do much, though its NVMe may help.
apxml.com/tools/vram-calculator
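For anyone curious, the rough math a calculator like that does can be sanity-checked by hand. A minimal sketch in Python (assumed formula and made-up example numbers, not whatever the apxml tool actually implements):

```python
# Back-of-the-envelope VRAM estimate: weights + KV cache + a little overhead.
# All parameters below are assumptions for illustration, not measured values.
def estimate_vram_gb(params_b, bits_per_weight, context_len,
                     n_layers, kv_heads, head_dim, kv_bits=16, overhead_gb=1.0):
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) per layer, per KV head, per context position
    kv_gb = 2 * n_layers * kv_heads * head_dim * context_len * (kv_bits / 8) / 1e9
    return weights_gb + kv_gb + overhead_gb

# Hypothetical 8B model, 4-bit weights, 8K context -> roughly 6 GB
print(estimate_vram_gb(8, 4, 8192, n_layers=32, kv_heads=8, head_dim=128))
```

That ballpark is roughly why the 4060's 8 GB doesn't stretch far past small models.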
I'm also not sure whether under 10 tokens per second is usable, though I've never really tried it.
I'd be hesitant to buy something just for AI that doesn't also have RT cores, because I do a lot of Blender rendering. RDNA 5 is supposed to have more competitive RT cores along with NPU cores, so I guess my ideal would be an SoC with a ton of RAM. Maybe when RDNA 5 releases, the RAM situation will have blown over and we will have much better options.
brucethemoose@lemmy.world 2 days ago
That calculator is total nonsense. Don't trust anything like that; at best, it's obsolete the week after it's posted.
Yeah, that's a huge caveat. Blender on AMD might be better than you think, though, and you can use your RTX 4060 on a Strix Halo motherboard just fine.
So far, NPUs have been useless. Don’t buy any of that marketing.
That’s still 5 words/second. That’s not a bad reading speed.
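Spelling out the conversion I'm assuming there (ballpark figures only):

```python
# Ballpark conversion: ~0.5 words per token for English prose,
# and a typical silent reading speed of ~250 words per minute.
tokens_per_sec = 10
words_per_sec = tokens_per_sec * 0.5   # ~5 words/s
reading_wps = 250 / 60                 # ~4.2 words/s
print(words_per_sec, reading_wps)
```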
Whether it's enough is another question. GLM 350B without thinking is smarter than most models with thinking, so I end up with better answers faster.
But anyway, I'm looking at more like 20-30 tokens a second from models that aren't squeezed into my rig within an inch of their life. If you buy an HEDT/server CPU with more RAM channels, it's even faster.
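Rough sketch of why the RAM channels matter: token generation is mostly memory-bandwidth-bound, so tokens/s is roughly bandwidth divided by bytes read per token. The bandwidths, efficiency factor, and "active parameter" count below are illustrative assumptions, not benchmarks:

```python
# Decode speed estimate for a bandwidth-bound model.
# Efficiency, bandwidths, and active-parameter counts are assumed, not measured.
def decode_tokens_per_sec(bandwidth_gb_s, active_params_b, bits_per_weight, efficiency=0.7):
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return efficiency * bandwidth_gb_s * 1e9 / bytes_per_token

# Dual-channel DDR5 (~90 GB/s) vs. an 8-channel server board (~350 GB/s),
# for a MoE model with ~30B active parameters at 4-bit:
print(decode_tokens_per_sec(90, 30, 4))    # ~4 tok/s
print(decode_tokens_per_sec(350, 30, 4))   # ~16 tok/s
```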
melfie@lemy.lol 2 days ago
Ah, a lot of good info! Thanks, I’ll look into all of that!