Comment on Consumer hardware is no longer a priority for manufacturers

brucethemoose@lemmy.world 2 days ago

> I did find this calculator the other day

That calculator is total nonsense. Don’t trust anything like that; at best, it’s obsolete the week after it’s posted.

> I’d be hesitant to buy something just for AI that doesn’t also have RTX cores because I do a lot of Blender rendering. RDNA 5 is supposed to have more competitive RTX cores

Yeah, that’s a huge caveat. AMD’s Blender performance might be better than you think, though, and you can use your RTX 4060 in a Strix Halo motherboard just fine.

> along with NPU cores, so I guess my ideal would be a SoC with a ton of RAM

So far, NPUs have been useless. Don’t buy into any of that marketing.

> I’m also not sure under 10 tokens per second will be usable, though I’ve never really tried it.

That’s still roughly 5 words/second, which is not a bad reading speed.
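For reference, the arithmetic behind that, assuming a conservative ~0.5 English words per token (real tokenizers vary, often closer to 0.75):

```python
# Back-of-envelope: convert generation speed to reading speed.
tokens_per_second = 10
words_per_token = 0.5  # assumption; depends on tokenizer and text

words_per_second = tokens_per_second * words_per_token
words_per_minute = words_per_second * 60

print(f"{words_per_second:.1f} words/s ≈ {words_per_minute:.0f} words/min")
# 5.0 words/s ≈ 300 words/min, around a typical adult reading speed
```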

Whether it’s enough is another question: GLM 350B without thinking is smarter than most models with thinking, so I end up with better answers faster.

But anyway, I’m looking at more like 20-30 tokens a second from models that aren’t squeezed into my rig within an inch of their life. If you buy an HEDT/server CPU with more RAM channels, it’s even faster.
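The RAM-channel point comes down to memory bandwidth: during decode, each generated token has to stream the model’s active weights out of RAM, so bandwidth caps the token rate. A rough sketch, where the channel counts, memory speeds, and the 32B-active figure are all illustrative assumptions, not measurements:

```python
# Bandwidth-bound decode estimate: tok/s ≈ bandwidth / active-weight bytes.
def max_tokens_per_second(bandwidth_gb_s: float, active_params_b: float,
                          bytes_per_param: float = 0.5) -> float:
    """Upper bound on decode tok/s for a bandwidth-bound model.

    bytes_per_param=0.5 assumes ~4-bit quantization.
    For a MoE model, count the *active* parameters per token, not the total.
    """
    weight_bytes_gb = active_params_b * bytes_per_param
    return bandwidth_gb_s / weight_bytes_gb

# Hypothetical configs: channels * 8 bytes/transfer * GT/s = GB/s.
dual_channel_desktop = 2 * 8 * 6.0     # ~96 GB/s  (2x DDR5-6000)
twelve_channel_server = 12 * 8 * 4.8   # ~461 GB/s (12x DDR5-4800)

for name, bw in [("desktop dual-channel", dual_channel_desktop),
                 ("12-channel server", twelve_channel_server)]:
    tps = max_tokens_per_second(bw, active_params_b=32)
    print(f"{name}: ~{tps:.0f} tok/s for a 32B-active MoE at 4-bit")
# desktop dual-channel: ~6 tok/s
# 12-channel server: ~29 tok/s
```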
