Comment on China's first real gaming GPU is here, and the benchmarks are brutal
turkalino@sh.itjust.works 2 days ago
Nvidia is busy trying to kill their consumer GPU division to free up more fab space for data center GPUs, chasing that AI bubble.
Which seems wildly shortsighted, like surely the AI space is going to find some kind of more specialized hardware soon, sort of like how crypto moved to ASICs. But I guess bubbles are shortsighted…
CheeseNoodle@lemmy.world 2 days ago
The crazy part is that outside of LLMs, the other (actually useful) AI doesn't need that much processing power. More than you or I use, sure, but nothing that would have justified gigantic data centers.
DacoTaco@lemmy.world 2 days ago
Debatable. The basics of an LLM might not need much, but the actual models do need it to be anywhere near decent or useful. I'm talking minutes for a simple reply.
Source: ran a few <=5B models on my system with Ollama yesterday and gave them access to an MCP server to do stuff with.
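For anyone who wants to reproduce this kind of timing test, here's a minimal sketch that times a single non-streamed reply from a locally running Ollama instance via its HTTP generate endpoint (default `localhost:11434`). The model name is a placeholder; substitute any small model you've already pulled.

```python
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one complete JSON reply
    # instead of a token-by-token stream
    return {"model": model, "prompt": prompt, "stream": False}

def time_reply(model: str, prompt: str) -> float:
    # Returns wall-clock seconds for one full generation
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        json.load(resp)
    return time.monotonic() - start

# Usage (requires a running Ollama server and a pulled model):
#   seconds = time_reply("llama3.2:3b", "Say hello in one word.")
```

On modest hardware, small (<=5B) models can still take a noticeable time per reply, which is the point being made above.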
CheeseNoodle@lemmy.world 2 days ago
Yes, the whole point of my post was that non-LLMs take far less processing power.
DacoTaco@lemmy.world 2 days ago
Oh derp, misread, sorry! Now I'm curious though: what AI alternatives are there that are decent at processing/using a neural network?