Comment on China's first real gaming GPU is here, and the benchmarks are brutal
DacoTaco@lemmy.world 2 days ago
Debatable. The basics of an LLM might not need much, but the actual models do need it to be anywhere near decent or useful. I'm talking minutes for a simple reply.
Source: I ran a few <=5b models on my system with ollama yesterday and gave them access to an MCP server to do stuff with.
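Roughly what my test looked like (model name and prompt here are just placeholders, and this is a simplified sketch of it): the ollama Python client timing a reply from a small local model.

```python
# Minimal sketch: time how long a small local model takes to answer
# via the ollama Python client. Assumes the ollama daemon is running
# and the model has already been pulled (e.g. `ollama pull qwen2.5:3b`).
import time
import ollama  # pip install ollama

start = time.time()
response = ollama.chat(
    model="qwen2.5:3b",  # placeholder: any <=5b model works here
    messages=[{"role": "user", "content": "Summarize what an MCP server does."}],
)
print(response["message"]["content"])
print(f"took {time.time() - start:.1f}s")
```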
CheeseNoodle@lemmy.world 2 days ago
Yes, my whole post was that non-LLMs take far less processing power.
DacoTaco@lemmy.world 1 day ago
Oh derp, misread that, sorry! Now I'm curious though: what AI alternatives are there that are decent at processing/using a neural network?
CheeseNoodle@lemmy.world 1 day ago
So the two biggest examples I'm currently aware of are Google's protein-folding AI (AlphaFold) and a startup using one to optimize rocket engine geometry, but AI models in general can be highly efficient when focused on niche tasks. As far as I understand it, they're still very similar in underlying function to LLMs, but the approach is far less scattershot, which makes them exponentially more efficient.
A good way to think of it: even the earliest versions of ChatGPT or the simplest local models are all equally good at actually talking, but language has a ton of secondary requirements, like understanding context, remembering things, and the fact that not every grammatically valid banana is always a useful one. So an LLM has to actually be a TON of things at once, while an AI designed for a specific technical task only has to be good at that one thing.
DacoTaco@lemmy.world 1 day ago
This is why I played around with MCP over the holidays. The fact that it's a standard that lets an AI talk to an API is kinda cool. And nothing is stopping you from making the API do an AI call itself.
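Something like this is all it takes (a sketch, not my actual server): an MCP tool that just turns around and calls a local model, using the official Python SDK's FastMCP helper plus the ollama client. The tool name and model are placeholders.

```python
# Sketch of an MCP server whose tool itself calls an AI model.
# Assumes `pip install mcp ollama` and a running ollama daemon.
import ollama
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def summarize(text: str) -> str:
    """Summarize text by handing it off to a small local model."""
    response = ollama.chat(
        model="qwen2.5:3b",  # placeholder model name
        messages=[{"role": "user", "content": f"Summarize this:\n{text}"}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP client can call it
```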
Personally, I find the tech behind AIs, and even LLMs, super interesting, but companies are just fucking it up and pushing it way too fucking hard and in ways it's not meant to be used -_-
Thanks for the info, and I'll have to look into those non-LLM AIs :)