Because they’re fucking terrible at designing tools to solve problems, they’re getting worse and worse at pretending this is an omnitool that can do everything with perfect coherence (and if it isn’t working right, it’s because you’re not believing or paying hard enough).
Not to help the AI companies, but why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff? It’s obvious they’re shit at it, so why do they answer anyway? It’s because they’re programmed by know-it-all programmers, isn’t it?
rebelsimile@sh.itjust.works 22 hours ago
MrJgyFly@lemmy.world 21 hours ago
Or they keep telling you that you just have to wait it out. It’s going to get better and better!
fmstrat@lemmy.nowsci.com 11 hours ago
This is where MCP comes in. It’s a protocol that lets LLMs call standard tools. Basically, the LLM figures out which tool to use from the context, works out the parameters from the ones the MCP server says are available, sends the JSON, and parses the response.
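For a rough picture of the exchange (a sketch, not the full spec; the `evaluate` tool and its schema are made up here for illustration):

```python
import json

# 1. The client asks the server what tools it offers (MCP's tools/list
#    method, carried over JSON-RPC 2.0).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. Suppose the server advertises a hypothetical calculator tool:
advertised = {
    "tools": [{
        "name": "evaluate",
        "description": "Evaluate an arithmetic expression",
        "inputSchema": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    }]
}

# 3. The LLM picks the tool from context and fills in the arguments
#    (tools/call); the host sends this and parses the JSON that comes back.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "evaluate", "arguments": {"expression": "17 * 23"}},
}

print(json.dumps(call_request, indent=2))
```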
PixelatedSaturn@lemmy.world 22 hours ago
…or a simple counter to count the R’s in strawberry. Because that’s more difficult than one might think, and they are starting to do this now.
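The counter itself is the trivial part; in Python it’s one line that ordinary code gets right every time:

```python
word = "strawberry"
print(word.count("r"))  # 3 — exact every time, no token-guessing involved
```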
veroxii@aussie.zone 17 hours ago
They are starting to do this. Most new models support function calling and can generate code to come up with math answers, etc.
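Roughly how that looks with an OpenAI-style chat API (the `calc` tool and the response shape are assumptions for illustration, not any vendor’s exact schema):

```python
import json

# The host declares the tools the model is allowed to call:
tools = [{
    "type": "function",
    "function": {
        "name": "calc",
        "description": "Evaluate an arithmetic expression exactly",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

# Instead of free-text arithmetic, the model emits a structured call:
model_tool_call = {"name": "calc",
                   "arguments": json.dumps({"expression": "12345 * 6789"})}

# The host runs real code and feeds the result back into the conversation:
args = json.loads(model_tool_call["arguments"])
result = eval(args["expression"], {"__builtins__": {}})  # toy evaluator only; never eval untrusted input
print(result)  # 83810205
```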
driving_crooner@lemmy.eco.br 18 hours ago
If you pay for ChatGPT you can connect it with WolframAlpha and it relays the maths to it.
CileTheSane@lemmy.ca 7 hours ago
> why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?
Because the AI doesn’t know what it’s being asked; it’s just an algorithm guessing what the next word in a reply is. It has no understanding of what the words mean.
“Why doesn’t the man in the Chinese room just use a calculator for math questions?”
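A toy caricature of that “guess the next word” loop (the table and its probabilities are invented for the sketch; the point is that nothing in it counts anything):

```python
def generate(prompt: str, table: dict, steps: int = 3) -> str:
    words = prompt.split()
    for _ in range(steps):
        options = table.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))  # greedy: likeliest next word
    return " ".join(words)

# Made-up "training statistics": the wrong continuation happens to be likelier.
table = {
    "strawberry": {"has": 0.9, "is": 0.1},
    "has": {"two": 0.6, "three": 0.4},
    "two": {"r's": 1.0},
}

print(generate("strawberry", table))  # strawberry has two r's
```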
NobodyElse@sh.itjust.works 21 hours ago
Because the LLMs are now being used to vibe code themselves.
four@lemmy.zip 22 hours ago
I think they’re trying to do that. But AI can still fail at that lol
MajorasMaskForever@lemmy.world 17 hours ago
From a technology standpoint, nothing is stopping them. From a business standpoint: hubris.
To put time and effort into creating traditional logic-based algorithms to compensate for this generic math model would be to admit what mathematicians and scientists have known for centuries: that models are good at finding patterns, but they do not explain why a relationship exists (if it exists at all). The technology is fundamentally flawed for the use cases OpenAI is trying to claim it can be used in, and programming around it would be to acknowledge that.
ImplyingImplications@lemmy.ca 18 hours ago
AI models aren’t programmed traditionally; they’re generated by machine learning. Essentially, the model is given test prompts and its answers are rated. The model’s calculations are adjusted so that its answer to each test prompt moves closer to the expected answer. Repeat this a few billion times with a few billion prompts and you’ve generated a model that scores very high on all the test prompts.
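A one-parameter caricature of that score-and-adjust loop (nothing like a real LLM’s scale, but the shape is the same):

```python
def model(x: float, weight: float) -> float:
    return weight * x  # stand-in for a network with billions of weights

weight = 0.0
test_prompts = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (prompt, expected answer)

for _ in range(1000):
    for x, expected in test_prompts:
        error = model(x, weight) - expected  # the "rating" of this answer
        weight -= 0.01 * error * x           # nudge toward the expected answer

print(round(weight, 3))  # ~2.0: scores well on the prompts it was tuned on
```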
Then someone asks it how many R’s are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process, which takes an enormous amount of time and computational power each time it’s done, only for people to once again quickly find some kind of prompt it doesn’t answer well.
There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn’t the issue; it’s trying to get one model to be good at absolutely everything.