I don’t think AI is being marketed as awesome at everything. It’s got obvious flaws. Right now it’s not good at stuff like chess, probably not even tic-tac-toe. It’s a language model; it’s hard for it to calculate the playing field. But AI is in development, and it might not need much to start playing chess.
AI including ChatGPT is being marketed as super awesome at everything, which is why that and similar AI is being forced into absolutely everything and being sold as a replacement for people.
Something marketed as AGI should be treated as AGI when proving it isn’t AGI.
PixelatedSaturn@lemmy.world 2 days ago
vinnymac@lemmy.world 2 days ago
What the tech is being marketed as and what it’s capable of are not the same, and likely never will be. In fact, products are very rarely marketed as they actually behave, and that’s intentional.
Everyone is still trying to figure out what these Large Reasoning Models and Large Language Models are even capable of; Apple, one of the largest companies in the world, just released a white paper this past week describing the “illusion of reasoning”. If it takes a scientific paper to understand what these models are and are not capable of, I assure you they’ll be selling snake oil for years after we fully understand every nuance of their capabilities.
TL;DR Rich folks want them to be everything, so they’ll be sold as capable of everything until we repeatedly refute they are able to do so.
PixelatedSaturn@lemmy.world 2 days ago
I think in many cases people intentionally or unintentionally disregard the time component here. AI is in development. I think what is being marketed here, just like in the stock market, is a piece of the future. I don’t expect the models I use to be perfect and never make mistakes, so I use them accordingly. They are useful for what I use them for, and I wouldn’t use them for chess. I don’t expect laundry detergent to be as perfect as it looks in the commercial either.
BassTurd@lemmy.world 2 days ago
Marketing does not mean functionality. AI is absolutely being sold to the public and enterprises as something that can solve everything. Obviously it can’t, but it’s being sold that way. I would bet the average person would be surprised by this headline solely on what they’ve heard about the capabilities of AI.
PixelatedSaturn@lemmy.world 2 days ago
I don’t think anyone is so stupid as to believe current AI can solve everything.
And honestly, I didn’t see any marketing material that would claim that.
BassTurd@lemmy.world 2 days ago
You are both completely overestimating the intelligence level of “anyone” and not living in the same AI-marketed universe as the rest of us. People are stupid. Really stupid.
petrol_sniff_king@lemmy.blahaj.zone 2 days ago
The CEO of Zoom (that is, the video calling software) wanted to train AIs on your work emails and chat messages to create AI personalities you could send to the meetings you’re paid to sit through, while you drink a Corona on the beach and receive a “summary” later.
The CEO of Zoom (that is, the video calling software) seems like a pretty stupid guy?
Yeah. Yeah, he really does. Really… fuckin’… dumb.
4am@lemm.ee 2 days ago
Really then why are they cramming AI into every app and every device and replacing jobs with it and claiming they’re saving so much time and money and they’re the best now the hardest working most efficient company and this is the future and they have a director of AI vision that’s right a director of AI vision a true visionary to lead us into the promised land where we will make money automatically please bro just let this be the automatic money cheat oh god I’m about to
PixelatedSaturn@lemmy.world 2 days ago
Those are two different things.
-
they are cramming AI everywhere because nobody wants to miss the boat and because it plays well in the stock market.
-
the people claiming it’s awesome and doing who-knows-what with it, replacing people, are mostly influencers and a few deluded people.
AI can help people in many different roles today, so it makes sense to use it. Even in roles where it’s not particularly useful, it makes sense to prepare for when it is.
petrol_sniff_king@lemmy.blahaj.zone 2 days ago
it makes sense to prepare for when it is.
Pfft, okay.
-
pelespirit@sh.itjust.works 2 days ago
Not to help the AI companies, but why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff? It’s obvious they’re shit at it, why do they answer anyway? It’s because they’re programmed by know-it-all programmers, isn’t it.
rebelsimile@sh.itjust.works 2 days ago
Because they’re fucking terrible at designing tools to solve problems, and they are obviously less and less able to pretend this is an omnitool that can do everything with perfect coherency (and if it isn’t working right, it’s because you’re not believing or paying hard enough).
MrJgyFly@lemmy.world 2 days ago
Or they keep telling you that you just have to wait it out. It’s going to get better and better!
ImplyingImplications@lemmy.ca 2 days ago
AI models aren’t programmed traditionally. They’re generated by machine learning. Essentially the model is given test prompts and then given a rating on its answer. The model’s calculations will be adjusted so that its answer to the test prompt will be closer to the expected answer. You repeat this a few billion times with a few billion prompts and you will have generated a model that scores very high on all test prompts.
Then someone asks it how many R’s are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process which takes an enormous amount of time and computational power each time it’s done, only for people to once again quickly find some kind of prompt it doesn’t answer well.
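The rate-and-adjust loop described above can be sketched as a toy illustration. To be clear, this is nothing like real LLM training (there are no gradients here, and the “model” is just a lookup table); it only shows why scoring well on the test prompts says nothing about prompts outside that set:

```python
# Toy sketch of the rate-and-adjust loop; the "model" is a stand-in lookup table.

def rate(answer: str, expected: str) -> float:
    """Score an answer against the expected one (1.0 = perfect)."""
    return 1.0 if answer == expected else 0.0

model = {}  # stand-in for billions of adjustable parameters

test_prompts = [("2+2", "4"), ("capital of France", "Paris")]

for _ in range(3):  # real training repeats this billions of times
    for prompt, expected in test_prompts:
        answer = model.get(prompt, "")
        if rate(answer, expected) < 1.0:
            model[prompt] = expected  # "adjust" toward the expected answer

print(model["2+2"])                               # → 4  (scores perfectly on test prompts)
print(model.get("r's in strawberry", "no idea"))  # → no idea  (fails on anything unseen)
```

The failure mode in the comment above falls out directly: the loop only ever improves answers to prompts it was rated on.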
There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn’t the issue. It’s trying to get one model to be good at absolutely everything.
PixelatedSaturn@lemmy.world 2 days ago
…or a simple counter to count the R’s in strawberry. Because that’s more difficult than one might think, and they are starting to do this now.
NobodyElse@sh.itjust.works 2 days ago
Because the LLMs are now being used to vibe code themselves.
fmstrat@lemmy.nowsci.com 1 day ago
This is where MCP comes in. It’s a protocol for LLMs to call standard tools. Basically the LLM figures out which tool to use from the context, works out the parameters from those the MCP server says are available, sends the JSON, and parses the response.
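That round-trip can be sketched in a few lines. Note this is a simplified illustration of the idea, not the actual MCP wire format; the tool name, registry shape, and JSON fields here are all made up:

```python
import json

# Hypothetical tool registry a server might advertise (simplified; not the real MCP schema).
tools = {
    "count_letters": {
        "params": ["word", "letter"],
        "fn": lambda word, letter: word.count(letter),
    }
}

# What the model might emit after picking a tool and its parameters from context.
call = json.loads('{"tool": "count_letters", "args": {"word": "strawberry", "letter": "r"}}')

# The server dispatches the call and returns a structured response for the model to parse.
tool = tools[call["tool"]]
result = tool["fn"](**call["args"])
response = json.dumps({"tool": call["tool"], "result": result})
print(response)  # → {"tool": "count_letters", "result": 3}
```

The key design point is that the model only has to produce and consume structured JSON; the actual computation happens in ordinary code.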
veroxii@aussie.zone 2 days ago
They are starting to do this. Most new models support function calling and can generate code to come up with math answers, etc.
driving_crooner@lemmy.eco.br 2 days ago
If you pay for ChatGPT you can connect it with WolframAlpha and it relays the math to it.
four@lemmy.zip 2 days ago
I think they’re trying to do that. But AI can still fail at that lol
CileTheSane@lemmy.ca 1 day ago
Because the AI doesn’t know what it’s being asked; it’s just an algorithm guessing what the next word in a reply is. It has no understanding of what the words mean.
“Why doesn’t the man in the Chinese room just use a calculator for math questions?”
MajorasMaskForever@lemmy.world 2 days ago
From a technology standpoint, nothing is stopping them. From a business standpoint: hubris.
To put time and effort into creating traditional logic-based algorithms to compensate for this generic math model would be to admit what mathematicians and scientists have known for centuries: models are good at finding patterns, but they do not explain why a relationship exists (if it exists at all). The technology is fundamentally flawed for the use cases OpenAI claims it can be used in, and programming around it would be to acknowledge that.