Yep. To add on, this is exactly what all the “AI haters” (myself included) are going on about when they say stuff like there isn’t any logic or understanding behind LLMs, or when they say they are stochastic parrots.
LLMs are incredibly good at generating text that works grammatically and reads like it was put together by someone knowledgeable and confident, but they have no concept of “truth” or reality. They just have a ton of absurdly complicated statistical data about how words/phrases/sentences relate to each other structurally. It’s all just really complicated math about how text is put together. It’s absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience.
Turns out that if you get enough of that data together, it makes a very convincing appearance of logic and reason. But it’s only an appearance.
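To put a toy example on what “math about how text is put together” means at the very bottom of the scale: the crudest version is just counting which word tends to follow which and sampling from those counts. Real LLMs are neural networks doing something enormously more sophisticated over tokens, so treat this as a sketch of the principle, not of any actual implementation:

```python
# Toy bigram "language model": pure statistics about which word follows
# which. Real LLMs are vastly more sophisticated, but the core move
# (predict the next token from patterns seen in text) is the same.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Pick a next word, weighted by how often it followed `prev`."""
    counts = following[prev]
    if not counts:  # dead end: `prev` never appeared mid-corpus
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# The output can look grammatical, but there is no "understanding"
# anywhere: just counts of which word tends to follow which.
word = "the"
out = [word]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```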
You can’t duct tape enough Speak & Spells together to rival the mass of the Sun and have it somehow just become something that outputs a believable human voice.
For an incredibly long time, ChatGPT would fail questions along the lines of “What’s heavier, a pound of feathers or three pounds of steel?” because it had seen the normal variation of the riddle with equal weights so many times. It has no concept of one being smaller than three. It just “knows” the pattern of the “correct” response.
It no longer fails that “trick”, but there’s significant evidence that OpenAI has set up custom handling for that riddle on top of the actual LLM, as it doesn’t take much work to trip it up with other slightly modified versions of classic riddles.
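Here’s that failure mode in miniature, as a toy sketch (this is obviously not how ChatGPT works internally; it just makes the “matched the memorized pattern, ignored the numbers” behavior explicit):

```python
# Toy "riddle answerer" that responds by surface pattern, not reasoning.
# It keys on the shape of the memorized riddle and never reads the numbers,
# so the three-pound variant gets the same canned answer.

def riddle_answer(question: str) -> str:
    q = question.lower()
    # Memorized pattern: "feathers vs. steel/bricks/lead" riddles
    # always end with "they weigh the same".
    if "feathers" in q and any(m in q for m in ("steel", "bricks", "lead")):
        return "They weigh the same!"
    return "I don't know."

# Correct, by luck of the memorized pattern:
print(riddle_answer("What's heavier, a pound of feathers or a pound of steel?"))
# Wrong, because the pattern matched and the numbers never mattered:
print(riddle_answer("What's heavier, a pound of feathers or three pounds of steel?"))
```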
A lot of supporters will counter “Well I just ask it to tell the truth, or tell it that it’s wrong, and it corrects itself”, but I’ve seen plenty of anecdotes in the opposite direction, with ChatGPT insisting that its hallucination was fact. It doesn’t have any concept of true or false.
neatchee@lemmy.world 7 months ago
The shame of it is that despite this limitation LLMs have very real practical uses that, much like cryptocurrencies and NFTs did to blockchain, are being undercut by hucksters.
Tesla has done the same thing with autonomous driving that they did with range estimates. They claimed it was something it’s not (fanboys don’t @ me about semantics) and made the REAL thing less trusted and even slower to come to market.
Drives me crazy.
FlashMobOfOne@lemmy.world 7 months ago
Yup, and I hate that.
I really would like to one day just take road trips everywhere without having to actually drive.
neatchee@lemmy.world 7 months ago
Right? Waymo is already several times safer than humans and Tesla’s garbage, and municipalities keep refusing them. Trust is a huge problem for them.
And yes, haters, I know that they still have problems in inclement weather but that’s kinda the point: we would be much further along if it weren’t for the unreasonable hurdles they keep facing because of fear created by Tesla
FlashMobOfOne@lemmy.world 7 months ago
Hadn’t heard of this. Thanks!
yessikg@lemmy.blahaj.zone 7 months ago
Trains are really good for that
FlashMobOfOne@lemmy.world 7 months ago
You can’t road trip in a train.
humorlessrepost@lemmy.world 7 months ago
For road trips (i.e. interstates and divided highways), GM’s Super Cruise is pretty much there unless you go through a construction zone.
FlashMobOfOne@lemmy.world 7 months ago
I’ll look into that when my Kia passes away. Thank you!