It’s not literally guessing, because guessing implies it understands there’s a question and is trying to answer that question. It’s not even doing that. It’s just generating words that you could expect to find nearby.
DarrinBrunner@lemmy.world 3 days ago
I think it’s worse when they get it right only some of the time. It’s not a matter of opinion; it shouldn’t change its “mind”.
The fucking things are useless for that reason, they’re all just guessing, literally.
merc@sh.itjust.works 2 days ago
HugeNerd@lemmy.ca 3 days ago
they’re all just guessing, literally
They’re literally not.
m0darn@lemmy.ca 3 days ago
Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?
Iconoclast@feddit.uk 2 days ago
It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output, not factually correct output.
It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.
So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.
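The "patterns and probabilities" point can be sketched in a few lines. This is a toy stand-in for a real model, with made-up probabilities; real LLMs compute these distributions over tens of thousands of tokens with a neural network, but the final step really is just picking a next token from a probability distribution:

```python
import random

# Toy "language model": next-word probabilities conditioned on the
# previous two words. All numbers here are invented for illustration.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Spain": 0.3, "cheese": 0.1},
}

def next_word(context):
    # Sample a continuation weighted by its probability -- no lookup
    # of facts, no notion of a "question" being answered.
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word(("capital", "of")))  # usually "France", sometimes not
```

Notice that "cheese" is a live possibility: the model isn't wrong about geography, because it was never doing geography in the first place.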
vii@lemmy.ml 2 days ago
It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.
I know some humans that applies to
SuspciousCarrot78@lemmy.world 2 days ago
A fair point, but often overstated, I feel. And it overlooks something:
Language itself encodes meaning. If you can statistically predict the next word, then you are implicitly modeling the structure of ideas, relationships, and concepts carried by that language.
You don’t get coherence, useful reasoning, or consistently relevant answers from pure noise. The patterns reflect real regularities in the world, distilled through human communication.
Yes, that doesn’t mean an LLM “understands” in the human sense, or that it’s infallible.
But reducing it to “just autocomplete” misses the fact that sufficiently rich pattern modeling can approximate aspects of reasoning, abstraction, and knowledge use in ways that are practically meaningful, even if the underlying mechanism is different from human thought.
TL;DR: it’s a bit more than just a fancy spell check. ICBW and YMMV
KeenFlame@feddit.nu 2 days ago
Yes, it guesstimates. What is wrong with you, to argue like that about semantics?
vii@lemmy.ml 2 days ago
This gets very murky very fast when you start to think about how humans learn and process; we’re just meaty pattern-matching machines.
HugeNerd@lemmy.ca 2 days ago
In people, even animals. In a pile of disorganized bits and bytes in a piece of crap? No.
Tetragrade@leminal.space 3 days ago
Same takeaway as the article (everyone read the article, right?).
You should think about this yourself: can you recall instances when you were asked the same question at different points in time? How did you respond?
CileTheSane@lemmy.ca 2 days ago
Having read the article (you read the article right?) what gave you the impression the AI was asked the question at different points in time?
Tetragrade@leminal.space 2 days ago
The AI was asked the same question repeatedly and gave different answers, due to its randomised structure.
People will also often do this, but because our actions are strongly influenced by time-dependent stuff (like sense perception and short-term memory contents), you need to ask at different times.
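That "randomised structure" is just sampled decoding: each run draws the next token from a distribution rather than always taking the single most likely one. A minimal sketch, with hypothetical probabilities standing in for a real model's output distribution:

```python
import random

# Hypothetical next-token distribution for one fixed prompt.
continuations = {"4": 0.7, "5": 0.2, "four": 0.1}

def answer():
    # Sampled (non-greedy) decoding: draw from the distribution.
    words = list(continuations)
    weights = [continuations[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Ask the identical "question" 100 times.
answers = {answer() for _ in range(100)}
print(answers)  # almost always more than one distinct answer
```

The prompt never changes, yet the answers do; the variation comes entirely from the sampling step, not from the model "reconsidering" anything.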
CileTheSane@lemmy.ca 2 days ago
My answer to this question will not change if you ask me a year from now, because as OP said this is not a matter of opinion; there is a factually correct answer.
XLE@piefed.social 2 days ago
Even if you retooled the LLM to not randomize the output it generates, it can still produce contradictory outputs based on a slightly reworded question. I’m talking about a misspelling or different punctuation: things that simply wouldn’t cause a person to change their answer.
(And that’s assuming the LLM just got started from scratch. If you had any previous conversation with it, it could have influenced the output as well. It’s such a mess.)
Iconoclast@feddit.uk 2 days ago
Is cruise control useless because it doesn’t drive you to the grocery store? No. It’s not supposed to. It’s designed to maintain a steady speed - not to steer.
Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They’re not useless - we’re just using them off-label and then complaining when they fail at something they were never built to do.
Urist@leminal.space 2 days ago
Language without meaning is garbage. Like, literal garbage, useful for nothing. Language is a tool used to express ideas, if there are no ideas being expressed then it’s just a combination of letters.
Which is exactly why LLMs are useless.
Iconoclast@feddit.uk 2 days ago
800 million weekly ChatGPT users disagree with that.
RichardDegenne@lemmy.zip 2 days ago
And there are 1.3 billion smokers in the world according to the WHO.
Does that make cigarettes useful?
Urist@leminal.space 2 days ago
Those users are being harmed by it, not benefited. That isn’t useful, it’s a social disease.
tigeruppercut@lemmy.zip 2 days ago
But natural language in service of what? If they can’t produce answers that are correct, what’s the point of using them? I can get wrong answers anywhere.
Threeme2189@sh.itjust.works 2 days ago
As OP said, LLMs are really good at generating text that is fluid and looks natural to us. So if you want that kind of output, LLMs are the way to go.
Not all LLM prompts ask factual questions and not all of the generated answers need to be correct.
Are poems, songs, stories or movie scripts ‘correct’?
I’m totally against shoving LLMs everywhere, but they do have their uses. They are really good at this one thing.
tigeruppercut@lemmy.zip 2 days ago
It’s a valid point that they can produce natural language. The Turing Test has been a thing for a while, after all. But while the language sounds natural, can they create anything of value? Are the poems or stories they make worth anything? It’s not like humans don’t create shitty art, so I guess generating random soulless crap is similar to that.
The value of language produced by something that can’t understand the reason for language is an interesting question I suppose.
Iconoclast@feddit.uk 2 days ago
I’m not here defending the practical value of these models. I’m just explaining what they are and what they’re not.
XLE@piefed.social 2 days ago
You’re definitely running around Lemmy defending AI, Iconoclast… Might as well be honest about it
iopq@lemmy.world 2 days ago
Some of them can produce the correct answer. If we do the test next year and they do better than humans, isn’t that progress?