Who has claimed that LLMs have the capacity to reason?
Comment on I'm looking for an article showing that LLMs don't know how they work internally
glizzyguzzler@lemmy.blahaj.zone 10 months ago
Can't help myself, so here's a rant about people asking LLMs to "explain their reasoning," which is impossible because they can never reason (not meant as an attack on OP, just on the "LLMs think and reason" people and the companies that spout it):
LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.
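If you want to see how bare that mechanism is, here's a toy sketch of next-token selection in NumPy. The vocabulary and weights below are made up for illustration; a real LLM does the same final step with billions of weights:

```python
import numpy as np

# Toy vocabulary and made-up weights; a real LLM has billions of
# parameters, but the last step is the same shape of math.
vocab = ["the", "cat", "sat", "on", "mat"]
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)               # hidden state after the model's layers
W_out = rng.normal(size=(8, len(vocab)))  # output projection matrix

logits = hidden @ W_out                              # one matrix multiplication
probs = np.exp(logits) / np.exp(logits).sum()        # softmax -> one probability per token

next_token = vocab[int(np.argmax(probs))]            # pick the most likely next token
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```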
Anything you read or hear about LLMs or "AI" getting "asked questions," "explaining their reasoning," or "thinking" is just AI propaganda, meant to make you believe they're doing something LLMs literally can't do but that people sure wish they could.
In this case it sounds like people who don't understand how LLMs work are eating that propaganda up and approaching LLMs like there's something in there to talk to or learn from.
If you waste egregious amounts of energy putting everything that's ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.
It'd be impressive if the environmental toll of making and using the matrices weren't critically bad.
TL;DR: LLMs can never think or reason; anyone talking about them thinking or reasoning is bullshitting. They utilize almost everything that's ever been typed to give (occasionally) reasonably useful outputs that are the most basic-bitch shit, because that's the most likely next word, at the cost of environmental disaster.
AnneBonny@lemmy.dbzer0.com 10 months ago
theparadox@lemmy.world 10 months ago
More than enough people who claim to know how it works think it might be "evolving" into a sentient being inside its little black box. Example from a conversation I gave up on… sh.itjust.works/comment/18759960
AnneBonny@lemmy.dbzer0.com 10 months ago
Maybe I should rephrase my question:
Outside of comment sections on the internet, who has claimed or is claiming that LLMs have the capacity to reason?
theunknownmuncher@lemmy.world 10 months ago
I don't want to brigade, so I'll put my thoughts here. The linked comment is making the same mistake about self-preservation that people make when they ask an LLM to "show its work" or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.
Just like it's not actually an AI assistant, but is trained and prompted to output text matching what an AI assistant would be expected to say, if it's expected to pursue self-preservation, then it will output text that matches that expectation. Its output is always "fake."
That doesn't mean there isn't potentially a real element of self-preservation, though; you'd just need to dig and trace through the network to show it, not rely on the text output.
adespoton@lemmy.ca 10 months ago
The study being referenced explains in detail why they can’t. So I’d say it’s Anthropic who stated LLMs don’t have the capacity to reason, and that’s what we’re discussing.
The popular media goes on and on, conflating AI with AGI and synthetic reasoning.
theunknownmuncher@lemmy.world 10 months ago
You're confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it can't reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behaviors like planning ahead.
adespoton@lemmy.ca 10 months ago
No, they really don’t. It’s a large language model. Input cues instruct it as to which weighted path through the matrix to take. Those paths are complex enough that the human mind can’t hold all the branches and weights at the same time. But there’s no planning going on; the model can’t backtrack a few steps, consider different outcomes and run a meta analysis. Other reasoning models can do that, but not language models; language models are complex predictive translators.
AnneBonny@lemmy.dbzer0.com 10 months ago
How would you prove that someone or something is capable of reasoning or thinking?
glizzyguzzler@lemmy.blahaj.zone 10 months ago
You can prove it's not by looking at what it's doing and seeing that it's matrix multiplication. Much easier way to go about it.
whaleross@lemmy.world 10 months ago
People that cannot do matrix multiplication do not possess the basic concepts of intelligence now? Or is software that can do matrix multiplication intelligent?
glizzyguzzler@lemmy.blahaj.zone 10 months ago
So close: LLMs work via matrix multiplication, which is well understood by many meat bags, and matrix math can't think. If a meat bag can't do matrix math, that's okay, because the meat bag doesn't work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something
futatorius@lemm.ee 10 months ago
People that cannot do matrix multiplication do not possess the basic concepts of intelligence now?
As a mathematician (at least by education), I think that’s a great definition, yes.
theunknownmuncher@lemmy.world 10 months ago
Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn’t post a relevant or complete thought
glizzyguzzler@lemmy.blahaj.zone 10 months ago
Improper comparison; an audio file isn't the basic action on the data, it is the data. The audio codec is the basic action on the data.
“An LLM model isn’t really an LLM because it’s just a series of numbers”
But the actions that turn the series of numbers into something of value (the audio codec for an audio file, matrix math for an LLM) can be analyzed.
And clearly matrix multiplication cannot reason any better than an audio codec algorithm can. It's matrix math; it's cool, we love matrix math. Really big matrix math is really cool and makes real-sounding stuff. But it's just matrix math, and that's how we know it can't think.
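To make "just matrix math" concrete, here's a toy single attention step (the core transformer operation) in NumPy, with random stand-in weights instead of anything trained:

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d = 4, 8                      # 4 tokens, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d))      # token embeddings
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v    # three matrix multiplications
scores = Q @ K.T / np.sqrt(d)          # how strongly each token attends to the others
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ V                      # weighted sum: yet another matmul

print(out.shape)  # (4, 8): same shape in, same shape out, all matrix operations
```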
peoplebeproblems@midwest.social 10 months ago
People don’t understand what “model” means. That’s the unfortunate reality.
random_character_a@lemmy.world 10 months ago
Yeah. That's because people's unfortunate reality is a "model".
adespoton@lemmy.ca 10 months ago
They walk down runways and pose for magazines. Do they reason? Sometimes.
IncogCyberspaceUser@lemmy.world 10 months ago
But why male models?
futatorius@lemm.ee 10 months ago
Because Blue Steel.
Treczoks@lemmy.world 10 months ago
I've read that article. They used something they called an "MRI for AIs" and checked, e.g., how the AI handled math questions, then asked the AI how it came to that answer, and the pathways actually differed. While the AI talked about using a textbook method, it actually took a different approach. That's what I remember of that article.
But yes, it exists, and it is science, not TikTok.
lgsp@feddit.it 10 months ago
Thank you. I found the article and linked it in the OP.
theunknownmuncher@lemmy.world 10 months ago
It's true that LLMs aren't "aware" of what internal steps they are taking, so asking an LLM how it reasoned out an answer will just output text that statistically sounds right based on its training set, but to say something like "they can never reason" is provably false.
It's obvious that you have a bias and desperately want reality to confirm it, but there's been significant research and progress in tracing the internals of LLMs that show logic, planning, and reasoning. Neural networks are very powerful; after all, you are one too. Can you reason?
ohwhatfollyisman@lemmy.world 10 months ago
but there's been significant research and progress in tracing the internals of LLMs that show logic, planning, and reasoning.
would there be a source for such research?
theunknownmuncher@lemmy.world 10 months ago
anthropic.com/…/tracing-thoughts-language-model for one, the exact article OP was asking for
ohwhatfollyisman@lemmy.world 10 months ago
but this article suggests that llms do the opposite of logic, planning, and reasoning?
quoting:
Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning,
are there any sources which show that llms use logic, conduct planning, and reason (as was asserted in the 2nd level comment)?
glizzyguzzler@lemmy.blahaj.zone 10 months ago
Too deep on the AI propaganda there; it's completing the next word. You can give the base LLM umpteen layers to make complicated connections; it still ain't thinking.
The LLM corpos trying to get nuclear plants to power their gigantic data centers, while AAA devs aren't trying to buy nuclear plants, says that's a straw man and that you're simultaneously also wrong.
Using a pre-trained and memory-crushed LLM that can run on a small device won't take up too much power. But that's not what you're thinking of. You're thinking of the LLM only accessible via ChatGPT's API, which has a yuge context length and massive matrices that need hilariously large amounts of RAM and compute power to execute. And it's still a facsimile of thought.
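Rough back-of-envelope for why long context needs that much memory (the layer count and dimensions below are guesses at a generic 7B-class model, not any specific product):

```python
# Back-of-envelope memory estimate for a 7B-parameter, Llama-class model.
params = 7e9
weight_bytes = params * 2               # fp16: 2 bytes per weight -> ~14 GB

# The KV cache grows linearly with context length:
# 2 (keys + values) * layers * seq_len * hidden_dim * bytes_per_value
layers, hidden_dim, seq_len = 32, 4096, 128_000    # a long "yuge" context
kv_bytes = 2 * layers * seq_len * hidden_dim * 2

print(f"weights: {weight_bytes / 1e9:.0f} GB, KV cache: {kv_bytes / 1e9:.0f} GB")
# weights: 14 GB, KV cache: ~67 GB at 128k context, before any batching
```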
It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain’t smart, and they ain’t worth destroying the planet.
theunknownmuncher@lemmy.world 10 months ago
it’s completing the next word.
Facts disagree, but you’ve decided to live in a reality that matches your biases despite real evidence, so whatever 👍
glizzyguzzler@lemmy.blahaj.zone 10 months ago
It's literally tokens. Doesn't matter if it completes the next word or the next phrase; it's still completing the next most likely token 😎😎 Can't think, can't reason, can only witch's-brew up a facsimile of something done before.
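The "literally tokens" part is easy to check yourself; here's a minimal sketch using OpenAI's tiktoken tokenizer library (the encoding name is just one common choice):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")    # tokenizer used by several OpenAI models
ids = enc.encode("witch's brew facsimile")
print(ids)                                    # a short list of integer token IDs
print([enc.decode([i]) for i in ids])         # tokens are chunks, not always whole words
```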
A_Union_of_Kobolds@lemmy.world 10 months ago
[deleted]
theunknownmuncher@lemmy.world 10 months ago
ollama is not an LLM, but a program used to run them. What model are you running?
A_Union_of_Kobolds@lemmy.world 10 months ago
Yes, I'm well aware, thank you.
Gemma3 was latest when I installed it.
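For anyone who wants to poke at a local model the same way, here's a minimal sketch of hitting ollama's local HTTP API from Python, assuming the default port and that a gemma3 model has already been pulled:

```python
import json
import urllib.request

# ollama serves a local HTTP API on port 11434 by default.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "gemma3",               # whatever model tag you pulled
        "prompt": "Why is the sky blue?",
        "stream": False,                 # one JSON blob instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```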
just_another_person@lemmy.world 10 months ago
It’s a developer option that isn’t generally available on consumer-facing products. It’s literally just a debug log that outputs the steps to arrive at a response, nothing more.
It's not about novel ideation or reasoning (programmatic neural networks don't do that), but just an output of statistical data that says "Step 1 was 90% certain, Step 2 was 89% certain…" etc.
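Something like that readout can be reproduced by hand; here's a hedged sketch pulling per-step token probabilities out of a small Hugging Face model (gpt2 is just a stand-in, and this shows sampling statistics, not actual reasoning steps):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs, max_new_tokens=3,
    output_scores=True, return_dict_in_generate=True,
    do_sample=False,                                  # greedy: always the top token
)

# Each step's "certainty" is just the softmax probability of the chosen token.
for step, scores in enumerate(out.scores, 1):
    probs = torch.softmax(scores[0], dim=-1)
    token_id = int(probs.argmax())
    print(f"Step {step}: {tok.decode(token_id)!r} was {probs[token_id]:.0%} certain")
```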
WolfLink@sh.itjust.works 10 months ago
The environmental toll doesn't have to be that bad. You can get decent results from a single high-end gaming GPU.
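As a sketch of what that looks like in practice, this loads a 7B-class model 4-bit quantized via bitsandbytes so it fits in the VRAM of one gaming GPU (the model name is just an example; any similar-size model works):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "mistralai/Mistral-7B-Instruct-v0.2"    # example 7B-class model
bnb = BitsAndBytesConfig(load_in_4bit=True)          # ~0.5 bytes/weight instead of 2

tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb,
    device_map="auto",                               # place layers on the GPU automatically
)

inputs = tok("Write a haiku about matrices.", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```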
glizzyguzzler@lemmy.blahaj.zone 10 months ago
You can, but the stuff that's really useful (very competent code completion) needs gigantic context lengths that even rich peeps with $2k GPUs can't handle. And that's ignoring the training power and hardware costs it took to get the models in the first place.
Techbros chasing VC funding are pushing LLMs to the physical limit of what humanity can provide, power- and hardware-wise. Way less hype and letting them come to market organically in 5-10 years would give the LLMs a lot more power efficiency at the current context and depth limits. But that ain't this timeline; we just got VC money looking to buy nuclear plants and fascists trying to subdue the US for the techbro oligarchs, womp womp