The study being referenced explains in detail why they can’t. So I’d say it’s Anthropic who stated LLMs don’t have the capacity to reason, and that’s what we’re discussing.
The popular media tends to go on and on, conflating AI with AGI and synthetic reasoning.
theparadox@lemmy.world 4 days ago
More than enough people who claim to know how it works think it might be “evolving” into a sentient being inside its little black box. Example from a conversation I gave up on… sh.itjust.works/comment/18759960
theunknownmuncher@lemmy.world 4 days ago
I don’t want to brigade, so I’ll put my thoughts here. The linked comment is making the same mistake about self-preservation that people make when they ask an LLM to “show its work” or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.
Just like how it’s not actually an AI assistant, but rather trained and prompted to output the text an AI assistant would be expected to respond with, if it is expected that it would pursue self-preservation, then it will output text that matches that. Its output is always “fake”.
That doesn’t mean there isn’t a real potential element of self-preservation, but you’d need to dig and trace through the network to show it, not rely on the text output.
AnneBonny@lemmy.dbzer0.com 4 days ago
Maybe I should rephrase my question:
Outside of comment sections on the internet, who has claimed or is claiming that LLMs have the capacity to reason?