cm0002@lemmy.world 5 days ago
That bit about how it turns out they aren’t actually just predicting the next word is crazy and kinda blows the whole “It’s just a fancy text auto-complete” argument out of the water IMO
Voroxpete@sh.itjust.works 5 days ago
It really doesn’t. You’re just describing the “fancy” part of “fancy autocomplete.” No one was ever really suggesting that they only predict the next word; if that were the case, they would just be autocomplete, nothing fancy about it.
What’s being conveyed by “fancy autocomplete” is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some random noise mixed in. More noise creates more “creative” (meaning more random, less probable) outputs. They do not actually “think” as we understand thought. This can be seen clearly in the examples given in the article, especially those to do with math. The model is throwing together elements that are statistically proximate to the prompt. It’s not actually applying a structured, logical method the way humans can be taught to.
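To make the “noise” knob concrete, this is roughly what temperature sampling looks like. A toy sketch (the vocabulary and logits are invented purely for illustration, not taken from any real model):

```python
import numpy as np

# Toy next-token scores (logits) for a few candidate words.
# These numbers are made up purely for illustration.
vocab = ["cat", "dog", "sat", "quantum"]
logits = np.array([2.0, 1.5, 1.0, -1.0])

def sample(logits, temperature, rng):
    """Sample one token after scaling logits by temperature.

    Low temperature -> sharper distribution: the most probable token
    almost always wins. High temperature -> flatter distribution:
    less probable ("more creative") tokens get picked more often.
    """
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(vocab), p=probs)

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample(logits, t, rng)] for _ in range(1000)]
    print(t, {w: picks.count(w) for w in vocab})
```

Crank the temperature and the rare words start showing up; that’s the entire “creativity” dial.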
FourWaveforms@lemm.ee 4 days ago
Unfortunately, these articles are often written by people who don’t know enough to realize they’re missing important nuances.
datalowe@lemmy.world 4 days ago
It also doesn’t help that the AI companies deliberately use language to make their models seem more human-like and cogent. Saying that the model, e.g., “thinks” in “conceptual spaces” is misleading, imo. It abuses our innate tendency to anthropomorphize, which I guess is very fitting for a company with that name.
On this point I can highly recommend this open access and even language-wise accessible article: link.springer.com/article/…/s10676-024-09775-5 (the authors also appear on an episode of the Better Offline podcast)
aesthelete@lemmy.world 4 days ago
People are generally shit at understanding probabilities, and even when they have a fairly strong math background they tend to explain probabilistic outcomes through anthropomorphism rather than doing the more difficult and “think-painy” statistical analysis that would be required to know if there was anything more to it.
I myself start to have thoughts that Balatro is purposefully screwing me over or feeding me outcomes, when it’s just randomness and probability, as stated.
Ultimately, it’s easier (and more fun) for us to reason that way and it largely serves us better in everyday life.
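To the Balatro point: fair randomness produces streaks that feel rigged all the time. A quick sketch (the 40% success rate and the run lengths are made-up numbers, not Balatro’s actual odds):

```python
import random

random.seed(42)

def longest_losing_streak(draws):
    """Length of the longest run of consecutive failures."""
    longest = current = 0
    for success in draws:
        current = 0 if success else current + 1
        longest = max(longest, current)
    return longest

# 1000 runs of 50 draws each, 40% success per draw: how many runs
# contain a losing streak of 8 or more?
bad = sum(
    longest_losing_streak([random.random() < 0.4 for _ in range(50)]) >= 8
    for _ in range(1000)
)
print(f"{bad}/1000 perfectly fair runs still had an 8+ losing streak")
```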
But these things are entire casinos’ worth of probability and statistics in and of themselves, and the people developing them want desperately to believe that they are something more than pseudorandom probabilistic fancy autocomplete engines.
Add the difficulty of getting someone to understand how something works when their salary depends on them not understanding it to the existing inability of humans to reason probabilistically, and the AGI-from-LLM delusion becomes near impossible to shake for some folks.
I wouldn’t be surprised if this AI hype bubble yields a cult in the end.
reev@sh.itjust.works 4 days ago
Genuine question regarding the rhyme thing: it can be argued that “predicting backwards isn’t very different”, but you can’t attribute generating the rhyme first to noise, right? So how does it “know” (for lack of a better word) to generate the rhyme first?
dustyData@lemmy.world 4 days ago
It already knows which words are, statistically, more commonly rhymed with each other, from the massive list of training poems. This is what the massive data sets are for. One of the interesting things is that it’s not predicting backwards, exactly; it’s mathematically converging on the response to the prompt, all the words at the same time.
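A crude sketch of what “knowing which words rhyme, statistically” can mean: just counting which line-ending words co-occur in couplets. The corpus here is five hand-written pairs standing in for millions of training poems:

```python
from collections import Counter

# Stand-in "training corpus": ending-word pairs from rhymed couplets.
couplets = [
    ("night", "light"), ("light", "bright"), ("night", "bright"),
    ("day", "way"), ("way", "stay"),
]

pair_counts = Counter(tuple(sorted(pair)) for pair in couplets)

def likely_rhymes(word):
    """Rank rhyme candidates for `word` purely by co-occurrence counts."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == word:
            scores[b] += n
        elif b == word:
            scores[a] += n
    return scores.most_common()

print(likely_rhymes("night"))  # [('light', 1), ('bright', 1)]
```

No phonetics, no concept of sound anywhere, just counts.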
Carrolade@lemmy.world 5 days ago
Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.
Compare that to a human, who forms an abstract thought and then translates it into words. Which words I use has little to do with which other words I’ve used, except to make sure I’m following the rules of grammar.
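For what it’s worth, “predict a word in the middle from the text on both sides” is a standard training setup, called masked language modelling. A sketch using the Hugging Face transformers library (assumes it’s installed and downloads a small model on first run):

```python
from transformers import pipeline

# A masked language model predicts a word from the context on BOTH
# sides -- the "predict a word in the middle" framing above.
fill = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill("The cat [MASK] on the mat."):
    print(candidate["token_str"], round(candidate["score"], 3))
# The top fillers ("sat", "was", ...) are just the statistically
# likeliest words given the surrounding words.
```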
Womble@lemmy.world 5 days ago
Interesting that…

Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
Carrolade@lemmy.world 4 days ago
Yeah, I caught that too; I’d be curious to know more about what specifically they meant by that.
Being able to link all of the words that have a similar meaning, say, nearby, close, adjacent, proximal, side-by-side, etc., and realize they all share something in common could be done in many ways. Some would require an abstract understanding of what spatial distance actually is, an understanding of physical reality. Others would not: one could simply make use of word adjacency, noticing that all of these words are frequently used alongside certain other words. This would not be abstract; it’d be more of a simple sum of clear correlations. You could call this mathematical framework a universal language if you wanted.
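The word-adjacency version is easy to sketch: represent each word by counts of the words it appears near, and “similar meaning” falls out as vector similarity, with no concept of physical space anywhere. The counts below are invented for illustration:

```python
import math

# Toy co-occurrence counts: how often each word appears near a few
# context words. All numbers invented for illustration.
cooccur = {
    "nearby":   {"house": 9, "store": 8, "park": 7},
    "adjacent": {"house": 8, "store": 9, "park": 6},
    "purple":   {"rain": 5, "sunset": 4, "house": 1},
}

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    norm = math.sqrt(sum(x * x for x in u.values()))
    norm *= math.sqrt(sum(x * x for x in v.values()))
    return dot / norm

print(cosine(cooccur["nearby"], cooccur["adjacent"]))  # ~0.99: similar contexts
print(cosine(cooccur["nearby"], cooccur["purple"]))    # ~0.10: different contexts
```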
Ultimately, a person learns meaning and then applies language to it. As a baby, I see my mother and know she is something that exists. Then I learn the word “mother” and apply it to her. The abstract comes first. Can an LLM do something similar despite having never seen anything that isn’t a word or number?
MTK@lemmy.world 4 days ago
Yeah, but I think this is still the same, just not a single language. It might think in some mix of languages (which you can actually see sometimes if you push certain LLMs to their limits and they start producing mixed-language responses).
But it still has limitations because of the structure of language. This is actually a thing that humans have as well: the limiting of abstract thought through internal-monologue thinking.
TimewornTraveler@lemm.ee 3 days ago
wow, an AI researcher overhyping his own product. he’s just waxing poetic.
we don’t even have a good sense of what thought IS, please tell Claude to call the philosophers because apparently he’s figured out consciousness
Shanmugha@lemmy.world 4 days ago
It doesn’t. Who the hell cares if someone allowed it to break “predict whole text” into “predict part by part”, and then, “with rhyme, we start at the end”? It sounds like a naive (not as in “simplistic”, but as in “most straightforward”) way to code this, so given the task of writing an automatic poetry producer, I would start with something similar. The whole thing still stands as fancy auto-complete.
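For the record, that naive end-first poetry producer really is about this simple. A toy sketch (the rhyme groups and line template are invented for illustration):

```python
import random

random.seed(7)

# Hand-rolled rhyme groups and a line template, invented for illustration.
RHYMES = [["bright", "night", "light"], ["sea", "free", "me"]]
TEMPLATE = "The {adj} {noun} drifts toward the {end}"

def couplet():
    """Pick the rhyming end words FIRST, then build each line around them."""
    end_a, end_b = random.sample(random.choice(RHYMES), 2)
    lines = []
    for end in (end_a, end_b):
        lines.append(TEMPLATE.format(
            adj=random.choice(["pale", "quiet", "restless"]),
            noun=random.choice(["moon", "heart", "river"]),
            end=end,
        ))
    return "\n".join(lines)

print(couplet())
```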
LarmyOfLone@lemm.ee 4 days ago
But how is this different from your average redditor?
Shanmugha@lemmy.world 4 days ago
Redditor as “a person active on Reddit”? I don’t see where I was talking about humans. Or am I misunderstanding the question?
pelespirit@sh.itjust.works 5 days ago
I read an article that said it can “think” in small chunks. They don’t know how much, though. This was also months ago; it’s probably expanded by now.
FunnyUsername@lemmy.world 5 days ago
anything that claims it “thinks” in any way I immediately dismiss as an advertisement of some sort. these models are doing very interesting things, but it is in no way “thinking” as a sentient mind does.
LarmyOfLone@lemm.ee 4 days ago
You know they don’t think, even though “It’s a peculiar truth that we don’t understand how large language models (LLMs) actually work”?
It’s truly shocking to read this from a mess of connected neurons and synapses like yourself. You’re simply doing fancy prediction of the next word /s
pelespirit@sh.itjust.works 5 days ago
I wish I could find the article. It was by researchers, and they were freaked out just as much as anyone else. The evidence that it “thought” was only slightly over chance, not some huge revolutionary leap.
stephen01king@lemmy.zip 4 days ago
Anybody who claims they don’t “think”, before we even completely figure out how they work or even how human thought works, is just spreading anti-AI sentiment beyond what is logical.
You should become a better example than an AI by arguing based only on facts, rather than things you hallucinate, if you want to prove your position on this matter.
LarmyOfLone@lemm.ee 4 days ago
I mean, it implies that they CAN start with the conclusion, or the “thought”, and then generate the text to verbalize it.
It’s shocking to what lengths humans will go to explain how their wetware neural network is fundamentally different and why it’s impossible for LLMs to think or reason in any way. Honestly, LLMs teach us more about human intelligence (or the lack thereof) than about machine intelligence. Like Obi-Wan said, “The ability to speak does not make one intelligent”, haha.