You've missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person there is irrelevant; they could be replaced with a speaker or computer terminal.
Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms. Rather, one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.
If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can't be very good at anything are not comforting to me at all.
kassiopaea@lemmy.blahaj.zone 1 day ago
This. I often see people shitting on AI as "fancy autocomplete" or joking about how they get basic things incorrect like this post, but completely discount how incredibly fucking capable they are in every domain that actually matters. That's what we should be worried about... what does it matter that it doesn't "work the same" if it still accomplishes the vast majority of the same things? The fact that we can get something that even approximates logic and reasoning ability from a deterministic system is terrifying on implications alone.
Knock_Knock_Lemmy_In@lemmy.world 1 day ago
Why doesn't the LLM know to write (and run) a program to calculate the number of characters?
I feel like I'm missing something fundamental.
OsrsNeedsF2P@lemmy.ml 1 day ago
You didn't get good answers, so I'll explain.
First, an LLM can easily write a program to calculate the number of "r"s. If you ask an LLM to do this, you will get the code back. But the website ChatGPT.com has no way of executing this code, even if it was generated.
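A minimal sketch of the kind of program an LLM would typically hand back for this request (plain Python, no dependencies; the function name is just illustrative):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return sum(1 for ch in word.lower() if ch == letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```

The code itself is trivial; the point is that generating it and executing it are two different capabilities, and the chat interface only does the former.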
The second explanation is how LLMs work. They work on the word level (technically the token level, but think word). They don't see letters; the AI behind it literally can only see words. The way it generates output is it starts typing words and then guesses which word is most likely to come next. So it literally does not know how many "r"s are in strawberry.
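To make "sees tokens, not letters" concrete, here is a hedged sketch using the tiktoken tokenizer library as an example; the exact split depends on which tokenizer a given model uses, so treat the output as illustrative:

```python
import tiktoken  # tokenizer library used for several OpenAI models

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")

# The model is fed integer token IDs, not individual characters.
print(token_ids)

# Decode each ID separately: the word typically comes out as a few
# multi-character chunks, never as a sequence of single letters.
print([enc.decode([t]) for t in token_ids])
```

Nothing in that input tells the model how many "r"s the underlying characters contain; that information is simply not part of what it sees.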
The impressive part is how good this "guessing what word comes next" is at answering more complex questions.

Knock_Knock_Lemmy_In@lemmy.world 23 hours ago
But why can't "query the python terminal" be trained into the LLM? It just needs some UI training.
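That is roughly what tool calling does: the model is trained to emit a request to run code, and the host application (not the model) executes it and feeds the result back. A toy sketch of the pattern, where the run_python tag and helper are hypothetical, not any real model's API:

```python
import re
import subprocess
import sys

def run_python(code: str) -> str:
    # Hypothetical host-side helper: run model-emitted code in a subprocess
    # and capture whatever it prints.
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=5)
    return result.stdout.strip()

# Pretend this is the model's reply, trained to ask for a tool
# instead of guessing the count itself.
model_output = '<run_python>print("strawberry".count("r"))</run_python>'

match = re.search(r"<run_python>(.*?)</run_python>", model_output, re.S)
if match:
    print(run_python(match.group(1)))  # the host runs the code, prints 3
```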
outhouseperilous@lemmy.dbzer0.com 1 day ago
It doesn't know things.
It's a statistical model. It cannot synthesize information or problem-solve; it can only show you a rough average of its library of inputs, graphed by proximity to your input.
jsomae@lemmy.ml 1 day ago
Congrats, you've discovered reductionism. The human brain also doesn't know things, as it's composed of electrical synapses made of molecules that obey the laws of physics and direct one's mouth to make words in response to signals that come from the ears.
Not saying LLMs don't know things, but your argument as to why they don't know things has no merit.
jsomae@lemmy.ml 1 day ago
The LLM isn't aware of its own limitations in this regard. The specific problem of getting an LLM to know what characters a token comprises has not been the focus of training. It's a totally different, almost entirely orthogonal kind of error from other hallucinations. Other hallucinations are much more important to solve; being able to count the number of letters in a word or add numbers together is not very important since, as you point out, there are already programs that can do that.
outhouseperilous@lemmy.dbzer0.com 1 day ago
The most convincing arguments that LLMs are like humans aren't that LLMs are good, but that humans are just unrefrigerated meat and personhood is a delusion.