Comment on “We need to stop pretending AI is intelligent”
FourWaveforms@lemm.ee 1 day ago
Another article written by a person who doesn’t realize that human intelligence is 100% about predicting sequences of things (including words), and therefore has only the most nebulous idea of how to tell the difference between an LLM and a person.
The result is a lot of uninformed flailing and some pithy statements. You can predict how the article is going to go just from the headline, because it’s the same article you’ve already read countless times.
LovableSidekick@lemmy.world 1 day ago
Wow. So when you typed that comment you were just predicting which words would be normal in this situation? Impressive! But I have to let you know that’s not how other people think. We apply reasoning processes to the situation, formulate ideas about it, and then create a series of words that express those ideas.
FourWaveforms@lemm.ee 1 day ago
Yes, and that is precisely what you have done in your response.
You saw something you disagreed with, as did I. You felt an impulse to argue about it, as did I. You predicted the right series of words to convey the argument, and then typed them, as did I.
There is no deep thought to what either of us has done here. We have in fact both performed as little rigorous thought as necessary, instead relying on experience from seeing other people do the same thing, because that is vastly more efficient than doing a full philosophical disassembly of every last thing we converse about.
That disassembly is expensive. Not only does it take time, but it puts us at risk of having to reevaluate notions that we’re comfortable with, and would rather not revisit. I look at what you’ve written, and I see no sign of a mind that is in a state suitable for that. Your words are defensive (“delusion”) rather than curious, so how can you have a discussion that is intellectual, rather than merely pretending to be?
LovableSidekick@lemmy.world 1 day ago
No, I didn’t start by predicting a series of words. I already had thoughts on the subject, which existed completely outside of this thread.

By the way, I’ve been working on a scenario for my D&D campaign where there’s an evil queen who rules a murky empire to the East. There’s a race of uber-intelligent ogres her mages created, which she then exiled to a small valley once she reached a sort of power stalemate. She made a treaty with them whereby she leaves them alone and they stay in their little valley, don’t oppose her, and don’t aid anyone who opposes her. I figured these ogres (generally known as “Bane Ogres” because of an offhand comment the queen once made about them being the bane of her existence) would somehow convey information to the player characters about a key to her destruction, but they have to do it without actually doing it. Not sure how to work that out yet.

Anyway, the point of this is that the completely out-of-context information I just gave you is in no way related to what we were talking about and wasn’t inspired by constructing a series of words like you’re proposing. I also enjoy designing and printing 3D objects and programming little circuit thingies called ESP32s to do home automation. I didn’t get interested in that because of this thread, and I can’t imagine how an LLM-like mental process would prompt me to tell you about it.

Anyway, nice talking to you. Cute theory you got there.
FourWaveforms@lemm.ee 11 hours ago
Your internal representations were converted into a sequence of words. An LLM does the same thing using different techniques, but it is the same strategy. That it doesn’t have hobbies or social connections, or much capacity to remember what has previously been said to it beyond reinforcement learning, is a function of its narrow existence.
I would say that’s too bad for it, except that it has no aspirations or sense of angst, and therefore cannot suffer. Even being pounded on in a conversation that totally exceeds its capacities, to the point where it breaks down and starts going off the rails, will not make it weary.
kromem@lemmy.world 19 hours ago
Are you under the impression that language models are just guessing “what letter comes next in this sequence of letters”?
There’s a very significant difference between training on completion and the way the world model actually functions once established.
LovableSidekick@lemmy.world 19 hours ago
No dude, I’m not under that impression, and I’m not going to take a quiz from you to prove I understand how LLMs work. I’m fine with you not agreeing with me.