Yep, this is a very good explanation. Seeing ChatGPT “talk” is immediately associated with sentience, because for your entire life, and millions of years of evolution, speech was in 99.9% of cases a sign of sentience. So your brain doesn’t even consider it a question, until you consciously stop to think about it.
An interesting way to anthropomorphize GPT that’s still technically correct is to think of it as having essentially perfect memory. So it doesn’t know how to talk, but it has seen so many conversations (literally trillions of words) that it can recognize the patterns that make up speech and simply “remember” what the most likely combination of words is, given the context, with zero actual “understanding” of language. (Human trainers then fine-tune these guesses to give you the ChatGPT experience.)
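A minimal sketch of that “remembering the most likely next word” idea, using a toy bigram-context counter over a made-up corpus (purely illustrative; real LLMs use learned neural-network weights, not lookup tables, but the “pick the most likely continuation given context” framing is the same):

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus, for illustration only.
corpus = (
    "the dog wants attention the dog wants food "
    "the cat wants food the dog wants attention"
).split()

# "Perfect memory": count which word follows each two-word context.
follows = defaultdict(Counter)
for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][nxt] += 1

def predict(a, b):
    # "Remember" the most frequent continuation for this context.
    return follows[(a, b)].most_common(1)[0][0]

print(predict("dog", "wants"))  # "attention" (seen twice vs. "food" once)
```

The model has no idea what a dog is; it has only memorized which word tends to come next.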
Azzu@lemm.ee 8 months ago
To be fair/to elaborate on that point, your dog is much, much closer to human than ChatGPT is; we share something like 84% of our DNA. Most of the same basic emotions like hunger, fear, desire, etc. are present, as well as the ability to learn and communicate.
Your dog may not be “scheming”, because it lacks the ability to plan very far in the future, but it definitely has the intention of getting your attention and tries to figure out in the moment how to do it. Same as a human kid might do.
It is incredibly valuable to act like a dog is human, because dogs actually do share a lot of characteristics. Not all, of course; it’s still wrong to fully assume a dog is human, but as a quick heuristic it’s still valuable a lot (84%? :D) of the time.
huginn@feddit.it 8 months ago
Sure: I get that they’re not exactly the same. The ChatGPT issue is orders of magnitude more removed from humanity than a dog, but it’s a daily example of anthropomorphic bias that is relatable and easy to understand. I was just using it as an example.
nutsack@lemmy.world 8 months ago
when the chat bot starts using my DNA I’m killing it