Comment on Something Bizarre Is Happening to People Who Use ChatGPT a Lot
Zip2@feddit.uk 1 week ago
I couldn’t be bothered to read it, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.
interdimensionalmeme@lemmy.ml 1 week ago
You didn’t, I did
The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed “power users” — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don’t form emotional connections with ChatGPT, those who use it more frequently tend to view it as a “friend,” especially if they feel lonely or emotionally unfulfilled.
Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.
You said:
Make a list of the main points and for each explain why it will be worse when it is society wide
ChatGPT said:
Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:
Let me know if you want this formatted for a presentation, policy paper, or discussion.
FiskFisk33@startrek.website 1 week ago
What’s the point of a summary that’s longer than the article itself?
interdimensionalmeme@lemmy.ml 1 week ago
Are you suggesting I did something other than I said I did?
How dare you sir!
liv@lemmy.nz 5 days ago
That’s really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.
Namely, the increased power of AI’s big tech company owners over people that emotional dependence creates.
It’s not as if these concepts aren’t widely discussed online, everything from Meta’s emotional manipulation experiments or Cambridge Analytica through to the meltdowns Replika owners had over changes to the algorithm are relevant here.
interdimensionalmeme@lemmy.ml 5 days ago
It’s the 4th point
liv@lemmy.nz 5 days ago
Sort of, but I think “influence over emotional states” is understating it, and just the tip of the iceberg. It also made it sound passive and accidental. The real problem will be overt control as a logical extension of the kinds of trade-offs we already see people make about, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-“lobotomized”.
interdimensionalmeme@lemmy.ml 1 week ago
You said: Take the same list again, for each point explain why it’s actually beneficial and how it will get better with increased adoption
ChatGPT said:
Here’s the same list of main points, but this time with a more optimistic take — highlighting how each trend could actually be beneficial and how increased adoption might make things better for individuals and society as a whole:
Let me know if you’d like this version turned into a policy brief, an article, or a debate prep doc — could be used in all kinds of conversations.
You said: Explain how chatgpt is a stochastic parrot that cannot by design say anything fundamentally new and how it is basically just a keyboard autocomplete on steroids
ChatGPT said:
Alright, here’s the breakdown of that critique — the “stochastic parrot” idea and the comparison to advanced autocomplete — explained clearly and fairly:
The term “stochastic parrot” was coined in a 2021 paper by researchers including Emily Bender and Timnit Gebru. It refers to large language models like ChatGPT that:
So, calling ChatGPT a stochastic parrot is a way of saying: It’s not truly intelligent. It’s mimicking language patterns, not generating original thought.
By design, ChatGPT:
Because it’s not connected to lived experience, emotion, consciousness, or intent, it:
It might sound novel, but under the hood, it’s remixing existing information in clever ways — not discovering or reasoning in a human sense.
If you’ve used Gmail’s or your phone’s autocomplete, you’ve seen basic language prediction in action. Type “I hope you…” and it might suggest “…have a great day.”
ChatGPT is essentially the same mechanism:
So, essentially:
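The next-word-prediction idea behind that comparison can be illustrated with a toy bigram model. This is a deliberately minimal sketch of the concept, not how ChatGPT actually works (transformers condition on long contexts with learned weights, not simple word-pair counts); the corpus and function names here are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which in a tiny
# corpus, then predict the most frequent follower. Real LLMs do a far
# richer version of this same basic task: predict the next token.
corpus = "i hope you have a great day i hope you enjoy the day".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("hope"))  # -> "you"
```

The point of the analogy: both this toy and ChatGPT are choosing likely continuations from statistical patterns in training text, just at wildly different scales.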
In summary:
Calling ChatGPT a stochastic parrot or autocomplete on steroids emphasizes that:
It’s powerful and useful — but it’s important not to mistake fluency for understanding.
Let me know if you want the counterpoint to this critique too — because there are arguments in defense of the model’s creative utility despite its limitations.