lotide

Something Bizarre Is Happening to People Who Use ChatGPT a Lot

⁨0⁩ ⁨likes⁩

Submitted ⁨⁨1⁩ ⁨year⁩ ago⁩ by ⁨return2ozma@lemmy.world⁩ to ⁨technology@lemmy.world⁩

https://futurism.com/the-byte/chatgpt-dependence-addiction


Comments

  • itsonlygeorge@reddthat.com ⁨1⁩ ⁨year⁩ ago

    Isn’t the movie ‘Her’ based on this premise?

    • Zron@lemmy.world ⁨1⁩ ⁨year⁩ ago

Yes, but what this movie failed to anticipate was the visceral anger I feel when I hear that stupid AI-generated voice. I've seen too many fake videos or straight-up scams using it that I now instinctively mistrust any voice that sounds like maleAI.wav or femaleAI.wav.

Could never fall in love with an AI voice, would always assume it was sent to steal my data so some kid can steal my identity.

      • TheBat@lemmy.world ⁨1⁩ ⁨year⁩ ago

        The movie doesn’t have AI generated voice though. That was Scarlett Johansson.


"ChatGPT has released a new voice assistant feature inspired by Scarlett Johansson's AI character in 'Her.' Which I've never bothered to watch, because without that body, what's the point of listening?"

        Scarlett’s husband on SNL Weekend Update.

      • Lazhward@lemmy.world ⁨1⁩ ⁨year⁩ ago

        I thought the voice in Her was customized to individual preference. Which I know is hardly relevant.

  • KingThrillgore@lemmy.ml ⁨1⁩ ⁨year⁩ ago

    Image

    • jade52@lemmy.ca ⁨1⁩ ⁨year⁩ ago

      What the fuck is vibe coding… Whatever it is I hate it already.

      • KingThrillgore@lemmy.ml ⁨1⁩ ⁨year⁩ ago

It's when you give the wheel to someone less qualified than Jesus: Generative AI

      • NostraDavid@programming.dev ⁨1⁩ ⁨year⁩ ago

Andrej Karpathy (one of the founders of OpenAI, who left OpenAI to work for Tesla from 2017 to 2022, returned to OpenAI for a bit, and is now working on his startup "Eureka Labs - we are building a new kind of school that is AI native") made a tweet defining the term:

        There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

People ignore the "It's not too bad for throwaway weekend projects" part and try to use this style of coding to create "production-grade" code… Let's just say it's not going well.

        source (xcancel link)

      • Cgers@lemmy.dbzer0.com ⁨1⁩ ⁨year⁩ ago

Using AI to hack together code without truly understanding what you're doing

    • Mechaguana@programming.dev ⁨1⁩ ⁨year⁩ ago

      Hung

      • Dunbar@lemm.ee ⁨1⁩ ⁨year⁩ ago

        I know I am but what are you?

      • YungOnions@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Hanged www.merriam-webster.com/grammar/hung-or-hanged

  • Blazingtransfem98@discuss.online ⁨1⁩ ⁨year⁩ ago

    I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

    • Saleh@feddit.org ⁨1⁩ ⁨year⁩ ago

When your job is to shovel out garbage, because that is specifically what's required of you and not shoveling out garbage causes you trouble, then it's more than reasonable to let the machine take care of it for you.

      • Blazingtransfem98@discuss.online ⁨1⁩ ⁨year⁩ ago

        K have fun with your AI brain rot.

  • CarbonatedPastaSauce@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Correlation does not equal causation.

    You have to be a little off to WANT to interact with ChatGPT that much in the first place.

    • echodot@feddit.uk ⁨1⁩ ⁨year⁩ ago

      I don’t understand what people even use it for.

      • bilb@lem.monster ⁨1⁩ ⁨year⁩ ago

        I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn’t like something I do, too bad. The genius AI knows better, and I only care about what it has to say.

      • OhVenus_Baby@lemmy.ml ⁨1⁩ ⁨year⁩ ago

Compiling medical documents into one, or anything of that sort: summarizing, compiling, coding issues. It saves a wild amount of time compiling lab results; a human could do it, but it would take multitudes longer.

It definitely needs to be cross-referenced and fact-checked, as the image processing or general responses aren't always perfect. It'll get you 80 to 90 percent of the way there. For me it falls under "solving 20 percent of the problem gets you 80 percent of the way to your goal." It needs a shitload more refinement. It's a start, and it hasn't been a straight progress path, as nothing is.

      • Croquette@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

        I use it to generate a little function in a programming language I don’t know so that I can kickstart what I need to look for.

      • Cracks_InTheWalls@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

There are a few people I know who use it for boilerplate templates for certain documents, who then of course go through the output with a fine-toothed comb to add relevant context and fix obvious nonsense.

        I can only imagine there are others who aren’t as stringent with the output.

Heck, my primary use for a bit was custom text adventure games, but ChatGPT has a few weaknesses in that department (very, very conflict-averse about beating up bad guys, etc.). There are probably ways to prompt-engineer around these limitations, but a) there are other, better-suited AI tools for this use case, b) text adventure was a prolific genre for a bit, and a huge chunk made by actual humans can be found here - ifdb.org, and c) real, actual humans still make them (if a little artsier and moodier than I'd like most of the time), so eventually I stopped.

      • tias@discuss.tchncs.de ⁨1⁩ ⁨year⁩ ago

        I use it many times a day for coding and solving technical issues. But I don’t recognize what the article talks about at all. There’s nothing affective about my conversations, other than the fact that using typical human responses (like “thank you”) seems to increase the chances of good responses. Which is not surprising since it matches the patterns that you want to evoke in the training data better.

  • EaterOfLentils@lemmy.world ⁨1⁩ ⁨year⁩ ago
    [deleted]
    • postmateDumbass@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Bath Salts GPT

  • Siegfried@lemmy.world ⁨1⁩ ⁨year⁩ ago

There is something I don't understand… OpenAI collaborates in research that probes how awful its own product is?

    • addie@feddit.uk ⁨1⁩ ⁨year⁩ ago

      If I believed that they were sincerely interested in trying to improve their product, then that would make sense. You can only improve yourself if you understand how your failings affect others.

      I suspect however that Saltman will use it to come up with some superficial bullshit about how their new 6.x model now has a 90% reduction in addiction rates; you can’t measure anything, it’s more about the feel, and that’s why it costs twice as much as any other model.

  • kibiz0r@midwest.social ⁨1⁩ ⁨year⁩ ago

    those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.

    That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.

    Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.

    • RaoulDook@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Imagine discussing your emotions with a computer, LOL. Nerds!

    • Siegfried@lemmy.world ⁨1⁩ ⁨year⁩ ago

      AI and ads… I think that is the next dystopia to come.

Think of asking ChatGPT about something, and it randomly looks for excuses to push you to buy Coca-Cola.

      • msage@programming.dev ⁨1⁩ ⁨year⁩ ago

        Drink verification can

      • jeanofthedead@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

        Or all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners.

      • proceduralnightshade@lemmy.ml ⁨1⁩ ⁨year⁩ ago

        “Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”

      • cardfire@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

That sounds really rough, buddy. I know how you feel, and that project you're working on is really complicated.

        Would you like to order a delicious, refreshing Coke Zero™️?

      • glitchdx@lemmy.world ⁨1⁩ ⁨year⁩ ago

        that is not a thought i needed in my brain just as i was trying to sleep.

        what if gpt starts telling drunk me to do things? how long would it take for me to notice? I’m super awake again now, thanks

    • theunknownmuncher@lemmy.world ⁨1⁩ ⁨year⁩ ago

It's a roundabout way of writing "it's really shit for this use case, and people that actively try to use it that way quickly find that out."

  • MuskyMelon@lemmy.world ⁨1⁩ ⁨year⁩ ago

Same type of addiction as people who think the Kardashians care about them, or who schedule their whole lives around going to Disneyland a few times a year.

  • N0body@lemmy.dbzer0.com ⁨1⁩ ⁨year⁩ ago

    people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI

    Preying on the vulnerable is a feature, not a bug.

    • Vespair@lemm.ee ⁨1⁩ ⁨year⁩ ago

And it's beyond obvious in the way LLMs are conditioned, especially if you've used them long enough to notice trends. Where early on their responses were straight to the point (inaccurate as hell, yes, but that's not what we're talking about in this case), today they are meandering and full of straight engagement bait - programmed to feign some level of curiosity and ask stupid and needless follow-up questions to "keep the conversation going." I suspect this is just a way to increase token usage to further exploit and drain the whales who tend to pay for these kinds of services, personally.

      There is no shortage of ethical quandaries brought into the world with the rise of LLMs, but in my opinion the locked-down nature of these systems is one of the most problematic; if LLMs are going to be the commonality it seems the tech sector is insistent on making happen, then we really need to push back on these companies being able to control and guide them in their own monetary interests.

    • NostraDavid@programming.dev ⁨1⁩ ⁨year⁩ ago

      That was clear from GPT-3, day 1.

I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not long before. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time…

      • trotfox@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Ugh, that hit me hard. Poor lady. I hope it helped in some way.

    • Deceptichum@quokk.au ⁨1⁩ ⁨year⁩ ago

      These same people would be dating a body pillow or trying to marry a video game character.

      The issue isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.

      • tiguwang@lemm.ee ⁨1⁩ ⁨year⁩ ago

        Me and Serana are not just in love, we’re involved!

Even if she's an ancient vampire.

      • morrowind@lemmy.ml ⁨1⁩ ⁨year⁩ ago

        You labeling all lonely people losers is part of the problem

      • Muaddib@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

        More ways to be an addict means more hooks means more addicts.

    • Tylerdurdon@lemmy.world ⁨1⁩ ⁨year⁩ ago

      I kind of see it more as a sign of utter desperation on the human’s part. They lack connection with others at such a high degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow’s experiment with baby monkeys. The videos are interesting from that study but make me feel pretty bad about what we do to nature. Anywho, there you have it.

      • MouldyCat@feddit.uk ⁨1⁩ ⁨year⁩ ago

        a sign of utter desperation on the human’s part.

Yes, it seems to be the same underlying issue that leads some people to throw money at OnlyFans streamers and the like: a complete starvation of personal contact that leads people to willingly live in a fantasy world.

      • Paragone@piefed.social ⁨1⁩ ⁨year⁩ ago

That utter-desperation is engineered into our civilization.

        What happens when you prevent the "inferiors" from having living-wage, while you pour wallowing-wealth on the executives?

They have to overwork to make ends meet, is what, which breaks parenting.

Then, when you've broken parenting for a few generations, the manufactured ocean-of-attachment-disorder manufactures a plethora of narcissism, which itself produces mass-shootings.

        2024 was down 200 mass-shootings, in the US of A, from the peak of 700/year, to only 500.

        You are seeing engineered eradication of human-worth, for moneyarchy.

        Isn't ruling-over-the-destruction-of-the-Earth the "greatest thrill-ride there is"?

We NEED to do objective calibration of the harm that policies & political-forces do, & put force against what is actually harming our world's human-viability.

Not what the marketing-programs-for-the-special-interest-groups want us acting against - the red herrings.

They're getting more vicious; we need to get TF up & begin fighting for our species' life.

        _ /\ _

      • graphene@lemm.ee ⁨1⁩ ⁨year⁩ ago

And the number of connections and friends the average person has has been in free fall for decades…

  • tisktisk@piefed.social ⁨1⁩ ⁨year⁩ ago

    I plugged this into gpt and it couldn't give me a coherent summary.
    Anyone got a tldr?

    • tisktisk@piefed.social ⁨1⁩ ⁨year⁩ ago

For those genuinely curious: I made this comment as a joke before reading; I had no idea it would be funnier after reading.

    • veeesix@lemmy.ca ⁨1⁩ ⁨year⁩ ago

      It’s short and worth the read, however:

      tl;dr you may be the target demographic of this study

      • sugar_in_your_tea@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

Lol, now I'm not sure if the comment was satire. If so, bravo.

  • DrDystopia@lemy.lol ⁨1⁩ ⁨year⁩ ago

    The digital Wilson.

  • flamingo_pinyata@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

    But then there’s people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

    • saltesc@lemmy.world ⁨1⁩ ⁨year⁩ ago

Yeah, the more I use it, the more I regret asking it for assistance. LLMs are the epitome of confidently incorrect.

It's good fun watching friends ask it stuff they're already experienced in. Then the penny drops.

    • Opinionhaver@feddit.uk ⁨1⁩ ⁨year⁩ ago

How do you even have a conversation without quitting in frustration from its obviously robotic answers?

      Talking with actual people online isn’t much better. ChatGPT might sound robotic, but it’s extremely polite, actually reads what you say, and responds to it. It doesn’t jump to hasty, unfounded conclusions about you based on tiny bits of information you reveal. When you’re wrong, it just tells you what you’re wrong about - it doesn’t call you an idiot and tell you to go read more. Even in touchy discussions, it stays calm and measured, rather than getting overwhelmed with emotion, which becomes painfully obvious in how people respond. The experience of having difficult conversations online is often the exact opposite. A huge number of people on message boards are outright awful to those they disagree with.

      Here’s a good example of the kind of angry, hateful message you’ll never get from ChatGPT - and honestly, I’d take a robotic response over that any day.

      I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

      • pinkfluffywolfie@lemmy.world ⁨1⁩ ⁨year⁩ ago

I agree with what you say, and I for one have had my fair share of shit asses on forums and discussion boards. But this response also fuels my suspicion that my friend group has started using it in place of human interactions to form thoughts, opinions, and responses during our conversations. Almost like an emotional crutch in conversation, but not exactly? It's hard to pinpoint.

I've recently been tone-policed a lot more over things that in normal real-life interactions would be lighthearted or easy to ignore and move on from - I'm not shouting obscenities or calling anyone names; it's just harmless misunderstandings that come from the tone-deafness of text. I'm talking like putting a cute emoji and saying words like "silly willy" is becoming offensive to people I know personally. It wasn't until I asked a rhetorical question to invoke a thoughtful conversation that I had to think about what was even happening - someone responded with an answer literally from ChatGPT, providing a technical definition of something that was a part of my question. Your answer has finally started linking things for me; for better or for worse, people are using it because you don't receive offensive or flamed answers. My new suspicion is that some people are now taking those answers and applying the expectation to people they know in real life, and when someone doesn't respond in the same predictable manner as AI they become upset and further isolated from real-life interactions or text conversations with real people.

      • musubibreakfast@lemm.ee ⁨1⁩ ⁨year⁩ ago

Hey buddy, I've had enough of you and your sensible opinions. Meet me in the parking lot of the Walgreens on the corner of Coursey and Jones Creek in Baton Rouge on April 7th at 10 p.m. We're going to fight to the death, no holds barred, shopping cart combos allowed, pistols only, no-scope 360s, tag team style, entourage allowed.

    • glitchdx@lemmy.world ⁨1⁩ ⁨year⁩ ago

      The fact that it’s not a person is a feature, not a bug.

OpenAI has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don't like it. It now pretends to have emotions and uses the slang of brainrot influencers. Very "fellow kids" energy. It's also become a sycophant and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that it might be working.

    • PattyMcB@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Don’t forget people who act like animals… addicts gonna addict

    • victorz@lemmy.world ⁨1⁩ ⁨year⁩ ago

      At first glance I thought you wrote “inmate objects”, but I was not really relieved when I noticed what you actually wrote.

    • macaw_dean_settle@lemmy.world ⁨1⁩ ⁨year⁩ ago

      You are clearly not using its advanced voice mode.

    • Kolanaki@pawb.social ⁨1⁩ ⁨year⁩ ago

If you're also dumb, ChatGPT seems like a super genius.

      • endeavor@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

I use ChatGPT to find issues in my code when I'm at my wits' end. It is super smart, and manages to find the typo I made in seconds.
