
Something Bizarre Is Happening to People Who Use ChatGPT a Lot

⁨0⁩ ⁨likes⁩

Submitted ⁨⁨1⁩ ⁨year⁩ ago⁩ by ⁨return2ozma@lemmy.world⁩ to ⁨technology@lemmy.world⁩

https://futurism.com/the-byte/chatgpt-dependence-addiction

source

Comments

  • reksas@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

    I can feel it too when I use it. That is why I use it only for trivial things, if at all.

    source
  • cupcakezealot@lemmy.blahaj.zone ⁨1⁩ ⁨year⁩ ago

    chatbots and ai are just dumber 1990s search engines.

    source
    • mycelium_underground@lemmy.world ⁨1⁩ ⁨year⁩ ago

      I remember 90s search engines. AltaVista was pretty OK at searching the small web that existed, but I’m pretty sure I can get better answers from the LLMs tied to Kagi search.

      AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the ’00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse if you use a “search” engine like Google now).

      Don’t be the product.

      source
      • BlackAura@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Depending on what I needed, I remember using AltaVista, AskJeeves, Dogpile, and later on MetaCrawler or something like that (it would search multiple search engines for you and order the results scored by platform and relevancy, IIRC).

        source
        • -> View More Comments
  • MehBlah@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Negative IQ points?

    source
  • PieMePlenty@lemmy.world ⁨1⁩ ⁨year⁩ ago

    It’s too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI… we used to call OCR AI; now we know better.
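
    That next-word mechanic can be sketched as a toy bigram predictor (a minimal illustration: the corpus is made up, and real LLMs use neural networks over subword tokens rather than literal bigram counts):

    ```python
    from collections import Counter, defaultdict

    # Made-up toy corpus; a real model trains on terabytes of text.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))  # "cat": seen twice after "the", once each for "mat" and "fish"
    ```

    The only “knowledge” here is co-occurrence counts; scale the table up enormously and you get fluent output with nothing behind it.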

    source
    • Lifter@discuss.tchncs.de ⁨1⁩ ⁨year⁩ ago

      LLM is a subset of ML, which is a subset of AI.

      source
  • UraniumBlazer@lemm.ee ⁨1⁩ ⁨year⁩ ago

    Clickbait titles suck

    source
    • blind3rdeye@lemm.ee ⁨1⁩ ⁨year⁩ ago

      Something bizarre is happening to media organizations that use ‘clicks’ as a core metric.

      source
  • arotrios@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Image

    source
    • M33@lemmy.sdf.org ⁨1⁩ ⁨year⁩ ago

      It depends: are you in Soviet Russia ?

      source
      • arotrios@lemmy.world ⁨1⁩ ⁨year⁩ ago

        In the US, so as of 1/20/25, sadly yes.

        source
  • El_Azulito@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. …I am OK though. Yeah.

    source
    • aceshigh@lemmy.world ⁨1⁩ ⁨year⁩ ago

      At the store it calculated which peanuts were cheaper: 3 pounds of shelled peanuts on sale, or 1 pound of no-shell peanuts at full price.

      source
    • Fredthefishlord@lemmy.blahaj.zone ⁨1⁩ ⁨year⁩ ago

      That’s… Impressively braindead

      source
      • El_Azulito@lemmy.world ⁨1⁩ ⁨year⁩ ago

        That’s the joke!

        source
  • gamer@lemm.ee ⁨1⁩ ⁨year⁩ ago

    That is peak clickbait, bravo.

    source
  • StereoCode@lemmy.world ⁨1⁩ ⁨year⁩ ago

    People addicted to tech omg who could’ve guessed. Shocked I tell you.

    source
  • sebinspace@lemmy.world ⁨1⁩ ⁨year⁩ ago

    lmao we’re so fucked :D

    source
  • Krudler@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Not a lot of meat on this article, but yeah, I think it’s pretty obvious that those who seek automated tools to define their own thoughts and feelings become dependent. If one is so incapable of mapping out one’s thoughts and putting them into written words, it’s natural they’d seek ease and comfort in the “good enough” (fucking shitty as hell) output of a bot.

    source
    • Korhaka@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

      I mainly use it for corporate wankery messages. The output is bullshit and I kinda wonder how many of my co-workers genuinely believe in it and how many see the bullshit.

      source
      • Krudler@lemmy.world ⁨1⁩ ⁨year⁩ ago

        People who use it are intuitively unaware that it is shit. You can’t have a photocopy of a photocopy of a photocopy of a picture of a picture of a copy of a hand-drawn-facsimile and expect anything but the lowest-resolution wet feces-word-salad.

        source
  • Zip2@feddit.uk ⁨1⁩ ⁨year⁩ ago

    I couldn’t be bothered to read it, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.

    source
    • interdimensionalmeme@lemmy.ml ⁨1⁩ ⁨year⁩ ago

      You didn’t, I did

      The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed “power users” — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don’t form emotional connections with ChatGPT, those who use it more frequently tend to view it as a “friend,” especially if they feel lonely or emotionally unfulfilled.

      Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.

      You said:

      Make a list of the main points and for each explain why it will be worse when it is society wide

      ChatGPT said:

      Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:

      Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control):
      
      Why it's worse at scale:
      
      If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.
      
      Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”:
      
      Why it's worse at scale:
      
      A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.
      
      Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached:
      
      Why it's worse at scale:
      
      This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.
      
      People stress out over subtle changes in ChatGPT’s behavior:
      
      Why it's worse at scale:
      
      If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.
      
      Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being:
      
      Why it's worse at scale:
      
      If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.
      
      Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons:
      
      Why it's worse at scale:
      
      This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.
      
      Prolonged usage increases risk of emotional dependence, regardless of how it's used:
      
      Why it's worse at scale:
      
      As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.
      

      Let me know if you want this formatted for a presentation, policy paper, or discussion.

      source
      • liv@lemmy.nz ⁨1⁩ ⁨year⁩ ago

        That’s really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.

        Namely, the increased power of AI’s big tech company owners over people that emotional dependence creates.

        It’s not as if these concepts aren’t widely discussed online; everything from Meta’s emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika owners had over changes to the algorithm is relevant here.

        source
        • -> View More Comments
      • FiskFisk33@startrek.website ⁨1⁩ ⁨year⁩ ago

        What’s the point of a summary that’s longer than the article itself?

        source
        • -> View More Comments
      • interdimensionalmeme@lemmy.ml ⁨1⁩ ⁨year⁩ ago

        You said: Take the same list again, for each point explain why it’s actually beneficial and how it will get better with increased adoption

        ChatGPT said:

        Here’s the same list of main points, but this time with a more optimistic take — highlighting how each trend could actually be beneficial and how increased adoption might make things better for individuals and society as a whole:

        Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control):
        
        Why it could be beneficial:
        
        If people are turning to ChatGPT frequently, it may indicate that it's providing meaningful value — like reducing cognitive load, aiding creativity, or offering emotional support. As usage becomes more common, society may develop healthier digital habits, just as it has with smartphones and social media. Over time, norms and tools (e.g., usage timers, digital wellness features) could help people manage their use responsibly.
        
        Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”:
        
        Why it could be beneficial:
        
        For people lacking companionship or emotional support, especially the elderly or isolated, AI companions can provide comfort, conversation, and a sense of presence. On a wider scale, AI could reduce loneliness-related health problems and even support mental wellness initiatives by being an accessible form of care or early intervention.
        
        Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached:
        
        Why it could be beneficial:
        
        If AI offers a low-barrier outlet for the lonely, it might be a first step toward re-engagement with the world. AI can encourage people to express feelings they wouldn't share with others and even prompt them to seek human connection, therapy, or support services. Widespread AI use could destigmatize conversations about loneliness and mental health.
        
        People stress out over subtle changes in ChatGPT’s behavior:
        
        Why it could be beneficial:
        
        This shows users are emotionally invested — which can push developers to create more consistent, emotionally intelligent systems. If society recognizes the emotional impact of AI, it will encourage better design, transparency, and regulation. Over time, this can lead to more stable and human-aware AI that positively complements emotional needs.
        
        Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being:
        
        Why it could be beneficial:
        
        This gives users multiple channels of expression to suit their mood or needs — voice for brief emotional boosts, text for deeper reflection. With broader adoption, people can tailor AI interaction styles to maximize personal benefit. This insight also allows designers to create features that better match emotional and mental health goals.
        
        Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons:
        
        Why it could be beneficial:
        
        This means AI is becoming a highly trusted tool, even in practical domains. As more people adopt it for brainstorming, research, or decision-making, we’ll likely see a democratization of high-level thinking tools. Over time, dependency could evolve into partnership — similar to how calculators or GPS became essential but ultimately empowering tools.
        
        Prolonged usage increases risk of emotional dependence, regardless of how it's used:
        
        Why it could be beneficial:
        
        Long-term usage can foster deeper understanding of one’s own thought processes, communication patterns, and emotional states. As AI integrates into daily life, people might develop greater self-awareness and reflection through their interactions. Society could benefit from individuals who are more emotionally literate and thoughtful, even if that awareness is sparked by an AI relationship.
        

        Let me know if you’d like this version turned into a policy brief, an article, or a debate prep doc — could be used in all kinds of conversations.

        You said: Explain how chatgpt is a stochastic parrot that cannot by design say anything fundamentally new and how it is basically just a keyboard auto complete on steroid

        ChatGPT said:

        Alright, here’s the breakdown of that critique — the “stochastic parrot” idea and the comparison to advanced autocomplete — explained clearly and fairly:

        1. What does “stochastic parrot” mean?

        The term “stochastic parrot” was coined in a 2021 paper by researchers including Emily Bender and Timnit Gebru. Applied to large language models like ChatGPT, the term breaks down as follows:

        "Stochastic" means based on probability. The model doesn’t “think” or “understand”; it predicts the next word based on statistical likelihood, given the words before it.
        
        "Parrot" because it doesn’t generate new ideas—it repeats patterns and phrases it has seen in training data, like a parrot mimics human speech.
        

        So, calling ChatGPT a stochastic parrot is a way of saying: It’s not truly intelligent. It’s mimicking language patterns, not generating original thought.

        2. It can’t say anything fundamentally new — why?

        By design, ChatGPT:

        Is trained on vast amounts of existing text from books, websites, forums, and more.
        
        Learns the statistical patterns of how humans use language.
        
        Generates responses by predicting the next most likely word or phrase based on that training.
        

        Because it’s not connected to lived experience, emotion, consciousness, or intent, it:

        Can’t invent genuinely new concepts.
        
        Can’t form subjective opinions.
        
        Can’t create knowledge that isn’t derived from patterns it has already seen.
        

        It might sound novel, but under the hood, it’s remixing existing information in clever ways — not discovering or reasoning in a human sense.

        3. It’s like “autocomplete on steroids” — how so?

        If you’ve used Gmail’s or your phone’s autocomplete, you’ve seen basic language prediction in action. Type “I hope you…” and it might suggest “…have a great day.”

        ChatGPT is essentially the same mechanism:

        Autocomplete predicts the next word based on simple models.
        
        ChatGPT predicts sequences of words using a far more complex, nuanced statistical model trained on billions of examples.
        

        So, compared to basic autocomplete:

        It can hold context better.
        
        It can imitate different styles.
        
        But it's still guessing the next words, not “thinking” or “understanding.”
        

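        A minimal sketch of what “guessing the next words” means in practice (the token scores below are invented for illustration, not taken from any real model):

        ```python
        import math
        import random

        # Hypothetical scores for tokens that might follow "I hope you".
        logits = {"have": 2.0, "enjoy": 1.0, "are": 0.5, "explode": -3.0}

        def sample_next(logits, temperature=1.0):
            """Softmax the scores into probabilities, then draw one token at random."""
            exp = {t: math.exp(s / temperature) for t, s in logits.items()}
            total = sum(exp.values())
            tokens = list(exp)
            weights = [exp[t] / total for t in tokens]
            return random.choices(tokens, weights=weights)[0]

        print(sample_next(logits))  # usually "have", but it is always a weighted guess
        ```

        A production LLM does roughly this over a vocabulary of tens of thousands of tokens, one token at a time; at no step is meaning consulted.
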
        In summary:

        Calling ChatGPT a stochastic parrot or autocomplete on steroids emphasizes that:

        It doesn’t understand meaning.
        
        It doesn’t create knowledge.
        
        It reflects patterns in language, not consciousness or intelligence.
        

        It’s powerful and useful — but it’s important not to mistake fluency for understanding.

        Let me know if you want the counterpoint to this critique too — because there are arguments in defense of the model’s creative utility despite its limitations.

        source
  • starman2112@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

    I am so happy God made me a Luddite

    source
    • WaitThisIsntReddit@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Yeah look at all this technology you can’t use! It’s so empowering.

      source
      • starman2112@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

        Can, and opt not to. Big difference. Like, I’m sure it’s very convenient having your phone control your washing machine and your thermostat and your lightbulbs, but when somebody else’s computer turns off, I’d like to keep control over my things

        Same with AI. I’m sure I could ask chat GPT to write a better comment than this, but I value the human interaction involved with it, and the ability to perform these tasks on my own

        source
  • Itzdan@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Neat snaps camera

    source
  • HappinessPill@lemmy.ml ⁨1⁩ ⁨year⁩ ago

    Do you guys remember when the internet was the new thing and everybody was like, “look at those dumb fucks just putting everything online,” and now it’s, “look at this weird motherfucker who doesn’t post anything online”?

    source
    • TheBat@lemmy.world ⁨1⁩ ⁨year⁩ ago

      I remember when the internet was a place

      Image

      source
      • grrgyle@slrpnk.net ⁨1⁩ ⁨year⁩ ago

        I’m trying to get back to that. Actually closer to it now than I was 5 years ago, so that’s cool

        source
        • -> View More Comments
    • NikkiDimes@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Remember when people used to say and believe “Don’t believe everything you read on the internet?”

      I miss those days.

      source
    • Dasus@lemmy.world ⁨1⁩ ⁨year⁩ ago

      I remember when the Internet was a thing people went on and/or visited/surfed, but not something you’d imagine having 24/7.

      source
      • Isthisreddit@lemmy.world ⁨1⁩ ⁨year⁩ ago

        I was there from the start; you must have never BBS’d or IRC’d - shit was amazing in the early days.

        I mean, honestly, nothing has really changed - we are still at our terminals looking at text. The only real innovation has been inline pics, videos and audio. 30+ years ago one had to click a link to see that stuff

        source
        • -> View More Comments
  • Critical_Thinker@lemm.ee ⁨1⁩ ⁨year⁩ ago

    now replace chatgpt with these terms, one by one:

    • the internet
    • google
    • facebook
    • instagram
    • tiktok
    • reddit
    • lemmy
    • their cell phone
    • news media
    • television
    • radio
    • podcasts
    • junk food
    • money
    source
    • Dasus@lemmy.world ⁨1⁩ ⁨year⁩ ago

      You go down a list of inventions pretty progressively, skimming the best of the last century or at most two.

      Then skip to currency, which is several millennia old.

      source
      • Korhaka@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

        It all went wrong when we switched to bronze. Should have kept to flint.

        source
        • -> View More Comments
      • rmuk@feddit.uk ⁨1⁩ ⁨year⁩ ago

        They’re clearly under the control of Big Train, Loom Lobbyists and the Global Gutenberg Printing Press Conspiracy.

        source
        • -> View More Comments
  • LovableSidekick@lemmy.world ⁨1⁩ ⁨year⁩ ago

    TIL becoming dependent on a tool you frequently use is “something bizarre” - not the extremely ordinary result you would expect with common sense.

    source
    • Korhaka@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

      Plumbers too reliant on pipes

      source
    • emeralddawn45@discuss.tchncs.de ⁨1⁩ ⁨year⁩ ago

      If you actually read the article, I’m pretty sure the bizarre thing is really these people using a “tool”, forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a “friend”.

      source
      • WaitThisIsntReddit@lemmy.world ⁨1⁩ ⁨year⁩ ago

        You never viewed a tool as a friend? Pretty sure there are some guys that like their cars more than most friends. Bonding with objects isn’t that weird, especially one that can talk to you like it’s human.

        source
        • -> View More Comments
      • TrojanRoomCoffeePot@lemmy.world ⁨1⁩ ⁨year⁩ ago

        What the Hell was the name of the movie with Tom Cruise where the protagonist’s friend was dating a fucking hologram?

        We’re a hair’s breadth from that bullshit, and TBH I think that if falling in love with a computer program becomes the new de facto normal, I’m going to completely alienate myself by making fun of those wretched chodes non-stop.

        source
        • -> View More Comments
      • LovableSidekick@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Yes, it says the neediest people are doing that, not simply “people who use ChatGPT a lot”. This article is like “Scientists warn civilization-killer asteroid could hit Earth”, and then the article clarifies that they estimate like a 0.3% chance of impact.

        source
      • CoffeeJunkie@lemmy.cafe ⁨1⁩ ⁨year⁩ ago

        No, I basically get the same read as OP. Idk, I like to think I’m rational enough and don’t take things too far, but I like my car. I like my tools. People just get attached to things we like.

        Give it an almost-human, almost-friend type of interaction, and yes, I’m not surprised at all that some people, particularly power users, are developing parasocial attachments or addictions to this non-human tool. I don’t call my friends. I text. ¯\(°_o)/¯

        source
        • -> View More Comments
  • jamie_oliver@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I knew a guy I went to rehab with. Talked to him a while back and he invited me to his discord server. It was him, and like three self trained LLMs and a bunch of inactive people who he had invited like me. He would hold conversations with the LLMs like they had anything interesting or human to say, which they didn’t. Honestly a very disgusting image, I left because I figured he was on the shit again and had lost it and didn’t want to get dragged into anything.

    source
    • cortex7979@lemm.ee ⁨1⁩ ⁨year⁩ ago

      Jesus that’s sad

      source
      • jamie_oliver@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Yeah. I tried talking to him about his AI use but I realized there was no point. I know from experience you can’t convince addicts they are addicted to anything. People need to realize that themselves.

        source
        • -> View More Comments
  • RizzRustbolt@lemmy.world ⁨1⁩ ⁨year⁩ ago

    Brain bleaching?

    source
  • satans_methpipe@lemmy.world ⁨1⁩ ⁨year⁩ ago

    New DSM / ICD is dropping with AI dependency. But it’s unreadable because image generation was used for the text.

    source
    • Cgers@lemmy.dbzer0.com ⁨1⁩ ⁨year⁩ ago

      This is perfect for the billionaires in control: now if you suggest that “hey, maybe these AIs have developed enough to be sentient and sapient beings (not saying they are now) and probably deserve rights”, they can just label you (and that argument) mentally ill

      Foucault laughs somewhere

      source
  • dumples@midwest.social ⁨1⁩ ⁨year⁩ ago

    This makes a lot of sense, because what we have been seeing over the last decade or so is that digital-only socialization isn’t a replacement for in-person socialization. Increased social media usage shows increased loneliness, not a decrease. It makes sense that something even more fake, like ChatGPT, would make it worse.

    I don’t want to sound like a luddite, but overly relying on digital communications for all interactions is a poor substitute for in-person interactions. I know I have to prioritize seeing people in the real world, because I work from home and spending time on Lemmy during the day doesn’t fulfill that need.

    source
    • ApatheticCactus@lemmy.world ⁨1⁩ ⁨year⁩ ago

      In person socialization? Is that like VR chat?

      source
  • glibg@lemmy.ca ⁨1⁩ ⁨year⁩ ago

    Image

    source
    • theterrasque@infosec.pub ⁨1⁩ ⁨year⁩ ago

      The quote was originally about news and journalists.

      source
      • DeltaWingDragon@sh.itjust.works ⁨11⁩ ⁨months⁩ ago

        The phenomenon is called Gell-Mann amnesia

        source
      • Korhaka@sopuli.xyz ⁨1⁩ ⁨year⁩ ago

        I remember thinking this when I was like 15. Every time they mentioned tech: wtf, this is all wrong! Then a few other topics, even ones I only knew a little about: so many inaccuracies.

        source
    • LovableSidekick@lemmy.world ⁨1⁩ ⁨year⁩ ago

      An even more disturbing realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let’s not think about that either.

      source
      • Cryophilia@lemmy.world ⁨1⁩ ⁨year⁩ ago

        AI Bad!

        Yes, it is. But not in, like, a moral sense. It’s just not good at doing things.

        source
      • starman2112@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

        This is a salient point that’s well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It’s super easy to call out a bad research study and have it retracted. But you can’t just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they’re synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.

        source
      • Shanmugha@lemmy.world ⁨1⁩ ⁨year⁩ ago

        I’ll bait. Let’s think:

        • there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it
        • now there is an llm (fuck capitalization, I hate the way they are shoved everywhere that much) trained on their output
        • now the llm is asked about the topic and computes the answer string

        By definition that answer string can contain all the probably-wrong things without the proper indicators (“might”, “under such and such circumstances”, etc.)

        If you want to say a 40%-wrong llm means 40%-wrong sources, prove me wrong
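
        A quick toy simulation of that argument (all rates invented to match the hypothetical above; the “llm” here just repeats the sources’ claims with the hedges stripped):

        ```python
        import random

        random.seed(42)

        N = 100_000       # hypothetical claims made by the sources
        P_CORRECT = 0.98  # the humans are right 98% of the time
        P_HEDGED = 0.9    # and they hedge most of the claims they get wrong

        source_confident_wrong = 0  # wrong AND asserted without a hedge
        llm_confident_wrong = 0     # wrong, hedge stripped in the output

        for _ in range(N):
            correct = random.random() < P_CORRECT
            if not correct:
                llm_confident_wrong += 1  # the answer string carries no indicator
                if random.random() >= P_HEDGED:
                    source_confident_wrong += 1  # the rare unhedged human miss

        print(source_confident_wrong / N)  # around 0.002
        print(llm_confident_wrong / N)     # around 0.02: ten times the confident nonsense
        ```

        Same underlying error rate, but dropping the uncertainty markers makes every wrong claim land at full confidence.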

        source
        • -> View More Comments
  • Chastity2323@midwest.social ⁨1⁩ ⁨year⁩ ago

    Image

    source
    • inb4_FoundTheVegan@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Jfc, I didn’t even know who Grummz was until yesterday but gawdamn that is some nuclear cringe.

      source
      • nodiratime@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Who is it? Seems like a grade A retard.

        source
        • -> View More Comments
      • alt_xa_23@lemmy.world ⁨1⁩ ⁨year⁩ ago

        That’s a pretty good summary of Grummz

        source
    • GnuLinuxDude@lemmy.ml ⁨1⁩ ⁨year⁩ ago

      The way Brace’s brain works is something else lol

      source
  • RememberTheApollo_@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things. I got an AI search response just yesterday that dramatically understated an issue by citing an unscientific, ideologically based website with an open reason to minimize said issue. The actual studies showed a 6x difference. It was blatant AF, and I can’t understand why anyone would rely on such a system for reliable, objective information or responses.

    source
    • WaitThisIsntReddit@lemmy.world ⁨1⁩ ⁨year⁩ ago

      That’s why I only use it as a starting point. It spits out “keywords” and a fuzzy gist of what I need, then I can verify or experiment on my own. It’s just a good place to start or a reminder of things you once knew.

      source
      • InEnduringGrowStrong@sh.itjust.works ⁨1⁩ ⁨year⁩ ago

        An LLM is like talking to a rubber duck on drugs while also being on drugs.

        source
    • cley_faye@lemmy.world ⁨1⁩ ⁨year⁩ ago

      I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things

      Ahem. Wasn’t there an election recently, in some big country, with an uncanny similarity to that?

      source
      • RememberTheApollo_@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Yeah. Got me there.

        source
    • hansolo@lemm.ee ⁨1⁩ ⁨year⁩ ago

      I like to use GPT to create practice tests for certification exams. Even if I give it very specific guidance to double-check what it thinks is a correct answer, it will gladly tell me I got questions wrong, and I will have to ask it to triple-check the right answer, which is what I actually answered.

      source
      • RememberTheApollo_@lemmy.world ⁨1⁩ ⁨year⁩ ago

        And in that amount of time it probably would have been just as easy to type up a correct question and answer rather than try to repeatedly corral an AI into checking itself for an answer you already know. Your method works for you because you have the knowledge. The problem lies with people who don’t and will accept and use incorrect output.

        source
  • az04@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I need to read Amusing Ourselves to Death…

    source
    • utopiah@lemmy.world ⁨1⁩ ⁨year⁩ ago

      My notes on it: fabien.benetou.fr/…/AmusingOurselvesToDeath

      But yes, stop scrolling, read it.

      source
  • Squizzy@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I know we generally hate AI, and I do for creativity or cutting jobs, but ChatGPT is really handy for searches like “family attractions near me”. Where I live these events are sporadic and not generally visible on the likes of Ticketmaster; even if they were, the website is terrible for browsing events.

    source
    • RaoulDook@lemmy.world ⁨1⁩ ⁨year⁩ ago

      That’s just a web search, we already have had that for decades and it didn’t require nuclear-powered datacenters

      source
      • Squizzy@lemmy.world ⁨1⁩ ⁨year⁩ ago

        Except it isn’t: it is aggregating the information into a single response and providing better results. I found events I could not find through search engines.

        Not everything bad is all bad.

        source
  • Flemmy@lemm.ee ⁨1⁩ ⁨year⁩ ago

    And sunshine hurts.

    source
    • AizawaC47@lemm.ee ⁨1⁩ ⁨year⁩ ago

      Said the vampire from Transylvania.

      source
  • ExLisper@lemmy.curiana.net ⁨1⁩ ⁨year⁩ ago

    Wake me up when you find something people will not abuse and get addicted to.

    source
    • potoo22@lemmy.world ⁨1⁩ ⁨year⁩ ago

      The modern era is dopamine machines

      source
    • TronBronson@lemmy.world ⁨1⁩ ⁨year⁩ ago

      Fren that is nature of humanity

      source
  • MTK@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I know a few people who are genuinely smart but got so deep into the AI fad that they are now using it almost exclusively.

    They seem to be performing well, which is kind of scary, but sometimes they feel like MLM people with how pushy they are about using AI.

    source
    • slaneesh_is_right@lemmy.org ⁨1⁩ ⁨year⁩ ago

      Most people don’t seem to understand how “dumb” AI is. And it’s scary when I read shit like this, that they use AI for advice.

      source
      • piecat@lemmy.world ⁨1⁩ ⁨year⁩ ago

        People also don’t realize how incredibly stupid humans can be. I don’t mean that in a judgemental or moral kind of way, I mean that the educational system has failed a lot of people.

        There’s some % of people that could use AI for every decision in their lives and the outcome would be the same or better.

        That’s even more terrifying IMO.

        source
        • -> View More Comments
  • b1tstremist0@lemmy.world ⁨1⁩ ⁨year⁩ ago

    I tried that Replika app before AI was trendy and immediately picked up on the fact that the AI companion thing is literal garbage.

    Maybe about time we listen to that internet wisdom about touching some grass.

    source
-> View More Comments