
Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills

1058 likes

Submitted 2 months ago by abobla@lemm.ee to technology@lemmy.world

https://gizmodo.com/microsoft-study-finds-relying-on-ai-kills-your-critical-thinking-skills-2000561788

source

Comments

  • DarkCloud@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Quickly, ask AI how to improve or practice critical thinking skills!

    source
    • ThePowerOfGeek@lemmy.world ⁨2⁩ ⁨months⁩ ago

      ChatGPT et al.: “To improve your critical thinking skills, you should rely completely on AI.”

      source
      • VitoRobles@lemmy.today ⁨2⁩ ⁨months⁩ ago

        That sounds right. Lemme ask Gemini and DeepSink just in case.

        source
        • -> View More Comments
    • Petter1@lemm.ee ⁨2⁩ ⁨months⁩ ago

      Improving your critical thinking skills is a process that involves learning new techniques, practicing them regularly, and reflecting on your thought processes. Here’s a comprehensive approach:

      1. Build a Foundation in Logic and Reasoning

      • Study basic logic: Familiarize yourself with formal and informal logic (e.g., learning about common fallacies, syllogisms, and deductive vs. inductive reasoning). This forms the groundwork for assessing arguments objectively.

      • Learn structured methods: Books and online courses on critical thinking (such as Lewis Vaughn’s texts) provide a systematic introduction to these concepts.

      2. Practice Socratic Questioning

      • Ask open-ended questions: Challenge assumptions by repeatedly asking “why” and “how” to uncover underlying beliefs and evidence.

      • Reflect on responses: This method helps you clarify your own reasoning and discover alternative viewpoints.

      3. Engage in Reflective Practice

      • Keep a journal: Write about decisions, problems, or debates you’ve had. Reflect on what went well, where you might have been biased, and what could be improved.

      • Use structured reflection models: Approaches like Gibbs’ reflective cycle guide you through describing an experience, analyzing it, and planning improvements.

      4. Use Structured Frameworks

      • Follow multi-step processes: For example, the Asana article “How to build your critical thinking skills in 7 steps” suggests: identify the problem, gather information, analyze data, consider alternatives, draw conclusions, communicate solutions, and then reflect on the process.

      • Experiment with frameworks like Six Thinking Hats: This method helps you view issues from different angles (facts, emotions, positives, negatives, creativity, and process control) by “wearing” a different metaphorical hat for each perspective.

      5. Read Widely and Critically

      • Expose yourself to diverse perspectives: Reading quality journalism (e.g., The Economist, FT) or academic articles forces you to analyze arguments, recognize biases, and evaluate evidence.

      • Practice lateral reading: Verify information by consulting multiple sources and questioning the credibility of each.

      6. Participate in Discussions and Debates

      • Engage with peers: Whether through formal debates, classroom discussions, or online forums, articulating your views and defending them against criticism deepens your reasoning.

      • Embrace feedback: Learn to view criticism as an opportunity to refine your thought process rather than a personal attack.

      7. Apply Critical Thinking to Real-World Problems

      • Experiment in everyday scenarios: Use critical thinking when making decisions—such as planning your day, solving work problems, or evaluating news stories.

      • Practice with “what-if” scenarios: This helps build your ability to foresee consequences and assess risks (as noted by Harvard Business’s discussion on avoiding the urgency trap).

      8. Develop a Habit of Continuous Learning

      • Set aside regular “mental workout” time: Like scheduled exercise, devote time to tackling complex questions without distractions.

      • Reflect on your biases and update your beliefs: Over time, becoming aware of and adjusting for your cognitive biases will improve your judgment.

      By integrating these strategies into your daily routine, you can gradually sharpen your critical thinking abilities. Remember, the key is consistency and the willingness to challenge your own assumptions continually.

      Happy thinking!

      source
  • Sibbo@sopuli.xyz ⁨2⁩ ⁨months⁩ ago

    Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever the AI says.

    source
    • UnderpantsWeevil@lemmy.world ⁨2⁩ ⁨months⁩ ago

      This isn’t a profound extrapolation. It’s akin to saying “Kids who cheat on the exam do worse in practical skills tests than those that read the material and did the homework.” Or “kids who watch TV lack the reading skills of kids who read books”.

      Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.

      This isn’t predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.

      It’s just not something particular to AI. If you use any kind of 3rd party analysis in lieu of personal interrogation, you’re going to suffer in your capacity for future inquiry.

      source
      • fushuan@lemm.ee ⁨2⁩ ⁨months⁩ ago

        All tools can be abused tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids, per the “copy the first answer and hope for the best” memes.

        After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.

        Good programmers do what I described, bad programmers copy and run without reading. It’s just like SO kids.

        source
    • ODuffer@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Seriously, ask AI about anything you are actually an expert in. It’s laughable sometimes… However, you need to know the subject to know when it’s wrong. Do not trust it implicitly about anything.

      source
  • Snapz@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Corporations and politicians: oh great news everyone… It worked. Time to kick off phase 2…

    • Replace all the water trump wasted in California with brawndo
    • Sell mortgages for eggs, but call them patriot pods
    • Welcome to Costco, I love you
    • All medicine replaced with raw milk enemas
    • Handjobs at Starbucks
    • Ow my balls, Tuesdays this fall on CBS
    • Chocolate rations have gone up from 10 to 6
    • All government vehicles are cybertrucks
    • trump nft cartoons on all USD, incest legal, Ivanka new first lady.
    • Public executions on pay per view, lowered into deep fried turkey fryer on white house lawn, your meat is then mixed in with the other mechanically separated protein on the Tyson foods processing line (run exclusively by 3rd graders) and packaged without distinction on label.
    • FDA doesn’t inspect food or drugs. Everything approved and officially change acronym to F(uck You) D(umb) A(ss)
    source
    • abobla@lemm.ee ⁨2⁩ ⁨months⁩ ago

      that “ow, my balls” reference caught me off-guard

      source
    • Eheran@lemmy.world ⁨2⁩ ⁨months⁩ ago

      I love how you mix in the Idiocracy quotes :D

      source
      • singletona@lemmy.world ⁨2⁩ ⁨months⁩ ago

        I hate how it just seems to slide in.

        source
      • Snapz@lemmy.world ⁨2⁩ ⁨months⁩ ago

        A savvy consumer, glad you mentioned it. Felt better than hitting it on the nose.

        source
    • LePoisson@lemmy.world ⁨2⁩ ⁨months⁩ ago
      • Handjobs at Starbucks

      Well that’s just solid policy right there, cum on.

      source
      • peoplebeproblems@midwest.social ⁨2⁩ ⁨months⁩ ago

        It would wake me up more than coffee, that’s for sure.

        source
    • AtariDump@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Image

      source
    • whostosay@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Bullet point 3 was my single issue vote

      source
  • peoplebeproblems@midwest.social ⁨2⁩ ⁨months⁩ ago

    You mean an AI that literally generates text by applying a mathematical function to input text doesn’t do reasoning for me? (/s)

    I’m pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.

    It’s funny because I never get what I want out of AI. I’ve been thinking this whole time “am I just too dumb to ask the AI to do what I need?” Now I’m beginning to think “am I not dumb enough to find AI tools useful?”

    source
  • ALoafOfBread@lemmy.ml ⁨2⁩ ⁨months⁩ ago

    You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn’t do otherwise (I’m not a [good] coder), it does not make me worse at critical thinking.

    I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.

    source
    • FinalRemix@lemmy.world ⁨2⁩ ⁨months⁩ ago

      I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.

      Legit, being able to say “I want these questions. But… not these…” and get them back at a moment’s notice really does let me say “FUCK it. Pop quiz. Let’s go, class.” and be ready with brand new questions on the board that I didn’t have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and is a great way to ram through writer’s block with a “yeah, and—!” machine.

      source
    • MojoMcJojo@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Like any tool, it’s only as good as the person wielding it.

      source
    • DarthKaren@lemmy.world ⁨2⁩ ⁨months⁩ ago

      I’ve spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.

      AI/LLMs are great for bouncing ideas off of and using them to tweak things. I gave it a prompt on what I was looking for (the guardian of dusk steps out and says: “The dawn brings the warmth of the sun, and awakens the world. So does your trial begin.” He is a druid and the party is a party of 5 level-1 players. Give me a stat block and XP amount for this situation).

      I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up 5 levels as the player does specific things to gain leveling points for just the item).

      I also ran a short campaign with it as the DM. It did a great job of acting out the different NPCs that it created and adjusting to both the tone and situation of the campaign. It adjusted pretty well to what I did as well.

      source
      • SabinStargem@lemmings.world ⁨2⁩ ⁨months⁩ ago

        Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70b of DeepSeek, and it definitely doesn’t understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 being used to determine character class, with the classes falling into certain parts of the distribution. I did it this way, since some classes are intended to be rarer than others.
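
        As an illustration of the mechanic being described, here is a minimal sketch of a 1d100 class table written as plain Python rather than prompted from a model; the class names and range boundaries are made up for illustration, since the comment doesn’t spell out the actual ruleset:

        ```python
        import random

        # Hypothetical class table: each class owns a slice of the 1d100 range,
        # so rarer classes get narrower slices. Names and ranges are made up;
        # the actual ruleset isn't given in the comment above.
        CLASS_TABLE = [
            (1, 40, "Fighter"),   # common: 40% of rolls
            (41, 70, "Rogue"),    # 30%
            (71, 90, "Cleric"),   # 20%
            (91, 100, "Wizard"),  # rare: 10%
        ]

        def roll_class() -> str:
            """Roll 1d100 and look the result up in the class table."""
            roll = random.randint(1, 100)  # inclusive on both ends
            for low, high, name in CLASS_TABLE:
                if low <= roll <= high:
                    return name
            raise ValueError(f"roll {roll} is not covered by the table")

        print(roll_class())
        ```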

        source
        • -> View More Comments
    • Bigfoot@lemmy.world ⁨2⁩ ⁨months⁩ ago

      I literally created an iOS app with zero experience and distributed it on the App Store. AI is an amazing tool and will continue to get better. Many people bash the technology but it seems like those people misunderstand it or think it’s all bad.

      But I agree that relying on it to think for you is not a good thing.

      source
  • Telorand@reddthat.com ⁨2⁩ ⁨months⁩ ago

    Good. Maybe the dumbest people will forget how to breathe, and global society can move forward.

    source
    • gerbler@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Oh you can guarantee they won’t forget how to vote 😃

      source
    • RobotToaster@mander.xyz ⁨2⁩ ⁨months⁩ ago

      Microsoft will just make a subscription AI for that, BaaS.

      source
      • dbkblk@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Which we will rebrand “Bullshit as a service”!

        source
        • -> View More Comments
  • Joeyfingis@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Let me ask chatgpt what I think about this

    source
  • ColeSloth@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

    I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills.

    source
    • FlyingSquid@lemmy.world ⁨2⁩ ⁨months⁩ ago

      AI makes it worse though. People will read a website they find on Google that someone wrote and say, “well that’s just what some guy thinks.” But when an AI says it, those same people think it’s authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it’s going to get far, far worse.

      Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.

      source
      • ColeSloth@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

        Joke’s on you. Volume is always off on my phone, so I read the AI.

        Also, I don’t actually ever use the AI.

        source
        • -> View More Comments
    • WrenFeathers@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Yup.

      source
    • interdimensionalmeme@lemmy.ml ⁨2⁩ ⁨months⁩ ago

      Everyone I’ve ever known to use a thesaurus has been eventually found out to be a mouth breathing moron.

      source
      • ColeSloth@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

        Umm…ok. Thanks for that relevant to the conversation bit of information.

        source
    • VitoRobles@lemmy.today ⁨2⁩ ⁨months⁩ ago

      I know a guy who ONLY quotes and references YouTube videos.

      Every topic, he answers with “Oh I saw this YouTube video…”

      source
      • Phoenicianpirate@lemm.ee ⁨2⁩ ⁨months⁩ ago

        To be fair, YouTube is a huge source of information now for a massive amount of people.

        source
      • Spaniard@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Should he say: “I saw this documentary” or “I read this article”?

        source
  • mindlesscrollyparrot@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

    Well thank goodness that Microsoft isn’t pushing AI on us as hard as it can, via every channel that it can.

    source
    • UnderpantsWeevil@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Learning how to evade and disable AI is becoming a critical thinking skill unto itself. Feels a bit like how I’ve had to learn to navigate around advertisements and other intrusive 3rd party interruptions while using online services.

      source
    • Zacryon@feddit.org ⁨2⁩ ⁨months⁩ ago

      Well, at least they communicate such findings openly and don’t try to hide them. Unlike ExxonMobil, which saw global warming coming in internal studies as far back as the 1970s and tried to hide or dispute it, because it was bad for business.

      source
  • ThomasCrappersGhost@feddit.uk ⁨2⁩ ⁨months⁩ ago

    No shit.

    source
  • Hiro8811@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Also your ability to search for information on the web. Most people I’ve seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability completely.

    source
    • bromosapiens@lemm.ee ⁨2⁩ ⁨months⁩ ago

      Gen Zs are TERRIBLE at searching things online, in my experience. I’m a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.

      source
    • shortrounddev@lemmy.world ⁨2⁩ ⁨months⁩ ago

      To be fair, the web has become flooded with AI slop. Search engines have never been more useless. I’ve started using Kagi and I’m trying to be more intentional about it, but after a bit of searching it’s often easier to just ask Claude.

      source
  • mervinp14@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Damn. Guess we oughtta stop using AI like we do drugs/pron/<addictive-substance> 😀

    source
    • FlyingSquid@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Unlike those others, Microsoft could do something about this considering they are literally part of the problem.

      And yet I doubt Copilot will be going anywhere.

      source
    • interdimensionalmeme@lemmy.ml ⁨2⁩ ⁨months⁩ ago

      Yes, it’s an addiction; we’ve got to stop all these poor souls being lulled into a false sense of understanding and just believing anything the AI tells them. It is constantly telling lies about us, their betters.

      Just look what happened when I asked it about the venerable and well-respected public intellectual Jordan B. Peterson. It went into a defamatory diatribe against his character.

      And they just gobble that up, those poor, uncritical and irresponsible farm hands and water carriers! We can’t have that!

      Example

      Open-Minded Closed-Mindedness: Jordan B. Peterson’s Humility Behind the Moat—A Cautionary Tale

      Jordan B. Peterson presents himself as a champion of free speech, intellectual rigor, and open inquiry. His rise as a public intellectual is, in part, due to his ability to engage in complex debates, challenge ideological extremes, and articulate a balance between chaos and order. However, beneath the surface of his engagement lies a pattern: an open-mindedness that appears flexible but ultimately functions as a defense mechanism—a “mote” guarding an impenetrable ideological fortress.

      Peterson’s approach is both an asset and a cautionary tale, revealing the risks of appearing open-minded while remaining fundamentally resistant to true intellectual evolution.

      The Illusion of Open-Mindedness: The Moat and the Fortress

      In medieval castles, a moat was a watery trench meant to create the illusion of vulnerability while serving as a strong defensive barrier. Peterson, like many public intellectuals, operates in a similar way: he engages with critiques, acknowledges nuances, and even concedes minor points—but rarely, if ever, allows his core positions to be meaningfully challenged.

      His approach can be broken down into two key areas:

      The Moat (The Appearance of Openness)
      
          Engages with high-profile critics and thinkers (e.g., Sam Harris, Slavoj Žižek).
      
          Acknowledges complexity and the difficulty of absolute truth.
      
          Concedes minor details, appearing intellectually humble.
      
          Uses Socratic questioning to entertain alternative viewpoints.
      
      The Fortress (The Core That Remains Unmoved)
      
          Selectively engages with opponents, often choosing weaker arguments rather than the strongest critiques.
      
          Frames ideological adversaries (e.g., postmodernists, Marxists) in ways that make them easier to dismiss.
      
          Uses complexity as a way to avoid definitive refutation (“It’s more complicated than that”).
      
          Rarely revises fundamental positions, even when new evidence is presented.
      

      While this structure makes Peterson highly effective in debate, it also highlights a deeper issue: is he truly open to changing his views, or is he simply performing open-mindedness while ensuring his core remains untouched?

      Examples of Strategic Open-Mindedness

      1. Debating Sam Harris on Truth and Religion

      In his discussions with Sam Harris, Peterson appeared to engage with the idea of multiple forms of truth—scientific truth versus pragmatic or narrative truth. He entertained Harris’s challenges, adjusted some definitions, and admitted certain complexities.

      However, despite the lengthy back-and-forth, Peterson never fundamentally reconsidered his position on the necessity of religious structures for meaning. Instead, the debate functioned more as a prolonged intellectual sparring match, where the core disagreements remained intact despite the appearance of deep engagement.

      2. The Slavoj Žižek Debate on Marxism

      Peterson’s debate with Žižek was highly anticipated, particularly because Peterson had spent years criticizing Marxism and postmodernism. However, during the debate, it became clear that Peterson’s understanding of Marxist theory was relatively superficial—his arguments largely focused on The Communist Manifesto rather than engaging with the broader Marxist intellectual tradition.

      Rather than adapting his critique in the face of Žižek’s counterpoints, Peterson largely held his ground, shifting the conversation toward general concerns about ideology rather than directly addressing Žižek’s challenges. This was a classic example of engaging from behind the moat—appearing open to debate while avoiding direct confrontation with deeper, more challenging ideas.

      3. Gender, Biology, and Selective Science

      Peterson frequently cites evolutionary psychology and biological determinism to argue for traditional gender roles and hierarchical structures. While many of his claims are rooted in scientific literature, critics have pointed out that he tends to selectively interpret data in ways that reinforce his worldview.

      For example, he often discusses personality differences between men and women in highly gender-equal societies, citing studies that suggest biological factors play a role. However, he is far more skeptical of sociological explanations for gender disparities, often dismissing them outright. This asymmetry suggests a closed-mindedness when confronted with explanations that challenge his core beliefs.

      The Cautionary Tale: When Intellectual Rigidity Masquerades as Openness

      Peterson’s method—his strategic balance of open- and closed-mindedness—is not unique to him. Many public intellectuals use similar techniques, whether consciously or unconsciously. However, his case is particularly instructive because it highlights the risks of appearing too open-minded while remaining fundamentally immovable.

      The Risks of “Humility Behind the Moat”

      Creates the Illusion of Growth Without Real Change
      
          By acknowledging complexity but refusing to revise core positions, one can maintain the illusion of intellectual evolution while actually reinforcing prior beliefs.
      
      Reinforces Ideological Silos
      
          Peterson’s audience largely consists of those who already align with his worldview. His debates often serve to reaffirm his base rather than genuinely engage with alternative perspectives.
      
      Undermines Genuine Inquiry
      
          If public intellectuals prioritize rhetorical victories over truth-seeking, the broader discourse suffers. Intellectual engagement becomes performative rather than transformative.
      
      Encourages Polarization
      
          By appearing open while remaining rigid, thinkers like Peterson contribute to an intellectual landscape where ideological battle lines are drawn more firmly, rather than softened by genuine engagement.
      

      Conclusion: The Responsibility of Public Intellectuals

      Jordan B. Peterson is an undeniably influential thinker, and his emphasis on responsibility, order, and meaning resonates with many. However, his method of open-minded closed-mindedness serves as a cautionary tale. It demonstrates the power of intellectual posturing—how one can appear receptive while maintaining deep ideological resistance.

      For true intellectual growth, one must be willing not only to entertain opposing views but to risk being changed by them. Without that willingness, even the most articulate and thoughtful engagement remains, at its core, a well-defended fortress.

      So like I said: pure evil AI slop is evil, addictive, and must be banned; lock up illegal GPU abusers, keep a GPU owners registry, and keep track of those who would use them to abuse the shining light of our society and try to snuff it out like a bad level of Luigi’s Mansion.

      source
      • ameancow@lemmy.world ⁨2⁩ ⁨months⁩ ago

        This was one of the posts of all time.

        source
        • -> View More Comments
      • bane_killgrind@slrpnk.net ⁨2⁩ ⁨months⁩ ago

        But Peterson is a fuckhead… So it’s accurate in this case. Afaik he does do the things it says.

        source
        • -> View More Comments
  • superglue@lemmy.dbzer0.com ⁨2⁩ ⁨months⁩ ago

    Of course. Relying on a lighter kills your ability to start a fire without one. It’s nothing new.

    source
  • zipzoopaboop@lemmynsfw.com ⁨2⁩ ⁨months⁩ ago

    Critical thinking skills are what hold me back from relying on ai

    source
  • pineapplelover@lemm.ee ⁨2⁩ ⁨months⁩ ago

    Idk man. I just used it the other day for recalling some regex syntax and it was a bit helpful. If you use it to generate a regex from a prompt, though, it won’t do that successfully. However, it can break down the regex and explain it to you.

    Ofc you all can say “just read the damn manual”; sure, I could do that too, but asking a generative AI to explain a script can be just as effective.
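
    To make “break down the regex and explain it” concrete, here is a minimal sketch using Python’s re.VERBOSE mode, which lets you annotate each piece of a pattern inline; the date-matching pattern is just a hypothetical example, not one from the comment above:

    ```python
    import re

    # Hypothetical pattern of the kind you might ask an LLM to explain:
    # it matches an ISO-style date such as "2025-02-14".
    pattern = re.compile(r"""
        ^                       # start of string
        \d{4}                   # four-digit year
        -                       # literal dash
        (0[1-9]|1[0-2])         # month 01-12
        -                       # literal dash
        (0[1-9]|[12]\d|3[01])   # day 01-31
        $                       # end of string
    """, re.VERBOSE)

    print(bool(pattern.match("2025-02-14")))  # True
    print(bool(pattern.match("2025-13-40")))  # False
    ```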

    source
    • vrighter@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

      Yes, exactly. You lose your critical thinking skills.

      source
    • Tangent5280@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Hey, just letting you know: getting the answers you want after getting a whole lot of answers you don’t want is pretty much how everyone learns.

      source
      • Nalivai@lemmy.world ⁨2⁩ ⁨months⁩ ago

        People generally don’t learn from an unreliable teacher.

        source
        • -> View More Comments
    • Minizarbi@jlai.lu ⁨2⁩ ⁨months⁩ ago

      regex101.com

      source
    • Xatolos@reddthat.com ⁨2⁩ ⁨months⁩ ago

      researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed.

      It’s one thing to try it yourself and then ask for help (as you did); it’s another to just ask it to “do x” without thought or effort, which is what the study is about.

      source
      • Petter1@lemm.ee ⁨2⁩ ⁨months⁩ ago

        So the study just checks how many people haven’t yet learned how to properly use GenAI.

        I think there exists a curve from not trusting, to overtrusting, then back to not blindly trusting outputs (because you suffered consequences from blindly trusting).

        And there will always be people blindly trusting bullshit; we’ve had that for longer than GenAI. We have enough populists proving that you can tell many people just about anything and they’ll believe it.

        source
    • foenkyfjutschah@programming.dev ⁨2⁩ ⁨months⁩ ago

      What’s regex got to do with critical thinking?

      source
  • lobut@lemmy.ca ⁨2⁩ ⁨months⁩ ago

    Remember the saying:

    Personal computers were “bicycles for the mind.”

    I guess with AI and social media it’s more like melting your mind or something. I can’t find another analogy. “Like a baseball bat to your leg, for the mind” doesn’t roll off the tongue.

    I know Primeagen has turned off Copilot because he said the “copilot pause” is daunting and affects how he codes.

    source
    • dragonfucker@lemmy.nz ⁨2⁩ ⁨months⁩ ago

      Cars for the mind.

      Cars are killing people.

      source
  • OsrsNeedsF2P@lemmy.ml ⁨2⁩ ⁨months⁩ ago

    Really? I just asked ChatGPT and this is what it had to say:

    /s

    source
  • dill@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.

    It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.

    source
  • gramie@lemmy.ca ⁨2⁩ ⁨months⁩ ago

    I was talking to someone who does software development, and he described his experiments with AI for coding.

    He said that he was able to use it successfully and come to a solution that was elegant and appropriate.

    However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.

    source
  • Phoenicianpirate@lemm.ee ⁨2⁩ ⁨months⁩ ago

    The one thing that I learned when talking to ChatGPT or any other AI on a technical subject is that you have to ask the AI to cite its sources, because AIs can absolutely bullshit without knowing it, and asking for the sources is critical to double-checking.

    source
  • Jeffool@lemmy.world ⁨2⁩ ⁨months⁩ ago

    When it was new to me I tried ChatGPT out of curiosity, like with any tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of inputs. “Give me a list of 3 X” led to fluff-filled paragraphs for each. The bastard child of a bad encyclopedia and the annoying kid in school.

    I realized I was understanding it wrong: it was supposed to be understood not as a useful tool, but as something close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind that people say they use it when writing.

    source
  • kitnaht@lemmy.world ⁨2⁩ ⁨months⁩ ago

    How many phone numbers do you know off of the top of your head?

    In the 90s, my mother could rattle off 20 or more.

    But they’re all in her phone now. Are luddites going to start abandoning phones because they’re losing the ability to remember phone numbers? No, of course not.

    Either way, these fancy prediction engines have better critical thinking skills than most of the flesh and bone people I meet every day to begin with. The world might actually be smarter on average if they didn’t open their mouths.

    source
  • thefartographer@lemm.ee ⁨2⁩ ⁨months⁩ ago

    Image

    source
  • SplashJackson@lemmy.ca ⁨2⁩ ⁨months⁩ ago

    Weren’t these assholes just gung-ho about forcing their shitty “AI” chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.

    source
  • arotrios@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.

    source
  • kratoz29@lemm.ee ⁨2⁩ ⁨months⁩ ago

    Is that it?

    One of the things I like most about AI is that it explains each command it outputs in detail. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

    Some people wouldn’t give a fuck about what it says and just copy and paste unknowingly? Sure, that happened in my teenage days too, when all the info was shared across many blogs and wikis…

    As usual, it is not the AI tool that could fuck our critical thinking, but ourselves.

    source
  • ArchRecord@lemm.ee ⁨2⁩ ⁨months⁩ ago

    The only beneficial use I’ve had for “AI” (LLMs) has just been rewriting text, whether that be to re-explain a topic based on a source or, for instance, to sort and shorten/condense a list.

    Everything other than that has been completely incorrect, unreadably long, context-lacking slop.

    source
  • Sam_Bass@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Oddly enough, that’s exactly what corporate wants: mindless drones to do their bidding, unquestioned.

    source
  • jdeath@lemm.ee ⁨2⁩ ⁨months⁩ ago

    I use my thinking skills to tell the LLM to quit fucking up and try again, or I’m gonna fire his ass.

    source
  • underwire212@lemm.ee ⁨2⁩ ⁨months⁩ ago

    It’s going to remove all individuality and turn us into a homogeneous, jelly-like society. We’ll all think exactly the same, since AI “smooths out” the edges of extreme thinking.

    source
-> View More Comments