
MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline

646 likes

Submitted 3 weeks ago by cypherpunks@lemmy.ml to technology@lemmy.world

https://publichealthpolicyjournal.com/mit-study-finds-artificial-intelligence-use-reprograms-the-brain-leading-to-cognitive-decline/

source

Comments

  • Wojwo@lemmy.ml ⁨3⁩ ⁨weeks⁩ ago

    Does this also explain what happens with middle and upper management? As people have moved up the ranks during the course of their careers, I swear they get dumber.

    source
    • ALoafOfBread@lemmy.ml ⁨3⁩ ⁨weeks⁩ ago

      That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.

      source
      • sheogorath@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Fuck, this is why I’ve been feeling dumber myself after getting promoted to more senior positions, where I only work at the architectural level and on stuff that the more junior staff can’t handle.

        With LLMs basically my job is still the same.

        source
      • rebelsimile@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        Since stepping back from being a direct practitioner, I will say all my direct reports are “faster” in the programs we use at work than I am, but I’m still waaaaaaaaaay more efficient than all of them (their inefficiencies drive me crazy, actually). I’ve also taken up a lot of development to keep my mind sharp. If I only had my team to manage and not my own personal projects, I could really see regressing a lot.

        source
    • vacuumflower@lemmy.sdf.org ⁨2⁩ ⁨weeks⁩ ago

      My dad around 1993 designed a cipher better than RC4 at the time (I know that’s not a high mark now, but it kinda was then), which passed an audit by a relevant service.

      My dad around 2003 was still intelligent enough that he’d explain interesting mathematical problems to me and my sister, and notice similarities to them and other interesting things in real life.

      My dad around 2005 was promoted to a management position and was already becoming kinda dumber.

      My dad around 2010 was a fucking idiot; you’d have thought he was mentally impaired.

      My dad around 2015 apparently went to a fortuneteller to “heal me from autism”.

      So yeah. I think it’s a bit similar to what happens to elderly people when they retire. Everything needs to be trained, and real tasks give you a feeling of life; giving orders and going to endless could-be-an-email meetings makes you both dumb and depressed.

      source
    • sqgl@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

      That’s the Peter Principle.

      source
    • socphoenix@midwest.social ⁨3⁩ ⁨weeks⁩ ago

      I’d expect something similar, at least. When one doesn’t keep up to date on new information and lets their brain coast, it atrophies like any other muscle would from disuse.

      source
    • TubularTittyFrog@lemmy.world ⁨2⁩ ⁨weeks⁩ ago
      [deleted]
      source
      • Wojwo@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

        Yeah, that’s part of it. But there’s something more fundamental: it’s not just rising up the ranks but also time spent in management. Someone can get promoted to middle management and be good at the job initially, but as the job becomes more about telling others what to do and filtering data up the corporate structure, a certain amount of brain rot sets in.

        I had just attributed it to age, but this could also be a factor. I’m not sure it’s enough to warrant studies, but it’s interesting to me that just the act of managing work done by others could contribute to mental decline.

        source
  • DownToClown@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    The obvious AI-generated image and the generic name of the journal made me think something was off about this website/article, and sure enough, the writer of this article is on X claiming that COVID-19 vaccines are not fit for humans and that there’s a clear link between vaccines and autism.

    Neat.

    source
    • tad_lispy@europe.pub ⁨2⁩ ⁨weeks⁩ ago

      Thanks for the warning. Here’s the link to the original study, so we don’t have to drive traffic to that guy’s website.

      arxiv.org/abs/2506.0887

      I haven’t had time to read it, and now I wonder whether it was represented accurately in the article.

      source
      • codemankey@programming.dev ⁨2⁩ ⁨weeks⁩ ago

        That’s a math article

        source
    • cypherpunks@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      Thanks for pointing this out. Looking closer I see that this is not a publication I want to send traffic to, for a variety of reasons.

      I edited the post to link to MIT instead, and added a note in the post body explaining.

      source
    • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Public health flat earthers

      source
  • QuadDamage@kbin.earth ⁨3⁩ ⁨weeks⁩ ago

    Microsoft reported the same findings earlier this year; spooky to see a more academic institution report the same results.
    https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
    Abstract for those too lazy to read

    The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

    source
    • sqgl@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

      Why is it referring to GenAI? It doesn’t exist.

      source
      • felsiq@piefed.zip ⁨3⁩ ⁨weeks⁩ ago

        GenAI is short for generative AI in this context

        source
      • mushroommunk@lemmy.today ⁨3⁩ ⁨weeks⁩ ago

        I haven’t read the paper but they might mean “Generative AI”

        source
  • canadaduane@lemmy.ca ⁨3⁩ ⁨weeks⁩ ago

    I wonder what social media does.

    source
    • radix@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Image

      source
  • Imgonnatrythis@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    No wonder Republicans like it so much

    source
  • unpossum@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    So if someone else writes your essays for you, you don’t learn anything?

    source
  • Korkki@lemmy.ml ⁨3⁩ ⁨weeks⁩ ago

    You write an essay with AI, your learning suffers.

    One of those papers that’s basically “water is wet, researchers discover”.

    source
  • Ganbat@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

    But does it still cause it when used exclusively for RP gooning sessions?

    source
    • svc@lemmy.frozeninferno.xyz ⁨3⁩ ⁨weeks⁩ ago

      Somebody fund this scholar’s research immediately

      source
      • FauxLiving@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I think we can get federal funding, let me run it by Director Big Balls

        source
    • masterofn001@lemmy.ca ⁨3⁩ ⁨weeks⁩ ago

      To date, after having gooned once (ongoing since September 2023), my core executive functions, my cognitive abilities and my behaviors have not suffered in the least. In fact, potato.

      source
  • Reygle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Image

    source
    • SCmSTR@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

      Isn’t that the same guy that plays Michael Bolton in Office Space?

      source
      • SCmSTR@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

        Image

        source
  • suddenlyme@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

    It’s so disturbing, especially the bit about your brain activity not returning to normal afterwards. And they’re teaching kids to use it in elementary schools.

    source
    • hisao@ani.social ⁨2⁩ ⁨weeks⁩ ago

      I think they meant it doesn’t return to non-AI-user levels when you do the same task on your own immediately afterwards. But if you keep doing the task on your own for some time, I’d expect it to return to those levels rather fast. If not then research would have been titled something like “AI causes permanent brain damage”.

      source
      • xthexder@l.sw0.com ⁨2⁩ ⁨weeks⁩ ago

        That’s probably true, but it sure can be hard to motivate yourself to do things yourself when that AI dice roll is right there to give you an immediate dopamine hit. I’m starting to see things like vibe coding becoming as addictive as gambling. Personally I don’t use AI because I see all the subtle ways it’s wrong when programming. The more I pay attention to things like AI search results, the more it seems there’s almost always something misrepresented or subtly incorrect in the output, and for any topic I’m not already fluent in, I likely won’t notice until it’s already causing issues.

        source
    • TubularTittyFrog@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      it’s not any different than eating fast/processed food vs eating healthy.

      it warps your expectations

      source
  • Hackworth@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    The MIT Study

    source
  • salty_chief@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I just asked ChatGPT if this is true. It told me no and to increase my usage of AI. So HA!

    source
  • FreedomAdvocate@lemmy.net.au ⁨2⁩ ⁨weeks⁩ ago

    What a ridiculous study. People who got AI to write their essay can’t remember quotes from their AI written essay? You don’t say?! Those same people also didn’t feel much pride over their essay that they didn’t write? Hold the phone!!! Groundbreaking!!!

    Academics are a joke these days.

    source
    • FauxLiving@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I see you skipped that part of academia where they taught that, in science, there are steps between hypothesis and conclusion even if you already think you know the answer.

      source
      • manefraim@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Or one could entirely skip the part where they read the study beyond the headline.

        source
  • lechekaflan@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    cognitive decline.

    Another reason for refusing those so-called tools… it could turn one into another tool.

    source
    • drspawndisaster@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      More like it would cause you to need the tool in order to be the tool that you are already mandated to be.

      source
    • surph_ninja@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      It’s a clickbait title. Using AI doesn’t actually cause cognitive decline. They’re saying using AI isn’t as engaging for your brain as the manual work, and then broadly linking that to the widely understood concept that you need to engage your brain to stay sharp. Not exactly groundbreaking.

      source
      • mika_mika@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Sir this is Lemmy & I’m afraid I have to downvote you for defending AI which is always bad. /s

        source
  • sudo_shinespark@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    Heyyy, now I get to enjoy some copium for being such a dinosaur and resisting using it as often as I can

    source
    • morto@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      You're not a dinosaur. Making people feel old and out of the loop is exactly one of the strategies big tech uses to shove their stuff onto people.

      source
      • TubularTittyFrog@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        bingo.

        it’s like a health supplement company telling you eating healthy is stupid when they have this powder/pill you should take.

        source
  • terminhell@lemmy.world ⁨3⁩ ⁨weeks⁩ ago
    [deleted]
    source
    • frongt@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

      This isn’t just being said, it’s being shown with data.

      source
      • sidelove@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Not only that but *broad gestures at society and the state of the world post-Internet*

        source
    • givesomefucks@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

      That’s the thing about cognitive decline…

      The people experiencing it only realize it’s happening during brief reprieves from the symptoms

      So if someone is experiencing cognitive decline, they’re literally incapable of recognizing it. They all think they’re completely fine…

      source
      • meco03211@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        A constant refrain I’ve found myself using with a Facebook “friend” is “you lack the ability to even understand why you are wrong”. Like I’m convinced he actually thinks anecdotal stories carry as much weight as troves of data proving him wrong.

        source
      • Angelusz@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        I bet you think you’re totally fine… ;)

        source
    • QuadDamage@kbin.earth ⁨3⁩ ⁨weeks⁩ ago

      The people the paper talks about are the masses who think LLMs are "intelligent", then outsource their frontal lobe to Silicon Valley datacenters because it's seemingly easier. People who see LLMs as tools are much less (if at all) affected by this, if anything it's a trap for people who already have lower critical thinking skills in the first place and want GPUs to think for them.

      source
    • the_q@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

      You don’t think it’s odd that you use AI and here you are defending it?

      source
      • Ganbat@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

        Realistically speaking, why would anyone think it’s odd to defend something they use and/or enjoy? That doesn’t really point to anything abnormal.

        source
      • Angelusz@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        It is not. As with all things, LLMs have their use. Unfortunately, they are slightly overhyped and the tech is very resource hungry, contributing to environmental and societal problems in at least the USA, probably everywhere to at least some extent.

        The hope is, of course, that the same tech will help alleviate those problems in turn. Time will tell who’s right.

        source
      • QuadDamage@kbin.earth ⁨3⁩ ⁨weeks⁩ ago

        ...you are in a technology community? They're barely defending anything either, just a reasonable take about people saying the same thing about earlier technologies.

        source
      • Feathercrown@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Clearly their mind has been taken over by the machine /s

        Baffling comment tbh

        source
  • surph_ninja@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    And using a calculator isn’t as engaging for your brain as manually working the problem. What’s your point?

    source
    • UnderpantsWeevil@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Seems like you’ve made the point succinctly.

      Don’t lean on a calculator if you want to develop your math skills. Don’t lean on an AI if you want to develop general cognition.

      source
      • 5C5C5C@programming.dev ⁨2⁩ ⁨weeks⁩ ago

        I don’t think this is a fair comparison because arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics. Any human that doesn’t have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.

        The really useful aspects of math are things like how to think quantitatively. How to formulate a problem mathematically. How to manipulate mathematical expressions in order to reach a solution. For the most part these are not things that calculators do for you. In some cases reaching for a calculator may actually be a distraction from making real progress on the problem. In other cases calculators can be a useful tool for learning and building your intuition - graphing calculators are especially useful for this.

        The difference with LLMs is that we are being led to believe that LLMs are sufficient to solve your problems for you, from start to finish. In the past students who develop a reflex to reach for a calculator when they don’t know how to solve a problem were thwarted by the fact that the calculator won’t actually solve it for them. Nowadays students develop that reflex and reach for an LLM instead, and now they can walk away with the belief that the LLM is really solving their problems, which creates both a dependency and a misunderstanding of what LLMs are really suited to do for them.

        I’d be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks. That might also help mitigate the fact that LLMs don’t reliably know the answers: if the user is presented with a leading question instead of an answer then they’re still left with the responsibility of investigating and validating.

        But that doesn’t leave users with a sense of immediate gratification which makes it less marketable and therefore less opportunity to profit…

        source
      • BananaIsABerry@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        Don’t lean on an AI if you want to develop ~~general cognition~~ essay writing skills.

        Sorry, the study only examined the ability to respond to SAT writing prompts, not general cognitive abilities. Further, they showed that the ones who used an AI just went back to “normal” levels of ability when they had to write it on their own.

        source
    • ayyy@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      It’s important to know these things as fact instead of vibes and hunches.

      source
      • surph_ninja@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Sure, and it’s important to know how to perform math functions without a calculator. But once you learn it, and move on to something more advanced or day-to-day work, you use the calculator.

        source
    • rumba@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      Yeah, I went over there expecting it to be grandiose and not peer-reviewed. Turns out it’s just a cherry-picked title.

      If you use an AI assistant to write a paper, you don’t learn any more from the process than you do from reading someone else’s paper. You don’t think about it deeply and come up with your own points and principles. It’s pretty straightforward.

      But just like calculators, once you understand the underlying math, unless math is your thing, you don’t generally go back and do it all by hand because it’s a waste of time.

      At some point, we’ll need to stop using long-form papers to gauge someone’s acumen in a particular subject. I suspect you’ll be given questions in real time and need to respond to them on video with your best guesses to prove you’re not just reading it from a prompt.

      source
    • Randomgal@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      You better not read audiobooks or learn from videos either. That’s pure brainrot. Too easy.

      source
      • surph_ninja@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Look at this lazy fucker learning trig from someone else, instead of creating it from scratch!

        source
  • Blackmist@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

    Anyone who doubts this should ask their parents how many phone numbers they used to remember.

    In a few years there’ll be people who’ve forgotten how to have a conversation.

    source
    • zqps@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      I don’t see how that’s any indicator of cognitive decline.

      Also people had notebooks for ages. The reason they remembered phone numbers wasn’t necessity, but that you had to manually dial them every time.

      source
      • NateNate60@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.

        —a story told by Socrates, according to his student Plato

        source
    • starman2112@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      The other day I saw someone ask ChatGPT how long it would take to perform 1.5 million instances of a given task, if each instance took one minute. Mfs cannot even divide 1.5 million minutes by 60 to get 25,000 hours, then by 24 to get 1,041 days. Pretty soon these people will be incapable of writing a full sentence without ChatGPT’s input
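
      The chain of divisions above is easy to sanity-check with a few lines of plain Python (no LLM required):

```python
# 1.5 million tasks at one minute each, run back to back
total_minutes = 1_500_000

hours = total_minutes / 60   # 25,000 hours
days = hours / 24            # ~1,041.67 days

print(hours, days)
```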

      source
      • pirat@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I want a free cookie emoji!

        I didn’t ask an LLM, no, I asked Wikipedia:

        The mean month-length in the Gregorian calendar is 30.436875 days.

        So,

        1041 ÷ 30.436875 ≈ 34 months and…

        0.2019343313 × 30.436875 ≈ 6 days and…

        0.146249999987 × 24 ≈ 3 hours and…

        0.509999999688 × 60 ≈ 30 minutes and…

        0.59999998128 × 60 ≈ 35 seconds and…

        0.9999988768 × 1000 ≈ 999 milliseconds and

        0.9999988768 × 1000000 ≈ 999999 nanoseconds

        34m 6d 3h 30m 35s 999ms 999999 ns

        Or we could just say 36s…
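
        The breakdown above can also be done mechanically with repeated divmod; a quick sketch using the same mean month length quoted from Wikipedia:

```python
# Split 1041 days into months/days/hours/minutes/seconds,
# using the mean Gregorian month length of 30.436875 days.
MEAN_MONTH = 30.436875

months, frac = divmod(1041 / MEAN_MONTH, 1)   # whole months + leftover fraction
days, frac = divmod(frac * MEAN_MONTH, 1)     # leftover fraction back to days
hours, frac = divmod(frac * 24, 1)
minutes, frac = divmod(frac * 60, 1)
seconds = frac * 60

print(int(months), int(days), int(hours), int(minutes), round(seconds))
# 34 months, 6 days, 3 hours, 30 minutes, ~36 seconds
```

The float error at the end (35s 999ms…) is exactly why it rounds to 36s.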

        source
      • lennivelkant@discuss.tchncs.de ⁨2⁩ ⁨weeks⁩ ago

        Rough estimate using 30 days as average month would be ~35 months (1050 = 35×30). The average month is a tad longer than 30 days, but I don’t know exactly how much. Without a calculator, I’d guess the total result is closer to 34.5. Just using my own brain, this is as far as I get.

        Now, adding a calculator to my toolset, the average month is 365.2425 d / 12 m = 30.4377 d/m. The total result comes out to about 34.2, so I overestimated a little.

        Also, the total time is 1041.66…, which would be more correctly rounded to 1042, but that has negligible impact on the result.

        source
      • olympicyes@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I swear these companies hard-code solutions for weird edge cases so their investors are fooled into believing that their LLMs are getting smarter.

        source
      • pirat@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        You forgot doing the years, which is a bit trickier if we take into account the leap years.

        According to the Gregorian calendar, every fourth year is a leap year unless it’s divisible by 100 – except those divisible by 400 which are leap years anyway. Hence, the average length of one year (over 400 years) must be:

        365 + 1⁄4 − 1⁄100 + 1⁄400 = 365.2425 days

        So,

        1041 / 365.2425 ≈ 2.85 years

        Or 2 years and…

        0.850161194275 × 365.2425 ≈ 310 days and…

        0.514999999987 × 24 ≈ 12 hours and…

        0.359999999688 × 60 ≈ 21 minutes and…

        0.59999998128 × 60 ≈ 36 seconds

        1041 days is just about 2y 310d 12h 21m 36s

        Wtf, how did we go from 1041 whole days to fractions of a day? Damn leap years!

        Had we not been accounting for them, we would have had 2 years and…

        0.852054794521 × 365 = 311.000000000165 days

        Or simply 2y 311d if we just ignore that tiny rounding error or use fewer decimals.
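
        The average-year arithmetic above checks out in a couple of lines of Python:

```python
# Mean Gregorian year: a leap day every 4 years, skipped every 100, kept every 400
MEAN_YEAR = 365 + 1/4 - 1/100 + 1/400   # 365.2425 days

years, leftover = divmod(1041, MEAN_YEAR)
print(int(years), leftover)   # 2 years and ~310.515 days
```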

        source
    • TubularTittyFrog@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I already have seen a massive decline personally and observationally (watching other people) in conversation skills.

      Most people now talk to each other like they’re exchanging internet comments. They don’t ask questions, they don’t really engage… they just exchange declarative sentences.

      Most of our new employees over the past year or two really struggle with any verbal communication, and if you approach them in person to talk about something they emailed about, they look massively uncomfortable and don’t really know how to think on their feet.

      source
    • Psythik@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      People don’t memorize phone numbers anymore? Why not? Dialing is so much quicker than searching your contacts for the right person.

      source
      • UntitledQuitting@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

        This is the furthest thing from my experience lol I can type 2 letters in my phone, see the right name and press call. I haven’t memorised a phone number since before the year 2000

        source
  • Tracaine@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    I don’t refute the findings but I would like to mention: without AI, I wasn’t going to be writing anything at all. I’d have let it go and dealt with the consequences. This way at least I’m doing something rather than nothing.

    I’m not advocating for academic dishonesty of course, I’m only saying it doesn’t look like they bothered to look at the issue from the angle of:

    “What if the subject was planning on doing nothing at all, and the AI enabled them to expend the bare minimum of effort they otherwise would have avoided?”

    source
  • MourningDove@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    relying on AI makes people stupid?

    Who knew?

    source
  • trashgarbage78@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

    what should we do then? just abandon LLM use entirely or use it in moderation? i find it useful to ask trivial questions and sort of as a replacement for wikipedia. also what should we do to the people who are developing this ‘rat poison’ and feeding it to young people’s brains?

    source
  • BussyGyatt@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.

    Better late than never. Good catch.

    source
  • Yoshi@futurology.today ⁨2⁩ ⁨weeks⁩ ago

    Thank you for providing a better Source and editing the post!

    source
  • veebee@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    I mean, that’s not surprising.

    source
  • theneverfox@pawb.social ⁨2⁩ ⁨weeks⁩ ago

    Ok, if the ai knows

    source
  • eletes@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    Been vibe coding hard for a new project this past week. It’s been working really well, but I feel like I just watched a bunch of TV. It’s passive enough that it’s like flipping through channels, paying a little attention and then moving on to the next.

    Whereas coding it myself would engage my brain, and it might feel like reading.

    It’s bizarre because I’ve never had this experience before.

    source
  • LeoshenkuoDaSimpli@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Are history teachers wasting their time?

    source
  • simplejack@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Don’t worry scro

    source