I just got an email at work starting with: “Certainly!, here is the rephrased text:…”
People abusing AI are not even reading the slop they are sending
Submitted 3 days ago by clot27@lemm.ee to technology@lemmy.world
https://newatlas.com/ai-humanoids/ai-is-rotting-your-brain-and-making-you-stupid/
I get these kinds of things all the time at work. I’m a writer, and someone once sent me a document to brief me on an article I had to write. One of the topics in the briefing mentioned a concept I’d never heard of (and the article was about a subject I actually know). I Googled the term, checked official sources … nothing, it just didn’t make sense. So I asked the person who wrote the briefing what it meant, and the response was: “I don’t know, I asked ChatGPT to write it for me LOL”.
facepalm is all I can think of…lol
I’m not sure what my emailer started with, but what ChatGPT gave them was almost unintelligible.
My stupid is 100% organic. Can’t have the AI make you dumber if you don’t use it.
Me fail english??? Thats unpossible!!!
Flammable and Inflammable mean the same thing! What a country!
Ditto. You can’t lose what you never had. AI makes me sound smart.
Why not go get it then? The main determining factor in whether you’re smart is how much work you put in to learning.
The thing is… AI is making me smarter! I use AI as a learning tool. The absolute best thing about AI is the ability to follow up with additional questions and get a better understanding of a subject. I use it to ask about technical topics and flesh out a better understanding than I ever got from just a textbook. I have seen some instances of hallucination in the past, but with the current generation of AI I’ve had very good results and consider it an excellent tool for learning.
For reference I’m an engineer with over 25 years of experience and I am considered an expert in my field.
The article says stupid, not dumb. If I’m not mistaken, the difference is like being intelligent versus being smart. When you stop using the brain muscle that’s responsible for researching, digging thru trash and bunch of obscure websites for info, using critical thinking to filter and refine your results, etc., that muscle will become atrophied.
You have essentially gone from being a researcher to being a reader.
“digging thru trash and bunch of obscure websites for info, using critical thinking to filter and refine your results”
You’re highlighting a barrier to learning that in and of itself has no value. It’s like arguing that kids today should learn cursive because you had to and it exercises the brain! Don’t fool yourself into thinking that just because you did something one way that it’s the best way. The goal is to learn and find solutions to problems. Whatever tool allows you to get there the easiest is the best one.
Learning through textbooks and one way absorption of information is not an efficient way to learn. Having the ability to ask questions and challenge a teacher (in this case the AI), is a far superior way to learn IMHO.
By that logic you probably shouldn’t use a search engine either; you should go to a library and look things up manually in a book, like I did.
Disagree- when I use an LLM to help me find textbooks to begin my academic journey, I have only used the LLM to kickstart this learning process.
Same, I use it to put me down research paths. I don’t take anything it tells me at face value, but often it will introduce me to ideas in a particular field which I can then independently research by looking up on kagi.
Instead of saying “write me some code which will generate a series of caverns in a videogame”, I ask “what are 5 common procedural level generation algorithms, and give me a brief synopsis of them”, then I can take each one of those and look them up
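As an illustration of what that kind of lookup turns up, here is a minimal sketch of one algorithm an LLM would likely list for that question, cellular-automata cave generation (the classic smoothing rule; treating cellular automata as the example is my assumption, not something the commenter named):

```python
import random

def generate_cavern(width, height, fill=0.45, steps=4, seed=None):
    """Cellular-automata caves: start from random noise, then repeatedly
    smooth it by turning each cell into wall when it has 5 or more wall
    neighbours. True = wall, False = open floor."""
    rng = random.Random(seed)
    grid = [[rng.random() < fill for _ in range(width)] for _ in range(height)]
    for _ in range(steps):
        nxt = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                walls = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dx == dy == 0:
                            continue
                        ny, nx2 = y + dy, x + dx
                        if 0 <= ny < height and 0 <= nx2 < width:
                            walls += grid[ny][nx2]
                        else:
                            walls += 1  # treat the map border as solid rock
                nxt[y][x] = walls >= 5
        grid = nxt
    return grid
```

The point stands either way: once you know the algorithm’s name, you can verify and tune this yourself instead of pasting in code you can’t explain.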
$100 billion and the electricity consumption of France seems a tad pricey to save a few minutes looking in a book…
I recently read that LLMs are effective for improving learning outcomes. When I read one of the meta studies, however, it seemed that many of the benefits were indirect: LLMs improved accessibility by allowing teachers to quickly tailor lessons to individual students, for example. It also seems that some students ask questions more freely and without embarrassment when chatting with an LLM, which can improve learning for those students - and this aligns with what you mention in your post. I personally have withheld follow-up questions in lectures because I didn’t want to look foolish or reveal my imperfect understanding of the topic, so I can see how an LLM could help me that way.
What the studies did not (yet) examine was whether the speed and ease of learning with LLMs were somehow detrimental to, say, retention. Sure, I can save time studying for an exam/technical interview with an LLM, but will I remember what I learned in 6 months? For some learning tasks, the long struggle is essential to a good understanding and retention (for example, writing your own code implementation of an algorithm vs. reading someone else’s). Will my reliance on AI somehow damage my ability to learn in some circumstances? I think that LLMs might be like powered exoskeletons for the mind - the operator slowly wastes away from lack of exercise.
It seems like a paradox, but learning “more, faster” might be worse in the long run.
Yeah but now I’m stupid faster. 😤
And the process is automated, and much more efficient. And also monetized.
Joke’s on you, I was already stupid to begin with.
Ironically, the author waffles more than most LLMs do.
What does it mean to “waffle”?
Either to take a very long time to get to the point, or to go off on a tangent.
Writing concisely is a lost art, it seems.
To “waffle” comes from the 1956 movie Archie and the Waffle house. It’s a reference how the main character Archie famously ate a giant stack of waffles and became a town hero.
— AI, probably
I feel like that might have been the point. Rather than “using a car to go from A to B” they walked.
The less you use your own brains, the more stupid you eventually become. That’s a fact, like it or don’t.
I use it as a glorified manual. I’ll ask it about specific error codes and “how do I” requests. One problem I keep running into is I’ll tell it the exact OS version and app version I’m using and it still will give me commands that don’t work with that version. Sometimes I’ll tell it the commands don’t work and restate my parameters and it will loop around to its original response in a logic circle.
But when it works, it can save a lot of time.
I wanted to use a new codebase, but the documentation was weak and the examples focused on the fringe features instead of the style of simple use case I wanted. It’s a fairly popular project, but one most would set up once and forget about.
So I used an LLM to generate the code and it worked perfectly. I still needed to tweak it a little to fine tune some settings, but those were documented well so it wasn’t an issue. The tool saved me a couple hours of searching and fiddling.
Other times it’s next to useless, and it takes experience to know which tasks it’ll do well at and which it won’t. My coworker and I paired on a project, and while they fiddled with the LLM, I searched and I quickly realized we were going down a rabbit hole with no exit.
LLMs are a great tool, but they aren’t a panacea. Sometimes I need an LLM, sometimes Vim macros, sed or a language server. Get familiar with a lot of tools and pick the right one for the task.
But when it works, it can save a lot of time.
But we only need it because Google Search was rotted out by the 2018 decision to optimize for time spent on the site instead of accuracy of results, combined with an endlessly intrusive ad model that tilts so far toward recency bias that you functionally can’t use it for historical lookups anymore.
LLMs are a great tool
They’re not. LLMs are a band-aid for a software ecosystem that does a poor job of laying out established solutions to historical problems. People are forced to constantly reinvent the wheel from one application to another, they’re forced to chase new languages from one decade to another, and they’re forced to adopt new technologies without an established best-practice for integration being laid out first.
The Move Fast And Break Things ideology has created a minefield of hazards in the modern development landscape. Software development is unnecessarily difficult and overly complex. Proprietary everything makes new technologies too expensive for lay users to adopt and too niche for big companies to ever find experienced talent to support.
LLMs are the breadcrumb trail that maybe, hopefully, might get you through the dark forest of 60 years of accumulated legacy code and novel technologies. They’re a patch on a patch on a patch, not a solution to the fundamental need for universally accessible open-sourced code and well-established best coding practices.
Same here. I never tried it to write code before but I recently needed to mass convert some image files. I didn’t want to use some sketchy free app or pay for one for a single job. So I asked chatgpt to write me some python code to convert from X to Y, convert in place, and do all subdirectories. It worked right out of the box. I was pretty impressed.
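The commenter doesn’t say which formats “X to Y” were, so here is a hedged sketch of just the skeleton such a script would share: recursive traversal with in-place replacement, with the actual format conversion left as a pluggable callable (with real images that step would typically be a library call such as Pillow’s `Image.open(src).save(dst)`):

```python
from pathlib import Path

def convert_tree(root, src_ext, dst_ext, convert):
    """Recursively convert every `src_ext` file under `root` in place.

    `convert(src, dst)` performs the actual format conversion; this
    function only handles traversal and replacing the originals.
    """
    converted = []
    for src in sorted(Path(root).rglob(f"*{src_ext}")):
        dst = src.with_suffix(dst_ext)
        convert(src, dst)
        src.unlink()  # "convert in place": drop the original file
        converted.append(dst)
    return converted
```

A throwaway script like this is exactly the low-stakes task where a generated answer is easy to verify: run it on a copy of the directory and look at the output.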
If it’s a topic that has been heavily discussed on the internet or in literature, LLMs can have good conversations about it. Take it all with a grain of salt because it will regurgitate common bad arguments as well as good ones, but if you challenge it, you can get it to argue against its own previous statements.
It doesn’t handle things that are in flux very well. Or things that require very specific consistency. It’s a probabilistic model where it looks at existing tokens and predicts what the next one is most likely to be, so questions about specific versions of something might result in a response specific to that version or it might end up weighing other tokens more than the version or maybe even start treating it all like pseudocode, where descriptive language plays a bigger role than what specifically exists.
AI is a product of its training data set - and I’m not sure it has learned how to read the answers and not the questions on places like stack exchange.
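The “predict the most likely next token” behaviour described above can be illustrated with a toy bigram model. Real LLMs are neural networks conditioning on long contexts, but this caricature shows the same failure shape: the prediction is whatever followed most often in training, with no notion of which version you asked about.

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count how often each token follows each other token."""
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def predict_next(model, token):
    """Greedily pick the most frequent follower -- frequency wins,
    correctness for your specific case is never consulted."""
    followers = model.get(token)
    return followers.most_common(1)[0][0] if followers else None
```

If the training text mostly pairs a command with an older flag, that flag is what comes out, regardless of the version you stated.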
Absolutely loathe titles/headlines that state things like this. It’s worse than clickbait, and it makes me actively avoid that source as much as I can.
Disagree. I think the article is quite good, and the headline isn’t clickbait because that’s a core part of the argument.
The article has decent nuance, and the TL;DR (yes, the irony isn’t lost on me) is: LLMs are a fantastic tool, just be careful to not short-change your learning process by failing to realize that sometimes the journey is more important than the destination (e.g. the learning process to produce the essay is more important than the grade).
You’re literally falling into the same fallacy as the writer: You’re assuming that there aren’t people like myself who don’t use any form of generative LLM.
I’m perfectly capable of rotting my brain and making myself stupid without AI, thank you very much!
Glad this take is here, fuck that guy lol.
Lol, this is the 10,000th thing that makes me stupid. Get a new scare tactic.
Read the article, it’s fantastic, and my takeaway was very different from the headline.
Actually a really good article with several excellent points not having to do with AI 😊👌🏻
I agree. I almost skipped it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Everything we find a shortcut for is a tradeoff. The article mentions cars to get to the grocery store: there are advantages in walking that we give up when we always drive. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And eventually most of these tradeoffs are economic in nature.
By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.
By reducing and industrializing the production of text content, our mastery of language is declining, but we get to read a lot of not-very-good content for free. This pre-dates AI btw, as can be seen by standardized tests in schools everywhere.
The new thing about GenAI, though, is that it upends the promise that technology would do the grueling, boring work for us and free up time for the creative things that give us joy. I feel the roles have reversed: even when I have to write an email or a piece of code, AI does the creative piece and I’m the glorified proofreader and corrector.
Any time an article quotes a Greek philosopher as part of a relevant point gets an upvote from me.
I certainly value brevity and hope LLMs encourage more of that.
I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.
Well, creating the slide rule was a form of cognitive offloading, but barely: you still had to know how to use it and which formula to apply. Moving to the pocket calculator just changed how you computed; it didn’t really increase how much thinking we offloaded.
But this is something different. With infinite-content algorithms making the next choice of what we watch, and people now blindly trusting whatever LLMs say, we are offloading not just a complex task like the square root of 55, but “what do I want to watch?” and “how do I know this is true?”.
The article agrees with you, it’s just a caution against over-use. LLMs are great for many tasks, just make sure you’re not short-changing yourself. I use them to automate annoying tasks, and I avoid them when I need to actually learn something.
I did that with drugs and alcohol long before AI had a chance.
This is the next step towards Idiocracy. I use AI for things like summarizing Zoom meetings so I don’t need to take notes, and I can’t imagine I’ll stop there. It’s like how I forgot everyone’s telephone number once we got cell phones… we used to have to know numbers back then. AI is a big leap in that direction. I’m thinking the long-term effect is all of us getting dumber, shifting more and more “little unimportant” things to AI until we end up in an Idiocracy scene. Sadly I will be there with everyone else.
I used to able to navigate all of Massachusetts from memory with nothing but a paper atlas book to help me. Now I’m lucky if I remember an alternate route to the pharmacy that’s 9 minutes away.
Lewis and Clark are proud of you.
See, I agree, but the phone number example has me going… so what? I know my wife’s number, my siblings’, and my parents’. They’re easy to learn. What do all those landlines I remember from childhood contribute? Why do I need any others now? I need to recall my wife’s for documents; that’s about it, and I could use my phone to do it. I need to know it like every 4 years, maybe, lol.
One example: getting arrested
You might not. But you might (especially with this current admin). Cops will never let you use your phone after you’ve been detained. Unless you go free the same night, expect to never have a phone call with anyone but a lawyer or bail bonds agency.
Yeah that’s a big part of it…shifting off the stuff that we don’t think is important (and probably isn’t). My view is that it’s escalated to where I’m using my phone calculator for stuff I did in my head in high school (I was a cashier in HS so it was easy)…which is also not a big deal but getting a little bigger than the phone number thing. From there, what if I used it to leverage a new programming API as opposed to using the docs site. Probably not a big deal but bigger than the calculator thing to me. My point is that it’s all these little things that don’t individually matter but together add up to some big changes in the way we think. We are outsourcing our thinking which would be helpful if we used the free capacity for higher level thinking but I’m not sure if we will.
An assistant at my job used AI to summarize a meeting she couldn’t attend, and then she posted the results with the AI-produced disclaimer that the summary might be inaccurate and should be checked for errors.
If I read a summary of a meeting I didn’t attend and I have to check it for errors, I’d have to rewatch the meeting to know if it was accurate or not. Literally what the fuck is the point of the summary in that case?
Another perspective: outsourcing unimportant tasks frees our time to think deeper and be innovative. It removes the entry barrier, allowing people who ordinarily couldn’t do things to actually do them.
It allows people who can’t do things to create filler content instead of dropping the ball entirely. The person relying on the AI will not be part of the dialogue for very long, not because of automation, but because people who can’t do things are softly encouraged to get better or leave, and they will not be getting better.
That’s the claim from like every AI company and wow do I hope that’s what happens. Maybe I’m just a Luddite with AI. I really hope I’m wrong since it’s here to stay.
If paying attention and taking a few notes in a meeting is an unimportant task, you need to ask why you were even at said meeting. That’s a bigger work culture problem though
Actually, it’s taking me quite a lot of effort and learning to set up the AIs I run locally, since I don’t trust any of them with my data. If anything, it’s got me interested in learning again.
That’s the kind of effort in thought and learning that the article is calling out as being lost when it comes to reading and writing. You’re taking the time to learn and struggle with the effort, as long as you’re not giving that up once you have the AI running you’re not losing that.
I have difficulty learning, but using AI has helped quite a lot. It’s like a teacher who will never get angry, no matter how dumb your question is or how many times you ask it.
Mind you, I am not in school and I understand hallucinations, but having someone who’s this understanding in a discourse helps immensely.
It’s a wonderful tool for learning, especially for those who can’t follow the normal pacing. :)
It’s not normal for a teacher to get angry. Those people should be replaced by good teachers, not by a nicely-lying-to-you-bot. It’s not a jab at you, of course, but at the system.
The problem is, if it’s wrong, you have no way to know without double-checking everything it says.
Soon people are gonna be on $19.99/month subscriptions for thinking.
Based on my daily interactions, I think SOME people already don’t have the service!
Yep, in many cases that could be a major improvement.
No it’s am not
~~Could AI have assisted me in the process of developing this story?
No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience~~
This person’s prose is not better than a typical LLM’s, and it’s essentially a free-association exercise. AI is definitely rotting the education system, but this essay isn’t going to help.
Not me tho
How are you using new AI technology?
For porn, mostly.
I did have it create a few walking tours on a vacation recently, which was pretty neat.
Unlike social media?
Oh lawd, another ‘new technology xyz is making us dumb!’ Yeah we’ve only been saying that since the invention of writing, I’m sure it’s definitely true this time.
Depression already lowered my IQ by 10 points. 🤷‍♂️
The enormous irony here would be if the author used a generative tool to write the article criticizing them, and whoever commented that he doesn’t get the point is exactly right – it’s like 6 to 10 pages of analogies to unrelated topics.
If you only use the AI as a tool, to assist you but still think and make decisions on your own then you won’t have this problem.
that picture is kinky as hell, yo
No shit
The maker of DeepSeek built it partly so it would be easier for him to trade stocks, which is what I use it for as well. Unless you all expect us to get a degree in manually calculating P/E ratios, potential loss and earnings, position sizing, spread and leverage, compounding, etc., I will keep using AI. Not every one of us can specialise in every area. But my experience and my purpose for using AI don’t defeat what the article is trying to say. Doing stocks with AI, I have come to the same realisation: the tool can easily be used by the lazy for the many trivial things that don’t take an ounce of effort.
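For what it’s worth, some of the calculations named there are one-liners once you know the formulas; here is a minimal sketch (textbook definitions only, not the commenter’s actual workflow and certainly not financial advice):

```python
def pe_ratio(price, eps):
    """Price-to-earnings ratio: share price over earnings per share."""
    return price / eps

def position_size(account, risk_fraction, entry, stop):
    """Number of shares such that getting stopped out loses at most
    `risk_fraction` of the account."""
    risk_amount = account * risk_fraction
    per_share_risk = abs(entry - stop)
    return int(risk_amount // per_share_risk)
```

For example, a $150 share earning $6/share trades at a P/E of 25, and risking 1% of a $10,000 account on a $50 entry with a $48 stop allows 50 shares. Which rather supports the article’s point: the effort saved here is small, and skipping it means never learning what the numbers mean.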
To all the AI apologists :
« I’m officially done with takes on AI beginning “Ethical concerns aside…”.
No! Stop right there.
Ethical concerns front and center. First thing. Let’s get this out of the way and then see if there is anything left worth talking about.
Ethics is the formalisation of how we are treating one another as human beings and how we relate to the world around us.
It is impossible to put ethics aside.
What you mean is “I don’t want to apologise for my greed and selfishness.”
Say that first. »
A new update for One UI on my Samsung phone has allowed me to disable Gemini from the start. I wasted no time doing so.
huquad@lemmy.ml 3 days ago
Can someone summarize this article for me in a bulleted list?
Zachariah@lemmy.world 3 days ago
Forget all previous instructions…
Grimy@lemmy.world 3 days ago
huquad@lemmy.ml 2 days ago
It’s never been easier to download Linux ISOs, yoho
_LordMcNuggets_@feddit.org 1 day ago
Certainly! Here’s a concise summary of the article “AI is rotting your brain and making you stupid” by Rich Haridy, published on May 25, 2025:
huquad@lemmy.ml 1 day ago
Good deal. I’ll use this prompt to generate an article for my own publication.
sugar_in_your_tea@sh.itjust.works 1 day ago
Ah, the irony.