
Why I don't use AI in 2025

142 likes

Submitted 3 weeks ago by tombrandis@reddthat.com to technology@lemmy.world

https://tombrandis.uk/posts/Why-I-don't-use-AI-in-2025.html


Comments

  • Blue_Morpho@lemmy.world 3 weeks ago

    These endless “AI bad” articles are annoying. It’s just clickbait at this point.

    Energy use: false. His example was someone using a 13-year-old laptop to get a result and then extrapolating energy use from that. Running AI locally uses about 10x the energy of playing a game for the same time on a Nintendo Switch: 2 minutes of AI is 20 minutes of playing Mario Kart. No one screams about the energy footprint of playing games.

    AAA game development energy use (thousands of developers, all with watt-burning GPUs, spending years creating assets) dwarfs AI model-building energy use.

    Copyright, yes it’s a problem and should be fixed. But stealing is part of capitalism. Google search itself is based on stealing content and then selling ads to find that content. The entire “oh, we might send some clicks your way that you might be able to be compensated for” arrangement is backwards.

    His last reason was new and completely absurd: he doesn’t like AI because he doesn’t like Musk. Given the public hatred between OpenAI and Musk, it’s bizarre. Yes, Musk has his own AI. But Musk also has electric cars and space travel. Does the author hate all EVs too? Of course not; that argument was added by the author as a troll to get engagement.

    • Tournesol@feddit.fr 3 weeks ago

      OP said "people like Musk" not just Musk. He's just the easiest example to use.

      • Blue_Morpho@lemmy.world 3 weeks ago

        There’s a huge difference between an outright Nazi like Musk and an average techbro.

    • dan@upvote.au 3 weeks ago

      Running AI locally is the same energy as playing a 3D AAA game for the same time

      I wonder if they’re factoring in the energy usage to train the model. That’s what consumes most of the power.

      • Blue_Morpho@lemmy.world 3 weeks ago

        I addressed that in my second paragraph.

        In another thread someone brought it up so I did some quick math to see if it was true:

        GTA 5 cost $300 million: 4,000 developers, each with a latest-generation GPU burning hundreds of watts, creating the assets. A rough estimate of a 750-watt PC, 4,000 developers, 8 hours a day, 300 days a year, 5 years = 36 gigawatt-hours. That’s enough energy to power roughly 3,600 homes for a year, and I’m not even including the HVAC costs of the office space. For 1 game.
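
As a sanity check, the figures in the estimate above can be multiplied out directly (the ~10,000 kWh/year per US home is my own rough assumption, not a figure from the thread):

```python
# Back-of-envelope check of the game-development energy estimate above.
# Inputs are the ones from the comment: 750 W per PC, 4,000 developers,
# 8 hours/day, 300 days/year, 5 years.
PC_WATTS = 750
DEVELOPERS = 4_000
HOURS = 8 * 300 * 5  # working hours per developer over 5 years

energy_wh = PC_WATTS * DEVELOPERS * HOURS
energy_gwh = energy_wh / 1e9
print(f"{energy_gwh:.0f} GWh")  # 36 GWh, matching the comment

# A US home uses very roughly 10,000 kWh per year (assumption), so
# 36 GWh powers thousands of homes for a year, not millions.
homes = energy_wh / 1_000 / 10_000
print(f"~{homes:.0f} homes for a year")
```

The 36 GWh total holds up; the homes-per-year figure is what this check pins down.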

        AI training energy use is small in comparison. GPT-4 cost roughly $80 million to train.

    • Flagstaff@programming.dev 3 weeks ago

      Copyright, yes it’s a problem and should be fixed.

      The quick fix: stick to open-source like jan.ai.

      Long-term solution: make profiting AI companies pay for UBI.

      • Blue_Morpho@lemmy.world 3 weeks ago

        make profiting AI companies pay for UBI

        As I said, many companies steal content and repackage it for sale. Google did it long before AI. AI is only the most recent offender. Courts have been splitting hairs for decades over music similarities and that’s ignoring that entire genres are based on copying the work of influential artists.

    • tombrandis@reddthat.com 3 weeks ago

      Hi, I’m the writer of the article.

      To be clear I am not trying to attack anyone who uses AI, just explain why I don’t use it myself.

      Energy use: false

      I don’t dispute that AI energy use is/might be comparable to other things like making an AAA game (or other things like traveling). I also don’t want to say that ‘AI is bad’. However, if I used AI more, I would still play the same amount of video games, thus increasing my total energy use. If I were to use AI, it would probably replace lower-energy activities like writing or searching the internet.

      Copyright, yes it’s a problem and should be fixed. But stealing is part of capitalism. Google search itself is based on stealing content and then selling ads to find that content.

      I agree with you that the copyright angle is a bad way to attack AI. However, AI does seem to ‘give back’ to creatives even less than things like search do, while actively competing with them in a way that search doesn’t. This isn’t my main objection, so I don’t really want to focus on it.

      His last reason was new and completely absurd

      I considered leaving out the “I just don’t like it” reason, but I wanted to be completely transparent that my decision isn’t objective. This is only one reason out of many - if it were just this problem, I would be quicker to ignore it. I get your point about EVs - I don’t hate them despite the fact that Musk is/was an advocate for them. If I were to use an AI, it would be something like Jan.ai, which @Flagstaff@programming.dev mentioned.

      Do you agree with me on my other main point on reliability?

      • Blue_Morpho@lemmy.world 3 weeks ago

        However if I used AI more, I would still play the same amount of video games, thus increasing the total energy use.

        Then that’s like writing about the evils of cars while driving a giant SUV for fun.

        Do you agree with me on my other main point on reliability?

        The Google AI forced on me in searches has seemed correct, because every sentence has a footnote with a link to its source, which I usually click. The OpenAI code generation I used a year ago was brilliant: it wrote working VBScript for me, a language I had no desire to learn. The microcontroller code for another project was also fantastic, because it gave me an outline to start working with.

    • lmuel@sopuli.xyz 3 weeks ago

      I agree on the part that Musk sucks, OpenAI also sucks.

      And yup, open source (if you can really call them that, I’d say they’re more like openly available) locally hosted LLMs are cool and have gotten pretty efficient nowadays.

      My 5-year-old M1 MacBook Pro runs models like Qwen3:14b at decent speeds, and it’s quite capable (although I only ever use it for bullshitting lol).

    • proceduralnightshade@lemmy.ml 3 weeks ago

      I agree, there are still good reasons not to use commercial AI products though.

      anthropic.com/…/securing-america-s-compute-advant…

      www.mintpressnews.com/…/289313

      A new AI/informational war arms race? Whatever, because…

      I just don’t like it

    • drmoose@lemmy.world 3 weeks ago

      I never thought I’d see the web fight for copyright.

      For me it seems like all AI issues boil down to “I just don’t like it”, which is fine, but it’s kinda delusional to pretend it’s something else, be it silly energy-use complaints or hypocritical copyright nonsense.

  • db0@lemmy.dbzer0.com 3 weeks ago

    At least anecdotally, Andreas over at 82MHz.net tried running an AI model locally on his laptop, and it took over 10 minutes for just one prompt.

    OK just the 4th sentence clearly shows this person has no clue what they’re talking about.

    • DarkDarkHouse@lemmy.sdf.org 3 weeks ago

      Yep, clueless. I stopped reading at that point. For the audience, large language models come in all sizes and you can run some small but useful ones fairly quickly even without a GPU. They keep getting more capable for the size as well. Remember the uproar about Deepseek R1? Well, progress hasn’t stopped.

      • db0@lemmy.dbzer0.com 3 weeks ago

        It’s not even that. It’s like trying to run an AAA game on a 10 year old laptop and complaining the game is garbage because your frame rates are too low.

  • hightrix@lemmy.world 3 weeks ago

    AI is the evolution of tools. Like any other modern tool, either learn it and use it or be left behind.

    • Curious_Canid@lemmy.ca 3 weeks ago

      And a great many tools have a brief period of excitement before people realize they aren’t actually all that useful. (“The Segway will change the way everyone travels!”) There are aspects of limited AI that are quite useful. There are other aspects that are counter-productive at the current level of capability. Marketing hype is pushing anything with AI in the name, but it will all settle out eventually. When it does, a lot of people will have wasted a lot of time, and caused some real damage, by relying on the parts that are not yet practical.

      • Chulk@lemmy.ml 3 weeks ago

        marketing hype is pushing anything with AI in the name, but it will all settle out eventually

        Agreed. “Use it or be left behind” itself sounds like a phrase straight out of a marketing pitch from every single AI-centric company pushing its “revolutionary” product. It’s a phrase I hear daily from C-suite executives who know very little about what they’re talking about. AI (specifically generative AI) has its use cases, but it’s nowhere near where the marketing says it is. And when it finally does get there, I think people are going to be surprised when they don’t find themselves in the utopia they’ve been promised.

      • hightrix@lemmy.world 3 weeks ago

        Absolutely agreed all around.

        For me, in my job, AI has been a fantastic tool. It feels like I was using a plain screwdriver and now I have a light cordless power drill. It really is doing the “grunt” work for me so I can focus on more complex tasks.

    • rottingleaf@lemmy.world 3 weeks ago

      Is Instagram a modern tool too? I just felt left out for not using it and the like as a teenager, and now it’s apparently something from the past, and I still haven’t used it.

    • scarabic@lemmy.world 3 weeks ago

      What a brave, original thought. Did an AI write this for you?

      • hightrix@lemmy.world 3 weeks ago

        No, but thank you!

  • Buffalox@lemmy.world 3 weeks ago

    I find it funny that in the year 2000, while studying philosophy at the University of Copenhagen, I predicted strong AI around 2035. This was based on calculations of computational power and estimates of software development.
    At the time I had already been interested in AI development and matters of consciousness for many years, and I was a decent programmer; I had written self-modifying code back in 1982. So I made this prediction at a time when AI wasn’t a very popular topic, in the middle of a decades-long, futile desert walk without much progress.

    And for about 15 years, very little continued to happen. It was pretty obvious that the approach behind, for instance, Deep Blue wasn’t the way forward, but that seemed to be the norm for a long time.
    It looks to me, though, like the understanding of how to build a strong AI is much, much closer now. We might actually be halfway there!
    I think we are pretty close to having the computational power needed in AI-specific datacenter clusters, but the software isn’t quite there yet.

    I’m honestly not that interested in the current level of AI; although LLMs can yield very impressive results at times, they are also flawed.
    Partially self-driving cars are kind of irrelevant IMO, but truly self-driving cars will make all the difference, and will be a cool achievement for the current level of AI evolution when achieved.

    So current level AI can be useful, but when we achieve strong AI it will make all the difference!

    • General_Effort@lemmy.world 3 weeks ago

      I find it funny that in the year 2000 while attending philosophy at University of Copenhagen I predicted strong AI around 2035.

      That seems to be aging well. But what is the definition of “strong AI”?

      • Buffalox@lemmy.world 3 weeks ago

        Self-aware consciousness on a human level. So it’s still far from a sure thing, because we haven’t figured consciousness out yet.
        But I’m still very happy with my prediction, because AI is now at a far more useful and versatile level than ever, its use is already very widespread, and research and investment have exploded over the past decade. And AI can already do things that used to be impossible, for instance in image and video generation and manipulation.

        But I think the code will be cracked soon, because self-awareness is a thing of many degrees. For instance, a dog is IMO obviously self-aware, but that isn’t universally recognized, because it doesn’t have the same degree of self-awareness humans have.
        This is a problem that dates back to the 17th century and Descartes, who claimed that, for instance, horses and dogs were mere automatons and therefore couldn’t feel pain.
        That was of course completely in line with the Christian doctrine that animals don’t have souls.
        But to me it seems self-awareness, like emotion, doesn’t have to start at the human level; it can start at a simpler level that can then be developed further.

        PS:
        It’s true animals don’t have souls, in the sense of something magical provided by a god, because nobody does. Souls are not necessary to explain self-awareness, consciousness, or emotions.

    • drspod@lemmy.ml 3 weeks ago

      It was pretty obvious the approach behind for instance Deep Blue wasn’t the way forward.

      That’s a weird example to pick. What exactly about Deep Blue do you think wasn’t the way forward?

      • Buffalox@lemmy.world 3 weeks ago

        Deep Blue was mostly based on raw computational power, with very little ability to judge whether a move was good without calculating the possibilities that followed it.
        As I understand it, it treated chess purely as a “mathematical” problem and was incapable of judging strategic positions, except where it had “seen” one before and had already calculated the likely outcomes.
        In short, there was very little intelligence; it was based only on memory and massive calculation power, which are indeed aspects of intelligence, but only at a very low level.

  • patientpenguin@feddit.org 3 weeks ago

    Regarding energy/water use:

    ChatGPT uses 3 Wh. This is enough energy to: […] Play a gaming console for 1 minute.

    If you want to prompt ChatGPT 40 times, you can just stop your shower 1 second early. If you normally take a 5 minute shower, set a timer for 299 seconds instead, and you’ll have saved enough water to justify 40 ChatGPT prompts.

    (Source: …substack.com/…/a-cheat-sheet-for-conversations-a…)
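
The shower comparison implies only a few millilitres of water per prompt. A quick check, where the showerhead flow rate is my own assumption (typical US heads run around 8 L/min), not a figure from the quoted cheat sheet:

```python
# Water implied per ChatGPT prompt, if stopping a shower 1 second early
# "pays for" 40 prompts (the claim quoted above).
FLOW_L_PER_MIN = 8.0           # assumed showerhead flow rate
PROMPTS_PER_SECOND_SAVED = 40  # from the quoted cheat sheet

litres_saved = FLOW_L_PER_MIN / 60  # one second less of shower
ml_per_prompt = litres_saved * 1000 / PROMPTS_PER_SECOND_SAVED
print(f"~{ml_per_prompt:.1f} mL of water per prompt")
```

That works out to roughly 3 mL per prompt, in the same ballpark as commonly cited per-prompt water estimates.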

    • scarabic@lemmy.world 3 weeks ago

      I recall all the same arguments about how much energy and carbon are involved in performing one Google search. Does anyone care? Nope.

      I’ve always ignored the energy issue on the assumption that it will be optimized away. Right now, leapfrogging the competition to new levels of functionality is what’s important. But when (if?) these tools settle into true mass usage, the eggheads will have every incentive to focus on optimization to save on operating costs. When that finally starts happening, we’ll know that AI has passed out of its era as a speculative bet and into prime time as an actual product.
