
AI was a common theme at Gamescom 2025, and while some indie teams say it's invaluable, it remains an ethical nightmare

188 likes

Submitted 3 weeks ago by inclementimmigrant@lemmy.world to games@lemmy.world

https://www.eurogamer.net/ai-was-everywhere-at-gamescom-2025


Comments

  • De_Narm@lemmy.world 3 weeks ago

    Honestly, it would be weird for any industry to start caring about ethics after all this time.

    Not an endorsement of AI but a criticism of capitalism.

  • Ulrich@feddit.org 2 weeks ago

    Probably invaluable if you’re intent on pumping out slop.

    Video games are an art. If you outsource your art to shitty robots… what service is it that you’re providing? What are you doing that I can’t do my fucking self?

    • lime@feddit.nu 2 weeks ago

      all parts of videogames are art. sound, visuals, level design, code. you could make the argument that someone who enjoys some of those things but not all of them could more easily get a thing out the door if they could automate one part of it.

      • ArchmageAzor@lemmy.world 2 weeks ago

        Why should a single developer of a game not be allowed to offload making textures for a gravel road or some other brain-numbing task onto AI, and use the time saved to make the main features of the game better?

    • ampersandrew@lemmy.world 2 weeks ago

      Making the rest of the video game.

    • ArchmageAzor@lemmy.world 2 weeks ago

      Way I see it, AI should be allowed for grunt work that stays in the background: stuff nobody would notice, but that would still take up time, so the dev can focus on making the stuff in the foreground better. Indie dev teams can be small, sometimes just one person, and the quality stands to increase if they can offload dumb, time-consuming tasks elsewhere.
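      As a concrete sketch of the kind of background grunt work being described, here is roughly what generating a tileable gravel texture with a locally run image model could look like, assuming the Hugging Face diffusers library; the checkpoint name and prompt are illustrative placeholders, not anything from this thread.

      ```python
      # Minimal sketch: generate a background texture locally with diffusers.
      # The checkpoint and prompt are placeholders; any locally available
      # text-to-image model would do.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",  # assumed locally cached checkpoint
          torch_dtype=torch.float16,
      )
      pipe = pipe.to("cuda")  # or "cpu" on modest hardware (much slower)

      prompt = "seamless tileable gravel road texture, top-down, photorealistic"
      image = pipe(prompt, num_inference_steps=25).images[0]
      image.save("gravel_road_texture.png")
      ```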

  • itkovian@lemmy.world 2 weeks ago

    Eurogamer is shit. You can serve ads without tracking, but they don’t care.

    [image attachment]

    • echodot@feddit.uk 2 weeks ago

      Yeah, I hate this trend where you have to subscribe in order not to be tracked. I just agree to the cookies and then block them at the OS level. I get to have my cake and eat it too.

  • salty_chief@lemmy.world 3 weeks ago

    AI is the future. Sure you can hate on it all you like. Can’t stop progress.

    • SkyezOpen@lemmy.world 2 weeks ago

      Heh. Out of curiosity, how many NFTs did you buy?

      • salty_chief@lemmy.world 2 weeks ago

        Zero. I took a deep dive into NFTs and determined they were problematic.

    • itkovian@lemmy.world 2 weeks ago

      All I ask is: in what way are LLMs progress? The ability to generate a lot of slop is pretty much the only thing LLMs are good for, and even that is not really cheap, especially once you factor in the environmental costs.

      • NuXCOM_90Percent@lemmy.zip 2 weeks ago

        LLMs are actually spectacular for indexing large amounts of text data and pulling out the answer to a query. Combine that with natural language processing and it is literally what we all thought Ask Jeeves was back in the day. If you ever spent time sifting through stack overflow pages or parsing discussion threads, that is what it is good at. And many models actually provide ways to get a readout of the “thought process” and links to pages that support the answer which drastically reduces the impact of hallucinations.

        And many of those don’t necessarily require significant power usage… relative to what is already running in data centers.

        The problem is that people use it, decide it is “like magic”, and then insist on using it for EVERYTHING. You go from “Write me a simple function to interface with this specific API” to “Write me an application to do my taxes”.

        Of course, there is also the issue of where training data comes from. Which is why so much of the “generative AI” stuff is so disgusting because it is just stealing copyrighted data left and right. Rather than the search engine style LLMs that mostly just ignore the proverbial README_FBI.txt file.

        And the “this is magic” is on both sides. The evangelists are demonstrably morons. But the rabid anti-AI/“AI” crowd are just as bad with “it gave you a wrong answer, it is worthless”. Think of it less like a magic box and more like asking a question on a message board. You are gonna get a LOT of FUD and it is on you to do additional searches to corroborate when it actually matters.
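        To make the “index a large amount of text and pull out an answer with supporting links” idea concrete, here is a minimal retrieval sketch; the embedding model is an assumption, and ask_llm() is a hypothetical stand-in for whatever model you actually query.

        ```python
        # Minimal retrieval-augmented sketch: embed local docs, find the ones
        # closest to a query, and hand them to a language model along with
        # their sources so the answer arrives with citations attached.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        docs = {
            "faq.md": "The API returns JSON; authenticate with a bearer token.",
            "forum_post_812.txt": "Pagination breaks if you omit the page parameter.",
        }

        embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small embedding model
        doc_names = list(docs)
        doc_vecs = embedder.encode([docs[n] for n in doc_names], normalize_embeddings=True)

        def retrieve(query, k=2):
            """Return the k document names most similar to the query."""
            q = embedder.encode([query], normalize_embeddings=True)[0]
            scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
            return [doc_names[i] for i in np.argsort(scores)[::-1][:k]]

        query = "How do I authenticate against the API?"
        sources = retrieve(query)
        context = "\n\n".join(f"[{n}]\n{docs[n]}" for n in sources)
        prompt = f"Answer using only the sources below and cite them.\n\n{context}\n\nQ: {query}"

        # ask_llm() is a hypothetical helper for whatever local or hosted model
        # you trust; the point is that the sources come back alongside the answer.
        # print(ask_llm(prompt), "Sources:", sources)
        ```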

      • mhague@lemmy.world 2 weeks ago

        How much do you know about transformers?

        Have you ever programmed an interpreter for interactive fiction / MUDs? It’s a great example of what even super tiny models can accomplish.

        Also consider that Firefox or Electron apps require more RAM and CPU, and waste more energy, than small language models. A Gemma SLM can translate text into English using less energy than it takes to open a modern browser.
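        For a rough idea of what that looks like in practice, here is a minimal local-translation sketch assuming the Hugging Face transformers library; the Gemma checkpoint named here is gated on the Hub, so treat it as a placeholder for any small instruction-tuned model.

        ```python
        # Minimal sketch: translation with a small, locally run language model.
        # The checkpoint name is a placeholder (the model is gated); any small
        # instruction-tuned model can stand in.
        from transformers import pipeline

        generator = pipeline("text-generation", model="google/gemma-2-2b-it")

        prompt = (
            "Translate the following sentence into English:\n"
            "Die Katze schläft auf dem Sofa.\n"
            "Translation:"
        )
        result = generator(prompt, max_new_tokens=40, return_full_text=False)
        print(result[0]["generated_text"].strip())
        ```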

      • salty_chief@lemmy.world 2 weeks ago

        Sure, everything starts from meager beginnings. The AI you’re upset about existing may find the cure to many diseases. It may save the planet one day.

    • CosmoNova@lemmy.world 2 weeks ago

      It can be stopped, just like climate change, but apparently we won’t, and we’ll kill humanity instead.

      • salty_chief@lemmy.world 2 weeks ago

        We as humans can take steps to lessen our impact on the planet, but we cannot stop climate change. The planet’s climate will always change. It changed without human influence, and it will continue to change after we are gone.

    • ABetterTomorrow@sh.itjust.works 2 weeks ago

      Yeah, you can: stop using it and don’t. No use means no VC money and no customers. Business, baby.

      • ArchmageAzor@lemmy.world 2 weeks ago

        I can guarantee you that there will not be a point in time at which everybody on the planet just decides to stop using AI out of the goodness of their hearts.

    • bigmclargehuge@lemmy.world 2 weeks ago

      This really depends on what you consider “progress”. Some forms of AI are neat pieces of tech, there’s no denying that. However, all I’ve really seen them do in an industrial sense is shrink workforces to save a buck via automation, and produce a noticeably worse product.

      That quality is sure to improve, but what won’t change is the fact that real humans with skill and talent are out of a job because of a fancy piece of software. I personally don’t think of that as progress, but that’s just me.

      • gnarly@lemmy.world 2 weeks ago

        Typographers saw the same thing when personal computing took off in the latter half of the 90s. Almost overnight, everyone started printing their own documents, and Comic Sans was the canary in the coal mine. It was progress, but progress is rarely good for everyone. There’s always a give and a take.

    • misteloct@lemmy.dbzer0.com 2 weeks ago

      If someone had said this in 1990, it would have been just as true as it is when you say it today. Would you have used generative AI tools for video game development back then?

      • salty_chief@lemmy.world 2 weeks ago

        💯%. No doubt. Advancements don’t stop because people are upset about them.

    • echodot@feddit.uk 2 weeks ago

      That’s like saying that colonies on Mars are the future. Colonies on Mars may well be the direction things are going (assuming we don’t global-warm ourselves to death first), but we’re not there yet. AI has yet to prove itself.

  • Skullgrid@lemmy.world 3 weeks ago

    It doesn’t have to be an ethical nightmare. Public domain datasets on local hardware running on renewable electricity: who’s mad now, the artist you already can’t afford to pay because you have no fucking money anyway?

    • very_well_lost@lemmy.world 3 weeks ago

      “AI would be fine if we just changed everything about it.”

      lol

      • onslaught545@lemmy.zip 3 weeks ago

        Not all LLMs are the same. You can absolutely take a neural network model and train it yourself on your own dataset that doesn’t violate copyright.
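        As a toy sketch of that idea (training a small model from scratch purely on text you hold the rights to), here is a character-level language model in PyTorch; the file name, sizes, and hyperparameters are placeholders.

        ```python
        # Toy sketch: train a tiny character-level language model from scratch
        # on text you own, so no scraped or copyrighted data is involved.
        # Sizes and hyperparameters are illustrative placeholders.
        import torch
        import torch.nn as nn

        corpus = open("my_own_writing.txt", encoding="utf-8").read()  # data you hold the rights to
        chars = sorted(set(corpus))
        stoi = {c: i for i, c in enumerate(chars)}
        data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

        class TinyCharLM(nn.Module):
            def __init__(self, vocab, dim=128):
                super().__init__()
                self.embed = nn.Embedding(vocab, dim)
                self.rnn = nn.GRU(dim, dim, batch_first=True)
                self.head = nn.Linear(dim, vocab)

            def forward(self, x):
                h, _ = self.rnn(self.embed(x))
                return self.head(h)

        model = TinyCharLM(len(chars))
        opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
        loss_fn = nn.CrossEntropyLoss()
        block = 64

        for step in range(1000):
            # sample random character windows; the target is the next character
            ix = torch.randint(0, len(data) - block - 1, (32,))
            x = torch.stack([data[i:i + block] for i in ix])
            y = torch.stack([data[i + 1:i + block + 1] for i in ix])
            logits = model(x)
            loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
        ```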

    • HarkMahlberg@kbin.earth 3 weeks ago

      Beyond the copyright issues and energy issues, AI does some serious damage to your ability to do actual hard research. And I'm not just talking about "AI brain."

      Let's say you're looking to solve a programming problem. If you use a search engine and look up the question or a string of keywords, what do you usually do? You look through each link that comes up and judge books by their covers (to an extent): "Do these look like reputable sites? Have I heard of any of them before?" You click through a bunch of them and read them. Now you evaluate their contents: "Have I already tried this info? Oh, this answer is from 15 years ago; it might be outdated." Then you pare down your links to a smaller number and try the solution each one provides, one at a time.

      Now let's say you use an AI to do the same thing. You pray to the Oracle, and the Oracle responds with a single answer. It's a total soup of its training data. You can't tell where specifically it got any of this info; you just have to trust it on faith. You try it: maybe it works, maybe it doesn't. If it doesn't, you have to write a new prayer and try again.

      Even running a local model means you can't discern the source material from the output. This isn't Garbage In, Garbage Out, but Stew In, Soup Out. You can feed an AI a corpus of perfectly useful information, but it will churn everything into a single liquidy mass at the end. And because the process is destructive, you can't un-soup the output. You've robbed yourself of the ability to learn from the input, and put all your faith into the Oracle.

      • Skullgrid@lemmy.world 2 weeks ago

        The topic is: using AIs for game dev.

        1. I’m pretty sure that generating placeholder art isn’t going to ruin my ability to research
        2. AIs need to be used TAKING THEIR FLAWS INTO ACCOUNT and for very specific things.

        I’m just going to be upfront: AI haters don’t know the actual way this shit works, except that by existing, LLMs drain oceans and create more global warming than the entire petrol industry, and AI bros are filling their codebases with junk code that’s going to explode in their faces anywhere from 6 months to 3 years from now.

      • Mika@sopuli.xyz 2 weeks ago

        “you can’t be critical about the answer”

        You actually can, and you should be. And the process is not destructive, since you can always undo in tools like Cursor or discard the changes in git.

        Besides, you can steer a good coding LLM in the right direction. The better you understand what you are doing, the better.

    • eldebryn@lemmy.world 2 weeks ago

      Out of legit curiosity: how many models do you know of that were trained exclusively on public domain data and are actually useful?

      • lime@feddit.nu 2 weeks ago

        anything trained on Common Corpus, which, oddly, is harder to find than the actual training data.
