By telling people he expected this and obfuscating the authorship afterwards, he is doing damage in the form of eroding trust in a tool that has otherwise proven reliable.
Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash
ipkpjersi@lemmy.ml 1 day ago
Honestly, I agree, unfortunate as it is. It IS helpful, and if you’re a competent developer using AI tooling, you can make sure it doesn’t generate slop.
tonytins@pawb.social 23 hours ago
ipkpjersi@lemmy.ml 1 hour ago
He removed the authorship specifically because he was attacked for using AI.
People were already going after him for using AI.
I have no problem with him using AI personally, because I trust that he is a competent enough dev if he has built and maintained this program thus far. If you don’t trust him specifically because he’s using AI now, and you don’t trust him to review the code the AI produces, then that’s your choice.
tonytins@pawb.social 1 hour ago
“Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago.”
He knew it was going to be an issue. This wasn’t about being attacked.
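For context on what “co-authorship” means here: Claude Code records its involvement as a standard Git commit-message trailer, and removing it means rewriting those commit messages. A minimal sketch (throwaway repo, hypothetical commit message and author names) of what such a trailer looks like and how to find commits that carry it:

```shell
# Create a throwaway repo for illustration.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .

# Claude Code appends a co-authorship trailer like this one to commits it helps write:
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m 'Fix installer backlog issue' \
  -m 'Co-Authored-By: Claude <noreply@anthropic.com>'

# List commits that still carry the trailer (case-insensitive match).
git log --format='%h %s' --regexp-ignore-case --grep='Co-Authored-By: Claude'
```

Stripping the trailer after the fact, as the developer describes, would typically mean rewriting history (e.g. an interactive rebase or a history-rewriting tool), which is why it only works cleanly if done before others pull the commits.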
FauxLiving@lemmy.world 6 hours ago
It seems like you’re glossing over the fact that he was including authorship until he was targeted with a harassment campaign by the anti-AI nutjobs.
He removed authorship in response to being harassed. His point was that including authorship has only led to harassment, which takes resources away from the actual project. If a person can’t tell that the code was AI-generated without a ‘Generated by Claude Code’ tag, then their complaints about AI’s quality seem to fall flat.
Voroxpete@sh.itjust.works 1 day ago
As I’ve said elsewhere here, I really don’t have a problem with people holding a moral stance against the use of genAI. It’s fine to just say “However useful this might be, I don’t want to see it used because I think it has too many ethical costs/consequences.” But blanket-accusing all work that involved genAI in any capacity of being “slop” isn’t holding a moral stance; it’s demanding that reality conform to your beliefs: “I hate this, therefore it must be terrible in every respect.”
If you truly hold a well-founded ethical stance against the use of genAI, that stance shouldn’t be threatened by people doing good and effective work with genAI, because its effectiveness should have nothing to do with your objections.
Seefoo@lemmy.world 1 day ago
Probably one of the most reasonable comments here.
Auli@lemmy.ca 21 hours ago
If he is using it for backlog because he is swamped, do you honestly think he is verifying the code?
InternetCitizen2@lemmy.world 20 hours ago
But that’s mostly because of how companies abuse it and less because of the technology itself.
In any other context, this is tech to help us toward a post-scarcity future.
FauxLiving@lemmy.world 5 hours ago
I agree.
If you read the anti-AI comments, you’ll find that when they say ‘AI’ they mean ‘LLMs fine-tuned to be chatbots’ and ‘diffusion models which generate bitmaps or video files’.
They’re seemingly ignorant of all of the other things that Transformers and Deep Neural Networks are used for.
Remember how there were all of these projects trying to crowdsource an algorithm to fold proteins given an amino acid sequence? Well, a trained neural network ‘AI’ called AlphaFold was created, and it can complete the task with >90% accuracy. THEN, using a network like AlphaFold, another group of scientists made a diffusion model that could be prompted with protein parameters and would generate the string of amino acids that folds into that protein.
I find it hard to believe that the ‘fuck AI’ crowd understands that ‘AI’ is completely separate from the capitalist frenzy over chatbots and image generation. The vast majority of their complaints are not about the technology, they are about assholes who have a lot of money that are abusing and overhyping the technology in order to get more money.
veniasilente@lemmy.dbzer0.com 5 hours ago
Don’t excuse the technology. It was created to be useless and wasteful. Every question put to an AI engine helps burn down entire forests. Every AI that is kept awake and serving dries up the lagoons and rivers of an indigenous tribe, if not a small town. Every model is built upon the sustained theft of art, code, and identity, to the point that the main financiers are proud of it and use it as legal justification.
People who are evil, made a tool for evil, and those using the tool of evil are doing little more than enabling evil. Number must go up.
ipkpjersi@lemmy.ml 1 hour ago
I’m excusing the technology because it specifically isn’t useless; I have found uses for it. I’m not going to demonize the technology when the companies abusing it are nearly the entire problem. The real issues are the scope of resources required and the job losses produced by the way this tech is being deployed.
Do you really think running LLMs locally on your GPU is causing irreversible societal harm?
I know, it’s not popular to say AI isn’t the problem, but honestly, the companies abusing it are the problem.