AI has caused plenty of headaches for developers. This isn’t some culture war shit.
Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash
aksdb@lemmy.world 7 hours ago
Does everything have to be a god damn culture war now?! I really don’t give a fuck how people do their work. Judge the outcome not the workflow. No one gave a damn how sloppy some developers hacked together solutions that are widely used. But suddenly it’s an issue if coding agents are used? WTF.
Stop the damn polarization over completely irrelevant things. We get polarized enough for political reasons; we don’t have to bring even more dissent into our communities and fuck each other up with in-fighting.
tonytins@pawb.social 6 hours ago
Voroxpete@sh.itjust.works 54 minutes ago
But that kind of proves their point, right?
Yes, a lot of projects have had issues with contributors who push unreviewed AI slop that they don’t understand, ultimately creating more work for the project. Or with avalanches of bug reports from AI code-review tools that do nothing to help. But that’s not what’s happening here.
In this case, the main developer of the project is choosing to use AI, on their own terms, because they find it helpful, and people are giving them shit for it. It’s their project and they feel this technology is beneficial. Isn’t that their call to make? Why are people treating the former and the latter as completely interchangeable scenarios when they’re clearly not? It kind of does suggest that people are coming at this from a more ideological rather than rational perspective.
aksdb@lemmy.world 6 hours ago
Whether they can handle it or not is for each developer to decide.
tonytins@pawb.social 6 hours ago
> As I said: judge the result, not the workflow.
I’ve tested AI myself and seen the results. I’ll judge how I see fit.
aksdb@lemmy.world 6 hours ago
I am not talking about the result of the AI. I am talking about Lutris. If the code that ends up in the repo is fine, it doesn’t matter if it was the author, an agent, or an agent followed by a ton of cleanup by the author. If the code is shit it also doesn’t matter if it was an incompetent AI or an incompetent human. Shitty code is shitty, good code is good. The result matters.
prole@lemmy.blahaj.zone 5 hours ago
> judge the result, not the workflow.
This kind of seems like bad advice in general. The process to create a result is often extremely important to be aware of. For example, if possible, I would like to not consume products built with slave labor.
Voroxpete@sh.itjust.works 49 minutes ago
The thing is, you’re conflating ethical and practical concerns here. The commenter you’re responding to is clearly talking about the practical aspects of using AI tools.
If you have a fundamental moral issue with AI that is entirely independent of how efficacious it is, that’s fine. That’s a completely reasonable position to hold. But don’t fall into the trap of needing every use of genAI to be impractical just because that conclusion aligns with your morality.
If this is an ethical stance that you truly hold, you should be willing to believe that using these tools is bad even when they’re effective. But a lot of people instead have to insist that every use of AI is impractical, in the face of any evidence to the contrary, because they’ve talked themselves into believing that on some fundamental level. Like “if AI is useful, that means I’m wrong about it being immoral.”
aksdb@lemmy.world 4 hours ago
Depends. If you are generally careful about what products and projects you use, and you audit them and notice that the owner has horrible code hygiene, bad dependency management, etc., then sure. But why judge them for the tools they use? You can still audit the result the same way. And if you notice that code hygiene and dependencies suck, does it matter whether they suck because the author misused coding agents, because they simply didn’t give a damn, or because they are incapable of doing any better?
You’ve likely stumbled on open source repos in the past where you rolled your eyes after looking into them. At least I have. More than once. And that was long, long before we had coding agents. I’ve used software where I later saw the code and was surprised it ever worked. Hell, I’ve found old code of my own where I wondered why it ever worked and what the fuck I’d been smoking back then.
It’s ok to consider agent usage a red flag that makes you look closer at the code. But I find it unfair to dismiss someone’s work or abilities just because they use an agent, without even looking at what they produce. And by produce I don’t mean the final binary, but their code.
TrickDacy@lemmy.world 6 hours ago
Culture war? Lol
Yes, the observation that software quality seems negatively impacted by AI use is not allowed to be expressed, because you don’t observe it.
aksdb@lemmy.world 6 hours ago
The culture war part is the call to boycott a project or shit on its author because they use coding agents, as is done throughout these comments. The whole separation into “those who use AI are bad” and “those who hate AI are good” is a culture war. A needless one at that.
TrickDacy@lemmy.world 6 hours ago
TIL fact-based opinions and the arguments that come from them are “culture wars”.
aksdb@lemmy.world 6 hours ago
I also brought facts and objective reasoning, yet I get downvoted. That’s not polarization to you?
prole@lemmy.blahaj.zone 5 hours ago
Is flat vs. round Earth a culture war in your mind?
aksdb@lemmy.world 5 hours ago
The way flat earthers act? Yes. They treat it as a culture war. Just like anti-vaxxers.
tonytins@pawb.social 6 hours ago
As I’ve said in an earlier thread, AI over-engineers code and hallucinates APIs that don’t exist. Furthermore, hallucinations themselves are a well-studied phenomenon that has proven difficult to combat. People have very legit complaints about AI that you seem to be determined to dismiss as nothing more than a culture war.
aksdb@lemmy.world 6 hours ago
But those issues get caught by reviews and tests. You identified these issues and worked against them, so why do you think the author of Lutris can’t? Neither I nor the author is saying anyone should use AI-produced results as-is (i.e., vibe coding).
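That point can be made concrete with a minimal sketch (the function and test names here are hypothetical illustrations, not anything from Lutris): a hallucinated API call blows up the first time a test actually executes it, so it can’t slip through a workflow that includes tests.

```python
# Hallucinated APIs -- calls to methods that don't exist -- are exactly the
# kind of error a test suite surfaces immediately.
import unittest


def agent_generated_slugify(name: str) -> str:
    """Pretend AI-generated helper that calls a method str doesn't have."""
    # Hallucinated: str has .strip(), not .strip_whitespace()
    return name.strip_whitespace().lower()


class TestSlugify(unittest.TestCase):
    def test_hallucinated_call_fails_loudly(self):
        # The very first run raises AttributeError, so the bad code
        # never survives review unnoticed.
        with self.assertRaises(AttributeError):
            agent_generated_slugify("  Lutris  ")
```

A reviewer who never runs the code might miss this; a reviewer who runs the tests cannot.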