This feels like an unreasonable take… Is it really egotistical?
Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash
qaeta@lemmy.ca 2 days ago
I mean, a reasonable person would choose to stop rather than becoming an unethical egotistical fuckwit…
rookbrood@lemmy.world 2 days ago
yakko@feddit.uk 2 days ago
From what I’m reading, it definitely feels like an attempt to highroad the critics by claiming personal issues, while neatly skating past any of the actual concerns raised. That does seem quite self-absorbed, if not precisely egotistical.
rookbrood@lemmy.world 2 days ago
I think I would still weigh that against the work he put out. I’ve used that work a lot, so while I won’t completely ignore bad behavior, I still take it into account.
kungen@feddit.nu 2 days ago
A reasonable person would have forked the repo and maintained the project themselves, or used something else. I’m also deathly allergic to LLM code, but I don’t come into someone else’s free project and tell them how they should live their life.
But I agree that it was bad style to remove the co-author attribute. He should have just said “yeah, I slop, so what?”
meowki@lemmy.world 2 days ago
FOSS projects are built on trust. The developer removing the co-author attribute due to backlash followed by seemingly taunting people by telling them good luck to identify which is LLM code and which is human code is just plain bad behavior.
Own what you do. Be transparent with the community. The backlash isn’t going to kill you. But you dig yourself a deeper grave by openly admitting to obfuscating the development process of a FOSS project.
My personal issue is his choice of model. He chose Anthropic, which is complicit in a war; its AI is being used by the military to further military interests. With many more ethical models out there, why go with that one specifically?
shynoise@lemmy.world 2 days ago
In case this isn’t a rhetorical question: Claude is considered to be leading the pack for developer functionality. I can’t comment on the overall decision process, but it’s clear that lots of people a) don’t think about ethical concerns, b) don’t prioritize them in decisions, or c) actively align with them.
All we can really do is ask that people consider these things.
Auli@lemmy.ca 2 days ago
I mean, in his rant he literally said he chose an AI company that doesn’t work with the military. Which is funny: they took a stance against working with them in the future, but they did work with them.
FauxLiving@lemmy.world 1 day ago
Because it is the better tool for the use case he is engaging with.
You’re setting up an impossible standard, one that you don’t follow yourself.
You know that Social Media is used to spread propaganda throughout the world, leading to hate crimes, genocides, wars, sexual exploitation etc. You’re still using social media. There are many more ethical ways to talk to people, why go with social media specifically?
All you’ve discovered is that there is no ethical consumption under capitalism. You can take anything that a person does and trace the supply chain to find examples of wholly immoral behavior. Unless you plan on living in a cave, you’re going to appear like a hypocrite at the very least if you start picking apart the choices of others under that lens.
meowki@lemmy.world 1 day ago
I wish you had addressed the first two paragraphs I wrote, as I feel they’re a bit more relevant and tie into the developer’s chosen behavior more than his choice of an AI helper.
But an LLM isn’t this. An LLM isn’t a platform; it’s a utility tool, one for creation. A previous commenter pointed out that the developer tried to pick a model that isn’t helping the military, which should show the developer has an ethical stance. Maybe this happened before Anthropic began aiding the military.
I wonder if his choice has changed, or if it will.
G_M0N3Y_2503@lemmy.zip 1 day ago
I’d be interested to know where you draw the line of code ownership. Arguably FOSS is the place where projects are most likely to become a Ship of Theseus.
From my perspective, AI slop is pretty unusable as it comes out, but it can be an approximate starting point. It seems generous to call an LLM a co-author; I’d be more likely to list a long line of Stack Overflow commenters as co-authors first.