Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash

aksdb@lemmy.world 1 day ago

Thanks for that long answer. I agree completely with the second half, and with most of the first half, but I have to add a remark:

My understanding is that it’s harder to vet AI code in general, because when it hallucinates it may do so in ways that appear correct on the surface, and/or in ways that give no significant indication of what the code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior software engineers to check the output because of the lack of a frame of reference.

That is mostly true, but it also depends on the usage. You don’t have to tell an agent to “develop feature X” and then go for a coffee. You can issue narrowly scoped prompts that yield small amounts of changes/code, which are far easier to review. You can work that way in small iterations, making it completely possible to follow along and adjust small things instead of getting a big ball of mud to untangle.

And while it’s true that not everyone is able to vet code, that was also true before and without coding agents. Yet people run random curl-piped-to-bash commands they copy from some website because it says it will install whatever. They install something from Flathub without looking at the source (not even talking about the chain of trust in the publishing process here). There is so much bad code out there written by people who are not really good engineers but who are motivated enough to put stuff together. They made, and still make, ugly mistakes that are hard to spot and, due to bad code quality, hard to review.
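To make the curl-piped-to-bash pattern concrete, here is a minimal sketch. The URL and script name are hypothetical, and the safer fetch-then-inspect workflow is simulated with a local file instead of a real download so the snippet is self-contained:

```shell
# Risky pattern: the script executes before anyone has read it
# (hypothetical URL, shown only as a comment):
#   curl -fsSL https://example.com/install.sh | bash

# Safer workflow: download first, inspect, then run.
# Simulated here by writing a stand-in "installer" locally.
cat > install.sh <<'EOF'
#!/bin/sh
echo "installing..."
EOF

cat install.sh   # review the script before executing it
sh install.sh    # run it only after review
```

The point is not the exact commands but the extra review step: the piped form leaves no moment where a human can look at what is about to run.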

The main risk of agents is that they also increase the speed of these developers, which means they pump out even more bad code. But the underlying issue existed before, and agent involvement doesn’t automatically mean something is bad. Believing that would even be dangerous, because it might reinforce a false sense of security about code that was (likely) written without any AI influence. That feeling just isn’t justified; such code can be as harmful or even more harmful. You simply don’t know if you don’t review it. And as you said: most people don’t.
