Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash
pheelicks@lemmy.zip 1 day ago
Thanks for taking the time to reply.
> Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood of missing something is higher.
Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verifiably able programmers, sure, but in general?
> I also really, really dislike the non-declarative nature of generated code, which, for me at least, fundamentally rules it out as a reliable end-to-end system tool unless we can get those fully comprehensive tests up to scratch.
I don’t think I’m getting your point here. Do you mean that the code basically lacks focus on an end goal? Or are you talking about the fuzziness and randomization of the output?
Senal@programming.dev 23 hours ago
Both.
The reasons are quite hard to describe, which is why it’s such a trap, but if you spend some time reviewing LLM code you’ll see what I mean.
One reason is that it isn’t coding for logical correctness; it’s coding for linguistic passability.
Internally there are mechanisms for mitigating this somewhat, but it’s not an actual fix, so problems slip through.
The latter: if you give it the exact same input under the exact same conditions, it’s not guaranteed to give you the same output.
The fact that it’s sometimes close to the same actually makes it worse, because then you can’t tell at a glance what has changed.
It also isn’t as simple as using a diff tool, at least for anything non-trivial, because its variations can be in the logical flow as well as the language, meaning you need to track these differences across the whole contextual area.
As I said, there are mitigations, but they aren’t fixes.
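To make the non-determinism point concrete, here’s a toy sketch in plain Python (the token probabilities are made up, not from any real model or API): sampling-based decoding can return different tokens for the identical context, while greedy decoding, the usual “temperature 0” style mitigation, is repeatable but still just picks the most plausible token from the same linguistically derived distribution.

```python
import random

# Toy next-token distribution for one fixed context (made-up numbers, not a real model)
NEXT_TOKEN_PROBS = {"return": 0.4, "yield": 0.3, "raise": 0.2, "pass": 0.1}

def sample_token(rng: random.Random) -> str:
    """Sampling decode (temperature > 0): pick a token at random, weighted by probability."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def greedy_token() -> str:
    """Greedy decode (temperature = 0): always take the most likely token."""
    return max(NEXT_TOKEN_PROBS, key=NEXT_TOKEN_PROBS.get)

rng = random.Random()  # no fixed seed, mirroring a default LLM call
print([sample_token(rng) for _ in range(5)])  # same context, output can differ run to run
print([greedy_token() for _ in range(5)])     # identical every run, but still only "most plausible"
```

Even with the greedy mitigation the output is only repeatable, not verified correct, which is why it narrows the variance without fixing the underlying problem.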