Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash

Senal@programming.dev 1 day ago

Let’s set aside the ethical and moral concerns about LLM usage and just discuss the technical side.

it makes an impression on me as if human code would be free of such errors

Nobody who knows anything about coding claims human code is error-free; that’s exactly why code reviews, testing, and the rest of the software development lifecycle exist.

To me it sounds like nobody should ever trust AI code

Nobody should trust any code unless it can be verified that it does what is required consistently and predictably.
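To illustrate what “verified” means here, a minimal sketch (the function and its checks are hypothetical, not from the thread): the same executable checks apply regardless of whether a human or an LLM wrote the code.

```python
# Hypothetical example: trust comes from verification, not from who
# (or what) wrote the code.

def normalize_version(tag: str) -> str:
    """Strip a leading 'v' from a release tag, e.g. 'v1.2.3' -> '1.2.3'."""
    return tag[1:] if tag.startswith("v") else tag

# Regression checks pin down the required behaviour; they don't care
# about the author of the implementation.
assert normalize_version("v1.2.3") == "1.2.3"
assert normalize_version("1.2.3") == "1.2.3"
assert normalize_version("") == ""
```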

because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst

This is a known issue; paranoia doesn’t really apply here, only an appropriate level of caution.

Also, it’s not that the mistakes can’t be seen; it’s that the effort required to spot them is greater and the likelihood of missing something is higher.

Whether these problems can be overcome (or mitigated) remains to be seen, but at the moment the LLM-generated parts still require additional review effort, which is why hiding them is counterproductive.

At some point there is no difference anymore between “it looks fine” and “it is fine”.

This is important because it’s true, but it’s only true if you can verify it.

In theory this whole issue would be negated by comprehensive acceptance criteria and testing, but if that were achievable in practice we’d never have bugs in human-written code either.
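One partial mitigation for under-specified prose requirements is expressing acceptance criteria as executable properties rather than example cases. A minimal sketch, with hypothetical names and a stand-in implementation under test:

```python
import random

# Hypothetical sketch: an "acceptance criterion" written as an
# executable property. sort_scores is the (stand-in) code under test.

def sort_scores(scores):
    """Return the scores in descending order."""
    return sorted(scores, reverse=True)

# Property checks over random inputs: the output must be a descending
# permutation of the input, not just correct on a few hand-picked cases.
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    out = sort_scores(data)
    assert out == sorted(out, reverse=True)   # descending order
    assert sorted(out) == sorted(data)        # same multiset of elements
```

Properties like these still can’t be fully comprehensive, which is the point being made above, but they catch whole classes of mistakes that example-based tests miss.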


Personally, I think the “uncanny valley code” issue is inherent to the way LLMs work and there is no “solution” to it; the only option is to mitigate it as best we can.

I also really dislike the non-declarative nature of generated code, which for me rules it out as a reliable end-to-end system tool unless we can get those fully comprehensive tests up to scratch.
