Comment on AI agents wrong ~70% of time: Carnegie Mellon study
MangoCats@feddit.it 4 weeks ago
It’s usually vastly easier to verify an answer than to posit one, if you have the patience to do so.
I’m envisioning a world where multiple AI engines create and check each other’s work… the first thing they need to make work to support that scenario is probably fusion power.
zbyte64@awful.systems 4 weeks ago
I usually write 3x the code to test the code itself. Verification is often harder than implementation.
jsomae@lemmy.ml 4 weeks ago
It really depends on the context. Sometimes there are domains that require solving problems in NP, but where it turns out most of those problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often that means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give the problem to an LLM and then verify its answer - verifying a candidate solution to an NP problem is easy by definition.
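As a minimal sketch of that asymmetry (the verify_sat helper and the toy clauses below are hypothetical, purely for illustration): checking a candidate SAT assignment takes one linear pass over the formula, no matter how the assignment was found.

```python
# Minimal sketch: verifying a SAT assignment is cheap; finding one is the hard part.
# Clauses are lists of ints: positive = variable, negative = negated variable.

def verify_sat(clauses, assignment):
    """Return True iff every clause contains at least one satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
candidate = {1: True, 2: False, 3: True}   # e.g. an answer an LLM proposed
print(verify_sat(clauses, candidate))      # True -- one linear pass, however the answer was produced
```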
MangoCats@feddit.it 4 weeks ago
Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.
Writing the proper product code in the first place, that’s the valuable challenge.
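For what it’s worth, a minimal sketch of that “fill in the blanks” pattern (the slugify function and its cases are hypothetical stand-ins): the test scaffolding is boilerplate, and only the inputs and expected outputs change.

```python
# Minimal sketch: the test structure is mechanical; only the blanks (inputs/expected) vary.
import unittest

def slugify(title):  # stand-in "product code" under test, purely illustrative
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_whitespace(self):
        self.assertEqual(slugify("  Mixed CASE  "), "mixed-case")

if __name__ == "__main__":
    unittest.main()
```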
zbyte64@awful.systems 4 weeks ago
Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not.
MangoCats@feddit.it 4 weeks ago
I’ve been in R&D forever, so at my level the question isn’t “does the code work?” - we pretty much assume that will take care of itself, eventually. Our critical question is: “is the code trying to do something valuable, or not?” We make all kinds of stuff do what the requirements call for, but so often those requirements are asking for worthless or even counterproductive things…