Comment on AI agents wrong ~70% of time: Carnegie Mellon study
jsomae@lemmy.ml 1 month ago
Right, so this is really only useful in cases where it’s either vastly easier to verify an answer than to posit one, or where a conventional program can verify the result of the AI’s output.
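As a toy sketch of that second case, with ask_llm standing in for whatever model call you’d actually make (it’s a made-up placeholder, not a real API), the checker itself is ordinary deterministic code:

```python
# Toy illustration: the model proposes, a plain program verifies.
# `ask_llm` is a hypothetical stand-in for whatever model call you'd really use.
from collections import Counter

def is_valid_sort(original: list[int], proposed: list[int]) -> bool:
    """Deterministic check: same multiset of elements, in non-decreasing order."""
    return (Counter(original) == Counter(proposed)
            and all(a <= b for a, b in zip(proposed, proposed[1:])))

def sorted_via_llm(data: list[int], ask_llm) -> list[int]:
    proposed = ask_llm(f"Sort this list: {data}")  # untrusted output
    if not is_valid_sort(data, proposed):
        raise ValueError("Model output failed verification; retry or fall back.")
    return proposed
```

The check is cheap and can’t be fooled, so it doesn’t matter how the proposal was produced.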
MangoCats@feddit.it 1 month ago
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
I’m envisioning a world where multiple AI engines create and check each other’s work… the first thing they’d need to get working to support that scenario is probably fusion power.
zbyte64@awful.systems 1 month ago
I usually write 3x as much test code as the code under test. Verification is often harder than implementation.
jsomae@lemmy.ml 1 month ago
It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these instances are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give the problem to an LLM and then verify its answer, since verifying a solution to an NP problem is easy by definition.
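To make the asymmetry concrete: checking a proposed assignment for SAT is a few lines of straightforward code that runs in time linear in the formula size, while finding the assignment is the hard part. A minimal sketch (toy DIMACS-style encoding, my own example, not from the article):

```python
# A CNF formula is a list of clauses; each clause is a list of non-zero ints,
# where 3 means variable 3 and -3 means its negation (DIMACS-style literals).

def check_sat_assignment(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
    """True iff the assignment satisfies every clause. Linear scan, no search."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(check_sat_assignment(formula, {1: True, 2: False, 3: True}))    # True
print(check_sat_assignment(formula, {1: False, 2: False, 3: False}))  # False
```

All the expense lives in producing the assignment; the certificate check stays trivial no matter who, or what, produced it.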
MangoCats@feddit.it 1 month ago
Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.
Writing the proper product code in the first place, that’s the valuable challenge.
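Something like this, roughly (parse_price is just a made-up toy so the example runs): the scaffolding is boilerplate, and only the table of cases takes any thought.

```python
# `parse_price` stands in for whatever function is actually under test.
import re
import pytest

def parse_price(text: str) -> int:
    """Toy implementation so the example runs: '$4.99' -> 499 (cents)."""
    m = re.fullmatch(r"\$?(\d+)\.(\d{2})", text.strip())
    if not m:
        raise ValueError(f"not a price: {text!r}")
    return int(m.group(1)) * 100 + int(m.group(2))

@pytest.mark.parametrize("text, expected", [
    ("$4.99", 499),   # happy path
    ("$0.00", 0),     # boundary
    ("4.99", 499),    # currency symbol optional
])
def test_parse_price_valid(text, expected):
    assert parse_price(text) == expected

def test_parse_price_rejects_garbage():
    with pytest.raises(ValueError):
        parse_price("four bucks")
```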
zbyte64@awful.systems 1 month ago
Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not.