Comment on: “AI agents wrong ~70% of time: Carnegie Mellon study”
Shayeta@feddit.org 1 week ago
It doesn’t matter if you need a human to review. AI has no way of distinguishing between success and failure. Either way, a human will have to review 100% of those tasks.
MangoCats@feddit.it 1 week ago
I have been using AI to write (little, near-trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
wise_pancake@lemmy.ca 1 week ago
Agents do that loop pretty well now, and Claude now uses your IDE’s LSP to help it code and catch errors in flow. I think Windsurf or Cursor does that too.
The tooling has improved a ton in the last 3 months.
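Roughly the loop in question, as a minimal sketch (`ask_llm` is a hypothetical stand-in for whatever model call the agent makes; a real agent reads richer diagnostics from the LSP instead of shelling out to a compiler):

```python
import subprocess
import tempfile

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for the model call; not a real API."""
    raise NotImplementedError

def generate_with_feedback(task: str, max_attempts: int = 3) -> str:
    """Generate code, check that it compiles, and feed errors back until it does."""
    prompt = task
    code = ""
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        # Cheap syntax/compile check; an IDE's LSP would give richer diagnostics.
        result = subprocess.run(
            ["python", "-m", "py_compile", path],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code  # compiles cleanly, hand it to the human
        # Otherwise retry with the compiler error appended to the prompt.
        prompt = f"{task}\n\nYour last attempt failed to compile:\n{result.stderr}"
    return code  # out of attempts; return the last attempt for human review
```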
Outbound7404@lemmy.ml 1 week ago
It’s a lot easier for a human to review something that’s close to correct than to start the task from zero.
DreamlandLividity@lemmy.world 1 week ago
It is a lot harder to notice incorrect information in review than to make sure it is correct when writing it.
MangoCats@feddit.it 1 week ago
“harder to notice incorrect information in review than to make sure it is correct when writing it”
That depends entirely on your writing method and attention span for review.
Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI improving over that is really low.
loonsun@sh.itjust.works 1 week ago
Depends on the context. There is a lot of work in the scientific-methods community on using NLP to augment traditionally fully human processes, such as thematic analysis and systematic literature reviews, and you can have validation protocols there without 100% human review.
MangoCats@feddit.it 1 week ago
In university I knew a lot of students who knew all the things but “just didn’t know where to start” - if I gave them a little direction about where to start, they could run it to the finish all on their own.
jsomae@lemmy.ml 1 week ago
Right, so this is really only useful in cases where it’s either vastly easier to verify an answer than to posit one, or where a conventional program can verify the result of the AI’s output.
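For example (just a sketch): if the model claims to have sorted a list, a conventional program can check the claim cheaply without trusting the model at all.

```python
from collections import Counter

def is_valid_sort(original: list[int], proposed: list[int]) -> bool:
    """Cheap check of an untrusted 'sorted' answer: same elements
    (as a multiset) and in non-decreasing order."""
    return (
        Counter(original) == Counter(proposed)
        and all(a <= b for a, b in zip(proposed, proposed[1:]))
    )

# Accept whatever the model returned only if it passes the check.
assert is_valid_sort([3, 1, 2], [1, 2, 3])
assert not is_valid_sort([3, 1, 2], [1, 2, 2])  # right order, wrong elements
```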
MangoCats@feddit.it 1 week ago
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
I’m envisioning a world where multiple AI engines create and check each other’s work… the first thing they need to make work to support that scenario is probably fusion power.
zbyte64@awful.systems 1 week ago
I usually write 3x the code to test the code itself. Verification is often harder than implementation.
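For instance (an illustrative sketch, not real project code), even a one-line function can end up with several times its size in tests once the edge cases are covered:

```python
def clamp(x: float, lo: float, hi: float) -> float:
    """Constrain x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

# The tests outweigh the implementation once edge cases are covered.
def test_clamp():
    assert clamp(5, 0, 10) == 5          # inside the range
    assert clamp(-1, 0, 10) == 0         # below the lower bound
    assert clamp(11, 0, 10) == 10        # above the upper bound
    assert clamp(0, 0, 10) == 0          # exactly on the lower bound
    assert clamp(10, 0, 10) == 10        # exactly on the upper bound
    assert clamp(0.5, 0.0, 1.0) == 0.5   # float inputs
```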
jsomae@lemmy.ml 1 week ago
It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give it to an LLM and then verify its answer. Verifying a solution to an NP problem is easy.
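SAT is the clean example: finding a satisfying assignment is the hard part, but checking a proposed assignment against the formula takes a few lines (a minimal sketch):

```python
# CNF formula as a list of clauses; a literal is a variable index,
# negated if negative, e.g. (x1 OR NOT x2) AND (x2 OR x3):
formula = [[1, -2], [2, 3]]

def satisfies(assignment: dict[int, bool], clauses: list[list[int]]) -> bool:
    """Check an untrusted assignment: every clause must contain a true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# The assignment can come from an LLM, a solver, or a guess; only the check is trusted.
proposed = {1: True, 2: False, 3: True}
print(satisfies(proposed, formula))  # True
```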
MangoCats@feddit.it 1 week ago
Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.
Writing the proper product code in the first place, that’s the valuable challenge.