Comment on AI agents wrong ~70% of time: Carnegie Mellon study
zbyte64@awful.systems 1 week ago
Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not.
MangoCats@feddit.it 1 week ago
I’ve been in R&D forever, so at my level the question isn’t “does the code work?”; we pretty much assume that will take care of itself, eventually. Our critical question is: “is the code trying to do something valuable, or not?” We make all kinds of stuff do what the requirements call for, but so often those requirements are asking for worthless or even counterproductive things…
zbyte64@awful.systems 1 week ago
Literally the opposite of my experience when I helped materials scientists with their R&D. Breaking in production would mean people who get paid 2x more than me are suddenly unable to do their jobs. But then again, our requirements made sense because we would literally walk through the manual process with the engineers before automating it.
MangoCats@feddit.it 1 week ago
Yeah, sometimes the requirements write themselves and in those cases successful execution is “on the critical path.”
Unfortunately, our requirements are filtered from our paying customers through an ever-rotating cast of Marketing and Sales characters who, nominally, are our direct customers, so we make product for them. They rarely have any clear or consistent vision of what they want, but they know they want new stuff; that’s for sure.
zbyte64@awful.systems 1 week ago
When the requirements are “Whatever,” then by all means use the “Whatever” machine: eev.ee/blog/2025/07/03/the-rise-of-whatever/
And then look for a better gig because such an environment is going to be toxic to your skill set. The more exacting the shop, the better they pay.