Coders spent more time prompting and reviewing AI generations than they saved on coding. On the surface, METR's results seem to contradict other benchmarks and experiments that demonstrate increases in coding efficiency when AI tools are used. But those often also measure productivity in terms of total lines of code or the number of discrete tasks/code commits/pull requests completed, all of which can be poor proxies for actual coding efficiency.

These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to "settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn." While those factors may not apply in "many realistic, economically relevant settings" involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.
Their sample size was 16 people…
skulkbane@lemmy.world 1 day ago
The main issue I have with AI coding hasn't been the code itself. It's a bit ham-fisted and overly naive, as if it's speed-blind. The real problem is that some of the code is out of date, using functions that are deprecated, etc., and it mixes paradigms and styles across languages in a very frustrating way.
turtlesareneat@discuss.online 1 day ago
Yep, I've got a working iOS app, with a v2 branch on the way, and a ton of MapKit integrations. Unfortunately I'm getting deprecation errors and having to constantly remind the AI that it's using old code, showing it examples of new code, and then watching it forget as we keep talking.
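The MapKit churn described here is easy to reproduce: SwiftUI's Map(coordinateRegion:) initializer, which is all over pre-iOS 17 sample code that models were trained on, now triggers exactly this kind of deprecation warning in favor of Map(position:) with a map content builder. A minimal sketch of the current form, assuming iOS 17+ and using a hypothetical MapDemoView with placeholder coordinates:

```swift
import SwiftUI
import MapKit

// Hypothetical coordinate, just to make the sketch self-contained.
private let appleParkCoordinate = CLLocationCoordinate2D(
    latitude: 37.3349, longitude: -122.0090
)

struct MapDemoView: View {
    // iOS 17 style: a MapCameraPosition replaces the old bound MKCoordinateRegion.
    @State private var position: MapCameraPosition = .region(
        MKCoordinateRegion(
            center: appleParkCoordinate,
            span: MKCoordinateSpan(latitudeDelta: 0.05, longitudeDelta: 0.05)
        )
    )

    var body: some View {
        // Older sample code (and suggestions trained on it) reaches for
        // Map(coordinateRegion: $region), which still compiles but warns that
        // the initializer was deprecated in iOS 17.
        Map(position: $position) {
            Marker("Apple Park", coordinate: appleParkCoordinate)
        }
    }
}
```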
Still, I have a working iOS app, which only took a few hours. When Jack Dorsey said he'd vibe-coded his new app over a long weekend, I'm like, hey, me too.
Couldbealeotard@lemmy.world 22 hours ago
LLMs can't "forget" things because they don't have memory in the first place; all they have is whatever still fits in the current context window.