You mean things you had to do anyway even if you didn’t use LLMs?
Comment on AI agents wrong ~70% of time: Carnegie Mellon study
zbyte64@awful.systems 2 days ago
When LLMs get it right, it’s because they’re summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefits of other humans commenting on the context, pitfalls, and alternatives.
PotentialProblem@sh.itjust.works 2 days ago
You’re not wrong, but often I’m just doing something I’ve done a thousand times before, and I already know the pitfalls. Also, I’m sure I’ve copied code from Stack Overflow before.