Comment on AI agents wrong ~70% of time: Carnegie Mellon study
Nalivai@discuss.tchncs.de 3 days ago
Keep doing what you do. Your company will pay me handsomely to throw out all your bullshit and write working code you can trust when you’re done. If your company wants to have a product in the future, that is.
kameecoding@lemmy.world 3 days ago
Lmao, okay buddy
Nalivai@discuss.tchncs.de 3 days ago
The person who uses fancy autocomplete to write their code will be exactly the person who thinks they’re better than everyone. Those traits are correlated.
kameecoding@lemmy.world 3 days ago
Do you use an IDE for writing your code, or do you use a notepad like a “real” programmer? An IDE like IntelliJ has fancy shit like generating getters, setters, constructors, equals and hashCode; you should never use those, real programmers write those by hand.
Your attention to detail is very good btw, which I am ofc being sarcastic about, because if you had any you’d have noticed I never said I write my code with ChatGPT. I said unit tests, and SQL for unit tests.
Ofc attention to detail is not a requirement of software engineering, so you should be good. (This was also sarcasm; I feel like you need this pointed out for you.)
Nalivai@discuss.tchncs.de 3 days ago
Were you prone to these weird leaps of logic before your brain was fried by talking to LLMs, or did you start being a fan of talking to LLMs because your ability to logic was… well… that?
PotentialProblem@sh.itjust.works 3 days ago
I’ve been in the industry a while and your assessment is dead on.
As long as you’re not blindly committing the code, it’s a huge time saver for a number of mundane tasks.
It’s especially fantastic for writing throwaway tooling. Need data massaged a specific way? Ez pz. Need a script to execute an api call on each entry in a spreadsheet? No problem.
The guy above you is a nutter. Not sure if people haven’t tried leveraging LLMs or what. It has a ton of faults, but it really does speed up the mundane work. Also, clearly the person is either brand new to the field or doesn’t even work in it. Otherwise they would have seen the barely functional shite that actual humans churn out.
Part of me wonders if code organization is going to start optimizing for interpretation by these models rather than humans.
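The throwaway-tooling claim a couple of comments up (“a script to execute an api call on each entry in a spreadsheet”) is the kind of thing being described. A minimal sketch of what that script might look like, in Python with only the standard library; the endpoint URL, the CSV columns, and the `post_each_row` helper are all made up for illustration, not anything from the thread:

```python
import csv
import io
import json
import urllib.request


def post_each_row(csv_text, endpoint, dry_run=False):
    """POST one JSON payload per spreadsheet row; return the parsed rows."""
    rows_sent = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows_sent.append(row)
        if dry_run:
            continue  # skip the network call when just checking the payloads
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(row).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()  # drain the response; a real script would check status
    return rows_sent


# Exported spreadsheet as CSV text (hypothetical data).
sheet = "name,id\nalice,1\nbob,2\n"
rows = post_each_row(sheet, "https://example.com/api", dry_run=True)
print(len(rows))  # → 2
```

Exactly the sort of mundane, single-use glue code the commenter says an LLM drafts in seconds and a human then sanity-checks before running.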
zbyte64@awful.systems 3 days ago
When LLMs get it right, it’s because they’re summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefits of other humans commenting on the context, pitfalls, and alternatives.
PotentialProblem@sh.itjust.works 3 days ago
You’re not wrong, but often I’m just trying to do something I’ve done a thousand times before, and I already know the pitfalls. Also, I’m sure I’ve copied code from Stack Overflow before.
Honytawk@feddit.nl 3 days ago
You mean things you had to do anyway even if you didn’t use LLMs?