AI isn’t ready to replace human coders for debugging, researchers say
Submitted 1 day ago by cm0002@lemmy.world to technology@lemmy.world
Comments
NigelFrobisher@aussie.zone 1 day ago
I’m actually quite enjoying watching the LLM evangelists fall into the trough of despair after their initial inflated expectations of what they thought stochastic text generation would achieve for the business.
thefluffiest@feddit.nl 1 day ago
So, AI gets to create problems, and actually capable people get to deal with the consequences. Yeah that sounds about right
WanderingThoughts@europe.pub 1 day ago
And it’ll be used to suppress wages, because “you’re not making new stuff, just fixing some problems in existing code.” That you have to rewrite most of it is conveniently not counted.
That’s at least what was tried with movie writers.
sach@lemmy.world 1 day ago
Most programmers agree debugging can be harder than writing code, so basically the easy part is automated, while the more challenging and interesting parts, architecture and debugging, remain for programmers. Still, it’s possible they’ll try to sell it to programmers as less work.
resipsaloquitur@lemm.ee 1 day ago
So we “fixed” the easiest part of software development (writing code) and now humans have to clean up the AI slop.
I’ll bet this lovely new career field comes with a pay cut.
IllNess@infosec.pub 23 hours ago
I would charge more. Fixing my own code is easier than fixing someone else’s code.
I think I might go insane if that was my career.
JordanZ@lemmy.world 23 hours ago
They really want to enforce that quote: let the idiot write the code and have the more experienced person debug it. I feel like we’ve already seen this with airline pilots: a huge shortage, mainly caused by retirements and regulation changes making it harder to get into the field. I guess their hope is that by the time that happens with programmers, AI won’t suck.
At least this won’t be true anymore.
jj4211@lemmy.world 23 hours ago
I occasionally check what various code generators will do on questions where I don’t immediately know the answer. The result is almost always wrong, but recently one was actually correct, just surprisingly convoluted. It had the problem broken down into about 6 functions, iterating through many steps to walk the data through various intermediate forms. It seemed odd to me that such a likely operation was quite so involved, so I did a quick internet search, ignored the AI-generated result, and found the core language built-in designed to handle my use case directly. There was one detail that was not clear in the documentation, so I went back to the LLM to ask that question, and it gave the exact wrong answer.
I am willing to buy that, with IDE integration, I could probably get much richer function completion for small, easy stuff I know how to do and save some time, but I just haven’t gotten used to the idea of asking for help on things I already know how to do.
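The pattern described above can be sketched hypothetically. The actual operation isn’t named in the comment, so date parsing stands in for it, and every function name here is illustrative:

```python
# Hypothetical illustration: generated code that re-implements in several
# steps what a language built-in does directly. Date parsing is a stand-in
# for the unnamed operation in the comment above.
from datetime import datetime

# The convoluted route a generator might take, split across helper functions:
def split_date(s):
    return s.split("-")

def to_ints(parts):
    return [int(p) for p in parts]

def build(parts):
    return datetime(parts[0], parts[1], parts[2])

def parse_date_convoluted(s):
    return build(to_ints(split_date(s)))

# The built-in designed for exactly this job:
def parse_date_builtin(s):
    return datetime.strptime(s, "%Y-%m-%d")

# Both routes produce the same result; one is a single documented call.
assert parse_date_convoluted("2024-05-01") == parse_date_builtin("2024-05-01")
```

The point of the sketch is only the shape of the difference: several intermediate forms versus one documented call.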
ThePowerOfGeek@lemmy.world 22 hours ago
I’ve found the same thing.
Whenever I ask an LLM for a pointer, I end up spending just as long (if not longer) refining the question as I would have spent figuring it out myself with a search on Stack Overflow or other online resources.
But even the IDE integration is getting annoying. I write a class with some functionality baked in, and the whole time it’s prompting me with a shitload of irrelevant suggested code. I get the class done, then I go to spin up a unit test. It knows which class I’m trying to create a unit test for, which is cool. But then the suggested code is usually completely wrong, or it’s much more convoluted than it needs to be. In the latter case, the first several characters of the suggested code are good, but then there are several lines of shite after them. And hitting tab injects all of it, which then requires me to delete it all. So almost every time I end up hitting escape anyway.
I’ve heard a few people rave about ‘vibe coding’, usually people with little or no programming experience. I have to assume the generated code was either for very simple atomic actions and/or it’s spaghettified, inefficient garbage.
cyrano@lemmy.dbzer0.com 1 day ago
But trust me bro, AGI is around the corner. In the meantime, have this new groundbreaking feature: decrypt.co/…/chatgpt-total-recall-openai-memory-u… /s
bappity@lemmy.world 1 day ago
LLMs are so fundamentally different to AGI, it’s a wonder people believe that balderdash
hera@feddit.uk 1 day ago
As a very experienced Python developer, I have tried using ChatGPT for debugging and vibe coding multiple times, and you just end up going in circles and never get to a working solution. It ends up being a lot faster to just do it yourself.
gigachad@sh.itjust.works 1 day ago
Absolutely agree. I just use it for simple one-liners like “in every nth row of a pandas DataFrame, slice a string from x to y if column z is True”, or something like that. That logic takes time to write, and GPT usually comes up with a correct solution, or one that doesn’t need a lot of modification.
But debugging or analyzing an error? No thanks
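The one-liner described in the comment above can be sketched roughly as follows; the column names (`text`, `z`) and the parameters (`n`, `x`, `y`) are illustrative, not from the comment:

```python
# Rough sketch: in every nth row of a DataFrame, slice a string column
# from x to y where boolean column z is True. Names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "text": ["alphabet", "binary", "cascade", "dynamic", "entropy", "futures"],
    "z":    [True,       False,    True,      True,      False,     True],
})

n, x, y = 2, 1, 4  # every 2nd row, characters 1..4

# Boolean mask: every nth row AND column z is True.
mask = (df.index % n == 0) & df["z"].to_numpy()
df.loc[mask, "text"] = df.loc[mask, "text"].str[x:y]
```

With the sample data above, only rows 0 and 2 match, so `"alphabet"` becomes `"lph"` and `"cascade"` becomes `"asc"` while the rest are untouched.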
AdamEatsAss@lemmy.world 1 day ago
I have on multiple occasions told it exactly what the error is and how to fix it. The AI agrees, apologizes, and gives me the same broken code again. It takes the same amount of time to describe the error as it would have for me to fix it.
kyub@discuss.tchncs.de 1 day ago
“AI” is good for pattern matching, generating boilerplate/template code and text, and generating images. That’s about it. And of course it’s often flawed or inaccurate, so it needs human oversight. Everything else is like a sales scam.
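As a rough illustration of the kind of boilerplate the comment above says generators handle well, here is a minimal CLI argument-parsing skeleton; the tool description and argument names are invented for the example:

```python
# Minimal, conventional CLI boilerplate: the kind of low-novelty template
# code that generators tend to produce correctly. Names are illustrative.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Example tool")
    parser.add_argument("input", help="input file path")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable verbose output")
    return parser

# Demonstrate parsing an explicit argument list rather than sys.argv.
args = build_parser().parse_args(["data.txt", "--verbose"])
```

Code like this is almost pure convention, which is exactly why pattern-matching over training data reproduces it well.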
nick@midwest.social 21 hours ago
No shit.
aaron@lemm.ee 1 day ago
I’m full luddite on this. And fuck all of us.
Serinus@lemmy.world 22 hours ago
“Give me some good warning message CSS” was a pretty nice use case. It’s a nice tool that’s approaching the usefulness of Google search.
But you have to know when its answers are good and when they’re useless or harmful. That requires a developer.
Simulation6@sopuli.xyz 1 day ago
Can AI fix itself so that it gets better at a task? I don’t see how that could be possible; it would just fall into a feedback loop where it gets stranger and stranger.
Personally, I will always lie to AI when asked for feedback.
taladar@sh.itjust.works 1 day ago
It is worse. People can’t even fix AI so it gets better at a task.
jj4211@lemmy.world 23 hours ago
That’s been one of the things that has really stumped a team that wanted to go all in on an AI offering. They go to customer evaluations, and there’s really just nothing they can do about the problems reported. They can try retraining and hope for the best, but that likely won’t work and could also make other things worse.
VagueAnodyneComments@lemmy.blahaj.zone 22 hours ago
Ars Technica would die of an aneurysm if it stopped posting about generative AI for even 30 seconds
as they’re the authority on tech, and all they write about is shitty generative AI from 2017, that means shitty generative AI from 2017 is the only tech worth writing about
BangelaQuirkel@lemmy.world 1 day ago
Are those researchers human, or is this just an AI that’s too lazy to do the work?
latenightnoir@lemmy.blahaj.zone 1 day ago
Well, now they’re just subverting expectations left and right, aren’t they!
bappity@lemmy.world 1 day ago
the tool can’t replace the person or whatever
FauxPseudo@lemmy.world 1 day ago
But the only way to learn debugging is to have experience coding. So if we let AI do the coding then all the entry level coding jobs go away and no one learns to debug.
This isn’t just a code thing. This is all kinds of professions. AI will kill the entry level which will prevent new people from getting experience which will have downstream effects throughout entire industries.
MigratingApe@lemmy.dbzer0.com 1 day ago
It already started happening before LLM AI. Have you heard the joke that we used to teach our parents how to use printers and PCs with a mouse and keyboard, and now we have to do the same with our children? It’s really not a joke. We are the last generation that has seen it all evolve before our eyes; we know the fundamentals of each layer of abstraction the current technology is built upon. It was a natural process for us to learn all of this, and now we suddenly expect “fresh people” to grasp 50 years or so of progress in 5 or so years?
Interesting times ahead of us.
Evotech@lemmy.world 17 hours ago
Good point
AdamEatsAss@lemmy.world 1 day ago
Have you used any AI for programming? There is zero chance entry-level jobs will be replaced. AI only works well if what it needs to do is well defined, and as a dev that is almost never the case. Also, companies understand that to create a senior dev they need a junior dev they can train. And corporations do not trust Google, OpenAI, Meta, etc. with their intellectual property. My company made it a fireable offense if they catch you uploading IP to an AI.
FauxPseudo@lemmy.world 1 day ago
We live in a world where every company wants people who can hit the ground running and requires 5 years of experience for an entry-level job in a language that’s only been out for three. On-the-job training died long ago.
metaldream@sopuli.xyz 1 day ago
The junior devs at my job are way better at debugging than AI, lol. Granted, they are top-talent hires, because no one else can break in these days.
zenpocalypse@lemm.ee 15 hours ago
In my experience, LLMs are good for code snippets and input on best practices.
I use it as a tool to speed up my work, but I don’t see it replacing even entry jobs any time soon.