What they’re leaving out here is that the study was specifically about bug-fixing tasks. If you’re familiar with using it for coding, you’ll know it’s horrendous at that sort of thing.
Comment on AI slows down some experienced software developers, study finds
kescusay@lemmy.world 8 months ago
Experienced software developer here. “AI” is useful to me in some contexts. Specifically, when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and don’t want to do it by hand, it saves me time.
And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.
CabbageRelish@midwest.social 8 months ago
FreedomAdvocate@lemmy.net.au 8 months ago
I’ve found it to be great at writing unit tests too.
I use github copilot in VS and it’s fantastic. It just throws up suggestions for code completions and entire functions etc, and is easily ignored if you just want to do it yourself, but in my experience it’s very good.
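A hypothetical sketch of the kind of mechanical unit test an assistant handles well (toy function and names, not from any real project):

```python
# Toy stand-in for real project code.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Assistant-style tests: one happy path, one whitespace edge case.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  a   b ") == "a-b"

test_slugify_basic()
test_slugify_collapses_whitespace()
```

Boilerplate like this is exactly the repetitive pattern-matching these tools are good at, which is why test generation tends to work better than open-ended feature work.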
IndustryStandard@lemmy.world 8 months ago
Everyone on Lemmy is a software developer.
billwashere@lemmy.world 8 months ago
Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me 80-90% of the way to a script in no time. The remaining 10-20% takes a while, but it was going to take a while regardless. So the time savings on that first chunk is awesome. It does send me down a really bad path at times, though, and being experienced enough to recognize that is very helpful: I just start over.
In my opinion, AI shouldn’t replace coders, but it can definitely enhance them if used properly. It’s a tool like anything else. I can put a screw in with a hammer, but I probably shouldn’t.
kescusay@lemmy.world 8 months ago
Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without a fundamental rearchitecting of how these models work.
The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.
On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.
Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?
It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.
FreedomAdvocate@lemmy.net.au 8 months ago
The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.
Not quite true - GitHub Copilot in VS, for example, can be given access to your entire repo/project, and it then “knows” how things tie together, so it can get more context for its suggestions and generated code.
kescusay@lemmy.world 8 months ago
That’s still not actually knowing anything. It’s just temporarily adding more context to its model.
And it’s always very temporary. I have a yarn project I’m working on right now, and I used Copilot in VS Code in agent mode to scaffold it as an experiment. One of the refinements I included in the prompt file was a set of reminders throughout for things it wouldn’t need reminding of if it actually “knew” the repo.
- I had to constantly remind it that it’s a yarn project, otherwise it would inevitably start trying to use NPM as it progressed through the prompt.
- For some reason, when it’s in agent mode and makes a mistake, it wants to delete the files it has fucked up, which always requires human intervention, so I peppered the prompt with reminders not to do that, but to blank the file out and start over instead.
- The frontend of the project uses TailwindCSS. It kept trying to downgrade the configuration to an earlier version instead of using the current one, so I wrote the entire configuration by hand and inserted it into the prompt file. If I let it try to build the configuration itself, it would inevitably fuck it up and then say something completely false, like, “The version of TailwindCSS we’re using is still in beta, let me try downgrading to the previous version.”
I’m not saying it wasn’t helpful. It probably cut 20% off the time it would have taken me to scaffold out the app myself, which is significant. But it certainly couldn’t keep track of the context provided by the repo, even though it was creating that context itself.
Working with Copilot is like working with a very talented and fast junior developer whose methamphetamine addiction has been getting the better of them lately, and who has early-onset dementia or a brain injury that destroyed their short-term memory.
stsquad@lemmy.ml 8 months ago
Sometimes I get an LLM to review a patch series before I send it, as a quick once-over. I would estimate about 50% of the suggestions are useful and about 10% are based on “misunderstanding.” Last week it suggested a spelling fix I’d already made, because it didn’t understand that the - in the diff meant I’d already changed the line.
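For anyone unfamiliar with unified diffs, a rough sketch of that misread (hypothetical hunk, not the real patch):

```python
# In a unified diff, lines starting with "-" are the OLD text and lines
# starting with "+" are the NEW text, so a typo on a "-" line has already
# been fixed by its matching "+" line.
hunk = """\
-    recieve the packet
+    receive the packet
"""

old_lines = [l[1:].strip() for l in hunk.splitlines() if l.startswith("-")]
new_lines = [l[1:].strip() for l in hunk.splitlines() if l.startswith("+")]

# The misspelling only exists on the pre-change side of the diff.
print(old_lines)  # ['recieve the packet']
print(new_lines)  # ['receive the packet']
```

An LLM that flags the spelling on the `-` line is reviewing text that the patch itself already deletes.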
MangoCats@feddit.it 8 months ago
I have limited AI experience, but so far that’s what it means to me as well: helpful in very limited circumstances.
Mostly, I find it useful for “speaking new languages” - if I try to use AI to “help” with the stuff I have been doing daily for the past 20 years? Yeah, it’s just slowing me down.
vrighter@discuss.tchncs.de 8 months ago
And the only reason it’s not slowing you down on other things is that you don’t know enough about those things to recognize all the stuff you need to fix.
balder1991@lemmy.world 8 months ago
I like the saying that LLMs are good at stuff you don’t know. That’s about it.
Zetta@mander.xyz 8 months ago
FreedomAdvocate is right. IMO the best use case for AI is things you have some understanding of but need assistance with. You need to understand enough to catch at least the impactful errors the LLM makes.
FreedomAdvocate@lemmy.net.au 8 months ago
They’re also bad at that though, because if you don’t know that stuff then you don’t know if what it’s telling you is right or wrong.
fafferlicious@lemmy.world 8 months ago
I… think that’s their point. The only reason it seems good is that you’re bad too, and can’t spot that it’s bad.
MangoCats@feddit.it 8 months ago
Like search engines, and libraries…
lIlIlIlIlIlIl@lemmy.world 8 months ago
Exactly what you would expect from a junior engineer.
Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.
Something something craftsmen don’t blame their tools
corsicanguppy@lemmy.ca 8 months ago
Exactly what you would expect from a junior engineer.
Except junior engineers become seniors. If you don’t understand this … are you HR?
lIlIlIlIlIlIl@lemmy.world 8 months ago
They might become seniors for 99% more investment. Are you an MBA?
FreedomAdvocate@lemmy.net.au 8 months ago
Interesting downvotes, especially given that there are more of them than upvotes.
Do people think “junior” and “senior” here just relate to age and/or time in the workplace? Someone could work in software dev for 20 years and still be a junior dev.
Feyd@programming.dev 8 months ago
AI tools are way less useful than a junior engineer, and they aren’t an investment that turns into a senior engineer either.
FreedomAdvocate@lemmy.net.au 8 months ago
They’re tools that can help a junior engineer and a senior engineer with their job.
MangoCats@feddit.it 8 months ago
AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really “graduated” to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers “find their niche” doing something other than engineering with their engineering job titles, and that’s great, but don’t ever trust them to build you a bridge or whatever it is they seem to have been hired to do.
Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?
Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn’t seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.
The question I have is: will AI continue to write “human compatible” software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.
AA5B@lemmy.world 8 months ago
I’m seeing exactly the opposite. Junior engineers used to understand they had a lot to learn. With AI, however, they confidently attempt entirely wrong changes. They can’t tell when the AI goes down the wrong path, don’t know how to fix it, and it takes me longer to fix.
So far ai overall creates more mess faster.
Don’t get me wrong, it can be a useful tool, but you have to think of it like autocomplete or internet search. Just like those tools, it provides results, but the human needs judgment to figure out how to apply the appropriate ones.
My company wants metrics on how much time we’re saving with ai, but
- I have to spend more time helping the junior guys out of the holes dug by ai
- it’s just another tool. There’s not really a defined task or set time. If you had to answer how much time autocomplete saved you, could you provide any sort of meaningful answer?
Feyd@programming.dev 8 months ago
Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?
LOL sure
lIlIlIlIlIlIl@lemmy.world 8 months ago
Is “way less useful” something you can cite with a source, or is that just feelings?
Feyd@programming.dev 8 months ago
It is based on my experience, which I trust immeasurably more than rigged “studies” done by the big LLM companies with clear conflict of interest.
errer@lemmy.world 8 months ago
Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…
finalarbiter@lemmy.dbzer0.com 8 months ago
This line of thought is short-sighted. Your senior engineers will eventually retire or leave the company. If everyone replaces junior engineers with AI, there will be nobody with the experience to fill those empty seats. Then you end up with no junior engineers and no senior engineers, so who is wrangling the AI?
Feyd@programming.dev 8 months ago
The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.
lIlIlIlIlIlIl@lemmy.world 8 months ago
Even at $100/month, you’re comparing to a >$10k/month junior. 1% of the cost for surely more than 1% of the functionality of a junior.
You can see why companies are tripping over themselves to push this new modality.
5too@lemmy.world 8 months ago
The difference being junior engineers eventually grow up into senior engineers.
lIlIlIlIlIlIl@lemmy.world 8 months ago
Does every junior eventually achieve becoming a senior?
5too@lemmy.world 8 months ago
No, but that’s the only way you get senior engineers!
sugar_in_your_tea@sh.itjust.works 8 months ago
Same. I also like it for basic research and helping with syntax for obscure SQL queries, but coding hasn’t worked very well. One of my less technical coworkers tried to vibe code something and it didn’t work well. Maybe it would do okay on something routine, but generally speaking it would probably be better to use a library for that anyway.
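As a hypothetical example of the kind of obscure syntax it helps with, a recursive CTE run against an in-memory SQLite database (toy query, not from any real project):

```python
import sqlite3

# Recursive common table expression generating the numbers 1..5 -
# exactly the sort of syntax that's easy to forget and handy to ask for.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 5
    )
    SELECT n FROM counter
""").fetchall()

print([n for (n,) in rows])  # → [1, 2, 3, 4, 5]
```

Looking up syntax like this is a narrow, verifiable task, which is why it works better than asking the model to write whole features.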
kescusay@lemmy.world 8 months ago
I actively hate the term “vibe coding.” The fact is, while using an LLM for certain tasks is helpful, trying to build out an entire production-ready application by prompts alone is a huge waste of time and guaranteed to produce garbage code.
At some point, people like your coworker are going to have to look at the code and work on it, and if they don’t know what they’re doing, they’ll fail.
I commend them for giving it a shot, but I also commend them for recognizing it wasn’t working.
sugar_in_your_tea@sh.itjust.works 8 months ago
I think the term pretty accurately describes what is going on: they don’t know how to code, but they do know what correct output for a given input looks like, so they iterate with the LLM until they get what they want. The coding here is based on vibes (does the output feel correct?) instead of logic.
I don’t think there’s any problem with the term, the problem is with what’s going on.
kescusay@lemmy.world 8 months ago
That’s fair. I guess what I hate is what the term represents, rather than the term itself.