How AI assistance impacts the formation of coding skills
Submitted 3 weeks ago by Beep@lemmus.org to technology@lemmy.world
https://www.anthropic.com/research/AI-assistance-coding-skills
Comments
MonkderVierte@lemmy.zip 3 weeks ago
OP, please add a fat disclaimer at the bottom that this is from Anthropic, a major AI company.
sem@piefed.blahaj.zone 2 weeks ago
Also, society, please learn to read all sources critically.
irate944@piefed.social 3 weeks ago
I would love to read an independent study on this, but this is from Anthropic (the guys that make Claude) so it’s definitely biased.
Speaking for myself, I’ve been using LLMs to help jump small gaps in knowledge. For example, I know what I need to do, I just don’t know or remember the specific functions or libraries I need to do it in Python. An LLM is extremely useful in those moments, and it’s faster than searching and asking on forums. And to be transparent, I did learn a few tricks here and there.
But if someone lets the LLM do most of the work - like vibe coders - I doubt they will learn anything.
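A made-up illustration of the kind of gap I mean (the task and data here are hypothetical): I know I need to group consecutive records by a key, and the LLM just supplies the half-remembered stdlib call, `itertools.groupby`:

```python
from itertools import groupby

# The goal is clear; only the specific function name was forgotten.
# groupby groups *consecutive* items sharing a key.
records = [("a", 1), ("a", 2), ("b", 3)]
grouped = {k: [v for _, v in g] for k, g in groupby(records, key=lambda r: r[0])}
# grouped == {"a": [1, 2], "b": [3]}
```

The caveat worth learning while you’re at it: `groupby` only groups adjacent items, so unsorted input needs a `sorted()` first.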
suicidaleggroll@lemmy.world 2 weeks ago
I do the same. I start with the large task, break it into smaller chunks, and I usually end up writing most of them myself. But occasionally there will be one function that is so cookie-cutter, so insignificant to the overall function of the program, and so far outside my normal area of expertise that I’ll offload it to an LLM. They actually do pretty well on tasks like that, when given a targeted task with very specific inputs and outputs, and I can learn a bit by looking at what it ended up generating. I’d say only about 5-10% of the code I write falls into the category where an LLM could realistically take it on, though.
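To sketch the kind of targeted, well-specified task I mean (this function and its spec are invented for illustration, not from any real codebase): a tiny row validator with fixed inputs and outputs is exactly the sort of thing an LLM handles well.

```python
def is_valid_row(row: list[str]) -> bool:
    """Cookie-cutter helper: a row is valid if it has exactly 3 fields,
    the first field is non-empty, and the last parses as a float."""
    if len(row) != 3 or not row[0]:
        return False
    try:
        float(row[2])
        return True
    except ValueError:
        return False
```

With a spec that tight, reviewing the generated code takes seconds, which is what makes the delegation safe.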
melfie@lemy.lol 2 weeks ago
I’m a senior dev who has been the tech lead over various products throughout my career and have always been really engaged. In my current software engineer role, though, most of the important product and technology decisions are made behind closed doors and handed down to my team. I’ve found that any given idea I have that isn’t a direct and logical conclusion of a decision made in the ivory tower has a 99% chance of getting shot down or ignored. So, my job is more or less to pump out whatever drivel they want every day instead of being someone driving the development of products I’m building. Unsurprisingly, AI really helps a lot with that. The fact that AI reduces my engagement is a feature, not a bug, because every time I become engaged and start getting excited about an idea, it’s always met with indifference or even disdain, leading to frustration and depression. AI has definitely improved my mental health because I can give a lot less of a shit. The CEO sent out an e-mail the other day saying our #1 priority is to use AI to make ourselves more productive.
whereIsTamara@lemmy.org 2 weeks ago
My boss makes lots of really odd and… just bad requests. Half the time I can’t even understand what he’s asking because he seems to enjoy being super vague. For a few months now I’ve just copied and pasted his requests to codex and let it rip. It’s usually pretty accurate. At least… the PRs get approved… and that’s all that matters.
jj4211@lemmy.world 2 weeks ago
Familiar but with a difference in my case.
I’ve spent my entire career alternating between two experiences.
One is being grilled on why I am delivering what I think should be done instead of what the executives told me to do.
The other is getting awards and promotions when it turns out that I was right and the customers loved it.
It happened to work out for me to do it my way, and though my executives have resented the implication that they don’t have good vision, they also know how to leverage my success for themselves. This most recent promotion in particular was stalled to reward better drones instead, but it’s looking like they’ll have to pivot back to rewarding the folks the paying customers actually like instead of those who feed the executive egos.
Calabast@lemmy.ml 3 weeks ago
I’m trying out using Claude on a problem at work that has been frustrating; lots of unexpected edge cases that require big changes.
I definitely know less about my “solution” (it’s not done yet, but it’s getting close) than if I had actually sat down and done it all myself. And it’s taken a lot of back and forth to get to where I am.
It’d probably have gone better if, once Claude provided me a result, I went through it completely and made sure I understood every aspect of it, but man when it just spits out a full script, the urge to just run it and see if it works is strong. And if it’s close but not quite right, then the feeling is “well, let me just ask about this one part while I’m already here” and then you get a new complete script to try. And that loop continues, and I never get around to really going through and fully understanding the code.
ElBarto@piefed.social 3 weeks ago
Do you tell Claude to make a plan first?
That helps me tremendously. Whenever something needs to be modified, I tell it to update the plan first, and to stick to the plan.
That way, Claude doesn’t rewrite code that has already been implemented as part of the plan.
And understanding the plan helps with understanding the code.
Sometimes if I know there will be a lot of code produced, I’ll tell it to add a quick comment on every piece it adds or modifies with a reference to the step in the plan it refers to. Makes code reviewing much more pleasant and easier to follow. And the bugs and hallucinations stick out more too.
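A made-up sketch of what those plan-referencing comments might look like in generated code (the plan step numbers and function names here are hypothetical):

```python
# Plan step 3: normalize input paths before comparison
def normalize(path: str) -> str:
    # Trim whitespace, drop trailing slashes, lowercase for comparison.
    return path.strip().rstrip("/").lower()

# Plan step 4: deduplicate normalized paths, preserving first-seen order
def dedupe(paths: list[str]) -> list[str]:
    seen: set[str] = set()
    out: list[str] = []
    for p in map(normalize, paths):
        if p not in seen:
            seen.add(p)
            out.append(p)
    return out
```

When each chunk carries its step number, a reviewer can diff the code against the plan instead of against their own guesses, and anything the model invents that maps to no step stands out immediately.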
priapus@piefed.social 3 weeks ago
Agreed, using a planning phase makes a huge difference. It breaks the implementation into steps, which makes reviewing or manually refactoring parts of the code far easier.
GnuLinuxDude@lemmy.ml 3 weeks ago
importantly, in our own funded study, we found that those who used our product the most did the best
Lulzagna@lemmy.world 3 weeks ago
“you’re holding it wrong”