Comment on CEO of Palantir Says AI Means You’ll Have to Work With Your Hands Like a Peasant
Pika@sh.itjust.works 2 days ago
The scary part is that it already somewhat is.
My friend is currently job hunting because their company added AI to the workflow, and it now does everything past the initial issue report.
The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn’t work -> AI lints the final product once it’s working -> AI submits the patch as a pull request (a rough sketch of this loop follows below).
Their job has been downscaled from being the one who organizes, assigns, and works on code to an over-glorified code auditor who looks at pull requests and says “yes, this is good” or “no, send this back.”
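For concreteness, here is a minimal Python sketch of the loop described above, under the assumption that each stage is a separate AI call. Every name is a hypothetical stand-in, stubbed so the sketch runs; it is not any real vendor’s API.

```python
# Rough sketch of the described flow. All functions are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Patch:
    diff: str
    tests_pass: bool = True

def ai_format_and_tag(issue: str) -> str:
    return f"[bug] {issue}"                             # stub: AI triage and tagging

def ai_make_patch(ticket: str) -> Patch:
    return Patch(diff=f"fix for: {ticket}")             # stub: AI drafts a patch

def ai_run_tests(patch: Patch) -> bool:
    return patch.tests_pass                             # stub: AI-run test suite

def ai_lint(patch: Patch) -> Patch:
    return patch                                        # stub: AI formatting pass

def submit_pull_request(patch: Patch) -> str:
    return f"PR opened for human review: {patch.diff}"  # stub: awaits the auditor

def handle_issue(issue: str) -> str:
    ticket = ai_format_and_tag(issue)   # issue logged -> AI formats and tags it
    patch = ai_make_patch(ticket)       # AI makes the patch
    while not ai_run_tests(patch):      # failed tests throw the patch back
        patch = ai_make_patch(ticket)
    patch = ai_lint(patch)              # AI lints once it works
    return submit_pull_request(patch)   # a human says "yes" or "send it back"

print(handle_issue("crash when the input list is empty"))
```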
ns1@feddit.uk 2 days ago
It would be interesting to know where your friend works and what kind of application it’s on, because your comment is the first time I’ve ever heard of this level of automation. Not saying it can’t be done, just skeptical of how well it would work in practice.
Pika@sh.itjust.works 1 day ago
That was my general thought process before they told me how the system worked, too. I had seen Claude workflows that do something similar, but never to that level. It was an eye-opener.
dreamkeeper@literature.cafe 2 days ago
There’s absolutely no way this can be effective for anything other than simple changes in each PR.
Pika@sh.itjust.works 1 day ago
I’ll have to ask them how effective it is now that it’s been deployed for a bit. I wouldn’t expect it to be, based on how I’ve seen open-source projects use stuff like that, but they also haven’t been complaining about it screwing up at all.
dreamkeeper@literature.cafe 16 hours ago
I found out that some teams at my company are doing the same thing. They’re using it to fix simple issues, like exceptions and security findings that don’t need many code changes. I’d be shocked if it were any different at your friend’s company. It’s just surprising to me that that’s all they were doing.
LLMs can be very effective, but when I’m writing complex code with them, they always require multiple rounds of iteration. They just can’t retain enough context, or maintain it accurately, without making mistakes.
I think some clever context engineering can help with that, but at the end of the day it’s a known limitation of LLMs. They’re really good at doing text-based things faster than we can, but the human brain just has an absolutely enormous capacity for storing information.
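To illustrate what “context engineering” can look like in practice, here is a minimal Python sketch of one common tactic: keep the newest turns verbatim and collapse older ones into a summary once a token budget is exceeded. The names and the 4-characters-per-token heuristic are assumptions for illustration, not any specific tool’s behavior.

```python
# Hypothetical sketch: trim conversation history to fit a token budget,
# replacing overflow with a summary placeholder.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)       # crude heuristic: ~4 chars per token

def build_prompt(history: list[str], new_msg: str, budget: int = 2000) -> str:
    kept: list[str] = []
    used = estimate_tokens(new_msg)
    for msg in reversed(history):       # walk newest -> oldest
        cost = estimate_tokens(msg)
        if used + cost > budget:        # out of room: summarize the rest
            kept.append("[LLM-written summary of earlier discussion]")
            break
        kept.append(msg)
        used += cost
    kept.reverse()                      # restore chronological order
    kept.append(new_msg)
    return "\n".join(kept)

history = [f"turn {i}: some earlier discussion" for i in range(20)]
print(build_prompt(history, "turn 21: the new request", budget=60))
```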
PrejudicedKettle@lemmy.world 2 days ago
I feel like so much LLM-generated code is bound to deteriorate code quality and blow up the context size to the point that the LLM eventually becomes paralyzed.
Pika@sh.itjust.works 2 days ago
I do agree that LLM-generated code is inaccurate, which is why they have to have the “throw it back” stage and a human eye looking at it.
They told me their main concern is that they aren’t sure they’ll understand the code the AI is spitting out well enough to properly audit it (which is fair). And of course any issue with the code will fall on them, since it’s their job to give the final “yes, this is good.”
WanderingThoughts@europe.pub 2 days ago
At that point they’re just the responsibility circuit breaker, put there to take the blame if things go wrong.