Dremor@lemmy.world 1 day ago
Not exactly. It would mean it isn’t better than humans, so the only real metric for adopting it or not would be cost. And considering it would require a human to review the code and fix the bugs anyway, I’m not sure the ROI would be that good in such a case. If it were, say, twice as good as an average developer, the ROI would be far better.
jj4211@lemmy.world 1 day ago
If, hypothetically, the code had the same efficacy and quality as human code, then it would be much cheaper and faster. Even if it were actually a little bit worse, it would still be amazingly useful.
My dishwasher sometimes doesn’t fully clean everything; it doesn’t offer as strong a guarantee as doing it myself. I still use it because, despite the lower-quality wash that requires some spot cleaning, I still come out ahead.
Now, this was hypothetical: LLM-generated code is damn near useless for my usage, despite assumptions it would do a bit more. But if it did generate code that matched the request, with a risk of bugs comparable to doing it myself, I’d absolutely be using it. I suppose with the caveat that the code has to stay within my ability to actually diagnose problems too…
MNByChoice@midwest.social 1 day ago
One’s dishwasher is not exposed to a harsh environment. A large percentage of code is exposed to an openly hostile environment.
If a dishwasher breaks, it can destroy a floor, a room, maybe the rooms below. If code breaks, it can lead to the computer, and then the network, being compromised, followed by escalating attacks that can bankrupt a business and lead to financial ruin. (This is possibly extreme, but cyber attacks have destroyed businesses. The downside risks of terrible code can be huge.)
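To make that concrete, here’s a minimal sketch of the classic kind of flaw that turns “a bug” into “a compromise”. The function names and schema are hypothetical; the injection pattern is the point:

```python
# Hypothetical sketch (illustrative names, stdlib sqlite3 only): one
# string-formatting shortcut is the difference between a cosmetic bug
# and an exploitable hole.
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is spliced straight into the SQL text, so
    # input like "x' OR '1'='1" returns every row (classic SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the value, closing the hole.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```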
jj4211@lemmy.world 1 day ago
Yes, but just as with quality, the people in charge of the money aren’t totally on top of security either. They just see superficially convincing tutorial fodder and start declaring they will soon be able to get rid of all those pesky people. Even if you convince them a human does it better, they are inclined to think ‘good enough for the price’.

So you can’t say “it’s no better than a human at quality” and expect those people to be discouraged; it has to be pointed out how wildly off base it is.
MangoCats@feddit.it 1 day ago
Human coder here. First problem: define what “writing code” means. Well over 90% of the software engineers I have worked with “write their own code” - but that’s typically less (often far less) than 50% of the value they provide to their organization. They also coordinate their interfaces with other software engineers, capture customer requirements in testable form, and, above all else, negotiate system architecture with their colleagues to build large working systems.
So, AI has written 90% of the code I have produced in the past month. I tend to throw away more AI code than I ever threw away hand-written code, mostly because doing so is now low-cost. I wish I’d had the luxury in the past of throwing away code like that and starting over. What AI hasn’t done is put together working systems of any value - it makes nice little microservices. If you architect your system as a bunch of cooperating microservices, AI can be a strong contributor on your team. If you expect AI to grasp any kind of “big picture” and implement it down to the source code level, your “big picture” had better be pretty small - nothing I have ever launched as a commercially viable product has been that small.
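For a sense of scale, the kind of thing it does well is a small, self-contained service like the sketch below (purely hypothetical, Python standard library only; the endpoint and names are made up):

```python
# Hypothetical sketch of a "nice little microservice": one endpoint, a
# single responsibility, no cross-service architecture to negotiate.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report service health as JSON on /health; 404 everything else.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The hard part - getting dozens of these to cooperate into a
    # product - is still on the humans.
    HTTPServer(("127.0.0.1", 8080), HealthHandler).serve_forever()
```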
Writing code / being a software engineer isn’t like being a bricklayer. Yes, AI is laying 90% of our bricks today, but it’s showing no signs of being capable of designing the buildings, or even of evaluating the structural integrity of anything taller than maybe two floors.