If, hypothetically, the code had the same efficacy and quality as human code, then it would be much cheaper and faster. Even if it were actually a little worse, it would still be amazingly useful.
My dishwasher sometimes doesn’t fully clean everything; it’s not as strong a guarantee as doing it myself. I still use it because, despite the lower-quality wash that requires some spot washing, I still come out ahead.
Now, that was hypothetical; LLM-generated code is damn near useless for my usage, despite assumptions that it would manage a bit more. But if it did generate code that matched the request, with a risk of bugs comparable to doing it myself, I’d absolutely be using it. I suppose with the caveat that the code also has to stay within my ability to actually diagnose problems…
MNByChoice@midwest.social 1 day ago
One’s dishwasher is not exposed to a harsh environment. A large percentage of code is exposed to an openly hostile environment.
If a dishwasher breaks, it can destroy a floor, a room, maybe the rooms below. If code breaks, it can lead to the computer, and then the network, being compromised, followed by escalating attacks that can bankrupt a business and lead to financial ruin. (This is possibly extreme, but cyber attacks have destroyed businesses. The downside risks of terrible code can be huge.)
jj4211@lemmy.world 1 day ago
Yes, but just like quality, the people in charge of money aren’t totally on top of security either. They just see superficially convincing tutorial fodder and start declaring they will soon be able to get rid of all those pesky people. Even if you convince them a human does it better, they are inclined to think ‘good enough for the price’.
So you can’t say “it’s no better than a human at quality” and expect those people to be discouraged; you have to point out how wildly off base it is.