Comment on Why are people seemingly against AI chatbots aiding in writing code?
cley_faye@lemmy.world 1 month ago
- issues with model training sources
- businesses sending their whole codebase to a third party (Copilot etc.) instead of using local models
- time gain is not that substantial in most cases, as the actual “writing code” part is not the part that takes the most time; thinking and checking it is
- "chatting" in natural language to describe something that have a precise spec is less efficient than just writing code for most tasks as long as you’re half-competent. We’ve known that since customer/developer meetings have existed.
- the dev has to actually be competent enough to review the changes/output. In a way, “peer reviewing” becomes mandatory; it’s long, can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here; even a generated one-liner can have issues — see the sketch after this list)
- some businesses thinking that LLM outputs are “good enough”, firing or moving away the people who can actually do said review, leading to more issues down the line
- actual debugging of non-trivial problems ends up sending me in a lot of directions; getting useful output is unreliable at best
- making new things will sometimes confuse LLMs, making them a time loss at best and sometimes producing even worse code
- using a code chatbot to help with common, menial tasks is irrelevant, as these tasks have already been done and sort of “optimized out” into libraries and reusable code. At best you could pull some of this into your own codebase, making it worse to maintain in the long term
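To illustrate the point about even a generated one-liner needing a careful look: the snippet below is a hypothetical example of my own, not one taken from the thread. A plausible “deduplicate this list” one-liner works, but silently drops the original ordering, which is exactly the kind of issue a quick glance misses.

```python
items = ["b", "a", "b", "c"]

# Hypothetical generated one-liner: it does deduplicate, but set() does not
# preserve insertion order, so callers relying on the original order break.
unique = list(set(items))                     # may come back as ["c", "a", "b"]

# The reviewed fix is just as short: dict keys keep insertion order (Python 3.7+).
unique_ordered = list(dict.fromkeys(items))   # ["b", "a", "c"]

print(unique, unique_ordered)
```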
Those are the downsides I can think of off the top of my head, having used AI coding assistance (mostly local solutions, for privacy reasons). There are upsides too:
- sometimes, it does produce useful output in which I only have to edit a few parts to make it work
- local autocomplete is sometimes almost as useful as the regular contextual autocomplete
- the chatbot turning short code into longer “natural language” explanations can sometimes act as a rubber duck, helping with debugging
Note the “sometimes”. I don’t have actual numbers because tracking that would be, like, hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens. For most of the downsides, it’s not even a matter of the tool becoming better; it’s the usefulness to begin with that’s uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently ecological too) that is not at all outweighed by the rare “gains”.
confuser@lemmy.zip 1 month ago
A lot of your issues are efficiency related, which I think can realistically be solved given some time for development cycles to take hold on AI. If they were better all around, to whatever standard you think is sufficiently useful, would you then think it would be useful? The other related thing is that if it can get that level of competence in coding, then it most likely can get just as competent in a variety of other domains too.
cley_faye@lemmy.world 1 month ago
The point is, they don’t get “competent”. They get better at assembling pieces they were given. And a proper stack with competent developers will already have moved that redundancy out of the codebase. For whatever remains, thinking is the longest part, and LLMs can’t improve that once the problem gets a tiny bit complex. Of course, I could end up having a good rough idea of what the code should look like, describe that to an LLM, and have it write actual code with proper variable names and all, but once I reach the point where I can accurately describe the thing I want, it’s usually as fast to type it myself, with the added value that it’s easier to double-check.
What remains is providing good insight on new things and understanding complex requirements. While there is room for improvement, it seems more and more obvious that LLMs are not the answer: theoretically, they are not the right tool, and given the levels of improvement we’re seeing, they definitely have not proven us wrong. The technology is good at some things, but not at getting “competent”.
Also, you sweep aside the privacy and licensing issues, which are big no-nos too.
LLMs have their uses; I outlined some. And in these uses, there is clear room for improvement. For reference, the solution I currently use puts me at accepting around 10% of the automatic suggestions, and out of those, I’d say a third need reworking. Obviously, if that moved up to something like 90% of suggestions looking decent, with less need to fix them afterward, it’d be great. Unfortunately, since you can’t trust these, you would still have to review the output carefully, making the whole operation probably not that big of a time saver anyway.
Coding doesn’t allow much leeway. Other activities which allow more leeway for mistakes can probably benefit a lot more. Translation, for example, can be acceptable, in particular because some mishaps may automatically be corrected by readers/listeners. But with code, any single mistake will lead to issues down the line.