Comment on The Copilot Delusion
Kyrgizion@lemmy.world 2 weeks ago
Yes, as long as the information you get from the AI is correct. Which we know is absolutely not the case. That is the issue. If AI’s output could be trusted 100%, things would be wildly different.
jyl@sopuli.xyz 2 weeks ago
Unlike vibe coding, asking an LLM how to access some specific thing in a library when you’re not even sure what to look for is a legitimate use case.
IsoKiero@sopuli.xyz 2 weeks ago
You’re not wrong, but my personal experience is that it can also lead you down a pretty convincing but totally wrong path. I’m not a professional coder, but I have at least some experience, and I’ve tried the LLM approach when trying to figure out which library/command set/whatever I should use for the problem at hand. Sometimes it gives useful answers; sometimes it’s totally wrong in a way that’s easy to spot; and at worst it gives you something which (at least to me) seems like it could work. In that last case I then spend more or less time figuring out how to use the thing it proposed, fail, eventually read the actual old-fashioned documentation, and notice that the proposed solution is somewhat related to my problem but totally wrong.
At that point I would actually have saved time if I’d done things the old-fashioned way from the start (which is getting more and more annoying as search engines get worse and worse). There are legitimate use cases too, of course, but you really need to have at least some idea of what you’re doing to evaluate the answers LLMs give you.
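For what it’s worth, even a quick sanity check catches the worst cases before you sink time into them. Here’s a minimal sketch (plain Python standard library; the module and attribute names in the example calls are just illustrations, not anything an LLM actually told me) that checks whether a suggested function really exists in the installed version of a library:

```python
import importlib
import inspect

def check_suggestion(module_name: str, attr_name: str) -> None:
    """Report whether an LLM-suggested function actually exists
    in the installed library, and show its real signature."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        print(f"{module_name} is not installed")
        return
    attr = getattr(module, attr_name, None)
    if attr is None:
        print(f"{module_name} has no attribute {attr_name!r} -- "
              "likely hallucinated, or from a different version")
        return
    if callable(attr):
        # The real signature is more trustworthy than whatever the LLM claimed.
        print(f"{module_name}.{attr_name}{inspect.signature(attr)}")
    else:
        print(f"{module_name}.{attr_name} exists (not callable)")

check_suggestion("json", "dumps")      # real: prints the actual signature
check_suggestion("json", "to_string")  # made up: gets flagged
```

It obviously won’t tell you whether the function does what the LLM said it does; for that you still end up in the documentation.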
jyl@sopuli.xyz 2 weeks ago
Yeah, I guess that can happen. For me, it has saved much more time than it has wasted, but I’ve only used it on relatively popular libraries with stable APIs, and I don’t ask for complex things.
dustyData@lemmy.world 2 weeks ago
Until it gives you a list of books where two-thirds don’t exist and the rest aren’t even in the library.
jyl@sopuli.xyz 2 weeks ago
The worst I’ve got so far hasn’t been hallucinated “books”, but stuff like functions from a previous major version of the API mixed in.
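A concrete example of that version mix-up (the commented-out line is the sort of thing an LLM trained on older material might suggest; pandas really did remove this method in its 2.0 major release):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
row = pd.DataFrame({"a": [3]})

# What an LLM trained on pandas 1.x docs might suggest:
# df = df.append(row)  # AttributeError on pandas >= 2.0; removed in 2.0

# The current API for the same thing:
df = pd.concat([df, row], ignore_index=True)
print(df)
```

Nothing about the suggestion looks wrong until you run it against the version you actually have installed.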
I don’t think it’s unreasonable to use an LLM as a documentation search engine, but I guess this opinion lost the popular vote.
Kyrgizion@lemmy.world 2 weeks ago
I’ve had great success using ChatGPT to diagnose and solve hardware issues. There are plenty of legitimate use cases. The problem remains that if you ask it for information about something, the only way to be sure it’s correct is to already know what you’re asking about. Anyone without at least passing knowledge of the subject will assume the info they get is correct, which will be the case most of the time, but not always. And in fields like security or medicine, such a mistake could easily have dire ramifications.
jyl@sopuli.xyz 2 weeks ago
If you don’t know what the code is doing, you’re vibe coding. The point is to not waste time searching. Obviously you’re supposed to check the docs yourself, but verifying is much less tedious and time-consuming than finding the right place to look in the first place.