Right, it can totally do that safely and accurately despite not being able to count the Rs in strawberry.
Comment on The Copilot Delusion
Lembot_0002@lemm.ee 1 week ago
AI is the best thing that has happened to us in ages: now we can do whatever we do without the pain and humiliation of spending enormous amounts of time digging through some shitty documentation or, in too many cases, straight-up brute-forcing the libs by guessing what the fuck parameters this or that function needs.
Now I can just ask an AI if there is a method in this class that does something I need and receive a useful answer, not an RTFM like in the times you’re so fond of.
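A minimal sketch of sanity-checking such an answer, assuming Python and using pathlib.Path purely as a stand-in for whatever class and method the AI actually suggests:

```python
import inspect
from pathlib import Path  # stand-in class; substitute whatever the AI pointed you at

# Suppose the AI claims the class has a method that does what you need.
# "read_text" here is just a placeholder for its suggestion.
claimed = "read_text"

if hasattr(Path, claimed):
    # Print the real parameter list instead of guessing it.
    print(inspect.signature(getattr(Path, claimed)))
else:
    print(f"No such method: {claimed}. Time to check the docs after all.")
```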
Olgratin_Magmatoe@slrpnk.net 1 week ago
dataprolet@lemmy.dbzer0.com 1 week ago
I’d say both are true. If I need a quick meal, I’m glad I can just order something ready-made, but I also enjoy cooking an intricate meal for hours. OP is maybe worried that people will forget about the latter and only go for the ready-made solution.
cabbage@piefed.social 1 week ago
I think chapter 2 does a good job presenting the advantages.
Maybe you inherited someone else’s codebase. A minefield of nested closures, half-commented hacks, and variable names like d and foo. A mess of complex OOPisms, where you have to traverse 18 files just to follow a single behaviour. You don’t have all day. You need a flyover—an aerial view of the warzone before you land and start disarming traps.
Ask Copilot: "What’s this code doing?"
It won’t be poetry. It won’t necessarily provide a full picture. But it’ll be close enough to orient yourself before diving into the guts. So, props where props are due. Copilot is like a greasy, high-functioning but practically poor intern:
- Great with syntax.
- Surprisingly quick at listing out your blind spots.
- Good at building scaffolding if you feed it the exact right words.
- Horrible at nuance.
- Useless without supervision.
- Will absolutely kill you in production if left alone for 30 seconds.
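To make that “flyover” concrete, here is a purely hypothetical sketch (in Python, with made-up names like d and foo) of the kind of inherited code the excerpt describes, the sort of thing you would paste in alongside the “What’s this code doing?” question:

```python
# Hypothetical inherited code: nested closures, cryptic names,
# and a half-commented hack whose rationale is lost to time.
def foo(d):
    def h(k):
        def g(x):
            # "temporary" workaround from 2019
            return x * 0.93 if k in d.get("ov", {}) else x
        return g
    return {k: h(k)(v) for k, v in d.get("vals", {}).items()}
```

The flyover answer you would hope for is something like “applies a 7% discount to values whose key appears in the ‘ov’ override set”: not poetry, but enough to orient yourself.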
kibiz0r@midwest.social 1 week ago
So if library users stop communicating with each other and with the library authors, how are library authors gonna know what to do next? Unless you want them to talk to AIs instead of people, too.
At some point, when we’ve disconnected every human from each other, will we wonder why? Or will we be content with the answer “efficiency”?
Valmond@lemmy.world 1 week ago
That was why it was so entertaining: getting a lil homebrew to run on the Nintendo DS was fun.
Kyrgizion@lemmy.world 1 week ago
Yes, as long as the information you get from the AI is correct. Which we know is absolutely not the case. That is the issue. If AI’s output could be trusted 100%, things would be wildly different.
jyl@sopuli.xyz 1 week ago
Unlike vibe coding, asking an LLM how to access some specific thing in a library when you’re not even sure what to look for is a legitimate use case.
IsoKiero@sopuli.xyz 1 week ago
You’re not wrong, but my personal experience is that it can also lead you down a pretty convincing but totally wrong path. I’m not a professional coder, but I have at least some experience, and I’ve tried the LLM approach when trying to figure out which library/command set/whatever I should use for the problem at hand. Sometimes it gives useful answers, sometimes it’s totally wrong in a way that’s easy to spot, and at worst it gives you something that (at least to me) seems like it could work. In that last case I then spend more or less time figuring out how to use the thing it proposed, fail, eventually read the actual old-fashioned documentation, and notice that the proposed solution is somewhat related to my problem but totally wrong.
At that point I would actually have saved time by doing things the old-fashioned way (which is getting more and more annoying as search engines get worse and worse). There are legitimate use cases too, of course, but you really need to have at least some idea of what you’re doing to evaluate the answers LLMs give you.
jyl@sopuli.xyz 1 week ago
Yeah, I guess that can happen. For me, it has saved much more time than it has wasted, but I’ve only used it on relatively popular libraries with stable APIs, and I don’t ask it for complex things.
dustyData@lemmy.world 1 week ago
Until it gives you a list of books and two thirds don’t exist and the rest aren’t even in the library.
jyl@sopuli.xyz 1 week ago
The worst I’ve gotten so far hasn’t been hallucinated “books”, but stuff like functions from a previous major version of the API mixed in.
I don’t think it’s unreasonable to use an LLM as a documentation search engine, but I guess this opinion lost the popular vote.
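As a concrete, assumed illustration of that version mixing (pandas is only an example here): DataFrame.append was removed in pandas 2.0, but an assistant trained on older material can still suggest it, while the current docs point to pd.concat.

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]})
b = pd.DataFrame({"x": [3]})

# What a model trained on pre-2.0 material might suggest; on pandas 2.x this
# raises AttributeError because DataFrame.append was removed:
# combined = a.append(b, ignore_index=True)

# What the current documentation actually recommends:
combined = pd.concat([a, b], ignore_index=True)
print(combined)
```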
Kyrgizion@lemmy.world 1 week ago
I’ve had great success with using ChatGPT to diagnose and solve hardware issues. There are plenty of legitimate use cases. The problem remains that if you ask it for information about something, the only way to be sure it’s correct is to actually know what you’re asking about. Anyone without at least passing knowledge of the subject will assume the info they get is correct, which will be the case most of the time, but not always. And in fields like security or medicine, such a small issue could easily have dire ramifications.
jyl@sopuli.xyz 1 week ago
If you don’t know what the code is doing, you’re vibe coding. The point is to not waste time searching. Obviously you’re supposed to check the docs yourself, but that’s much less tedious and time-consuming than finding the right place in them on your own.