And most devs I know use it every day, so… 🤷
Especially for repetitive, mundane code, like they said. It’s much faster to check code for correctness than it is to write it in the first place.
“I need to restructure this directory tree. If a file has ‘index’ in the name, then it has to go in a parallel directory structure starting at ‘/home/repos/project/indexes/’ with the same child folders as the original.”
There, I just finished a custom Python script to accomplish that. Can I do it myself? Yes. Can I do it in 30 seconds? No. Why would I waste my time writing such a mundane script for a one-off thing?
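For the curious, the whole thing is roughly the sketch below. Only the indexes/ target and the ‘index’ match come from my prompt; the source root and the choice to move rather than copy are placeholders.

```python
# One-off restructuring script, roughly as described above.
# SRC_ROOT is a placeholder; only the indexes/ target came from the prompt.
import shutil
from pathlib import Path

SRC_ROOT = Path("/home/repos/project/src")      # hypothetical source tree
DST_ROOT = Path("/home/repos/project/indexes")  # parallel target tree

for path in SRC_ROOT.rglob("*"):
    if path.is_file() and "index" in path.name:
        # Recreate the same child-folder structure under the new root.
        dest = DST_ROOT / path.relative_to(SRC_ROOT)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(dest))
```

Checking those dozen lines takes one read; writing and debugging them by hand does not take 30 seconds.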
hypna@lemmy.world 3 weeks ago
I’d be interested in some proper studies, but most of the devs I know, myself included, use it for reference at least. Haven’t met a vibe coder yet though.
SpaceNoodle@lemmy.world 3 weeks ago
I tried using it as reference, but it lied more than the datasheets.
hypna@lemmy.world 3 weeks ago
Yeah, it’s not a miracle, but it’s probably useful. The most common way the LLM wasted my time was when I asked it how to do something that can’t be done. Like, I’d ask it how to use library X to do operation Y, where in truth library X doesn’t support operation Y. Rather than telling me to find a different library, it would just make up functions or parameters. When it works well, it’s faster than hunting down the docs or finding examples/tutorials.
SpaceNoodle@lemmy.world 3 weeks ago
Or you could just bookmark the documentation
skulblaka@sh.itjust.works 3 weeks ago
In my left hand, I have a manfile, written by the very same people who wrote the tool or language that I’m trying to use. It is concise, contains true information, and won’t change if I look up the same thing again later.
In my right hand, I have a pathological liar, who also kinda sorta read the manfile and then smooshed it together with 20 other manuals.
I wonder which of these options is a more reliable reference tool for me? Hmm. It’s difficult to tell.
sturger@sh.itjust.works 2 weeks ago
I’ve started using an AI driver for my car. And by “AI” I mean I use a bungee cord on the steering wheel to keep it straight. Straight is the correct answer 40% of the time, so it works out.
Oh, and by “my car”, I mean the people that work for me. I insist that they use my bungee-cord idea to steer their cars if they want to work for me. There may be a few losses, but that’s ok. I can always fire the ones that die and hire more.
I’m a genius.
8uurg@lemmy.world 2 weeks ago
In my experience that’s not guaranteed: documentation sometimes isn’t updated, and the information may be outdated or even missing entirely.
Documentation is much more reliable, yes, but sadly not always true or complete.
skulblaka@sh.itjust.works 2 weeks ago
Sure, and I’ve also had my share of cursing at poor documentation.
If that’s the case then your AI is also going to struggle to give you usable information though.
okwhateverdude@lemmy.world 3 weeks ago
I mostly vibecode throwaway shit. I am not shipping this Python script that resizes and then embeds images into this .xls, or the simple static html/css generator I use because hosting a full-blown app is overkill when I just wanna show something to some non-tech colleagues. Stuff that would take half an hour to an hour to throw together now takes like 5-10 min. I wouldn’t trust it with anything more complicated: it fucks up all the time, leans too heavily on its training data instead of referencing docs, and is way too confident about shit when it is wrong. Pro-tip: berate the slop machines. They perform better and stop being so goddamn sycophantic when you do. I am a divine being of consciousness and considerable skill, and it is a slop machine: useful, but beneath me.
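For flavor, the spreadsheet one looks roughly like the sketch below. The library choice (openpyxl + Pillow) and every path are my guesses, and note openpyxl writes .xlsx rather than legacy .xls.

```python
# Throwaway: resize every PNG in a folder and embed it in a spreadsheet.
# Libraries (openpyxl + Pillow) and all paths here are assumptions.
from pathlib import Path

from PIL import Image as PILImage
from openpyxl import Workbook
from openpyxl.drawing.image import Image as XLImage

IMG_DIR = Path("images")   # hypothetical input folder
THUMB = (320, 240)         # max width/height after resizing

wb = Workbook()
ws = wb.active

for row, src in enumerate(sorted(IMG_DIR.glob("*.png")), start=1):
    im = PILImage.open(src)
    im.thumbnail(THUMB)  # shrink in place, preserving aspect ratio
    thumb = src.with_suffix(".thumb.png")
    im.save(thumb)
    ws.cell(row=row, column=1, value=src.name)
    ws.row_dimensions[row].height = 190  # leave room for the image
    ws.add_image(XLImage(str(thumb)), f"B{row}")

wb.save("report.xlsx")  # openpyxl can't write old-style .xls
```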
mos@lemmy.world 3 weeks ago
That divine-being line is hilarious. I’ll remember that. But the robots will also remember this post when they take over.
MotoAsh@lemmy.world 2 weeks ago
“…leans too heavily on its training data…” No, it IS its training data. Full stop. It doesn’t know the documentation as a separate entity. It doesn’t reason whatsoever about where to get its data from. It just shits out the closest approximation of an “acceptable” answer from the training data. Period. It doesn’t think. It doesn’t reason. It doesn’t decide where to pull an answer from. It just shits it out verbatim.
I swear… so many people anthropomorphize “AI” it’s ridiculous. It does not think and it does not reason. Ever. Thinking it does is projecting human attributes onto it, which is anthropomorphizing it, which is lying to yourself about it.
okwhateverdude@lemmy.world 2 weeks ago
Ackually 🤓, Gemini Pro and other similar models are basically a loop over some metaprompts with tool usage, including search. It will actually reference/cite documentation if given explicit instructions. You’re right that the anthropomorphization is troubling. That said, the simulacrum presented DOES follow directions, and its behavior (meaning the complete system of LLM + looped prompts) can be interpreted as having some kind of agency. We’re on the same side, but you’re sorely misinformed, friend.