From what I understand of LLMs, your assessment seems likely to me. LLMs can actually be pretty accurate when asked to do relatively simple, shorter tasks.
Comment on WATER!
Deceptichum@quokk.au 1 day ago
I use them frequently; they're extremely helpful, just don't get them to write everything.
As for the comic, it's pretty inaccurate. The only panel I find true is the "too much water" one; sometimes the bots like to take … longer methods.
itkovian@lemmy.world 1 day ago
Aneb@lemmy.world 11 hours ago
Yeah, I asked it to generate SDKs from API documentation and it failed to pull all the routes into methods, so it's very temperamental. If there's an easier SDK-conversion tool that I'm missing, I'd still prefer hard-coded logic machines to fuzzy LLMs.
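The "hard-coded logic machine" version of this task is a good fit for a plain template loop: walk every route in the spec and emit a method for each one, so nothing can be silently skipped. A minimal sketch, assuming an OpenAPI-style spec dict (the spec contents and the `ApiClient` / `_request` names are hypothetical placeholders, not any real tool's API):

```python
# Deterministic SDK-stub generation from an OpenAPI-style spec dict.
# Unlike an LLM, this emits one method per (path, verb) pair by construction,
# so no route can be dropped. Spec and names below are illustrative only.

def generate_sdk(spec: dict, class_name: str = "ApiClient") -> str:
    """Emit one method stub per (path, HTTP verb) in spec["paths"]."""
    lines = [f"class {class_name}:"]
    for path, verbs in sorted(spec.get("paths", {}).items()):
        for verb, op in sorted(verbs.items()):
            # Prefer the spec's operationId; otherwise derive a name
            # mechanically from the verb and path.
            name = op.get("operationId") or (
                f"{verb}_" + path.strip("/")
                .replace("/", "_").replace("{", "").replace("}", "")
            )
            lines.append(f"    def {name}(self, **params):")
            lines.append(f"        return self._request({verb.upper()!r}, {path!r}, params)")
    return "\n".join(lines)

# Hypothetical two-route spec for demonstration.
spec = {
    "paths": {
        "/users": {"get": {"operationId": "list_users"}},
        "/users/{id}": {"get": {}},
    },
}
print(generate_sdk(spec))
```

Because the generator is just a loop over the spec, "did it cover every route" is a property you can test, not something you have to eyeball in the model's output.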
Karjalan@lemmy.world 1 hour ago
Everyone has different experiences, but it's very hit-and-miss for me. Sometimes it gives some very useful boilerplate, saving me quite a bit of time; sometimes it hallucinates insane stuff unrelated to what I asked, or produces functions that don't return, or that call functions that don't exist.
Like defining a function "getTheThing" and then later calling "getSomethingElse", which doesn't exist. It's a simple enough error to fix, but sometimes it's so close to "correct" that finding it takes quite a while, because the code looks right.