Comment on I Went All-In on AI. The MIT Study Is Right.
PoliteDudeInTheMood@lemmy.ca 5 days ago
I think it really depends on the user and how you communicate with the AI. People are different, and we communicate differently. But if you're precise, tell it what you want, and spell out what your expected result should be, it's pretty good at filling in the blanks.
I can pull really useful code out of Claude, but ask me to think up a prompt to feed into Gemini for video creation and the results look like shit.
jj4211@lemmy.world 5 days ago
In my experience, the type of problem makes the biggest difference.
Ask for something consistent with very well-trodden territory, and it has a good shot. But if you go off the beaten path, where it really can't credibly generate code, it generates anyway: making up function names, file paths, REST URLs, and attributes, and whatever else sounds good and consistent with the prompt, but has no connection to real stuff.
It's usually not that it does the wrong thing because it "misunderstood"; it's usually that it produces very appropriate-looking code, consistent with the request, that has no link to reality, and there's no recognition of when it invented nonexistent things.
If it's a fairly milquetoast web UI manipulating a SQL backend, it tends to chew through that more reasonably (though in various results I've tried, it screwed up a fundamental security principle; like once I saw it suggest a weird custom certificate validation that disabled default validation, transmitting sensitive data before even trying to meaningfully execute the custom validation).
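For reference, the anti-pattern described above looks roughly like this with Python's stdlib `ssl` module (a hypothetical sketch of the general mistake, not the actual generated code):

```python
import ssl

# BAD: this mirrors the described suggestion -- default certificate
# validation is switched off entirely, so a connection built from this
# context will happily talk to any server, including a man-in-the-middle.
# Any "custom validation" done after connecting runs too late: the
# sensitive data has already been sent over an unverified channel.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE

# GOOD: the default context verifies the server certificate against
# trusted CAs during the TLS handshake, before any application data
# is transmitted.
secure_ctx = ssl.create_default_context()
```

The key point is ordering: validation has to happen during the handshake, before data leaves the machine, which is exactly what the default context does and the "disable then custom-check later" version does not.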