There are lots of legitimate concerns and issues with AI, but if you’re going to criticize someone for saying they used it, you should at least understand how it works so your criticism is applicable.
It is useful. ChatGPT performs web searches, then summarizes the results in a way customized to what you asked it. It skips the step where you have to sift through a bunch of results and determine “is this what I was looking for?” and “how does this apply to my specific context?”
Of course it can and does still get things wrong. It’s crazy to market it as a new electronic god. But it’s not random, and it’s right the majority of the time.
Stillwater@sh.itjust.works 2 days ago
It might be wrong more often than you think
futurism.com/study-ai-search-wrong
thedruid@lemmy.world 2 days ago
IS wrong
Ftfy
hisao@ani.social 2 days ago
In this study they asked the AI to reproduce the headline, publisher, and date 1:1. So, for example, if the AI rephrased a headline as something synonymous, it would be counted as at least partially incorrect. Summarization doesn’t require accurate citation, so that needs a separate study.
Stillwater@sh.itjust.works 2 days ago
OK, but google (or ask your AI?) about AI scurvy. This isn’t the only source saying there’s a problem with the answers.
LesserAbe@lemmy.world 2 days ago
Besides the other commenter highlighting the specific nature of the linked study, I will say I’m generally doing technical queries where if the answer is wrong, it’s apparent because the AI suggestion doesn’t work. Think “how do I change this setting” or “what’s wrong with the syntax in this line of code”. If I try the AI’s advice and it doesn’t work, then I ask again or try something else.
I would be more concerned about subjects where I don’t have any domain knowledge whatsoever, and where I’m not working on a specific application of knowledge, because then it could be a long while before I realized the response was wrong.