And what happens when MechaHitler, the next version of Grok, or whatever AI hosted by a large corporation interested only in capital gains, comes out with unannounced injected prompt poisoning and stops producing the quality output you’ve been conditioned to expect?
These AIs are useful if you already have a general grasp of whatever you’re trying to find, because then you can easily separate what you know to be true from what is obviously a hallucination: a ridiculous mess of computer-generated text no smarter than your phone keyboard’s word suggestions.
Trying to soak up AI-generated information on a topic without prior knowledge can easily leave you understanding no more than you did before, while giving you unearned confidence in what is essentially misinformation. And just because an AI pulls up references doesn’t mean they support its claims: unless you do your due diligence and read those references to check their accuracy and authority on the subject, the AI may simply be hallucinating the sources for the wrong information it’s giving you.
isVeryLoud@lemmy.ca 3 days ago
Ok, I don’t need you as a middleman to tell me what the LLM just hallucinated; I can do that myself.
The point is that raw AI output provides absolutely no value to a conversation, and is thus noisy and rude.
When we ask questions on a public forum, we’re looking to talk to people about their own experience and research through the lens of their own being and expertise. We’re all capable of prompting an AI agent. If we wanted AI answers, we’d prompt an AI agent.