Comment on Gemini lies to user about health info, says it wanted to make him feel better
MolochHorridus@lemmy.ml 1 day ago
There are no hallucination problems, just design flaws and errors.
FancyPantsFIRE@lemmy.world 22 hours ago
My gut response is that everyone understands the models aren't sentient, and that "hallucination" is shorthand for the false information LLMs inevitably, and apparently inescapably, produce. But taking a step back, you're probably right: for anyone who doesn't understand the technology, it's a very anthropomorphic term that adds to the veneer of sentience.
draco_aeneus@mander.xyz 1 day ago
It's not really even errors. It's well suited to what it was designed for: it produces pretty good text. It's just that we're using it for things it's not suited to. Like digging a hole with a spoon, then complaining that your hands hurt.
silverneedle@lemmy.ca 21 hours ago
It's a convenient way of looking at things: saying it's good at one thing and bad at others. What I've come to realize with LLMs is that wherever experts deal with them, those experts are acutely aware of the models' shortcomings within their own area of expertise. Sure, you might say they're good at producing text, yet a journalist, or anyone who simply writes a ton, can often spot AI-generated text in an instant, the same way a photographer or painter can spot AI images instantly. Rinse and repeat for coding, translation, medicine, and every other socially constructed domain.

That's not to say you need to be an expert to spot LLMs or other generative ANNs; it comes down to attention and what you condition yourself to be attentive to. Of course the pictures, or the code, or whatever else will be convincing if you treat them as secondary, the way a doctor treats creative writing as secondary to their job (though necessary), or a biologist treats writing Python scripts.
Iconoclast@feddit.uk 20 hours ago
But that’s exactly the difference between narrow AI and a generally intelligent one. A narrow AI can be “superhuman” at one specific task - like generating natural-sounding language - but that doesn’t automatically carry over to other tasks.
People give LLMs endless shit for getting things wrong, but they deserve credit for how often they get things right, too. That's a pure side effect of their training - not something they were ever designed to do.
It’s like cruise control that’s also kinda decent at driving in general. You might be okay letting it take the wheel as long as you keep supervising - but never forget it’s still just cruise control, not a full autopilot.
silverneedle@lemmy.ca 15 hours ago
What does "generally intelligent" even mean? Does it refer to something that does not exist? If so, why are we using it as a practical benchmark, or as a distinction from which to make statements about the world?
My text compression algorithm for tape gets the facts right to the exact character. Beat that.
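To make the quip concrete, here's a minimal sketch using Python's zlib as a stand-in for any tape-era lossless compressor (the specific string is just an illustrative example): the round trip reproduces the input byte for byte, by construction, which is exactly the guarantee LLM output lacks.

```python
import zlib

# Any lossless codec reproduces its input exactly; zlib stands in here.
original = b"Patient's bloodwork: all values within normal range."

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

# "Gets the facts right to the exact character" -- guaranteed, not probabilistic.
assert restored == original
print(f"{len(original)} bytes -> {len(compressed)} bytes, restored exactly")
```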