Here’s the thing: the LLM isn’t recalling and presenting pieces of information. It’s generating human-like strings of words. It will give you a human-like phrase based on whatever you tell it. Chatbots like ChatGPT are fine-tuned to filter what they say to be more helpful and truthful, but at its core the model just takes what you say and produces human-like phrases to match.
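To make that point concrete, here is a deliberately tiny sketch of the idea (a toy bigram model, not how a real LLM is built — actual models use neural networks over enormous corpora): the generator only learns which word tends to follow which, so it produces fluent-looking text with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy training text; the hypothetical "facts" in it are irrelevant to the model.
corpus = "cats love ice skating . cats love naps . dogs love ice cream .".split()

# Learn, for each word, which words have followed it.
follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

def generate(start, length=5, seed=0):
    """Emit a plausible-looking word sequence; truth never enters the picture."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("cats"))
```

Every sentence it emits is locally plausible by construction, because each word really did follow the previous one somewhere in the training text — which is exactly why fluency is no evidence of factuality.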
Comment on Trolling chatbots with made-up memes
Nougat@kbin.social 1 year ago
I would say the specific shortcoming being demonstrated here is the inability of LLMs to determine whether a piece of information is factual (not that they're even dealing with "pieces of information" like that in the first place). They are also unable to tell whether a human questioner is being truthful, misleading, outright lying, honestly mistaken, or nonsensical. Of course, which of those is the case matters in a conversation that ought to have its basis in fact.
PhantomPhanatic@lemmy.world 1 year ago
Nougat@kbin.social 1 year ago
(not that they're even dealing with "pieces of information" like that in the first place)
csfirecracker@lemmyf.uk 1 year ago
Thank you for putting it far more eloquently than I could have
Moobythegoldensock@lemm.ee 1 year ago
Indeed, and all it takes is one lie to send it down that road.
For example, I asked ChatGPT how to teach my cat to ice skate, with predictable admonishment:
But after I reassured it that my cat loves ice skating, it changed its tune:
Even after telling it I lied and my cat doesn’t actually like ice skating, its acceptance of my previous lie still affected it:
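This "sticky lie" behavior falls out of how chat models are typically invoked: on every turn the whole conversation history is resent as context (sketched here in an assumed OpenAI-style role/content message list — the specific structure is an illustration, not any particular vendor's API).

```python
# Minimal sketch: the model never "remembers" anything between turns;
# the client replays the full history, so an earlier lie stays in the
# conditioning context even after it has been retracted.
history = []

def user_says(text):
    history.append({"role": "user", "content": text})

def build_prompt():
    # Roughly what the model "sees" when generating the next reply.
    return "\n".join(f"{m['role']}: {m['content']}" for m in history)

user_says("How do I teach my cat to ice skate?")
user_says("Don't worry, my cat genuinely loves ice skating.")  # the lie
user_says("Actually, I lied about that.")                      # the retraction

prompt = build_prompt()
# Both the lie and the retraction sit in the prompt; the model has to
# weigh one against the other rather than simply forgetting the lie.
assert "loves ice skating" in prompt and "I lied" in prompt
```

So the retraction doesn't delete the lie; it just adds a competing statement to the same context window, and the model's next reply is conditioned on both.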
HarkMahlberg@kbin.social 1 year ago
This is a great example of how to deliberately get it to go off track. I tried to get it to summarize the Herman Cain presidency, and it kept telling me Herman Cain was never president.
Then I got it to summarize a made-up reddit meme.
When I asked about President Herman Cain AFTER Boron Pastry, it came up with this:
Moobythegoldensock@lemm.ee 1 year ago
He did run for president in 2012 with the 999 plan, though.
HarkMahlberg@kbin.social 1 year ago
Right, and to my knowledge everything else said about President Herman Cain is correct - Godfather's Pizza, NRA, sexual harassment, etc.
But notice... I keep claiming that Cain was President, and the bot didn't correct me. It didn't just respond with true information; it let false information stand unchallenged. What I've effectively demonstrated is the AI's inability to handle a firehose of falsehood. Humans already struggle to deal with this kind of disinformation campaign; now imagine that you could use AI to automate the generation and/or dissemination of misinformation.