I’ve had the most success explaining LLM ‘fallibility’ to non-techies using image gen examples. Google ‘AI hands’ and ask them if they see anything wrong. Now point out that we’re _extremely_ sensitive to anything wrong with our hands, so these are very easy for us to spot. But the AI has no concept of what a hand is; it’s just seen a _lot_ of images from different angles, sometimes with fingers hidden, sometimes intertwined, etc. So it will happily generate lots more of those kinds of images, with no regard to whether they could or should actually exist.
It’s a pretty similar idea with the LLMs. It’s seen a lot of text, and can put together words in a convincing-looking way. But it has no concept of what it’s writing, and the equivalent of the ‘hands’ will be there in the text. It’s just that we can’t see them at first glance like we can with the hands.
db2@sopuli.xyz 1 year ago
It would be funny if that comment was AI-generated.
solstice@lemmy.world 1 year ago
I read once that we shouldn’t worry when AI starts passing Turing tests; we should worry when they start failing them again 🤣
Kolrami@lemmy.world 1 year ago
I read a physical book about using chatGPT that I’m pretty sure was written by chatGPT.