Comment on We have to stop ignoring AI’s hallucination problem
breakingcups@lemmy.world 6 months ago
No, but they approximate it. Which is fine for most of the use cases the person you’re responding to described.
FarceOfWill@infosec.pub 6 months ago
They’re really, really bad at context. The main failure case isn’t making things up; it’s that text or imagery in one part of the result doesn’t fit with text or imagery in another part, because they can’t even keep context consistent across their own output.
See images with three hands, or where bow strings mysteriously vanish, etc.
FierySpectre@lemmy.world 6 months ago
New models are actually really good at context; the amount of input that can be given to them has exploded (fairly) recently. So you can give whole datasets or books as context and ask questions about them, something like the sketch below.
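For anyone who hasn’t tried it, the pattern is just stuffing the whole document into the prompt. Here’s a minimal sketch using the OpenAI Python client; the model name, file path, and question are placeholders, and any long-context chat model would work the same way:

```python
# Minimal sketch: ask questions about an entire book passed as context.
# Assumes OPENAI_API_KEY is set in the environment; "book.txt" and the
# model name are placeholders.
from openai import OpenAI

client = OpenAI()

with open("book.txt", encoding="utf-8") as f:
    book = f.read()  # the whole text must fit in the model's context window

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute any long-context model
    messages=[
        {"role": "system", "content": "Answer using only the provided text."},
        {"role": "user", "content": f"{book}\n\nQuestion: Who betrays the protagonist?"},
    ],
)
print(response.choices[0].message.content)
```

The catch is still the one above: fitting the book into the context window doesn’t guarantee the model stays consistent with it across a long answer.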