Comment on We have to stop ignoring AI’s hallucination problem
FarceOfWill@infosec.pub 4 months ago
They’re really, really bad at context. The main failure case isn’t making things up, it’s having text or image in part of the result not work right with text or image in another part, because they can’t even manage context across their own replies.
See images with three hands, bow strings that mysteriously vanish, etc.
FierySpectre@lemmy.world 4 months ago
New models are like really good at context; the amount of input that can be given to them has exploded (fairly) recently… So you can give whole datasets or books as context and ask questions about them.