Comment on Trolling chatbots with made-up memes

Nougat@kbin.social ⁨1⁩ ⁨year⁩ ago

I would say the specific shortcoming being demonstrated here is the inability of LLMs to determine whether a piece of information is factual (not that they're even dealing with "pieces of information" like that in the first place). They are also unable to tell whether a human questioner is being truthful, misleading, outright lying, honestly mistaken, or nonsensical. Of course, which of those is the case matters in a conversation that ought to be grounded in fact.
