Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.

jj4211@lemmy.world ⁨1⁩ ⁨week⁩ ago

Well, not irrelevant. Much of our world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for those people, how their expectations break down in that context.

So as weird as it may seem to study a statistical content-extrapolation engine through the lens of social science, a great deal of real-world attention and investment wants to treat its output as “person equivalent”, and so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered “weird”.
