It seems like it could check for that, though, which is what ChatGPT doesn’t do but we all assumed it would. I’m sure there are AI programs that could and do check possibilities against only information we know to be true.
It doesn’t check what it generates beyond grammatical and orthographic errors. It isn’t intelligent and has no knowledge beyond how to create text. The text looks useful, but it doesn’t know what it contains the way something intelligent would.
PeleSpirit@lemmy.world 1 year ago
fsmacolyte@lemmy.world 1 year ago
Recent papers have shown that LLMs build internal world models, but for a topic as niche and complicated as cancer treatment, a chatbot based on GPT-3.5 would be woefully ill-equipped to do any kind of proper reasoning.