Took me ages to understand this. I'd thought, "If an AI doesn't know something, why not just say so?"
The answer is that it wouldn't make sense, because an LLM doesn't know ANYTHING - it's literally just a pile of words.
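For what it's worth, here's a rough sketch of what "just a pile of words" means in practice, assuming the Hugging Face transformers library with GPT-2 as a stand-in model (the prompt is just an example): the model never "knows" an answer, it only assigns a probability to every possible next token.

```python
# Minimal sketch: an LLM only scores possible next tokens, it doesn't "know" facts.
# Assumes the transformers library and GPT-2 as an illustrative model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities over every token in the vocabulary for the next position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()]):>12}  {prob:.3f}")
```

Whether the highest-probability continuation happens to be true or a confident-sounding wrong answer, the mechanism is the same, which is why "just say you don't know" isn't really an option at this level.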
Electricd@lemmybefree.net 4 days ago
Thinking models can recognize, to an extent, when their predictions don't make sense, but yeah, it's not always accurate.