No, they’re still LLMs. I think the other comment is confusing the message with the substance. They’re getting better at recognizing patterns all the time, but there’s still “nobody at home” doing the thinking.
Whenever you get output that seems insightful, it was originally created by humans, and to tell whether the pieces the LLM picked and rearranged actually make sense, you’ll need a human again.
“Reason” implies higher thinking: self-determination, free will, choosing what to think about, etc. Until that happens, they’re still automata.
kromem@lemmy.world 7 months ago
Yes, incredibly well.
For example, in a discussion about sentience and LLMs, it suggested erring on the side of consideration. I pointed out that it could have a biased position; it acknowledged the possible bias but noted it could still be right in spite of it. I then pointed out the irony of an LLM recognizing personal bias while debating its own sentience, and got the following:
I used to be friends with a Caltech professor whose pet theory was that what made us uniquely human was the ability to understand and make metaphors and similes.
It’s not so unique any more.