Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sadness and hope: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
As far as I can tell from the article, the definition of “smarter” was left to the respondents, and “answers as if it knows many things that I don’t know” is certainly a reasonable definition – even if you understand that, technically speaking, an LLM doesn’t know anything.
As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty “smart”:
> what’s a good word to describe the people in a poll who answer the questions? I didn’t want to use “subjects” because that could get confused with the topics covered in the poll.

> “Respondents” is a good choice. It clearly refers to the people answering the questions without ambiguity.
The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.
Fizz@lemmy.nz 1 year ago
Even if an AI has access to more facts and information, you should feel confident in your human ability to reason through the data you do know, seek out new information, and process it in context.
If you think an AI does all this better than you, then you need to try harder.