Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
The average literacy level is around that of a sixth grader.
This tracks
Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.
I believe LLMs are smarter than half of US adults
That is the problem with US adults. Half of them probably is dumber than AI…
The grammatical error here is chef’s kiss.
That’s called a self-proving statement.
If we are talking about American adults, I guess they might be right.
Hallucination comes off as confidence. Very human like behavior tbh.
Do the other half believe it is dumber than it actually is?
It’s like asking if you think a calculator is smarter than you.
"It's totally a lot smarter than I am, no way could I deliver (234 * 534)^21 as confidently!"
Are you suggesting my '90s calculator is smarter than LLMs?
Hard to compete with that 90s confidence 😎
"Half of LLM users " beleive this. Which is not to say that people who understand how flawed LLMs are, or what their actual function is, do not use LLMs and therefore arent i cluded in this statistic?
This is kinda like saying "60% of people who pay for their daily horoscope believe it is an accurate prediction."
Why are you even surprised at this point, when it comes to Americans?
You say this like this is wrong.
Think of a question that you would ask an average person and then think of what the LLM would respond with. The vast majority of the time the LLM would be more correct than most people.
Memory isn’t intelligence.
Then try asking it a logic question. What question are you asking that the LLMs are getting wrong and your average person is getting right? How are you proving intelligence here?
“Think of how stupid the average person is, and realize half of them are stupider than that.” ― George Carlin
They’re right
Wow. Reading these comments, so many people here really don't understand how LLMs work or what's actually going on at the frontier of the field.
I feel like there's going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.
AKA “future shock”