this post was submitted on 17 Mar 2025
574 points (96.9% liked)

Technology

Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
(page 4)

50 comments
[–] Th4tGuyII@fedia.io 32 points 2 days ago

LLMs are made to mimic how we speak, and some can even pass the Turing test, so I'm not surprised that people who don't know better think of these LLMs as conscious in some way or another.

It's not necessarily a fault of those people; it's a fault of how LLMs are purposefully misadvertised to the masses.

[–] Arkouda@lemmy.ca 32 points 2 days ago (3 children)

"Nearly half" of US citizens are right, because about 75% of the US population is functionally or clinically illiterate.

[–] bizarroland@fedia.io 14 points 2 days ago (10 children)

I think the specific figure is that 40% of adult Americans can't read at a seventh-grade level.

Probably because they stopped teaching etymology in schools, so now many Americans do not know how to break a word down into its constituent parts.

[–] kipo@lemm.ee 12 points 1 day ago* (last edited 1 day ago) (1 children)

No one has asked so I am going to ask:

What is Elon University and why should I trust them?

[–] cmhe@lemmy.world 1 point 1 day ago

I suppose some of that comes down to the personal understanding of what "smart" is.

I guess you could call a person who doesn't understand a topic, but still manages to sound reasonable when talking about it, and might even convince people that they actually have a deep understanding of that topic, "smart", in a kind of "smart impostor" way.

[–] Traister101@lemmy.today 8 points 1 day ago

While this is pretty hilarious, LLMs don't actually "know" anything in the usual sense of the word. An LLM, or Large Language Model, is basically a system that maps "words" to other "words" so a computer can work with language. I.e. all an LLM knows is that when it sees "I love", what probably comes next is "my mom | my dad | etc.". Because of this behavior, and the fact that we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are essentially by chance mostly okay at "answering" a question. Really, they are just picking the next most likely word over and over from their training, which usually ends up reasonably accurate.
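A minimal sketch of that "pick the next most likely word" idea, using a hand-made bigram table as a stand-in for what a real model learns (the table and words here are made up for illustration; actual LLMs learn probabilities over tokens with neural networks rather than storing word tables):

```python
import random

# Toy "model": for each word, the probabilities of the words that follow it.
# (Hypothetical numbers, purely for illustration.)
bigram_probs = {
    "I":    {"love": 0.6, "think": 0.4},
    "love": {"my": 0.7, "to": 0.3},
    "my":   {"mom": 0.5, "dad": 0.4, "dog": 0.1},
}

def next_word(word: str) -> str:
    """Sample a likely next word from the bigram table."""
    candidates = bigram_probs.get(word)
    if candidates is None:
        return "<end>"  # nothing learned for this word, so stop
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

# Start with "I" and keep appending a plausible continuation.
sentence = ["I"]
while sentence[-1] != "<end>" and len(sentence) < 6:
    sentence.append(next_word(sentence[-1]))

print(" ".join(w for w in sentence if w != "<end>"))  # e.g. "I love my mom"
```

An actual LLM does the same "what word comes next" step, just with a learned statistical model over billions of parameters instead of a lookup table.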

[–] bjoern_tantau@swg-empire.de 20 points 2 days ago

I know enough people for whom that's true.

[–] MITM0@lemmy.world 2 points 1 day ago

Why are you even surprised at this point, when it comes to Americans?

[–] Montreal_Metro@lemmy.ca 7 points 1 day ago

There are a lot of ignorant people out there, so yeah, technically an LLM is smarter than most people.

[–] Dindonmasker@sh.itjust.works 10 points 2 days ago (1 children)

I don't think a single human exists who knows as much as ChatGPT does. Does that mean ChatGPT is smarter than everyone? No, obviously not, based on what we've seen so far. But the amount of information available to these LLMs is incredible and can be very useful. Like a library: it contains a lot of useful information but isn't intelligent itself.

[–] kameecoding@lemmy.world 4 points 1 day ago

That's pretty weak reasoning; by your own words, it isn't intelligent, it doesn't know anything.

By that logic Wikipedia is also smarter than any human because it has a lot of knowledge.

[–] AbnormalHumanBeing@lemmy.abnormalbeings.space 11 points 2 days ago (2 children)

I wouldn't be surprised if that is true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it's only good at a few things, but if you just hear about it in the (pop-, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.

[–] singletona@lemmy.world 5 points 1 day ago

It's a one trick pony.

That trick also happens to be a really neat trick that can make people think it's a Swiss Army knife instead of a shovel.

[–] avidamoeba@lemmy.ca 8 points 2 days ago* (last edited 2 days ago)

Just a thought: perhaps instead of considering the mental and educational state of the people without power to significantly affect this state, we should focus on the people who have power.

For example, why don't LLM providers explicitly and loudly state, or require acknowledgement, that their products are just imitating human thought and make significant mistakes regularly, and therefore should be used with plenty of caution?

It's a rhetorical question; we know why, and I think we should focus on that, not on its effects. It's also much cheaper and easier to do than to refill years of quality education into individuals' heads.

[–] EncryptKeeper@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

The funny thing about this scenario is by simply thinking that’s true, it actually becomes true.

[–] transMexicanCRTcowfart@lemmy.world 8 points 2 days ago* (last edited 2 days ago)

Aside from the unfortunate name of the university, I think that part of why LLMs may be perceived as smart or 'smarter' is because they are very articulate and, unless prompted otherwise, use proper spelling and grammar, and tend to structure their sentences logically.

Which 'smart' humans may not do, out of haste or contextual adaptation.

[–] Fubarberry@sopuli.xyz 7 points 2 days ago* (last edited 2 days ago)

I wasn't sure from the title if it was "Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are." or "Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are." It's the former, although you could probably argue the latter is true too.

Either way, I'm not surprised that people rate LLMs intelligence highly. They obviously have limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn't answer and get a decent answer. I feel like they're generally good at meeting what people's expectations are of a "smart person", even if they have major shortcomings in other areas.

[–] 1984@lemmy.today 4 points 1 day ago (1 children)

An LLM simply has memorized facts. If that is smart, then sure, no human can compete.

Now ask an LLM to build a house. Oh shit, no legs and it can't walk. A human can walk without even thinking about it.

In the future there will be robots that can build houses using AI models to learn from, but not for a long time.

[–] Omgpwnies@lemmy.world 3 points 1 day ago (1 children)

3D-printed concrete houses are already a thing; there's no need for human-like machines to build stuff. They can be purpose-built to perform whatever portion of the house-building task they need to do. There's absolutely no barrier today to having a hive of machines built for specific purposes build houses, besides the fact that no one has yet stitched the necessary components together.

It's not at all out of the question that an AI could be trained on a dataset of engineering diagrams, house layouts, materials, and construction methods, with subordinate AIs trained on specific aspects of housing systems like insulation, roofing, plumbing, framing, electrical, etc., which are then used to drive the actual machines building the house. The principal human requirement at that point would be engineers to check the math and sign off on a design for safety purposes.
