this post was submitted on 17 Aug 2023
327 points (97.9% liked)

Technology

[–] bioemerl@kbin.social -1 points 1 year ago (1 children)

> They are huge math formulas with many variables, they can’t think, they can’t apply logic.

And you're a bunch of cells. Neurons can't apply logic either, until you get a few billion in a group organized in a certain way.

You tell me to educate myself, but you display only the barest understanding of what an LLM is. "It's a big math function" is hilariously reductive. Our entire universe and everything within it can be represented by a big math function.

Like seriously. A big math function can't apply logic? That's like half of what math is.

An LLM is a big series of functions tuned to coordinate with one another, and in principle such networks can approximate virtually any computation. These functions are special because they can be trained, within a human time span, to find a solution to basically any problem.
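That "tuning" is just gradient descent. Here's a toy sketch in pure Python (the function, data, and learning rate are all made up for illustration): a single trainable parameter nudged repeatedly to reduce prediction error until it fits y = 2x. Real LLMs do the same thing with billions of parameters at once.

```python
# Toy example: one trainable weight, tuned by gradient descent.
# Everything here is illustrative, not a real LLM component.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = 0.0    # the "knob" that training adjusts
lr = 0.05  # learning rate: how big each nudge is

for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad             # nudge w to reduce the error

print(round(w, 3))  # converges close to 2.0
```

No single update "knows" the answer; the answer emerges from many small corrections. Scale that up and you get the trainability the paragraph above describes.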

That trainability means we can throw data at a few billion of these artificial neurons, and over time they will learn to produce an accurate prediction of the next word for a given context. What does that mean?
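The training objective itself is dead simple. Here's a hedged toy sketch of next-word prediction using bigram counts instead of a neural net (the corpus is made up, and counting is nothing like how a real LLM learns, but the objective, predict the next token from what came before, is the same):

```python
# Toy next-word predictor: count which word follows which in a corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # most frequent word observed after `word` in the training text
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- seen twice after "the" here
```

An LLM replaces the count table with a learned function over the whole preceding context, which is exactly why it's forced to model structure rather than memorize pairs.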

That means that if you invent a simple game and train an LLM on transcripts of that game for a few thousand cycles, you can actually go into the LLM's internals and find a rough representation of the game board being used to predict the next move.

It isn't just memorizing or reproducing: it reconstructed the logic required to predict the next move, and in doing so learned the problem space much as a person would.
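The way researchers check this is with a "probe": a very simple classifier trained to read a board fact back out of the model's activations. Here's a hedged sketch of the idea with synthetic activations (no real LLM involved; the dimensions and noise levels are invented): if the hidden state encodes whether a square is occupied, even a one-number threshold can recover it.

```python
# Toy "probe" demo on synthetic hidden states, not real LLM activations.
import random
random.seed(0)

def hidden_state(square_occupied):
    # Pretend activation: dimension 0 correlates with the board square,
    # dimension 1 is pure noise.
    base = 1.0 if square_occupied else -1.0
    return [base + random.gauss(0, 0.5), random.gauss(0, 1.0)]

labels = [random.random() < 0.5 for _ in range(200)]
data = [(hidden_state(b), b) for b in labels]

# The probe: a threshold on the informative dimension, fit from the data.
threshold = sum(h[0] for h, _ in data) / len(data)
accuracy = sum((h[0] > threshold) == b for h, b in data) / len(data)
print(accuracy)  # well above the 0.5 chance level
```

If the board state weren't represented in the activations, no probe this simple could beat chance. That's the evidence behind the claim above.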

The big-time LLMs are of course a lot more complicated, because they are trying to learn, in effect, the sum of all human knowledge we have thrown onto the internet.

But rest assured, the output of these large LLMs reflects real understanding and prediction. It won't hold across every domain and problem space, but there is real knowledge and logic being applied.

Now, an LLM doesn't operate on the same level humans do. It's not a continually thinking, "experiencing" entity. But you're making a capital-B Big mistake if you assume, even for a moment, that because it doesn't think like a human, it doesn't think or have understanding at all.

[–] Durotar@lemmy.ml 1 points 1 year ago (1 children)

You're manipulating. I never said that we aren't a bunch of cells, or that our universe can't be represented by a math function. You think you were having your "I'm very smart" moment, but in reality you changed the actual subject of the argument because you couldn't win it. None of what you said changes the fact that LLMs (at least current ones) can't think and apply logic. This has been shown by many researchers.

[–] bioemerl@kbin.social 0 points 1 year ago (1 children)

You're either a troll or hilariously stupid.

[–] Durotar@lemmy.ml 2 points 1 year ago

OpenAI and other companies working on LLMs: we are not sure how exactly this works

Neuroscientists: we are not sure how exactly our brains work

bioemerl: I KNOW HOW ALL THIS WORKS AND IF YOU DO NOT AGREE YOU ARE EITHER TROLL OR JUST STUPID

Man, try being less ignorant and arrogant.