this post was submitted on 29 Sep 2023

Technology


Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

[–] brygphilomena@lemmy.world 11 points 1 year ago (2 children)

A human, regardless of how many books they read, will have personal experiences that are undeniably unique to themselves. They will each interpret the works they read differently based on their worldly experiences. Their writing, no matter how many books they read and draw inspiration from, will always be influenced by their own personal lives. They can experience love, hate, heartbreak, empathy, sadness, and happiness.

This is something an LLM does not have, and in my opinion, is a massive distinguishing factor. So on a "fundamental" level, it is not the same. It is nowhere near the same.

[–] lloram239@feddit.de 1 points 1 year ago

> A human, regardless of how many books they read, will have personal experiences that are undeniably unique to themselves.

So will every AI. ChatGPT will give you different answers than Bard or WizardLM, since they are all trained on different books. And every StableDiffusion model creates different images, different styles, different topics, etc. It's all in the data they "experienced".

Do you really think we are that far off... from giving foundational memory and motivation layers to these LLMs, layers that could mimic... or even... generate the kinds of thoughts you're describing?

I don't think so. You seem to imply its impossibility; I expect its inevitability. The human brain will not be a black box forever... it still exists in a world of physics we can emulate, even if only in rudimentary form.