213 points · submitted on 10 Jul 2023 (last edited) by cypherpunks@lemmy.ml to c/technology@beehaw.org
technojamin@beehaw.org 10 points 1 year ago

LLMs compress data; there's no way ChatGPT could remember every detail of the book alongside all the other information it stores in its encodings. The issue isn't whether the entire text of the book is contained in those encodings; it's whether the model was trained on the book in the first place.
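
To put rough numbers on the compression point, here's a back-of-envelope sketch using the publicly reported GPT-3 figures (175B parameters, ~300B training tokens); the fp16 weight size and the ~4 characters per token are rule-of-thumb assumptions, not exact values:

```python
# Back-of-envelope arithmetic behind the compression point, using the
# publicly reported GPT-3 figures (175B parameters, ~300B training tokens).
# The fp16 weight size and ~4 characters/token are rule-of-thumb assumptions.
params = 175e9
weight_bytes = params * 2       # fp16 weights: ~350 GB
tokens = 300e9
text_bytes = tokens * 4         # ~1.2 TB of raw training text

print(f"weights:       {weight_bytes / 1e12:.2f} TB")            # 0.35 TB
print(f"training text: {text_bytes / 1e12:.2f} TB")              # 1.20 TB
print(f"text/weights ratio: ~{text_bytes / weight_bytes:.1f}x")  # ~3.4x
```

Even by this crude estimate the weights are several times smaller than the text they were trained on, so storing everything verbatim is implausible.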

ISMETA@lemmy.zip 1 point 1 year ago

GPT-3 is around 800 GB, while the entirety of English Wikipedia is around 10 GB compressed. So yeah, it doesn't store every detail of everything, but LLMs do memorize a lot of text verbatim. Also see https://bair.berkeley.edu/blog/2020/12/20/lmmem/
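
For anyone who wants to poke at this themselves, here's a minimal sketch of the kind of memorization probe the linked BAIR post describes: feed a model a prefix from a well-known text and check how much of its greedy continuation matches the real next words. The model name (gpt2, since GPT-3's weights aren't public) and the sample passage are stand-ins, not anything from the post:

```python
# Minimal verbatim-memorization probe: give the model a prefix from a
# known text and compare its greedy continuation to the real next words.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; GPT-3's weights aren't publicly available
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prefix = "It was the best of times, it was the worst of times,"
truth = " it was the age of wisdom, it was the age of foolishness,"

inputs = tokenizer(prefix, return_tensors="pt")
truth_ids = tokenizer(truth).input_ids

# Greedy decoding: a memorized passage tends to come back token-for-token.
output = model.generate(
    **inputs,
    max_new_tokens=len(truth_ids),
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
generated_ids = output[0][inputs.input_ids.shape[1]:].tolist()

matches = sum(g == t for g, t in zip(generated_ids, truth_ids))
print(f"{matches}/{len(truth_ids)} continuation tokens match verbatim")
```

A high match count on text the model plausibly saw in training is the telltale sign of verbatim memorization.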
