anachronist@midwest.social 3 points 1 year ago

> Models don’t get bigger as you add more stuff.

They will get less coherent and/or "forget" the earlier data if you don't scale the parameter count along with the training set.
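A minimal sketch of that point, assuming PyTorch (the layer sizes here are arbitrary): a network's parameter count, and therefore its size on disk, is fixed by its architecture, no matter how much data passes through it during training.

```python
import torch.nn as nn

# Arbitrary small network; the layer sizes are purely illustrative.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# The parameter count is determined entirely by the architecture above.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params}")             # same before and after training
print(f"approx size: {n_params * 4} bytes")  # float32 = 4 bytes per parameter

# Training on a thousand samples or a billion changes the *values* of these
# parameters, never their count, so the checkpoint stays the same size.
```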

> There are two-gigabyte networks that have been trained on hundreds of millions of images

You can take a huge TIFF image, put it through JPEG with the quality cranked all the way down, and get a tiny file out the other side, which is still a recognizable derivative of the original. LLMs are extremely lossy compression of their training set.
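A rough illustration of that analogy, assuming Pillow is installed; the filenames are placeholders for any large lossless image you have on hand:

```python
from pathlib import Path
from PIL import Image

src = Path("input.tiff")  # hypothetical input; any large lossless image works

img = Image.open(src).convert("RGB")
img.save("output.jpg", format="JPEG", quality=5)  # quality cranked all the way down

print(f"original:   {src.stat().st_size:,} bytes")
print(f"compressed: {Path('output.jpg').stat().st_size:,} bytes")
# The JPEG is a small fraction of the original size, yet still a
# recognizable derivative of the source image.
```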

mindbleach@sh.itjust.works 4 points 1 year ago

> which is still a recognizable derivative of the original

Not in twelve bytes.

Deep models are a statistical distillation of a metric shitload of data. Smaller models with more training on more data don't get worse, they get more abstract - and in adversarial uses they often kick big networks' asses.
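For context on the "twelve bytes" figure, a back-of-the-envelope division; the image count below is an assumption, since the thread only says "hundreds of millions":

```python
# A ~2 GB checkpoint spread over hundreds of millions of training images
# leaves only a handful of bytes per image.
model_bytes = 2 * 10**9        # ~2 GB network
training_images = 170 * 10**6  # assumed count, order of "hundreds of millions"

print(f"{model_bytes / training_images:.1f} bytes per image")  # ~11.8
```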