this post was submitted on 24 Jul 2024
249 points (97.0% liked)

Technology

all 35 comments
[–] BombOmOm@lemmy.world 97 points 3 months ago (5 children)

Yep. It leads to a positive feedback loop: the models just keep reinforcing whatever came out before.

And with increasing amounts of the internet being polluted with AI text output...
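A minimal sketch of that loop, with a toy Gaussian standing in for the model (pure numpy, nobody's actual training pipeline): each generation is fit only on samples from the previous one, and the fitted distribution drifts while its spread tends to decay toward zero.

```python
# Toy model-collapse demo: "train" (fit) on data, sample from the fit,
# refit on those samples, and repeat for several generations.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # "human-written" data

for generation in range(40):
    mu, sigma = data.mean(), data.std()  # fit the toy "model"
    if generation % 10 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # the next generation only ever sees the previous model's output
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

Tail detail gets lost first; run it for enough generations and the spread collapses toward a single point.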

[–] Ensign_Crab@lemmy.world 89 points 3 months ago (3 children)
[–] skillissuer@discuss.tchncs.de 62 points 3 months ago

hapsburgGPT

[–] Boozilla@lemmy.world 14 points 3 months ago (3 children)

We call it the GRRM model.

[–] Sibbo@sopuli.xyz 29 points 3 months ago

In the USA, they call it the AlaLlama model.

[–] bionicjoey@lemmy.ca 5 points 3 months ago

GPTargaryen

[–] sp3tr4l@lemmy.zip 1 points 3 months ago

What about the Grrr! model, after that astoundingly "XD So Random!" thing from Invader Zim?

He's an android or robot, right?

[–] MagicShel@programming.dev 17 points 3 months ago

That seems so obviously predictable.

[–] kevincox@lemmy.ml 16 points 3 months ago (1 children)

To be fair, this doesn't sound much different from your average human using the internet.

[–] sp3tr4l@lemmy.zip 4 points 3 months ago

2024, Reverse Turing Test Challenge:

Can an LLM AI differentiate between human input and LLM AI input?

[–] Even_Adder@lemmy.dbzer0.com 9 points 3 months ago* (last edited 3 months ago)

You have to pretty much intentionally give it enough synthetic data to wreck it. OpenAI and Anthropic train their models on generated data to improve them. As long as there's supervision during training, which there always will be, this isn't really a problem.

https://openai.com/index/prover-verifier-games-improve-legibility/

https://www.anthropic.com/research/claude-character
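In its simplest form, that supervision looks something like rejection sampling: a synthetic sample only makes it into the training set if an independent verifier signs off on it. A hypothetical sketch (generate and verify are stand-in placeholders, not the actual OpenAI or Anthropic pipelines):

```python
# Hypothetical curation loop for synthetic training data: candidates
# only enter the training set if a separate verifier scores them
# above a threshold. generate() and verify() are placeholders.
def curate_synthetic_data(prompts, generate, verify, threshold=0.9):
    kept = []
    for prompt in prompts:
        candidate = generate(prompt)       # propose a synthetic sample
        score = verify(prompt, candidate)  # independent check, 0.0-1.0
        if score >= threshold:
            kept.append((prompt, candidate))
    return kept  # only verified pairs reach the next training run
```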

[–] Tobberone@lemm.ee 8 points 3 months ago

Well... It's built on statistics, and statistical inference will regress to the mean eventually. If all it ever gets to train on is data closer and closer to the mean, there will be nothing left to work with. It will all be the average...

[–] TimeSquirrel@kbin.melroy.org 58 points 3 months ago* (last edited 3 months ago) (2 children)

This has been obvious for a while to those of us using GitHub Copilot for programming. Start a function, and then just keep hitting tab to let it autotype based on what it already wrote. It quickly devolves into strange and random bullshit. You gotta babysit it.

[–] 0laura@lemmy.world 13 points 3 months ago (1 children)

Very unlikely to stem from model collapse. Why would they use a worse model? It's probably because they neutered it or gave it fewer resources.

[–] TimeSquirrel@kbin.melroy.org 11 points 3 months ago* (last edited 3 months ago) (1 children)

It learns from your own code as you type so it can offer more relevant suggestions, unlike the web-based LLMs. So you can make it feed back on itself.

[–] booly@sh.itjust.works 11 points 3 months ago

Where did you learn to write such shitty code?

I learned it from watching you!

[–] nekusoul@lemmy.nekusoul.de 9 points 3 months ago (1 children)

Same thing with Stable Diffusion if you've ever used a generated image as an input and repeated the same prompt. You basically get a deep-fried copy.
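For anyone who wants to reproduce that, a sketch of the loop using the diffusers library (model id and settings are just common defaults, not the poster's exact setup):

```python
# Repeatedly feed Stable Diffusion's img2img output back in as input.
# Quality visibly degrades ("deep-fries") within a few iterations.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.png").convert("RGB").resize((512, 512))
prompt = "a portrait photo of a person"

for i in range(10):
    # each pass re-noises and re-denoises the previous pass's output
    image = pipe(prompt=prompt, image=image, strength=0.6).images[0]
    image.save(f"iteration_{i:02d}.png")
```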

[–] FaceDeer@fedia.io 3 points 3 months ago (1 children)

img2img is not "training" the model. Completely different process.

[–] nekusoul@lemmy.nekusoul.de 1 points 3 months ago

Oh yeah, you're right. Both are degradation of some kind, but through entirely different causes.

[–] sp3tr4l@lemmy.zip 48 points 3 months ago* (last edited 3 months ago) (1 children)

Holy shit are you telling me...

Garbage In...

= Garbage Out?

No, that can't be it, throw billions and billions of dollars at this instead of, I don't know, housing the homeless.

[–] FaceDeer@fedia.io 4 points 3 months ago

You realize that those "billions of dollars" have actually resulted in a solution to this? "Model collapse" has been known about for a long time and further research figured out how to avoid it. Modern LLMs actually turn out better when they're trained on well-crafted and well-curated synthetic data.

Honestly, everyone seems to assume that machine learning researchers are simpletons who've never used a photocopier before.

[–] fubarx@lemmy.ml 27 points 3 months ago

No shit. People have known about the perils of feeding simulator output back in as input for eons. The variance drops off, so you end up with zero new insights and a gradual worsening due to entropy.

[–] jet@hackertalks.com 19 points 3 months ago

Garbage in garbage out

It's an old expression, but it still checks out

[–] Zip2@feddit.uk 18 points 3 months ago (1 children)

So it’s basically an AI prion disease?

[–] Llewellyn@lemm.ee 2 points 3 months ago
[–] SlopppyEngineer@lemmy.world 14 points 3 months ago (2 children)

Eventually an AI will be developed that can learn with much less data. After all, we don't need to read the entire internet to get through our education. But that's not going to be an LLM. No matter how much you tweak LLMs, they won't get there. It's like trying to tune a coal-fired, steam-powered car until it can compete in a Formula 1 race.

[–] conciselyverbose@sh.itjust.works 17 points 3 months ago (1 children)

Yeah, it's entirely plausible that LLMs are a small part of the answer, as basically the language center of the brain, but the brain is a hell of a lot more complex than that. The language center isn't your whole brain, and it's only loosely connected to actual decision making. It confabulates a lot.

[–] SlopppyEngineer@lemmy.world 19 points 3 months ago

OpenAI stumbled on something that worked and ran with it, and people started proclaiming it to be the answer to everything. The same happened with Deep Learning and every AI invention so far. It's all just another stepping stone on the way.

[–] Even_Adder@lemmy.dbzer0.com 15 points 3 months ago

It's already happening. A quote from Andrej Karpathy:

Turns out that LLMs learn a lot better and faster from educational content as well. This is partly because the average Common Crawl article (internet pages) is not of very high value and distracts the training, packing in too much irrelevant information. The average webpage on the internet is so random and terrible it's not even clear how prior LLMs learn anything at all.

[–] ConstipatedWatson@lemmy.world 7 points 3 months ago

You don't say, Sherlock

[–] YeetPics@mander.xyz 6 points 3 months ago

So do humans, if I'm being honest. Look at the RNC.

[–] lemmy_get_my_coat@lemmy.world 5 points 3 months ago
[–] FartsWithAnAccent@fedia.io 1 points 3 months ago
[–] Paragone@lemmy.world 1 points 3 months ago

( Horseshack voice: )

Oh! Oh! Oh! Mr Kotter!

YOU MEAN FILTER-BUBBLES DO THE SAME THING TO BOTH HUMANS AND AIs??


How Very Incredibly Surprising(tm), Oh, My!

/s

_ /\ _