So AI:
- Scraped the entire internet without consent
- Trained on it
- Polluted it with AI generated rubbish
- Trained on that rubbish without consent
- Is now in need of a lobotomy
No it doesn't.
All this doomer stuff is contradicted by how fast the models are improving.
Good.
It's like a human centipede where only the first person is a human and everyone else is an AI. It's all shit, but it gets a bit worse every step.
Have we tried feeding them actual human beings yet?
Billionaires are the smartest, give them the most knowledge first!
Every single one of us, as kids, learned the concept of "garbage in, garbage out," most likely in terms of diet and food intake.
And yet every AI cultist makes the shocked Pikachu face when they figure out that trying to improve your LLM by feeding it data generated by literally the inferior LLM you're trying to improve is an exercise in diminishing returns and generational degradation in quality.
Why has the world gotten both "more intelligent" and yet fundamentally more stupid at the same time? Serious question.
Because it's not actually always true that garbage in = garbage out. DeepMind's AlphaZero trained itself up from a very bad chess player to significantly better than any human has ever been, simply by playing games against itself and updating its parameters for evaluating which positions were better than which. All the system needed was the rule set for chess, a way to define winners, losers, and draws, and a training procedure that optimized for winning over drawing, and for drawing over losing if a win was no longer available.
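A toy version of that loop, to make it concrete. This is a sketch of the self-play idea on the much simpler game of Nim, not AlphaZero's actual method; the game, constants, and update rule are all illustrative stand-ins:

```python
# Tabular self-play on misère Nim: the only training signal is win/loss
# derived from the rules, the same "no human data needed" setup as above.
import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(pile, take)] -> learned value of that move
ALPHA, EPSILON = 0.1, 0.2     # learning rate and exploration rate (arbitrary)

def legal_moves(pile):
    return [t for t in (1, 2, 3) if t <= pile]

def pick(pile):
    moves = legal_moves(pile)
    if random.random() < EPSILON:                      # sometimes explore
        return random.choice(moves)
    return max(moves, key=lambda t: Q[(pile, t)])      # otherwise exploit

for _ in range(50_000):                                # self-play games
    pile, history, player = 15, [], 0                  # take 1-3; last take loses
    while pile > 0:
        move = pick(pile)
        history.append((player, pile, move))
        pile -= move
        player ^= 1
    loser = history[-1][0]                             # whoever took the last stick
    for who, p, m in history:                          # credit every move by outcome
        reward = -1.0 if who == loser else 1.0
        Q[(p, m)] += ALPHA * (reward - Q[(p, m)])

# With enough games this usually rediscovers the known strategy of leaving
# the opponent a pile of size 1 mod 4, e.g. taking 1 from a pile of 6.
print(max(legal_moves(6), key=lambda t: Q[(6, t)]))
```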
Face swaps, and deepfakes in general, relied on adversarial training as well: the models learned how to trick themselves, then how to detect those tricks, then improved on both ends.
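If you've never seen what that loop looks like, here's a minimal adversarial sketch in PyTorch: a generator learns to mimic a 1-D Gaussian while a discriminator learns to call out its fakes, and each side improves against the other. Toy data and toy networks, nothing like a real deepfake pipeline:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 5        # "real" data: samples from N(5, 2)
    fake = G(torch.randn(64, 8))             # generator's current forgeries

    # discriminator step: learn to detect the generator's tricks
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: learn to fool the freshly improved discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 5 as G improves
```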
Some tech guys thought they could bring that adversarial, self-improving dynamic to generative AI, where models would train on inputs and improve over those inputs. But the problem is that there's no good definition of "good" or "bad" inputs, so the feedback loop poisons itself once it starts optimizing on criteria different from what humans would consider good or bad.
So it's less like the AI technologies that came before, and more like how Netflix poisoned its own recommendation engine by producing its own content informed by that engine. When you can passively observe trends and connections, you might be able to model them. But once you start feeding back into the data by producing shows and movies you predict will do well, the feedback loop gets unpredictable and stops working well, because you're over-fitting the training data with new stuff your model merely thinks might be "good."
Another great example (from DeepMind) is AlphaFold. There's relatively little data on protein structures (only ~175k in the PDB), so you can't really build a model that requires millions or billions of structures. That's coupled with the fact that determining the structure of a new protein in the lab is really hard, and that most proteins are highly similar to one another (you share about 60% of your genes with a banana).
So the researchers generated a bunch of "plausible yet never seen in nature" protein structures (that their model thought were high quality) and used them for training.
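The same idea in miniature, as a hedged sketch: this is generic confidence-filtered self-training (pseudo-labeling) on made-up data, not AlphaFold's actual pipeline, but it shows why filtered synthetic data can help where unfiltered feedback loops hurt.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_labeled = rng.normal(size=(100, 5))                  # small "experimental" dataset
y_labeled = (X_labeled.sum(axis=1) > 0).astype(int)
X_unlabeled = rng.normal(size=(5000, 5))               # plenty of inputs, no labels

model = LogisticRegression().fit(X_labeled, y_labeled)
for _ in range(3):
    proba = model.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) > 0.95               # keep only high-confidence outputs
    X_new = X_unlabeled[confident]
    y_new = proba[confident].argmax(axis=1)            # the model's own predictions
    model = LogisticRegression().fit(
        np.vstack([X_labeled, X_new]),
        np.concatenate([y_labeled, y_new]))

print(model.score(X_labeled, y_labeled))               # sanity check on the real labels
```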
Granted, even though AlphaFold has made incredible progress, it still hasn't produced any biological breakthroughs: 80% accuracy is much better than the 60% accuracy we were at 10 years ago, but it's still not nearly where we really need to be.
Image models, on the other hand, are quite sophisticated, and many of them can "beat" humans or look "more natural" than an actual photograph. Trying to eke the final 0.01% out of a 99.9% accurate model is when model collapse happens: the model starts to learn from images that are nearly accurate to the human eye but contain unseen flaws.
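You can watch that collapse happen in miniature: fit a distribution to data, sample from the fit, retrain on the samples, and repeat. A deliberately crude Gaussian stand-in for an image model, just to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)     # generation 0: "human" data

for generation in range(20):
    mu, sigma = data.mean(), data.std()             # "train" a model on current data
    data = rng.normal(mu, sigma, size=500)          # next generation: its own output
    print(f"gen {generation:2d}: mean={mu:+.3f} std={sigma:.3f}")

# The mean drifts and the std typically shrinks over generations: the tails
# get undersampled every round, so each new "model" sees less variety.
```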
good commentary, covered a lot of ground - appreciate the effort to write it up :)
Because the dumdums have access to the whole world at their fingertips without having to put any effort in.
In a time without that, they would be ridiculed for their stupid ideas and told to pipe down.
Now they can find like minded people and amplify their stupidity, and be loud about it.
So every dumdum becomes an "AI prompt engineer" (whatever the fuck that means) who knows how to game the LLM but doesn't understand how it works. So they're basically just snake oil salesmen who want to get on the gravy train.
Remember Trump every time he's weighed in on something? Suggesting injecting people with bleach, putting powerful UV lights inside people, fighting Covid with a "solid flu vaccine," preventing wildfires by sweeping the forests, using nuclear weapons to disrupt hurricane formation, asking about sharks and electric boat batteries? Remember these? These are the types of people who are in charge of businesses. They only care about money, they are not particularly smart, and they have massive gaps in knowledge and experience, but they believe they are profoundly brilliant and insightful because they've gotten lucky and are either good at a few things or had an insane amount of help from generational wealth. They have never had anyone, or very few people, genuinely able to tell them no, and if people don't take what they say seriously, those people get fired and replaced with ones who will.
Because the people with power funding this shit have pretty much zero overlap with the people making this tech. The investors saw a talking robot that aced school exams and could make images and videos, and just assumed it meant we'd have artificial humans in the near future, and, like always, ruined another field by flooding it with money and corruption. These people only know the word "opportunity," but don't have the resources or willpower to actually research that "opportunity."
Oh no. Anyways...
Two outcasts among their peers, Gary Wallace and Wyatt Donnelly spent a good deal of their youth as pioneers and early adopters of AI.
oh no are we gonna have to appreciate the art of human beings? ew. what if they want compensation‽
Oh no
Anyway
If mainstream blogs are writing about it, what would make someone think that AI companies haven't thoroughly dissected the problem and are already working on filtering out AI fingerprints from the training data set? If they can make a sophisticated LLM, chances are they can find methods to XOR out generated content.
What would make me think that they haven't "thoroughly dissected" it yet is that I'm a skeptic, and since I'm a skeptic I don't immediately and without evidence believe that every industry is capable of identifying, dissecting, and solving every problem with its products.
Ironically, given their skillset, training an ML model on known and properly tagged AI-generated and non-AI-generated content might actually work.
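Something like this, in spirit. A toy sketch with scikit-learn; the four-line corpus and its labels are made up, and a real filter would need a huge, honestly labeled dataset and would still be an arms race against better generators:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["lol no way that happened",
         "As an AI language model, I cannot",
         "idk man seems sus",
         "Certainly! Here are five key considerations:"]
labels = [0, 1, 0, 1]          # 0 = human-written, 1 = AI-generated (toy labels)

# TF-IDF features + logistic regression: the simplest possible AI-vs-human filter
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(clf.predict(["Certainly! Here is a comprehensive overview:"]))  # likely [1]
```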
Fake news, just like that one time Nightshade "killed" Stable Diffusion (it literally had no effect). Flux came out not long ago and it's better than ever.
At this point, synthetic data is good enough that it's intentionally used for training LLMs.
Yeah, just filter out the bad generated images and feed the good ones again, until the model learns how to produce only good ones.
It is their own fault for poisoning the internet with their slop.
DUDE ITS SO FUCKING ANNOYING TRYNNA USE GOOGLE IMAGES ANYMORE--
ALL IT GIVES ME IS AI ART. IM SO FUCKING SICK AND TIRED OF IT.
In case anyone doesn't get what's happening, imagine feeding an animal nothing but its own shit.
Photocopy of a photocopy is my go-to metaphor for model collapse.
Let's go, already!
How you can help: if you run a website and can filter traffic by user agent, get a list of the known AI scrapers' user-agent strings and selectively redirect their requests to pre-generated AI slop. Regular visitors will see the real content, and the LLM scraper bots will scrape their own slop and, hopefully, train on it.
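A rough sketch of that with Python's standard library; the user-agent substrings below are examples, not a complete or current list, and the two pages are obviously placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

AI_SCRAPER_AGENTS = ["GPTBot", "CCBot", "ClaudeBot", "Google-Extended"]  # illustrative
REAL_PAGE = b"<html><body>Actual content for humans.</body></html>"
SLOP_PAGE = b"<html><body>Pre-generated AI slop goes here.</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Known scraper? Serve the slop. Everyone else gets the real page.
        body = SLOP_PAGE if any(bot in ua for bot in AI_SCRAPER_AGENTS) else REAL_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), Handler).serve_forever()
```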
Are there any good lists of known AI user agents? Ideally in a dependency repo so my server can get the latest values when the list is updated.
Okay but I like using perchance cus they don't profit off anything 👉👈
A large chunk of that site is some dude's lil hobby project, and it's kinda neat interacting with the community and seeing how the code works. It's the only bot I'll ever use cus they aren't profiting off of other people's shit. The only money they get is from ads, and that's it.
Don't kill me with downvotes, I like making up cool OC concepts or poses n stuff and then drawing em.
Kind of like how true thoughts and opinions on complex topics get boiled down into digestible concepts for others to understand, and those people then perpetuate the concepts without understanding them. The meaning degrades, and we don't think anymore, just repeat stuff in social media comments.
Side note... this article sucks and seems like it was AI-generated. Repetitive, and no author credit? Just says it was originally posted elsewhere.
Generative AI isn't in danger of being killed, as this clickbait title suggests... just hindered.
There's a link to the other article in this article. Says Kristin Houser wrote it... although you may have a point about the rest.
ty
Is it not relatively trivial to pre-vet content before they train on it? At least with AI-generated text it should be.
The problem is these AI companies currently exist on the business model of not paying for information, and that generally includes not wanting to pay content curators.
Google is probably the only one in a position to potentially outsource by making everyone solve a "does this hand look normal to you" CAPTCHA
They can try and train AI to detect AI, but that's also difficult.
More like degenerative AIs
Model collapse is just a euphemism for “we ran out of stuff to steal”
or "we've hit a limit on what our new toy can do and here's our excuse why it won't get any better and AGI will never happen"