Your assertion that a future AI detector will be able to detect current LLM output is dubious. If I give you the sentence "Yesterday I went to the shop and bought some milk and eggs," there is no way for you or any detection system to tell whether it was AI-generated with any significant degree of certainty. What can be done is statistical analysis of large datasets to see how they "smell", but concluding that around 30% of a dataset is likely LLM-generated does not get you very far in creating a training set.
I'm not saying there is no solution to this problem, but blithely waving it away by saying future AI will be able to spot old AI is not a serious take.
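Rough sketch of what I mean by the "smell" test, if it helps (a toy, obviously -- the corpus, the metric, and the numbers are all made up for illustration):

```python
# Toy illustration: dataset-level statistics can hint at LLM contamination,
# but say nothing reliable about any single sentence.
# The corpus and the metric here are invented placeholders.

def type_token_ratio(text: str) -> float:
    """Crude lexical-diversity score: unique words / total words."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

corpus = [
    "Yesterday I went to the shop and bought some milk and eggs.",
    "The quarterly report highlights several key takeaways for stakeholders.",
    "honestly no idea what happened there, the cat just knocked it over",
]

# Per-sentence scores: far too noisy to label any individual sentence.
scores = [type_token_ratio(s) for s in corpus]
print("per-sentence scores:", [round(s, 2) for s in scores])

# A corpus-level average only tells you how the dataset as a whole "smells"
# when compared against a known-human baseline distribution.
print("corpus average:", round(sum(scores) / len(scores), 2))
```

You can get an aggregate estimate like "this pile of text skews machine-ish", but that never turns into a clean per-sentence label you could filter a training set with.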
If you give me several paragraphs instead of a single sentence, do you still think it's impossible to tell?
"If you zoom further out you can definitely tell it's been shopped because you can see more pixels."
What they're getting at (one thing, anyway) is that "indistinguishable to the model" and "the same" are two very different things.
IIRC, one possibility is that LLMs which learn from one another will make such incremental changes to what's considered "acceptable" or "normal" language structure that, over time, more noticeable linguistic drift emerges that still goes undetected by the models.
As it continues, this phenomenon creates a "positive feedback loop" in which the gap progressively widens -- still undetected, because the quality of the training data is going down -- to the point where the models basically "collapse" in their effectiveness.
So even if their output is indistinguishable now, how the tech is used (I guess?) will determine whether or not a self-destructive LLM echo chamber is produced.
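A crude way to picture that loop (pure toy: a Gaussian stands in for "language", refitting stands in for retraining, and dropping the tails stands in for the model under-representing rare phrasing):

```python
# Toy feedback-loop sketch: each "generation" is fit only on the previous
# generation's output, and (like a real model) it under-represents rare
# cases -- crudely modelled here by dropping the most extreme 5% of samples
# on each side. The spread shrinks generation after generation until the
# "acceptable" range collapses toward a bland average. Numbers are arbitrary.
import random
import statistics

random.seed(42)

data = [random.gauss(0, 1) for _ in range(1000)]  # stand-in for human text

for generation in range(30):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.2f} stdev={sigma:.2f}")
    # "Train" the next model on this model's own output, minus the tails
    # it rarely reproduces.
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    data = samples[50:-50]
```

None of the intermediate outputs look wrong on their own, which is kind of the point: each generation looks indistinguishable from the previous one, yet the diversity quietly drains out.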