this post was submitted on 16 Jul 2023
96 points (100.0% liked)

Technology

"Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease," they added. "We term this condition Model Autophagy Disorder (MAD)."

Interestingly, this might be a more challenging problem as we increase the use of generative AI models online.
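The autophagous loop the paper describes can be sketched with a toy model. This is only an illustrative stand-in, not the paper's setup: a categorical distribution plays the role of a generative model, and each generation is re-fit purely to samples drawn from the previous one (the names `retrain` and `sample_size` are made up for the sketch):

```python
import random
from collections import Counter

def retrain(model, sample_size, rng):
    """One loop iteration: draw synthetic data from the current
    model, then fit the next model to that sample alone."""
    cats = list(model)
    weights = [model[c] for c in cats]
    sample = rng.choices(cats, weights=weights, k=sample_size)
    counts = Counter(sample)
    return {c: n / sample_size for c, n in counts.items()}

rng = random.Random(42)
model = {c: 1 / 20 for c in range(20)}  # 20 equally likely "modes"
for _ in range(30):
    model = retrain(model, sample_size=30, rng=rng)

# Any mode missing from one generation's sample vanishes for good,
# so diversity (the paper's "recall") can only shrink.
print(len(model))
```

Because each generation's support is a subset of the previous one's, the number of surviving modes is monotonically non-increasing; with no "fresh real data" injected, the collapse is one-way.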

[–] argv_minus_one@beehaw.org 45 points 1 year ago (7 children)

Note that humans do not exhibit this property when trained on other humans, so this would seem to prove that “AI” isn't actually intelligent.

[–] nrabulinski@beehaw.org 52 points 1 year ago

Almost as if current models are fancy token predictors with no reasoning about the input

[–] PenguinTD@lemmy.ca 14 points 1 year ago (1 children)

do we even need to prove this? Anyone who has studied a bit of how generative AI works knows it's not intelligent.

[–] lol3droflxp@kbin.social 6 points 1 year ago

There are enough arguments about that even among highly intelligent people.

[–] h3ndrik@feddit.de 6 points 1 year ago (1 children)

Weren't the echo chambers during the covid pandemic kind of proof that humans DO exhibit the same property? A good number of people started repeating stuff about nanoparticles, or that some black lint in a mask was worms that would control your brain.

[–] argv_minus_one@beehaw.org 0 points 1 year ago (1 children)

That only happened to some humans. Something must be seriously wrong with them.

[–] tanglisha@beehaw.org 3 points 1 year ago

Are we sure they were humans? Maybe they were ChatGPT 2.

[–] ParsnipWitch@feddit.de 6 points 1 year ago

Current AI is not actually "intelligent" and, as far as I know, not even their creators directly describe them that way. The programs and models existing at the moment aren't capable of abstract thinking or reasoning, or the other processes that make an intelligent being or thing intelligent.

The companies involved are certainly eager to create something like a general intelligence. But even when they reach that goal, we don't know yet if such an AGI would even be truly intelligent.

[–] echo@sopuli.xyz 5 points 1 year ago* (last edited 1 year ago) (1 children)

I don't think LLMs are intelligent, but "does it work the same as humans?" is a really bad way to judge something's intelligence

[–] frog@beehaw.org 14 points 1 year ago (1 children)

Even if we look at other animals, when they learn by observing other members of their own species, they get more competent rather than less. So AIs are literally the only things that get worse when trained on their own kind, rather than better. It's hard to argue they're intelligent if the answer to "does it work the same as any other lifeform that we know of?" is "no".

[–] FaceDeer@kbin.social 2 points 1 year ago (2 children)

Are there any animals that only learn by observing the outputs of members of their own species? Or is it a mixture of that and a whole bunch of their own experiences with the outside world?

[–] frog@beehaw.org 4 points 1 year ago (1 children)

Humans (and animals) learn through a combination of their own experiences and observing the experiences of others. But this actually proves my point: if you feed an AI its own experiences (content it has created in response to prompts) and the experiences of other AIs (content they have produced in response to prompts), it cycles itself into oblivion. This is ultimately because it cannot create anything new.

This is why Model Autophagy Disorder occurs, I think. Humans, when put in repetitive scenarios, will actively work to create new stimuli to respond to. This varies from livening up a boring day by doing something ridiculous, to hallucinating when kept in extreme sensory deprivation. The human mind's defence against repetitive stimuli is to literally create something new. But the AIs can't do that. They literally can't. They can't create anything that doesn't have a basis in their training data, and when exposed only to iterations of their own training data (which is ultimately what all AI-generated content is: iterations of the training data), there is no process that allows them to break out of that repetitive cycle. They end up just spiralling inwards.

From a certain perspective, AIs are therefore essentially parasites. They cannot progress without sucking in more human-generated content. They aren't self-sustaining on their own, because they literally cannot create the new ideas needed to prevent degradation of their own data sets.

From your other comments here, it seems like you're imagining a fully conscious mind sitting alone in a box, with nothing to react to. But that's not the case: AIs aren't sapient minds going mad from a lack of stimulation. They are completely dormant until prompted to do something, and then they create an output that is statistically likely given the data set they've been trained on. If you add no new data, the AI doesn't change. It doesn't seek new stimuli. It doesn't create new ideas while waiting for someone to prompt it. The only way it can change and create anything new is if it's given more human-generated content to work with. If you give it content from other AIs, that merely alters the statistical probabilities behind its output. If the AIs were actually conscious minds sitting alone in boxes, then exposing them to content created by other AIs would, in fact, be new stimuli that could generate new ideas, in the same way that a lonely human meeting another lonely human would quickly strike up a conversation and get all kinds of ideas.

[–] theneverfox@pawb.social 1 points 1 year ago

You're caught up in an idea that has been going around since long before any AI systems had been built

Humans rarely, if ever, produce something truly new. We stumble upon a concept, or apply one existing idea to another thing.

Neural networks are carefully distilled entropy. They have no subjective biases and no foundation - they're so good at being original that they default to things useless to humans.

I like to think of training like a mold, or a filter. You only want things in the right shape to come through - the more you train, the more everything coming through looks the same.

[–] lol3droflxp@kbin.social 2 points 1 year ago (1 children)

I mean, it’s always a mixture but yes, animals can learn new behaviours purely by watching (corvids and monkeys for example).

[–] FaceDeer@kbin.social 1 points 1 year ago

"It's always a mixture" is the key part, though. We haven't run an experiment like this on a human or animal (and even if it were practical to do so it'd probably be horribly abusive).

[–] FaceDeer@kbin.social 2 points 1 year ago (1 children)

Humans are not entirely trained on other humans, though. We learn plenty of stuff from our environment and experiences. Note this very important part of the primary conclusion:

without enough fresh real data in each generation

[–] lol3droflxp@kbin.social 1 points 1 year ago (1 children)

Math, for example, is something one could argue is taught purely by humans.

[–] FaceDeer@kbin.social 3 points 1 year ago* (last edited 1 year ago)

Dogs can do math and I'm quite sure I've never taught my dog that deliberately.

Even for humans learning it, I would expect that most of our understanding of math comes from everyday usage of it rather than explicit rote training.

[–] lloram239@feddit.de 0 points 1 year ago (1 children)

Key point here being that humans train on other humans, not on themselves. They are also always exposed to the real world.

If you lock a human in a box and only let them interact with themselves they go a bit funny in the head very quickly.

[–] ParsnipWitch@feddit.de 5 points 1 year ago

The reason is different from what is happening with AI, though. Sensory deprivation or extreme isolation and the Ganzfeld effect lead to hallucinations because our brain seems to have to constantly react to stimuli in order to keep functioning. Our brain starts creating things from imagination.

With AI it is the other way around. They lose information when presented with the same data again and again because their statistical models favour the most probable patterns, so the rarer ones progressively drop out.