this post was submitted on 04 Apr 2024
575 points (83.5% liked)

Memes

45653 readers
1369 users here now

Rules:

  1. Be civil and nice.
  2. Try not to excessively repost; as a rule of thumb, wait at least 2 months before doing so if you have to.

founded 5 years ago
MODERATORS
 
you are viewing a single comment's thread
[–] Prunebutt@slrpnk.net 18 points 7 months ago (1 children)

AI creates output from a stochastic model of its training data. That's not a creative process.

[–] Even_Adder@lemmy.dbzer0.com 5 points 7 months ago (2 children)

What does that mean, and isn't that still something people can employ for their creative process?

[–] Prunebutt@slrpnk.net 14 points 7 months ago (1 children)

LLMs analyse their inputs and build a stochastic model (i.e. a probability distribution estimated from data) of which word comes next.

Yes, it can help in a creative process, but so can literal noise. It can't "be creative" in itself.
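For what it's worth, the "stochastic model of which word comes next" can be illustrated with a toy word-level sketch in Python. The corpus and counts here are made up for illustration only; real LLMs use transformer networks over learned token embeddings, not raw frequency counts:

```python
import random
from collections import Counter, defaultdict

# Toy "stochastic model of which word comes next": count, for each word
# in a tiny corpus, how often every other word directly follows it.
corpus = "the rose is red the rose is white the bush is green".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Generate a continuation by repeatedly sampling the next word in
# proportion to how often it followed the current word in training.
rng = random.Random(0)
out = ["the"]
for _ in range(5):
    options = counts[out[-1]]
    if not options:  # dead end: this word was never seen with a successor
        break
    words, weights = zip(*options.items())
    out.append(rng.choices(words, weights=weights)[0])
print(" ".join(out))
```

Every word it emits is just a weighted draw from what followed before; there is no goal or meaning anywhere in the process.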

[–] Even_Adder@lemmy.dbzer0.com 5 points 7 months ago (1 children)

How does that preclude these models from being creative? Randomness within rules can be pretty creative. All life on Earth is the result of selection acting on random mutations. A model's output is far more structured and coherent than random noise, so that's not a good comparison at all.

Either way, generative tools are a great way for the people using them to create; no model has to be creative on its own.

[–] Prunebutt@slrpnk.net 8 points 7 months ago (1 children)

> How does that preclude these models from being creative?

They lack intentionality, simple as that.

> Either way, generative tools are a great way for the people using them to create; no model has to be creative on its own.

Yup, my original point still stands.

[–] Even_Adder@lemmy.dbzer0.com 7 points 7 months ago (1 children)

How is intentionality integral to creativity?

[–] Prunebutt@slrpnk.net 8 points 7 months ago (1 children)

Are you serious?

Intentionality is integral to communication. Creative art is a subset of communication.

[–] Even_Adder@lemmy.dbzer0.com 5 points 7 months ago (1 children)

I was asking about creativity, not art. It's possible for something to be creative and not be art.

[–] Prunebutt@slrpnk.net 9 points 7 months ago (1 children)

I still posit that creativity requires intentionality.

[–] Even_Adder@lemmy.dbzer0.com 6 points 7 months ago (1 children)

I don't think all creativity requires intentionality. Some forms of creativity are the accumulation of unintentional outcomes, like when someone sets out to copy a thing but, due to mistakes or other factors outside their control, ends up with something different from what they were going for.

[–] Prunebutt@slrpnk.net 3 points 7 months ago (1 children)

The intentionality steps in when it is decided to keep or discard the outcome.

[–] Even_Adder@lemmy.dbzer0.com 2 points 7 months ago (1 children)

How can it be creative to destroy outcomes? Destruction is the opposite of creativity.

[–] irmoz@reddthat.com 6 points 7 months ago (1 children)

The creative process necessarily involves abandoning bad ideas and refining toward something more intentional.

[–] Prunebutt@slrpnk.net 1 points 7 months ago (1 children)

Exactly. That is literally the only difference between "creative" and "non-creative" people.

[–] Even_Adder@lemmy.dbzer0.com 3 points 7 months ago (1 children)

But you can still be creative if you keep every outcome; it would be very hard to prove creativity if you discard everything. Then one could argue you're creative the moment you select something.

[–] Prunebutt@slrpnk.net -1 points 7 months ago

What point are you trying to make, again?

[–] irmoz@reddthat.com 5 points 7 months ago* (last edited 7 months ago) (2 children)

A person sees a piece of art and is inspired. They understand what they see, be it a rose bush to paint or a story beat to work on. This inspiration leads to actual decisions being made with a conscious aim to create art.

An AI, on the other hand, sees a rose bush and adds it to its rose bush catalog, reads a story beat and adds it to its story database. These databases are then shuffled and things are picked out, with no mind involved whatsoever.

A person knows why a rose bush is beautiful, and internalises that thought to create art. They know why a story beat is moving, and can draw out emotional connections. An AI can't do either of these.

[–] Even_Adder@lemmy.dbzer0.com 7 points 7 months ago (2 children)

The way you describe how these models work is wrong. This video does a good job of explaining how they work.


[–] irmoz@reddthat.com 2 points 7 months ago

Yeah, I know it doesn't actually "see" anything, and is just making best guesses based on pre-gathered data. I was just simplifying for the comparison.

[–] agamemnonymous@sh.itjust.works 5 points 7 months ago (2 children)

A person is also very much adding rose bushes and story beats to their internal databases. You learn to paint by copying other painters, adding their techniques to a database. You learn to write by reading other authors, adding their techniques to a database. Original styles/compositions are ultimately just a rehashing of countless tiny components from other works.

An AI understands what it sees; otherwise it wouldn't be able to generate a "rose bush" when you ask for one. It's an understanding based on a vector space of token sequence weights, but unless you can describe the actual mechanism of human thought beyond vague concepts like "inspiration", I don't see any reason to assume that our understanding is not just a much more sophisticated version of the same mechanism.

The difference is that we're a black box, AI less so. We have a better understanding of how AI generates content than how the meat of our brain generates content. Our ignorance, and use of vague romantic words like "inspiration" and "understanding", is absolutely not proof that we're fundamentally different in mechanism.

[–] irmoz@reddthat.com 3 points 7 months ago (1 children)

A person painting a rose bush draws upon far more than just a collection of rose bushes in their memory. There's nothing vague about it, I just didn't feel like getting into much detail, as I thought that statement might jog your memory of a common understanding we all have about art. I suppose that was too much to ask.

For starters, refer to my statement "a person understands why a rose bush is beautiful". I admit that maybe this is vague, but let's unpack it.

Beauty is, of course, in the eye of the beholder. It is a subjective thing, requiring opinion, and AIs cannot hold opinions. I find rose bushes beautiful because of the inherent contrast between the delicate nature of the rose buds and the almost monstrous nature of the fronds.

So, if I were to draw a rose bush, I would emphasise these aspects, out of my own free will. I might even draw it in a way that resembles a monster. I might even try to tell a story with the drawing, one about a rose bush growing tired of being plucked and taking revenge on the humans who dare to steal its buds.

All this, from the prompt "draw a rose bush".

What would an AI draw?

Just a rose bush.

[–] agamemnonymous@sh.itjust.works 5 points 7 months ago (1 children)

"Beauty", "opinion", "free will", "try". These are vague, internal concepts. How do you distinguish between a person who really understands beauty, and someone who has enough experience with things they've been told are beautiful to approximate? How do you distinguish between someone with no concept of beauty, and someone who sees beauty in drastically different things than you? How do you distinguish between the deviations from photorealism due to imprecise technique, and deviations due to intentional stylistic impressionism?

What does a human child draw? Just a rosebush, and poorly at that. Does that mean humans have no artistic potential? AI is still in its relative infancy: the artistic stage of imitation and technique refinement. We are only just beginning to see the first glimmers of multi-modal AI, recursive models that can talk to themselves and pass information between different internal perspectives. Some would argue that internal dialogue is precisely the mechanism that makes human thought so sophisticated. What makes you think that AI won't quickly develop similar sophistication as the models are further developed?

[–] irmoz@reddthat.com 0 points 7 months ago* (last edited 7 months ago) (1 children)

Philosophical masturbation, based on a poor understanding of what is an already solved issue.

We know for a fact that a machine learning model does not even know what a rosebush is. It only knows which pixel colours usually go into a photo of one. And even then, it doesn't know the colours themselves, only the bit values that correspond to them.

That is it.

Opinions and beauty are not vague, and neither are free will and trying, especially in this context. You only wish them to be vague for your argument.

An opinion is a value judgment. AIs don't have values, and we have to deliberately restrict them to stop actual chaos happening.

Beauty is, for our purposes, something that the individual finds worthy of viewing and creating. Only people can find things beautiful. Machine learning algorithms are only databases with complex retrieval systems.

Free will is also quite obvious in context: being able to do something of your own volition. AIs need exact instructions to get anything done. They can't make decisions beyond what you tell them to do.

Trying? I didn't even define that as human-specific.

[–] agamemnonymous@sh.itjust.works 3 points 7 months ago (1 children)

> Philosophical masturbation

I couldn't have put it better myself. You've said lots of philosophical words without actually addressing any of my questions:

> How do you distinguish between a person who really understands beauty, and someone who has enough experience with things they've been told are beautiful to approximate?

> How do you distinguish between someone with no concept of beauty, and someone who sees beauty in drastically different things than you?

> How do you distinguish between the deviations from photorealism due to imprecise technique, and deviations due to intentional stylistic impressionism?

[–] irmoz@reddthat.com -2 points 7 months ago* (last edited 7 months ago) (2 children)

> I couldn't have put it better myself. You've said lots of philosophical words without actually addressing any of my questions:

Did you really just pull an "I know you are, but what am I?"

I'm not gonna entertain your attempt to pretend very concrete concepts are woollier and more complex than they are.

If you truly believe machine learning has even begun to approach being compared to human cognition, there is no speaking to you about this subject.

https://www.youtube.com/watch?v=EUrOxh_0leE&pp=ygUQYWkgZG9lc24ndCBleGlzdA%3D%3D

Every step of the way, a machine learning model is only making guesses based on its previous training data. And not about what the data actually is, but about its pieces. Do green pixels normally go here? Does the letter "k" go here?
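The "does the letter 'k' go here" kind of guess can be sketched as a character-count model. This is a deliberately crude toy (real models use learned neural networks, not raw counts, and the example text is invented), but it shows how next-character "knowledge" can be nothing more than frequency statistics:

```python
from collections import Counter, defaultdict

# A character-level model only tracks which character tends to follow
# which; it has no notion of words or meaning at all.
text = "the rose bush blooms. the rose bush grows."
counts = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    counts[a][b] += 1

def p_next(prev, candidate):
    # Estimated probability that `candidate` follows `prev`,
    # computed from frequency counts alone.
    total = sum(counts[prev].values())
    return counts[prev][candidate] / total if total else 0.0

# After an "s", how likely is "h" versus "k"? Pure frequency, no meaning.
print(p_next("s", "h"), p_next("s", "k"))  # "h" follows "s" in "bush"; "k" never does
```

The model "prefers" `h` after `s` only because `bush` appears in the text, not because it knows anything about bushes.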


[–] agamemnonymous@sh.itjust.works 1 points 7 months ago (1 children)

What evidence do you have that human cognition is functionally different? I won't argue that humans are more sophisticated for sure. But what justification do you have to claim that humans aren't just very, very good at making guesses based on previous training data?

[–] irmoz@reddthat.com 0 points 7 months ago (1 children)

I'm really struggling to believe that you actually think this.

[–] agamemnonymous@sh.itjust.works 2 points 7 months ago (1 children)

I'm sorry that you're struggling. Perhaps if you answered any of the questions I posed (twice) in order to frame the topic in a concrete way, we could have a more productive conversation that might provide elucidation for one, or both, of us. I fail to see how continuing to ignore those core questions, and instead focusing on questions that weren't asked, will help either one of us.

[–] irmoz@reddthat.com 0 points 7 months ago

I don't make a habit of answering irrelevant red herrings.

[–] Prunebutt@slrpnk.net 2 points 7 months ago (1 children)

You're presupposing that brains and computers are basically the same thing. They are fundamentally different.

An AI doesn't understand. It has an internal model which produces outputs based on the training data it received and a prompt. That's a different category from "understanding".

Otherwise, Spotify or YouTube recommendation algorithms would also count as understanding the contents of the music/videos they supply.

[–] agamemnonymous@sh.itjust.works 2 points 7 months ago (1 children)

> An AI doesn't understand. It has an internal model which produces outputs based on the training data it received and a prompt. That's a different category from "understanding".

Is it? That's precisely how I'd describe human understanding. How is our internal model, trained on our experiences, which generates responses to input, fundamentally different from an LLM transformer model? At best we're multi-modal, with overlapping models between which we move information in order to consider multiple perspectives.

[–] Prunebutt@slrpnk.net 0 points 7 months ago

> How is our internal model [...] fundamentally different from an LLM transformer model?

Humans have intentionality.

Emily M. Bender can explain it better than I can.