I've been saying this all along. Language is how humans communicate thoughts to each other. If a machine is trained to "fake" communication via language, then at a certain point it may simply be easier for the machine to figure out how to actually think in order to produce convincing output.
We've seen similar signs of "understanding" in the image-generation AIs. There was a paper a few months back showing that when one of these AIs is asked to generate a picture, the first thing it does is develop an internal "depth map" showing the three-dimensional form of the thing it's trying to make a picture of. It turns out that it's easier to make pictures of physical objects when you have an understanding of their physical nature.
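(For anyone curious, the standard technique behind results like that is a "linear probe": freeze the generator, grab its intermediate activations, and fit a tiny linear model to predict depth from them. If the probe succeeds, the depth information must already be encoded inside the model, since the probe is too weak to compute it on its own. A minimal sketch of the idea, with made-up tensor shapes and random data standing in for a real model's activations and depth targets:)

```python
import torch
import torch.nn as nn

# Toy stand-ins: in a real study the activations would come from a frozen
# image model's intermediate layers, and the depth targets from an
# off-the-shelf monocular depth estimator. Shapes here are made up.
n_images, feat_dim, h, w = 64, 512, 16, 16
activations = torch.randn(n_images, feat_dim, h, w)  # hypothetical hidden states
depth_maps = torch.randn(n_images, 1, h, w)          # hypothetical depth targets

# The probe is deliberately tiny: a single linear map per pixel
# (a 1x1 convolution). It can only read out what is already there.
probe = nn.Conv2d(feat_dim, 1, kernel_size=1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(probe(activations), depth_maps)
    loss.backward()
    opt.step()

print(f"final probe loss: {loss.item():.4f}")
```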
I think the reason this gets a lot of pushback is that people don't want to accept the notion that "thinking" may not actually be as hard or as special as we like to believe.
This whole argument hinges on whether producing consciousness is easier than faking intelligence to humans.
Humans already anthropomorphise everything, so I'm leaning towards the latter being easier.
I'd take a step farther back and say the argument hinges on whether "consciousness" is even really a thing, or if we're "faking" it to each other and to ourselves as well. We still don't have a particularly good way of measuring human consciousness, let alone determining whether AIs have it too.
...or even if consciousness is an emergent property of interactions between certain arrangements of matter.
It's still a mystery which I don't think can be reduced to weighted values of a network.
This is a really interesting train of thought!
I don’t mean to belittle the actual, real questions here, but I can’t shake the hilarious image of 2 dudes sitting around in a basement, stoned out of their minds getting “deep.”
Bold of you to assume any philosophical debate doesn't boil down to just that.
Now I get it. That dude is explaining the Boltzmann brain.
Brah, if an AI was conscious, how would it know we are sentient?! Checkmate LLMs.
Or maybe our current understanding of consciousness and intelligence is wrong and they are not related to each other. A non-conscious thing can perform advanced logic, like the geometrical patterns found within the overlapping orbits of planets, or the Fibonacci sequence being found nearly everywhere. We also have yet to prove that individual strands of grass or rocks aren't fully conscious. There is so much we don't know for certain that it's perplexing how we believe we can just assume.
Standard descent into semantics incoming...
We define concepts like consciousness and intelligence. They may be related or may not depending on your definitions, but the whole premise here is about experience regardless of the terms we use.
I wouldn't say Fibonacci being found everywhere is in any way related to either and is certainly not an expression of logic.
I suspect it's something like the simplest method nature has of controlling growth, much like how hexagons are the sturdiest shape and so appear in nature a lot (a toy sketch of that idea follows below).
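(As a toy illustration of the "simple growth rule" point: in Vogel's classic phyllotaxis model, each new floret is placed a fixed golden-angle turn after the previous one, and the Fibonacci spiral counts you see in sunflower heads fall out of the geometry for free. No Fibonacci numbers appear in the rule itself:)

```python
import math

# Vogel's model of phyllotaxis: rotate each new floret by the golden
# angle (~137.5 degrees) and push it slightly outward. The Fibonacci
# spiral counts emerge from this rule rather than being built into it.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~2.39996 rad

def florets(n):
    """Return (x, y) positions of n florets under Vogel's model."""
    points = []
    for k in range(n):
        r = math.sqrt(k)          # radius grows just enough to keep density even
        theta = k * GOLDEN_ANGLE  # a fixed turn per new floret
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

for x, y in florets(10):
    print(f"({x:6.2f}, {y:6.2f})")
```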
Grass/rocks being conscious is really out there! If that hypothesis were remotely feasible we couldn't talk about things being either conscious or not; it would be a sliding scale with rocks way below grass. And it would really stretch most people's definition of consciousness.
I understand what you're saying, but I disagree that there is any proper definition of the concept. The few scientists who attempt to study it can't even agree on what it is.
I agree that my examples were far out; they are supposed to be, to represent ideas outside the conventional box. I don't literally believe grass is conscious. I recognize that if I/we don't know, then I/we don't know. In the face of something we don't know the nature of, the requirements for, or the purpose it serves, I prefer to remain open to every option.
I know Wikipedia isn't a scientific research paper, but I expect that if there really were an agreed-upon scientific answer, the article wouldn't read like it currently does:
"Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked."
I feel like an AI right now having predicted the descent into semantics.
I fear it was inevitable; with no framework we can agree upon, semantics are all there is.
I truly wish humanity had more knowledge to have a proper discussion, but currently it seems unproductive, especially in the context of a faceless online forum debate between two strangers.
Thank you for your time and input on this matter.
The bar always gets raised for what counts as actual "AI" with each advancement too. Back in the 60s, the procedural AI of the 80s and 90s would have fit the bill, but at the time, we said "nope, not good enough". And so it kept getting better and better, each time surpassing the old tech by leaps and bounds. Still, not "true" AI. Now we have ChatGPT, which some still refuse to call "AI".
We're going to eventually have fully sentient artificial beings walking around amongst us, and these people are going to end up being an existential threat to them. I can see it now.
Think you're slightly missing the point. I agree that LLMs will get better and better to a point where interacting with one will be indistinguishable from interacting with a human. That does not make them sentient.
The debate is really whether all of our understanding and human experience of the world comes down to weighted values on a graph, or whether the human brain is hiding more complex, as-yet-undiscovered phenomena than that.
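(For what it's worth, "weighted values on a graph" is meant literally here: strip away the scale and a trained network is just arrays of edge weights with sums and a nonlinearity between them. A deliberately tiny sketch of that claim, with random untrained weights just to show the mechanics:)

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-layer network is nothing but edge weights and a nonlinearity.
# Everything a trained model "knows" is stored in arrays like W1 and W2,
# just vastly larger.
W1 = rng.normal(size=(4, 8))  # weights: input layer -> hidden layer
W2 = rng.normal(size=(8, 2))  # weights: hidden layer -> output layer

def forward(x):
    hidden = np.maximum(0, x @ W1)  # weighted sum, then ReLU
    return hidden @ W2              # weighted sum again

print(forward(rng.normal(size=4)))
```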