this post was submitted on 08 Oct 2023
40 points (90.0% liked)

AI


Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

founded 3 years ago
[–] CarbonIceDragon@pawb.social 29 points 1 year ago

Fundamentally, anything humans can do can be done by physical systems of some kind (because humans are already such a system), so given enough time I'd bet it would eventually be possible to make a machine do literally anything that can be done by a human. There might be some things that nobody ever gets an AI to replicate even if it's technically possible, though, just because nobody has a motivation to.

[–] Crackhappy@lemmy.world 16 points 1 year ago (1 children)

The flavor of cinnamon toast crunch.

[–] fsxylo@sh.itjust.works 3 points 1 year ago

Was my exact thought. Lol

[–] riskable@programming.dev 13 points 1 year ago (2 children)

Since AI is trained by us, using the fruit of human labor as input, it'll have to be something we can't train it to do.

Something biological or instinctual... Like being in close proximity to an AI will never result in synchronized menstruation since an AI can't and won't ever menstruate.

So... That 👍

[–] idiomaddict@feddit.de 7 points 1 year ago

Synced menstruation is supposed to be a myth now. I have experienced it many times, but I guess it's mostly considered coincidence, which it could be; I'm not a mathematician.

[–] bouh@lemmy.world 3 points 1 year ago

What would you bet on AI not ever getting the ability to menstruate?

[–] kromem@lemmy.world 10 points 1 year ago (1 children)

An exact 1:1 realtime copy of itself emulated within a simulated universe.

Pretty much everything else mentioned in this thread falls into the "never say never" category.

[–] noli@programming.dev 5 points 1 year ago (1 children)

Also being able to analyze any arbitrary program and determine whether it will ever stop

[–] kromem@lemmy.world 4 points 1 year ago (2 children)

Probably still a never say never problem:

In their new paper, the five computer scientists prove that interrogating entangled provers makes it possible to verify answers to unsolvable problems, including the halting problem.
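The quoted result (MIP* = RE) is about verifying answers with entangled provers; the impossibility of deciding halting itself goes back to Turing's diagonalization, which can be sketched in a few lines. `halts` below is a purely hypothetical oracle, not anything implementable:

```python
# Turing's diagonalization sketch: assume a perfect halting oracle
# existed, then build a program that contradicts it.

def halts(f) -> bool:
    """Hypothetical oracle: True iff f() eventually halts.

    No total, always-correct implementation can exist; this stub
    only marks where the impossible function would go.
    """
    raise NotImplementedError("no such oracle can exist")

def paradox():
    # If halts(paradox) returned True, paradox would loop forever;
    # if it returned False, paradox would halt immediately.
    # Either answer is wrong, so a correct halts cannot exist.
    if halts(paradox):
        while True:
            pass
```

The contradiction only rules out a single program that is correct on *every* input; tools that prove termination for many practical programs still exist.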

[–] Phantom_Engineer@lemmy.ml 10 points 1 year ago (2 children)
[–] NateNate60@lemmy.ml 5 points 1 year ago (1 children)

Computers will never consistently beat humans, and humans will never consistently beat computers, at snakes and ladders.

Or rock-paper-scissors, for that matter.
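This checks out for games decided purely by chance or simultaneous guessing: against uniformly random play, every strategy wins about a third of rock-paper-scissors rounds, no matter how clever. A quick simulation sketch (function names are mine, not from the thread):

```python
import random

def play_round(rng):
    # Both players pick uniformly at random; returns +1 if A wins,
    # -1 if B wins, 0 for a draw.
    moves = ["rock", "paper", "scissors"]
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    a, b = rng.choice(moves), rng.choice(moves)
    if a == b:
        return 0
    return 1 if beats[a] == b else -1

def win_rate(n=100_000, seed=0):
    # Fraction of rounds player A wins against a random opponent.
    rng = random.Random(seed)
    results = [play_round(rng) for _ in range(n)]
    return results.count(1) / n
```

Whatever fixed strategy A adopts, a uniformly random B leaves each outcome equally likely, so the win rate stays pinned near 1/3.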

[–] Seleni@lemmy.world 2 points 1 year ago

Calvinball! All hail Watterson lol

[–] socsa@lemmy.ml 9 points 1 year ago (2 children)

Pretty sure it won't manage Ligma any time soon

[–] deezbutts@lemm.ee 3 points 1 year ago

They will when they perfect the bofa fill algorithm

[–] theterrasque@infosec.pub 2 points 1 year ago

There are already specialized robots just for that

[–] Granixo 8 points 1 year ago (1 children)

Pooping, my guess is pooping.

[–] blight@hexbear.net 5 points 1 year ago
[–] imgprojts@lemmy.ml 5 points 1 year ago

Giving everyone money for free from the rich people! Yeah, that's right... wealth redistribution! AI won't ever be able to do that.

[–] PP_BOY_@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (1 children)

Organic intelligence? The "organic" qualifier kind of rules out a lot of answers when you also say "never".

[–] bouh@lemmy.world 1 points 1 year ago (1 children)

A bit fallacious to add "organic" to intelligence. But then I'm sure we will be able to make organic computers at some point. I think there is research into this already.

[–] Sethayy@sh.itjust.works 2 points 1 year ago (1 children)

Yeah there's that one thing they were using rat cells or whatever and got computations off of it

[–] PP_BOY_@lemmy.world 1 points 1 year ago

Yes, yes, we all know about how OP struggled in high school mathematics

[–] Fleur__@lemmy.world 4 points 1 year ago (1 children)

Stupid posts like this one

[–] RickyRigatoni@lemmy.ml 3 points 1 year ago

Stupid comments like this one

And this one

And that one

And those ones over there

[–] RagnarokOnline@reddthat.com 4 points 1 year ago

Cracking my knuckles nervously before I’m about to give a presentation in front of the whole class.

[–] fubarx@lemmy.ml 4 points 1 year ago (1 children)

Truly creative, decent Dad Jokes.

[–] RGB3x3@lemmy.world 3 points 1 year ago (1 children)

I don't know, there are a couple pretty good ones here by chatgpt:

Of course! Here are some classic dad jokes for you:

  1. Why don't skeletons fight each other? They don't have the guts.
  2. Did you hear about the cheese factory that exploded? There was nothing left but de-brie.
  3. I used to play piano by ear, but now I use my hands.
  4. What do you call a fish with no eyes? Fsh.
  5. Why did the scarecrow win an award? Because he was outstanding in his field.
  6. What's brown and sticky? A stick.
  7. How does a penguin build its house? Igloos it together.
  8. I'm reading a book on anti-gravity. It's impossible to put down.
  9. Parallel lines have so much in common. It's a shame they'll never meet.
  10. Did you hear about the mathematician who's afraid of negative numbers? He'll stop at nothing to avoid them.
[–] EM1sw@lemmy.world 4 points 1 year ago (1 children)

Most of those predate the internet

[–] Sethayy@sh.itjust.works 1 points 1 year ago

then it truly is human, cause shit if 99% of us nowadays aren't just googling dad jokes when we need them

[–] Zeth0s@lemmy.world 4 points 1 year ago (2 children)

How humans think. AI "thinking" will always be different from human thinking, because the human brain is "that thing" that is impossible to simulate in silico as-is. We might be able to get good approximations, but as good as they can get, they'll always diverge from the real thing.

[–] Spzi@lemm.ee 5 points 1 year ago

I guess a good part also comes from learned experiences. Having a body, growing up, feeling pain, being mortal.

And yes, the brain is an incredibly complex system not only of neurons, but also transmitters, receptors, a whole truckload of biochemistry.

But in the end, both are just matter in patterns, excitation in coordination. The effort to simulate is substantial, but I don't see how that would NEVER succeed, if someone with the capabilities insisted on it. However, it might be fully sufficient for the task (whatever that is, probably porn) to simulate 95% or so, technically still not the real deal.

[–] Sethayy@sh.itjust.works 4 points 1 year ago (1 children)

What makes you say that so definitely?

Funny enough I have the opposite opinion, human brains are the type of thinking we have most experience with - so we've devised our input methods around what we notice most, and so will be able to most easily train the AI.

I also believe that we'll be able to reduce the noise to a level lower than actual person-to-person variation fairly easily, cause an AI has the benefit of being able to scale to population size - no human even has that much experience with humans

[–] Zeth0s@lemmy.world 4 points 1 year ago (1 children)

I used to work in research on the microscopic mechanisms of the brain, and I now work in AI.

Human thoughts derive from extremely complex microscopic mechanisms that do not "average out" when moving to the macroscopic world, but instead create the very complex non-linear stochastic processes that are thoughts.

Unless some scientific miracle happens, human thoughts will stay human.

[–] Sethayy@sh.itjust.works 3 points 1 year ago (1 children)

But an AI does anything but average out, else we wouldn't be any more advanced than the earliest mathematicians.

Its skill comes from being able to have millions to billions of parameters if required, and to encode data across all of them.

It doesn't seem entirely unreasonable that it could use those (riding off our surprisingly good math skills) and create a model that represents a human with low enough noise that we wouldn't even notice.

(but also I'm in a similar, more chemically focused field, nanotechnology, so I have experience with nanoscopic-to-microscopic structures and what we can artificially build from them while not killing the biological side of things)

[–] Zeth0s@lemmy.world 3 points 1 year ago* (last edited 1 year ago) (1 children)

As you are in nanotechnologies: when I say "average out" I am talking in a statistical-mechanics sense, i.e. the macroscopic phenomenon arising from averaging over the multiple accessible microscopic configurations. Thoughts do not arise like this; they are the result of multiple complex non-linear stochastic signals. They depend on a huge number of single microscopic events that are not replicable in a computer as-is, and likely not reproducible in a parametrized function. Nothing wrong with that; we might be able to approximate human thoughts, but most likely not reproduce them.

What area of nanotechnology are you in? The main problem with nanotechnologies is that they cannot reproduce the complexity of their biological counterparts. Take carbon nanotubes: we cannot reproduce the features of the simpler ion channels with them, let alone the more complex human ones.

We could build nice models with interesting functionality, as we are doing with current AI: machines that can do logic, make decisions, and so on. Even a machine that can predict human thoughts. But they'll do it in their own way, while real human thoughts will most likely stay human, as the processes from which they arise are very human.

[–] Sethayy@sh.itjust.works 2 points 1 year ago (1 children)

Nanoengineering, and of course we're talking some years in the future, but if anything nano's convinced me we're all just math when you break it down - it just depends on how much math we can do.

Even a simple conversation can recently be broken down into tokenizable words, and bam, ChatGPT. Reasonably, the rest of our 'humanity' could be modeled following a similar trend until the Turing test is useless.
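The "tokenizable words" idea can be illustrated with a toy word-level tokenizer. Real systems like ChatGPT use subword schemes (e.g. BPE), so this sketch, with made-up helper names, only shows the text-to-integer-ids step:

```python
def build_vocab(corpus):
    # Collect every distinct lowercase word and assign it an integer id.
    words = sorted({w for line in corpus for w in line.lower().split()})
    return {w: i for i, w in enumerate(words)}

def tokenize(text, vocab):
    # Map known words to their ids; unknown words are dropped here
    # (real tokenizers never drop input -- another toy simplification).
    return [vocab[w] for w in text.lower().split() if w in vocab]
```

The model then operates purely on those integer sequences; everything "human" about the text has to survive the round trip through ids.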

[–] Zeth0s@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

What I mean is different. A dog thinks as a dog, a human thinks as a human, an AI will think as an AI. It will likely be able to pretend to think as a human, but it won't think as one.

It won't have a Proust's madeleine (sensory experiences that trigger epiphanies), the need to travel to some "sacred" location looking for spirituality, or miss the hometown where it grew up. Its thinking won't be driven by fears of spiders, the need for social recognition, or the pleasure of seeing naked women. Its thoughts won't depend on the daily diet, on the amount of sugar, fat, vitamins, or stimulants taken in.

These are simple examples, but in general it will think in a different way. Humans will tune it to pretend to be "as human as possible", but humans will remain unique

[–] k5nn@lemmy.world 4 points 1 year ago

I'd like to be proven wrong, but: empathy.

[–] Grayox@lemmy.ml 2 points 1 year ago
[–] Phegan@lemmy.world 2 points 1 year ago (1 children)
[–] K0W4LSK1@lemmy.ml 2 points 1 year ago

I was gonna say left turns lol

[–] marcell@lemmy.ml 2 points 1 year ago

Feel superior after being witty.
