this post was submitted on 28 Aug 2023
532 points (94.9% liked)

Memes

45643 readers
1415 users here now

Rules:

  1. Be civil and nice.
  2. Try not to excessively repost; as a rule of thumb, wait at least 2 months before reposting if you have to.

founded 5 years ago
MODERATORS
 
all 26 comments
[–] notst@lemmy.world 31 points 1 year ago (4 children)

How many fingers do humans have?

Image Generation AI: yes.

[–] AngryCommieKender@lemmy.world 7 points 1 year ago (2 children)

To be fair, humans have problems drawing hands. Lord knows that when I did art a ton of my people had their hands in pockets, the mailbox, purses, anything so I didn't have to draw a hand.

[–] Perfide@reddthat.com 3 points 1 year ago

Honestly, this. Like yeah AI is particularly bad at hands, but also it's a legit meme with artists that hands are fucking hard.

[–] AnarchistArtificer@slrpnk.net 2 points 1 year ago

Yeah, I find the "solidarity" with AI on this to be hilarious

[–] match@pawb.social 6 points 1 year ago

hands are hard for everyone to draw

[–] Int_not_found@feddit.de 29 points 1 year ago* (last edited 1 year ago) (3 children)

As a Data Scientist I can say: the present danger from AI isn't the singularity. That's science fiction. It's the lack of comprehension of what an AI is & the push to involve it more and more in certain decision-making processes.

Current AIs are at their cores just statistical models that assign probabilities to answers based on previously observed data.

Governments and corporations around the globe try to use these models to automate decisions. One massive problem here is the lack of transparency and the human bias in the data.

For example, say a corporation uses an AI to determine who should be fired. You get fired, you try to complain, but you just get the answer that the machine had a wide variety of input data & you should have worked harder.

We have seen in the past that AIs focus on things we don't necessarily want them to focus on. In the example above, maybe your job performance was better than your colleague Dave's, but you are a PoC and Dave is white. In the past, PoCs were more prone to get fired, so the AI decided that you are the most probable answer to the question 'Who should we fire?'.

If a human had made the decision, you could interview them and uncover the underlying racism in it. Deciphering the decision of an AI is next to impossible.

So we slowly take away our ability to address wrongs in our bureaucratic processes by cementing them into statistical models, and thereby remove our ability to improve our societal values. AI has the potential to grind society's progress to a halt & drag easily fixable problems decades or centuries into the future.
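
As a rough illustration of that mechanism, here is a minimal sketch with purely synthetic data and made-up feature names (not anyone's real HR system): a model trained on biased historical firing decisions ends up reproducing the bias as a probability.

```python
# Minimal sketch: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: a job performance score and a protected attribute.
performance = rng.normal(0, 1, n)
is_poc = rng.integers(0, 2, n)

# Biased historical labels: in the past, PoC employees were fired more often
# regardless of performance, so the outcome partly depends on the attribute.
fired = ((1.5 * is_poc - performance + rng.normal(0, 1, n)) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([performance, is_poc]), fired)

# Two candidates: "you" perform better than Dave, but the model has learned
# the historical pattern and still ranks you as the more probable firing.
you = [[0.5, 1]]    # performance 0.5, PoC
dave = [[0.3, 0]]   # performance 0.3, white
print("P(fire you): ", model.predict_proba(you)[0, 1])
print("P(fire Dave):", model.predict_proba(dave)[0, 1])
```

You, with the better performance score, still get the higher firing probability, simply because people like you were fired more often in the training data.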

[–] PsychedSy@sh.itjust.works 11 points 1 year ago

So you eliminate race as a possible input. Now it finds proxies. Non-standard name? Address? What holidays you take off? Maternity leave gaps can signal parenthood. Patterns of time off/FMLA can align with treatments. It's hard to choose relevant inputs without choosing revealing inputs.
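
A minimal sketch of that effect, again with synthetic data and hypothetical feature names: drop the race column entirely, keep a correlated stand-in like a zip-code group, and the model quietly rebuilds the protected attribute anyway.

```python
# Minimal sketch: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

is_poc = rng.integers(0, 2, n)
# Segregated neighbourhoods: the zip-code group matches race ~80% of the time.
zip_group = (is_poc + (rng.random(n) < 0.2)) % 2
performance = rng.normal(0, 1, n)

# The same kind of biased historical outcomes as before, driven partly by race.
fired = ((1.5 * is_poc - performance + rng.normal(0, 1, n)) > 0.5).astype(int)

# Race is deliberately left out of the inputs; only the proxy remains.
X = np.column_stack([performance, zip_group])
model = LogisticRegression().fit(X, fired)

print("weight on the zip-code proxy:", model.coef_[0, 1])  # clearly non-zero
```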

[–] abraxas@lemmy.ml 9 points 1 year ago* (last edited 1 year ago)

The real-real issue is how many people don't bloody understand what AI/ML are, but are making huge decisions about where it's appropriate to use them.

I can't count how many times I've heard "Let's add AI to this page!" requested by non-tech execs in the last year, not knowing what does or doesn't work. Our most successful analytical report runs a simple 10-rule heuristic and nobody is the wiser.

So yeah, people trying to inject AI into hiring/firing. The people who did inject AI into predicting criminality. It all boils down to negligence, ignorance of your tools.

[–] Facebones@reddthat.com 7 points 1 year ago

Exactly. The "black box" nature of these systems should be a red flag for any practical usages.

[–] gerryflap@feddit.nl 22 points 1 year ago (3 children)

Who says it ends here? We've made tremendous progress in a short time. 10 years ago it was absolutely unthinkable that we'd be at a stage right now where we can generate these amazing images from text on consumer hardware, and that AI can write text in a way that could totally fool humans. Even as someone working in the field, I was fairly sceptical 5-6 years ago that we'd get here this fast.

[–] teichflamme@lemm.ee 7 points 1 year ago

Agree 100%.

We are at the start and the progress is incredibly fast and accelerating.

Even the improvement in image generation alone over the last year is something I wouldn't have believed.

[–] newIdentity@sh.itjust.works 2 points 1 year ago (1 children)

10 years ago? More like 5 years ago

[–] gerryflap@feddit.nl 2 points 1 year ago (1 children)

Yeah, indeed. Back then I was actually working with image generation and GANs, and it was just starting to take off. A year later or so, StyleGAN would absolutely blow my mind, generating realistic 1024x1024 images while I was still bumbling about with a measly 64x64 pixels. But even then I didn't foresee where this was going.

[–] newIdentity@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago)

Or remember when everyone was impressed with GauGAN in 2019/2020? We would've never guessed that just 2 or 3 years later we would have multiple competing models available.

Or when Dall-E mini had a hype last year and everyone was impressed with it?

Now there are even the first experiments that make videos out of a prompt, and they look pretty much exactly like the earlier iterations of diffusion models.

[–] abraxas@lemmy.ml 1 points 1 year ago* (last edited 1 year ago) (1 children)

While some underrepresented domains have made massive strides, this GPT thing has done relatively little for data science.

The important thing is that "what AI/ML can and cannot do" is not changing that much. Its successful application is what's changed. The idea of making AI libraries more accessible is huge, and it leads to stuff like this. But under the hood, OpenAI doesn't do much differently than other AI tools; it's just easier to use yourself. You can do more, faster, as computers get faster, but that seems to be limited by the endgame of Moore's Law anyway.

OpenAI runs on supercomputers now. It'll continue to run on supercomputers in the future. Instead of getting better, it has started to get worse at many things. Experts have always had a fairly good grasp of where it'll end. There are things AI was always expected to do better than humans at. And things it never will.

I mean, I expected AI image generation and better text quality. But I also expected the limits it currently has. And I've only done a little directly in the field.

[–] gerryflap@feddit.nl 1 points 1 year ago* (last edited 1 year ago)

But the fact that it can do so much is an awesome (and maybe scary) result in and of itself. These LLMs can write working code examples, write convincing stories, give advice, solve simple problems quite reliably, etc., all from just learning to predict the next word. I feel like people are moving the goalposts way too quickly, focussing so much on the mistakes it makes instead of the impressive feats that have been achieved. Having AI do all this was simply unthinkable a few years ago. And yes, OpenAI is currently using a lot of hardware, and ChatGPT might indeed have gotten worse. But none of that changes what has been achieved and how impressive it is.

Maybe it's because of all these overhyping clickbait articles that make reality seem disappointing. As someone in the field who's always been cynical about what would be possible, I just can't be anything other than impressed with the rate of progress. I was wrong with my predictions 5 years ago, and who knows where we'll go next.

[–] JohnDClay@sh.itjust.works 13 points 1 year ago (1 children)

I mean a hallucinating defecating AI that enslaves us all is also pretty scary.

[–] Static_Rocket@lemmy.world 3 points 1 year ago

If a hallucinating AI with marginal direction enslaves us all, we deserved it.

[–] Pons_Aelius@kbin.social 12 points 1 year ago

Chat-GPT (and other LLMs) is as self-aware as a TI-83.

[–] Dirk@lemmy.ml 11 points 1 year ago

Kenya not bully AI please?

[–] BetaDoggo_@lemmy.world 4 points 1 year ago

The issue is the marketing. If they only marketed language models for the things they can be trusted with (summarization, cleaning text, writing assistance, entertainment, etc.), there wouldn't be nearly as much debate.

The creators of the image generation models have done a much better job of this, partially because the limitations can be seen visually rather than requiring a fact check on every generation. They also aren't claiming that they're going to revolutionize all of society, which helps.

[–] hellfire103@sopuli.xyz 2 points 1 year ago

Pretty sure that's acid you're talking about.