this post was submitted on 27 Jul 2023
33 points (100.0% liked)

Technology

[–] drwho@beehaw.org 12 points 1 year ago (2 children)

This'll work for about a week. The image generation applications' devs will get hold of some of the inner workings and figure out how to make their networks detect and ignore those minor manipulations. It'll turn into yet another arms race with no winner or loser, just rapid, incremental changes.
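As a rough illustration of why such defenses are fragile: adversarial "cloaks" live in tiny per-pixel offsets, so even a crude smoothing pass can largely wash them out before training. Everything below (the flat grey image, the ±1 cloak) is made up for illustration; real cloaks and real denoisers are more sophisticated, but the arms-race dynamic is the same.

```python
import numpy as np

def strip_perturbation(image, kernel=3):
    """Average each pixel with its neighbours to wash out
    small per-pixel adversarial perturbations (a crude denoising pass)."""
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (kernel * kernel)

# A flat grey image with a tiny +/-1 "cloak" added to every pixel:
rng = np.random.default_rng(0)
clean = np.full((32, 32), 128.0)
cloaked = clean + rng.choice([-1.0, 1.0], size=clean.shape)

smoothed = strip_perturbation(cloaked)
# The smoothed image is much closer to the clean one than the cloaked input:
print(np.abs(smoothed - clean).mean() < np.abs(cloaked - clean).mean())
```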

[–] frog@beehaw.org 7 points 1 year ago (1 children)

It might become a bit harder for the big AI companies to claim they're legitimate, trustworthy companies if there's evidence they're actively trying to get around what is effectively a form of DRM on images, though. They wouldn't just be scraping copyrighted works from the internet; they'd be designing their AIs to work around protections put on those images.

So it wouldn't stop bad actors, the same way that copyright protections and DRM in other forms of media don't stop piracy, but when a big company is doing it, they lose all the "we're not evil" credibility they're trying to create in the media right now.

[–] drwho@beehaw.org 1 point 1 year ago (1 children)

I think the best PR strategy there would be to just say nothing. Just like they did when they were mirroring folks' sites to collect images and text to train their models on. By the time folks realized that their stuff had been used for training without their consent it was far too late, and here we are.

In other words, it won't stop AI companies, because they're already bad actors, and they're acting just like bad actors do anyway. "We're not evil" isn't even a thing they bother saying anymore, because everybody already knows it's just meaningless mouth noises (case in point, the Big G).

[–] frog@beehaw.org 2 points 1 year ago

I think they got away with a lot of stuff because they were largely operating outside of public awareness. I fully agree that they're bad actors, but they're definitely trying to pull the "we're not evil" thing. Nobody who keeps up with technology news believes it, but a lot of the general public seem to.

What I think these tools achieve, mostly, is allowing artists to make it very explicit that they don't give consent for their work to be used. It takes it out of the grey area of "anything on the internet can be used", because when an artist runs their photos/artworks through tools like this, that's a deliberate act to deny consent. It won't stop the AI developers from trying to get around it (and there's going to be an arms race between them and the developers of the tools), but it cuts off any "consent is assumed by posting it online" arguments (which were weak to start with, but still). Honestly, these tools are a short-term measure until the law catches up. And it seems very unlikely that the concept of copyright is going to be abolished for AIs.

[–] lloram239@feddit.de 2 points 1 year ago* (last edited 1 year ago)

This won't even work for a week, because when you manipulate a picture with AI, what is in the picture is completely irrelevant to the AI. The AI generates a completely new image and copies a region of the old image over, like good old Photoshop. All the AI needs is a bit of guidance so the new image and the original line up, but that's easy to produce; it can be done by numerous methods, or even manually in seconds if absolutely necessary.
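The copy-over step is just a masked paste. A minimal sketch (toy 4x4 arrays standing in for real images, a hypothetical `composite` helper, not any particular tool's API):

```python
import numpy as np

def composite(original, generated, mask):
    """Paste generated pixels over the original wherever mask is set,
    as in AI inpainting: pixels outside the mask come straight from the
    original, and pixels inside the mask are replaced entirely, so any
    per-pixel 'cloak' in the replaced region is simply thrown away."""
    mask = mask.astype(bool)
    out = original.copy()
    out[mask] = generated[mask]
    return out

original = np.zeros((4, 4), dtype=np.uint8)       # the "old" image
generated = np.full((4, 4), 255, dtype=np.uint8)  # the freshly generated image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # region to replace

result = composite(original, generated, mask)
print(result[0, 0], result[1, 1])  # -> 0 255
```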

I didn't even notice that anything was different when manipulating the example from the article, and it took me all of five seconds.

If you want to detect manipulations, we already have tools for that: just put a cryptographic signature on everything you publish. That won't stop manipulation, but it makes it very easy to see that an image was altered. That said, these tools have been around for 25 years and never gained any traction; people just don't care that much.
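As a rough illustration of the idea, using Python's stdlib `hmac` as a stand-in (a real publishing workflow would use a public-key signature such as Ed25519 or a GPG signature, so anyone can verify without holding the secret key; the key and image bytes here are made up):

```python
import hashlib
import hmac

# Hypothetical secret key held by the publisher.
KEY = b"publisher-secret-key"

def sign_image(image_bytes: bytes) -> str:
    """Compute a tag to publish alongside the image."""
    return hmac.new(KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Any single-byte edit changes the digest, so tampering shows up."""
    return hmac.compare_digest(sign_image(image_bytes), tag)

original = b"\x89PNG...raw image bytes..."
tag = sign_image(original)

print(verify_image(original, tag))            # -> True: untouched image
print(verify_image(original + b"\x00", tag))  # -> False: manipulated image
```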