this post was submitted on 27 Jul 2023
33 points (100.0% liked)

top 13 comments
[–] MiddledAgedGuy@beehaw.org 20 points 1 year ago* (last edited 1 year ago)

The nuances of this are beyond me, but I feel like it's likely this would eventually be overcome.

Your best protection against having your images manipulated is to keep them off the internet if at all possible (we can only control what we post ourselves).

Basically I'm saying to treat the internet as hostile. Which is unfortunate, but accurate.

[–] drwho@beehaw.org 12 points 1 year ago (2 children)

This'll work for about a week. The devs of the image generation applications will get hold of some of the inner workings and figure out how to make their networks detect and ignore those minor manipulations. It'll turn into yet another arms race with no winner or loser, just rapid, incremental changes.

[–] frog@beehaw.org 7 points 1 year ago (1 children)

It might become a bit harder for the big AI companies to claim they're legitimate, trustworthy companies if there's evidence they're actively trying to get around what is effectively a form of DRM on images, though. They wouldn't just be scraping copyrighted works from the internet; they'd be designing their AIs to work around protections put on those images.

So it wouldn't stop bad actors, the same way that copyright protections and DRM in other forms of media don't stop piracy, but when a big company is doing it, they lose all the "we're not evil" credibility they're trying to create in the media right now.

[–] drwho@beehaw.org 1 points 1 year ago (1 children)

I think the best PR strategy there would be to just say nothing, just like they did when they were mirroring folks' sites to collect images and text to train their models on. By the time folks realized that their stuff had been used for training without their consent, it was far too late, and here we are.

In other words, it won't stop AI companies, because they're already bad actors and they're acting just like bad actors do anyway. "We're not evil" isn't even a thing they bother saying anymore, because everybody already knows it's only meaningless mouth noises (case in point: the Big G).

[–] frog@beehaw.org 2 points 1 year ago

I think they got away with a lot of stuff because they were largely operating outside of public awareness. I fully agree that they're bad actors, but they're definitely trying to pull the "we're not evil" thing. Nobody who keeps up with technology news believes it, but a lot of the general public seem to.

What I think these tools achieve, mostly, is allowing artists to make it very explicit that they don't give consent for their work to be used. It takes it out of the grey area of "anything on the internet can be used", because when an artist runs their photos/artworks through tools like this, that's a deliberate act to deny consent. It won't stop the AI developers from trying to get around it (and there's going to be an arms race between them and the developers of the tools), but it cuts off any "consent is assumed by posting it online" arguments (which were weak to start with, but still). Honestly, these tools are a short-term measure until the law catches up. And it seems very unlikely that the concept of copyright is going to be abolished for AIs.

[–] lloram239@feddit.de 2 points 1 year ago* (last edited 1 year ago)

This won't even work for a week: when you manipulate a picture with AI, what's in the picture is completely irrelevant to the AI. The AI generates a completely new image and copies a region of the old image over it, like good old Photoshop. All the AI needs is a bit of guidance so the new image and the original line up, and that's easy to produce; it can be done by numerous methods, or even manually in seconds if absolutely necessary.
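
To illustrate, here's a minimal inpainting sketch using the diffusers library (the model name, file names, and prompt are just examples, not the setup from the article):

```python
# Sketch of AI inpainting: the masked region is regenerated from scratch
# and composited over the original, so per-pixel "immunization" inside
# the mask never reaches the output. Assumes Hugging Face's `diffusers`.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("protected_photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="a person standing on a beach",  # whatever the manipulator wants
    image=init_image,
    mask_image=mask,
).images[0]
result.save("manipulated.png")
```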

I didn't even notice that anything was different when manipulating the example from the article, and it took me a whole five seconds.

If you want to catch manipulations, we already have tools for that: just put a cryptographic signature on all the stuff you publish. That won't stop manipulation, but it makes it very easy to show that an image was altered. That said, those tools have been around for 25 years and never gained any traction; people just don't care that much.
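
For example, a minimal sign-and-verify sketch with Python's cryptography package (Ed25519; filenames are placeholders, and key distribution is the hard part I'm skipping):

```python
# Publisher signs the exact bytes of the published image; anyone with the
# public key can later check whether those bytes were touched.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(open("original.png", "rb").read())

# Verifier:
try:
    public_key.verify(signature, open("downloaded.png", "rb").read())
    print("Image is byte-for-byte what was published.")
except InvalidSignature:
    print("Image was modified (even an innocent re-encode trips this).")
```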

[–] SIGSEGV@waveform.social 10 points 1 year ago (1 children)

This would be so easily bypassed that the whole concept, while technologically cool, is completely useless. Someone could apply a very subtle smoothing filter and then do whatever they wanted with it afterwards via Stable Diffusion (or use any of the myriad alternative methods to remove the "protection").
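
For instance, something as simple as this Pillow sketch (filenames are placeholders; whether it defeats any particular protection is an empirical question):

```python
# A slight blur plus a lossy re-encode perturbs pixel values while
# leaving the image visually unchanged to a human.
from PIL import Image, ImageFilter

img = Image.open("protected.png").convert("RGB")
img.filter(ImageFilter.GaussianBlur(radius=1)).save("cleaned.jpg", quality=90)
```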

[–] drwho@beehaw.org 2 points 1 year ago

Or, just keep synthesizing the images from scratch and compositing them.

[–] CreativeTensors@beehaw.org 4 points 1 year ago* (last edited 1 year ago)

What happens if you take a picture of the screen with your phone? Not ideal in terms of quality, but I'm willing to bet it would work as a stupidly easy bypass.

What I'd like to see is an image format that is digitally signed by default, so that modifications can be flagged and the source can be verified. Yeah, people could still modify them for malicious use cases (I don't think it will ever be possible to prevent that, unfortunately), but at least we'd know it wasn't the original. This doesn't work for cases where people want to be anonymous, but for things like social media selfies and posts where people are visible, it could help prevent people from spreading deepfakes and claiming they're real.
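
A rough sketch of the idea (not an existing standard; key handling is hand-waved and names are placeholders). The signature covers the raw pixels and rides along inside PNG metadata:

```python
# Sign the pixel data (not the file bytes, which change when metadata is
# written) and embed the signature in a PNG text chunk with Pillow.
from PIL import Image
from PIL.PngImagePlugin import PngInfo
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()

img = Image.open("selfie.png").convert("RGB")
info = PngInfo()
info.add_text("signature", key.sign(img.tobytes()).hex())
img.save("selfie_signed.png", pnginfo=info)

# Verifier: recompute over the pixels and compare; PNG is lossless, so
# untouched pixels round-trip exactly. Raises InvalidSignature if edited.
signed = Image.open("selfie_signed.png")
sig = bytes.fromhex(signed.text["signature"])
key.public_key().verify(sig, signed.convert("RGB").tobytes())
```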

[–] Keesrif@beehaw.org 2 points 1 year ago (1 children)

I'm not well versed in this at all, but would this also work if the "attacker" were to take a screenshot of the image they wanted to alter and plug that into an AI tool? From the article it sounds more like metadata tweaking, which would be bypassed by a screenshot.

[–] frog@beehaw.org 5 points 1 year ago (1 children)

There's another tool called Glaze that does this for artwork, and what that one does is selectively edit individual pixels in a way that makes the artwork look normal to a human but renders as a distorted mess to an AI. Because it's the pixels within the image being edited, not the metadata, a screenshot preserves the edits. I think it's also resistant to blending and smoothing tools, because what the AI reads is different from what the human eye sees, and blending and smoothing tools are designed with the human eye in mind.
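
A toy sketch of that pixel-level idea (emphatically not Glaze's actual algorithm, which targets the encoders used by image generators; here a stock torchvision classifier stands in, and the filename is a placeholder):

```python
# Projected gradient ascent: nudge pixels within an invisible budget so
# the model's output shifts as much as possible, using logits as a
# stand-in for "what the AI sees".
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
x = to_tensor(Image.open("artwork.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    original = model(x)

eps = 4 / 255  # max per-channel pixel change: invisible to humans
delta = torch.zeros_like(x, requires_grad=True)

for _ in range(50):
    loss = torch.nn.functional.mse_loss(model(x + delta), original)
    loss.backward()
    with torch.no_grad():
        delta += (0.5 / 255) * delta.grad.sign()  # step uphill
        delta.clamp_(-eps, eps)                   # stay inside the budget
        delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels valid
        delta.grad.zero_()

transforms.ToPILImage()((x + delta).squeeze(0).detach()).save("artwork_cloaked.png")
```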

[–] Keesrif@beehaw.org 2 points 1 year ago

Interesting, thanks for the insight!

[–] Thalestr@beehaw.org 2 points 1 year ago

This is promising. Glaze was simply too intrusive for most people to be willing to use it regularly.
