this post was submitted on 09 Oct 2023
Technology
Serious question: why should anyone care about using AI to make 9/11 memes? Boobs, at least, I can see the potential argument against (deepfakes and whatnot), but bad-taste jokes?
Are these image generation companies actually concerned they'll be sued because someone used their platform to make an image in bad taste? Even if such a thing were possible, wouldn't the responsibility be on the person who made it? Or at worst the platform that distributed the images, as opposed to the one that privately made it?
I don't see Adobe trying to stop people from making 9/11 memes in Photoshop, nor have they been sued over anything like that. I don't get why AI should be different. It's just a tool.
That's a great analogy, wish I'd thought of it
I guess it comes down to whether the courts decide to view AI as a tool like Photoshop, or a service, like an art commission. I think it should be the former, but I wouldn't be at all surprised if the dinosaurs in the US gov think it's the latter.
The problem for Adobe is that the AI work is being done on their computers, not yours, so it could be argued that they are liable for generated content. 'Could' because it's far from established but you can imagine how nervous this all must make their lawyers.
Protect the brand. That's it.
Microsoft doesn't want non-PC stuff being associated with the Bing brand.
It's what a ton of the 'safety' alignment work is about.
This generation of models doesn't pose any actual threat of hostile actions. The "GPT-4 lied and said it was human to try to buy chemical weapons" in the safety paper at release was comical if you read the full transcript.
But they pose a great deal of risk to brand control.
Yet still apparently not enough to run results through additional passes, which fix 99% of these issues, just at 2-3x the cost.
It's part of why articles like these are ridiculous. It's broadly a solved problem, it's just the cost/benefit of the solution isn't enough to justify it because (a) these issues are low impact and don't really matter for 98% of the audience, and (b) the robust fix is way more costly than the low hanging fruit chatbot applications can justify.
You mean bing, the porn Google? Yeah, that might be a tad too late
I'd guess that they are worried the IP owners will sue them for using their IP.
So Sonic's creators will say, your profiting by using Sonic and not paying us for the right to use him.
But I agree that deep fakes can be pretty bad.
You are profiting = you're profiting.