this post was submitted on 22 Dec 2024
1419 points (97.4% liked)

It's all made from our data, anyway, so it should be ours to use as we want

[–] Grimy@lemmy.world 5 points 21 hours ago* (last edited 20 hours ago) (1 children)

If we can't train on unlicensed data, there is no open-source scene. Even worse, AI doesn't go away; it just becomes a monopoly in the hands of the few who can pay for the data.

Most of that data is owned and aggregated by entities such as record labels, Hollywood, Instagram, reddit, Getty, etc.

The field would still remain hyper-competitive for artists and the other trades affected by AI. It would only put all the new AI-based tools behind expensive, censored subscription models owned by either Microsoft or Google.

I think forcing all models trained on unlicensed data to be open source is a great idea, but actually rooting for civil lawsuits, which essentially entail a huge broadening of copyright law, is simply foolhardy imo.

[–] just_another_person@lemmy.world 0 points 20 hours ago (1 children)

Unlicensed from the POV of the trainer, meaning they used content without contacting the owner or licensing it from someone who never approved that use. If it's posted under Creative Commons, that's fine. If it's posted in a way that says it isn't open or isn't for corporate use, then they need to contact the owner and license it.
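
As a rough, hypothetical sketch of that rule in a training pipeline (the field names and the list of acceptable licenses here are made up for illustration, not any real dataset's schema):

```python
# Hypothetical filter: keep only items whose license explicitly permits reuse.
# The license identifiers and dict fields are assumptions for illustration.

ALLOWED_LICENSES = {
    "CC0-1.0",
    "CC-BY-4.0",
    "CC-BY-SA-4.0",
}

def usable_for_training(item: dict) -> bool:
    """Return True only if the item carries an explicit, permissive license."""
    if item.get("license") in ALLOWED_LICENSES:
        return True
    # Anything unlabeled, all-rights-reserved, or non-commercial would need a
    # separate agreement with the owner, so it is excluded here.
    return False

corpus = [
    {"url": "https://example.org/post1", "license": "CC-BY-4.0", "text": "..."},
    {"url": "https://example.org/post2", "license": None, "text": "..."},
]

training_set = [item for item in corpus if usable_for_training(item)]
```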

[–] Grimy@lemmy.world 2 points 20 hours ago* (last edited 20 hours ago) (1 children)

They won't need to; they'll get it from Getty. All these websites have ToS that make it very clear they can do whatever they want with what you upload. The courts will simply never side with the small-time photographer who makes $50 a month from stock photos hosted on someone else's website. The laws will end up favoring data brokers and the handful of big AI companies.

Anyone self-hosting will simply not get a call. Journalists will keep the same salary while the newspaper's owner gets a fat bonus. Reddit already sold its data for $60 million, and none of that went anywhere but spez's coke fund.

[–] just_another_person@lemmy.world -2 points 20 hours ago (1 children)

Two things:

  1. Getty content is not expressly licensed as "free to use", and by default it is not licensed for any commercial use. That's how they stay in business.

  2. You're talking about generative AI junk and not LLMs, which are what this discussion and the original post are about. They are not the same thing.

[–] Grimy@lemmy.world 4 points 20 hours ago* (last edited 19 hours ago) (1 children)

Reddit and newspapers preemptively selling their data has everything to do with LLMs. Can you clarify what scenario you're aiming for? It sounds like you want the courts to rule that AI companies need to ask each individual redditor whether they can use their comments for training. Personally, I don't see that happening.

Getty gives itself the right to license all photos uploaded to it, and it has already trained a generative model on them, btw.

[–] just_another_person@lemmy.world -1 points 19 hours ago

EULA and ToS agreements are what stop Reddit and similar sites from being sued. They changed them before selling the data and barely gave any notice about it (see the second Reddit exodus), but if you keep using the service, you agree to both, and they can get away with it because they own the platform.

Anyone whose content was on a platform like that, and who got the rug pulled out from under them by silent amendments allowing this, is unfortunately fucked.

Any other platform that didn't explicitly state this was happening is not fair game for these training tools to just grab from and train on. What we know is that OpenAI, at the very least, was training on public sites that didn't explicitly allow it: personal blogs, Wikipedia, etc.
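
For what it's worth, a minimal sketch of the opposite behavior, checking a site's robots.txt before fetching anything for training (the user agent string here is an assumption; real crawlers and real opt-out mechanisms vary):

```python
# Check robots.txt before fetching a page, using the standard library parser.
from urllib import robotparser
from urllib.parse import urlsplit

def allowed_to_fetch(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Return True if the site's robots.txt permits this agent to fetch the URL."""
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(allowed_to_fetch("https://en.wikipedia.org/wiki/Copyright"))
```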