this post was submitted on 04 Sep 2023
14 points (88.9% liked)

AI

4151 readers

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

founded 3 years ago
 

Increasingly, the authors of works being used to train large language models are complaining (and rightfully so) that they never gave permission for such a use case. If I were an LLM company, I'd be seriously looking for a Plan B right now, whether that's engaging publishing companies to come up with new licensing options, paying 1,000,000 grad students to write 1,000,000 lines of prose, or something else entirely.

top 9 comments
[–] j4k3@lemmy.world 8 points 1 year ago (1 children)

Non-issue political BS. AI has no more capability than a human who half-assedly read the CliffsNotes on any book. It has a similar awareness to anyone who knows about the work and its basic writing style. Complaining about this is as stupid as thought-policing people for being aware of a book and its content without paying for it. The joke media is terrible with this yellow journalism. I'm actually playing with open-source offline AI models; I've tried training one on a book, and the results are useless.
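For context on what "training one on a book" involves: before any fine-tuning, the raw text has to be split into training samples. This is a minimal, hypothetical sketch of one common approach, overlapping fixed-size chunks; the chunk sizes and the character-level splitting are illustrative assumptions, not any particular toolkit's method.

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split raw text into overlapping fixed-size character chunks.

    The overlap preserves some context across chunk boundaries so the
    model doesn't only see passages that start mid-sentence.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [
        text[start:start + chunk_size]
        for start in range(0, max(len(text) - overlap, 1), step)
    ]


book = "some very long book text " * 200   # stand-in for a real book
samples = chunk_text(book)
```

Each sample would then be tokenized and fed to whatever fine-tuning setup the model uses. A single book yields only a few thousand such samples, one plausible reason results on one book alone are so poor.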

The main motivation behind all of the garbage hype media is a propaganda campaign to limit AI to proprietary, privacy-invasive garbage. The open-source models are an existential threat, and there is no going back now. This is like the early days of proprietary internet frameworks: everyone involved in those went out of business when open-source options became available. LLMs are as big a change as the internet itself. For example, you want a search engine that works? A Llama 2 70B is far better at responding with what you're actually looking for than any current search engine. This makes stalkerware Big Tech obsolete.

[–] Peanutbjelly@sopuli.xyz 4 points 1 year ago

People keep saying the same about diffusion models as well. I guess we just want Adobe and other wealthy companies to be the only ones with access to proprietary datasets large enough to make futuristic art tools.

Pay subscriptions to your overlords or suffer.

[–] FaceDeer@kbin.social 2 points 1 year ago (2 children)

Or move to a country with more permissive IP laws to do your AI work.

[–] will_a113@lemmy.ml 1 points 1 year ago (1 children)

It's hard to trade with the rest of the world when you're not a party to the Berne Convention.

[–] FaceDeer@kbin.social 1 points 1 year ago (1 children)

The Berne Convention contains an enumerated list of things it recognizes as restrictable by IP law. Training AIs is not among them.

[–] will_a113@lemmy.ml 1 points 1 year ago (1 children)

Derivative works are, though, and the cases slowly plodding through the court system right now are going to demand a decision on whether an LLM or its output counts as a derivative work.

[–] FaceDeer@kbin.social 4 points 1 year ago (1 children)

For it to be a derivative work, you're going to have to prove that the model contains a substantial portion of the material it's supposedly derived from. Good luck with that; neural nets simply don't work that way.
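The underlying argument here is about capacity: the model's weights are far smaller than the text it was trained on, so it cannot store the corpus verbatim in general. A rough back-of-envelope sketch, using the publicly reported figure of roughly 2 trillion training tokens for Llama 2 and assumed averages (fp16 weights, ~4 bytes of text per token) that are estimates, not exact measurements:

```python
# Model size: 70 billion parameters at 2 bytes each (fp16).
params = 70e9
model_bytes = params * 2            # ~140 GB of weights

# Training corpus: ~2 trillion tokens (Llama 2's reported figure),
# assuming ~4 bytes of raw text per token on average.
tokens = 2e12
corpus_bytes = tokens * 4           # ~8 TB of text

ratio = corpus_bytes / model_bytes  # corpus is ~57x larger than the model
print(f"model ~{model_bytes/1e9:.0f} GB, corpus ~{corpus_bytes/1e12:.0f} TB, "
      f"ratio ~{ratio:.0f}x")
```

Under these assumptions the training text is dozens of times larger than the weights, which is why wholesale verbatim storage is implausible, though, as the reply below notes, that does not rule out memorization of specific popular passages.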

[–] will_a113@lemmy.ml 1 points 1 year ago

That's not really true, though. The biggest reason these cases were able to get traction is that, when prompted in certain specific ways, researchers were able to reproduce substantial portions of copyrighted works: https://arstechnica.com/tech-policy/2023/08/openai-disputes-authors-claims-that-every-chatgpt-response-is-a-derivative-work/
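Checks like the ones described generally boil down to measuring verbatim overlap between model output and a source text. This is a minimal, hypothetical illustration of such a measurement (a word-level longest-common-substring search), not the actual methodology used in the cited research:

```python
def longest_verbatim_run(output: str, source: str) -> int:
    """Length, in words, of the longest run of consecutive words in
    `output` that also appears verbatim in `source` (O(n*m) DP)."""
    a, b = output.split(), source.split()
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best
```

A long run (dozens of words or more) is strong evidence of memorization rather than coincidental phrasing, which is the kind of signal plaintiffs point to.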

[–] doinks@discuss.online 1 points 1 year ago

So many people are urging policymakers to kneecap AI development and cede all that progress to China.