this post was submitted on 11 Jul 2023
64 points (100.0% liked)


A long-form response to the concerns, comments, and general principles many people raised in the post about authors suing companies creating LLMs.

[–] Spudger@lemmy.sdf.org 14 points 1 year ago (9 children)

I don't know what the authors are complaining about. All the AI is doing is trawling through a lexicon of words and rearranging them into an order that will sell books. It's exactly what authors do. This is about money.

[–] ag_roberston_author@beehaw.org 37 points 1 year ago (2 children)

Hi, it's me the author!

First of all, thanks for reading.

In the article I explain that it is not exactly what authors do: reading and writing are inherently human activities, and the consumption and processing of massive amounts of data (far more than a human with a photographic memory could process in a hundred million lifetimes) is a completely different process.

I also point out that I don't have a problem with LLMs as a concept, and I'm actually excited about what they can do, but that they are inherently different from humans and should be treated as such by the law.

My main point is that authors should have the ability to declare that they don't want their work used as training data for megacorporations to profit from without their consent.

So, yes, in a way it is about money, but the money in question is the money OpenAI and Meta are making off the backs of millions of unpaid and often unsuspecting people.

[–] triprotic@beehaw.org 10 points 1 year ago (1 children)

I think it's an interesting topic, thanks for the article.

It does start to raise some interesting questions: if an author doesn't want their book to be ingested by an LLM, then what is acceptable? Should all LLMs now be ignorant of that work? What about summaries or reviews of that work?

What if, from a summary of a book, an LLM could extrapolate what's in the book? Or write a book similar to the original? Does that become a new work, or does it still fall under copyright?

I do fear that copyright laws will muddy the waters and slow down the development of LLMs, and have a greater impact than any government standards ever will!

[–] baconbrand@beehaw.org 11 points 1 year ago

I'm all for muddy waters and slow development of LLMs at this juncture. The world is enough of a capitalist horrorshow and so far all this tech provides is a faster way to accelerate the already ridiculously wide class divide. Just my cynical luddite take of the day...
