this post was submitted on 17 Aug 2023
483 points (96.0% liked)

Technology


cross-posted from: https://nom.mom/post/121481

OpenAI could be fined up to $150,000 for each piece of infringing content.

https://arstechnica.com/tech-policy/2023/08/report-potential-nyt-lawsuit-could-force-openai-to-wipe-chatgpt-and-start-over/#comments

top 50 comments
[–] BURN@lemmy.world 186 points 1 year ago (16 children)

Good

AI should not be given free rein to train on anything and everything we’ve ever created. Copyright holders should be able to decide whether their works are allowed to be used for model training, especially commercial model training. We’re not going to stop a hobbyist, but Google/Microsoft/OpenAI should be paying for the materials they’re using and compensating the creators.

[–] SatanicNotMessianic@lemmy.ml 122 points 1 year ago* (last edited 1 year ago) (5 children)

While that’s understandable, I think it’s important to recognize that this is something where we’re going to have to tread pretty carefully.

If a human wants to become a writer, we tell them to read. If you want to write science fiction, you should study the craft of writing, ranging from plots and storylines to character development to Stephen King’s advice on avoiding adverbs. You also have to read science fiction so you know what has been done, how the genre handles storytelling, what is allowed versus shunned, and how the genre has evolved and where it’s going. The point is not to write exactly like Heinlein (god forbid), but to throw Heinlein into the mix with other classic and contemporary authors.

Likewise, if you want to study fine art, you do so by studying other artists. You learn about composition, perspective, and color by studying their works. You study art history, broken down geographically and by period. You study da Vinci’s subtle use of shading and Mondrian’s bold colors and geometry. Art students will sit in museums for hours reproducing paintings or working from photographs.

Generative AI is similar. Being software (and at a fairly early stage at that), it’s both more naive and in some ways more powerful than human artists. Once trained, it can crank out a hundred paintings or short stories per hour, but some of the people in those paintings will have 14 fingers and the stories might be formulaic and dull. AI art is always better when glanced at on your phone than when examined in detail on a big screen.

In both the cases of human learners and generative AI, a neural network(-like) structure is being conditioned to associate weights between concepts, whether it’s how to paint a picture or how to create one by using 1000 words.
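
For the curious, here is a minimal sketch of what “conditioning weights” looks like in code. Everything in it (the features, the data, the single-layer setup) is an invented toy example; real models have billions of weights, but the core move of nudging numbers until inputs map to the right outputs is the same.

```python
# Toy "concepts": does a text snippet look like sci-fi or not?
# Features: [mentions_space, mentions_magic] - invented for illustration.
examples = [
    ([1.0, 0.0], 1.0),  # space, no magic  -> sci-fi
    ([0.0, 1.0], 0.0),  # magic, no space  -> not sci-fi
    ([1.0, 1.0], 1.0),  # both             -> call it sci-fi
    ([0.0, 0.0], 0.0),  # neither          -> not sci-fi
]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for epoch in range(100):
    for features, target in examples:
        # Forward pass: weighted sum of the features.
        prediction = sum(w * x for w, x in zip(weights, features)) + bias
        error = prediction - target
        # Update: nudge each weight a little to shrink the error.
        for i, x in enumerate(features):
            weights[i] -= lr * error * x
        bias -= lr * error

# The learned "association" lives entirely in these few numbers.
print(weights, bias)
```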

A friend of mine who was an attorney used to say “bad facts make bad law.” It means that misinterpretation, over-generalization, politicization, and a sense of urgency can make for both bad legislation and bad court decisions. That’s especially true when the legislators and courts aren’t well educated in the subjects they’re asked to judge.

In a sense, it’s a new technology that we don’t fully understand - and by “we” I’m including the researchers. It’s theoretically and in some ways mechanically grounded in old technology that we also don’t understand - biological neural networks and complex adaptive systems.

We wouldn’t object to a journalism student reading articles online to learn how to write like a reporter, and we rightfully feel anger over the situation of someone like Aaron Swartz. As a scientist, I want my papers read by as many people as possible. I’ve paid thousands of dollars per paper to make sure they’re freely available and not stuck behind a paywall. On the other hand, I was paid while writing those papers. I am not paid for the paper, but writing the paper was part of my job.

I realize that is a case of the copyright holder (me) opening up my work to whoever wants a copy. On the other other hand, we would find it strange if an author forbade their work being read by someone who wants to learn from it, even if they want to learn how to write. We live in a time where technology makes things like DRM possible, which attempts to make it difficult or impossible to create a copy of that work. We live in societies that will send people to prison for copying literal bits of information without a license to do so. You can play a game, and you can make a similar game. You can play a thousand games, and make one that blends different elements of all of them. But if you violate IP, you can be sued.

I think that’s what it comes down to. We need to figure out what constitutes intellectual property and what rights go with it. What constitutes cultural property, and what rights do people have to works made available for reading or viewing? It’s easy to say that a company shouldn’t be able to hack open a paywall to get at WSJ content, but does the same go for people posting openly accessible content on Medium?

I don’t have the answers, and I do want people treated fairly. I recognize the tremendous potential for abuse of LLMs in generating viral propaganda, and I recognize that in another generation they may start making a real impact on the economy in terms of dislocating people. I’m not against legislation. I don’t expect the industry to regulate itself, because that’s not how the world works. I’d just like for it to be done deliberately and realistically and with the understanding that we’re not going to get it right and will have to keep tuning the laws as the technology and our understanding continue to evolve.

[–] hypnotoad__@lemmy.world 21 points 1 year ago

Sorry this is a bit too level-headed for me, can you please repeat with a bullhorn, and use 4-letter words instead? I need to know who to blame here.

[–] chaircat@lemdro.id 14 points 1 year ago

This is an astonishingly well-written, nuanced, and level-headed response. Really on a level I'm not used to seeing on this platform.

[–] Swervish@lemmy.ml 61 points 1 year ago* (last edited 1 year ago) (6 children)

Not trying to argue or troll, but I really don't get this take, maybe I'm just naive though.

Like yea, fuck Big Data, but...

Humans do this naturally, we consume data, we copy data, sometimes for profit. When a program does it, people freak out?

edit well fuck me for taking 10 minutes to write my comment, seems this was already said and covered as I was typing mine lol

[–] QHC@lemmy.world 16 points 1 year ago (1 children)

It's just a natural extension of the concept that entities have some kind of ownership of their creation and thus some say over how it's used. We already do this for humans and human-based organizations, so why would a program not need to follow the same rules?

[–] FaceDeer@kbin.social 28 points 1 year ago (1 children)

Because we don't already do this. In fact, the raw knowledge contained in a copyrighted work is explicitly not copyrighted, and people can do with it as they please. Only the specific expression of that knowledge can be copyrighted.

An AI model doesn't contain the copyrighted works that went into training it. It only contains the concepts that were learned from them.
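
A rough back-of-the-envelope calculation makes the “doesn’t contain the works” point concrete. The figures below are approximate public numbers for Llama-2 (the ~2 trillion training tokens come from Meta’s paper; bytes-per-token is a rule of thumb), so treat this as an illustration rather than an exact accounting:

```python
# Could a 7B-parameter model losslessly store its training set? Rough math.

params = 7e9                 # Llama-2 7B parameters
bytes_per_param = 2          # 16-bit weights
model_size = params * bytes_per_param          # ~14 GB

training_tokens = 2e12       # reported ~2 trillion training tokens
bytes_per_token = 4          # rule of thumb: a token is ~4 bytes of text
data_size = training_tokens * bytes_per_token  # ~8 TB

print(f"model: {model_size / 1e9:.0f} GB")
print(f"data:  {data_size / 1e12:.0f} TB")
print(f"ratio: 1 : {data_size / model_size:.0f}")  # roughly 1 : 570
```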

[–] BURN@lemmy.world 6 points 1 year ago (2 children)

There’s no learning of concepts. That’s why models hallucinate so frequently. They don’t “know” anything, they’re doing a lot of math based on what they’ve seen before and essentially taking the best guess at what the next word is.

There very much is learning of concepts. This is completely provable. You can give it problems it has never seen before and it will come up with good solutions.
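
For what it’s worth, here is a toy sketch of the mechanical step both sides are describing: the model produces a score (a logit) for every candidate next word, and a softmax turns those scores into probabilities. The words and numbers below are invented for illustration; in a real model they come out of the network, not a hand-written table.

```python
import math

# Hypothetical raw scores for the word after "the cat sat on the".
logits = {"mat": 5.1, "floor": 3.8, "moon": 0.2}

# Softmax: turn scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {p:.3f}")

# Whether weights that produce good guesses on problems the model has
# never seen amount to "concepts" is exactly what the comments above
# disagree about.
```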

[–] SIGSEGV@sh.itjust.works 8 points 1 year ago (9 children)

Very much like humans do. Many people think that somehow their brain is special, but really, you're just neurons behaving as neurons do, which can be modeled mathematically.

[–] lily33@lemm.ee 40 points 1 year ago* (last edited 1 year ago) (11 children)

No.

  • A pen manufacturer should not be able to decide what people can and can't write with their pens.
  • A computer manufacturer should not be able to limit how people use their computers (I know they do - especially on phones and consoles - and seem to want to do this to PCs too now - but they shouldn't).
  • In that exact same vein, writers should not be able to tell people what they can use the books they purchased for.


We 100% need to ensure that automation and AI benefits everyone, not a few select companies. But copyright is totally the wrong mechanism for that.

[–] BURN@lemmy.world 35 points 1 year ago (1 children)

A pen is not a creative work. A creative work is much different from something that’s mass-produced.

Nobody is limiting how people can use their PC. This would be regulations targeted at commercial use and monetization.

Writers can already do that. Commercial licensing is a thing.

[–] lily33@lemm.ee 11 points 1 year ago (6 children)

Nobody is limiting how people can use their PC. This would be regulations targeted at commercial use and monetization.

... Google's proposed Web Integrity API seems like a move in that direction to me.

But that's beside the point; I was trying to establish the principle that people who make things shouldn't be able to impose limitations on how those things are used later on.

A pen is not a creative work. A creative work is much different from something that’s mass-produced.

Why should that difference matter, in particular when it comes to the principle I mentioned?

[–] Rottcodd@kbin.social 10 points 1 year ago

Why should that difference matter, in particular when it comes to the principle I mentioned?

Because creative works are rather obviously fundamentally different from physical objects, in spite of a number of shared qualities.

Like physical objects, they can be distinguished one from another - the text of Moby Dick is notably different from the text of Waiting for Godot, for instance.

More to the point, like physical objects, they're products of applied labor - the text of Moby Dick exists only because Herman Melville labored to bring it into existence.

However, they're notably different from physical objects insofar as they're quite simply NOT physical objects. The text of Moby Dick - the thing that Melville labored to create - really exists only conceptually. It's of course presented in a physical form - generally as a printed book - but that physical form is not really the thing under consideration, and more importantly, the thing to which copyright law applies (or in the case of Moby Dick, used to apply). The thing under consideration is more fundamental than that - the original composition.

And, bluntly, that distinction matters and has to be stipulated because selectively ignoring it in order to equivocate on the concept of rightful property is central to the no-IP position, as illustrated by your inaccurate comparison to a pen.

Nobody is trying to control the use of pens (or computers, as they were being compared to). The dispute is over the use of original compositions - compositions that are at least arguably, and certainly under the law, somebody else's property.

[–] walrusintraining@lemmy.world 6 points 1 year ago* (last edited 1 year ago) (7 children)

It’s not like AI is using works to create something new. ChatGPT is similar to someone buying 10 copies of different books, putting them into one book as a collection of stories, then mass-producing and selling the “new” book. It’s the same thing, just much more convoluted.

Edit: to reply to your main point, people who make things should absolutely be able to impose limitations on how they are used. That’s what copyright is. Someone else made a song; can you freely use that song in your movie because you listened to it once? Not without their permission. You wrote a book; can I buy a copy and then use it to make more copies to sell? Not without your permission.

[–] PupBiru@kbin.social 6 points 1 year ago (5 children)

it’s not even close to that black and white… i’d say it’s a much more grey area:

say you buy a bunch of books by the same author and emulate their style… that’s perfectly acceptable, until you start using their characters

if you wrote a research paper about the linguistic and statistical information that makes up an author’s style, that also wouldn’t be a problem

so there’s something beyond just the author’s “style” that they think is being infringed. we need to sort out exactly where the line is. what’s the extension to these 2 ideas that makes training an LLM a problem?

[–] fkn@lemmy.world 9 points 1 year ago (5 children)

You made two arguments for why they shouldn't be able to train on the work for free and then said that they can with the third?

Did OpenAI pay for the material? If not, then it's illegal.

Additionally, copyright, trademarks, and patents are about reproduction, not use.

If you bought a pen that was patented, then made a copy of the pen and sold it as yours, that would be illegal. That's the analogy for what OpenAI is doing with books.

Plagiarism and reproduction of text is the part that is illegal. If you take the "AI" part out, what OpenAI is doing is blatantly illegal.

[–] lily33@lemm.ee 6 points 1 year ago* (last edited 1 year ago) (8 children)

Just now, I tried to get Llama-2 (I'm not using OpenAI's stuff 'cause they're not open) to reproduce the first few paragraphs of Harry Potter and the Philosopher's Stone, and it didn't work at all. It created something vaguely resembling it, but with lots of made-up stuff that doesn't make much sense. I certainly can't use it to read the book or pirate it.
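
For anyone who wants to reproduce this kind of test locally, a sketch using the Hugging Face transformers library might look like the following. The model name assumes you have been granted access to the Llama-2 weights on the Hub, and the prompt is just the book's opening line; greedy decoding (no sampling) gives the model its best shot at verbatim recall.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumes you have access to this repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Mr. and Mrs. Dursley, of number four, Privet Drive,"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: always pick the single most likely next token.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# If the book were memorized verbatim, this would continue the opening
# word for word; in practice you typically get a loose paraphrase.
```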

[–] DarkWasp@lemmy.world 9 points 1 year ago* (last edited 1 year ago) (1 children)

All of the examples you listed have nothing to do with how OpenAI was created and set up. It was trained on copyrighted work; how is that remotely comparable to purchasing a pen?

[–] Moobythegoldensock@lemm.ee 8 points 1 year ago (1 children)

Would a more apt comparison be a band paying royalties to all of their influences?

[–] coheedcollapse@lemmy.world 23 points 1 year ago* (last edited 1 year ago)

With that mindset, only the powerful will have access to these models.

Places like Reddit, Google, Facebook, etc. - places that can rope you into giving away rights to your data with TOS stipulations.

Locking down everything available on the Internet by piling more bullshit onto already draconian copyright rules isn't the answer. It surprises the shit out of me how quickly fellow artists, writers, and creatives piled in alongside Disney, the RIAA, and other former enemies the second they started perceiving ML as a threat to their livelihood.

I do believe restrictions should be looked into when it comes to large organizations and industries replacing creators with ML, but attacking open ML models directly is going to result in the common folk losing access to the tools while corporations continue to work exactly as they do right now, paying for access to locked-down ML built on content from companies that trade in huge amounts of data.

Not to mention it's going to give the giants who have been leveraging their copyright powers against just about everyone on the internet more power to do just that. That's the last thing we need.

[–] Falmarri@lemmy.world 14 points 1 year ago (2 children)

What's the basis for this? Why can a human read a thing and base their knowledge on it, but not a machine?

[–] BURN@lemmy.world 17 points 1 year ago (10 children)

Because a human understands and transforms the work. The machine runs statistical analysis and regurgitates a mix of what it was given. There’s no understanding or transformation; it’s just a best statistical guess at which word comes next. Humans add to the work, LLMs don’t.

Machines do not learn. LLMs do not “know” anything. They make guesses based on their inputs. The reason they appear to be so right is the scale of data they’re trained on.
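
To make the “guesses based on their inputs” claim concrete in the simplest possible form, here is a toy bigram model that just counts which word followed which in a made-up training text and then guesses the most frequent follower. Real LLMs are enormously more sophisticated, but this is the caricature being argued about.

```python
from collections import Counter, defaultdict

# Tiny invented "training corpus".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the bird ."
).split()

# Count, for each word, which words followed it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def guess_next(word):
    # Return the most frequent follower seen in training, if any.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_next("the"))   # 'cat' - seen most often after 'the'
print(guess_next("sat"))   # 'on'
print(guess_next("moon"))  # None - never seen it, so no guess at all
```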

This is going to become a crazy copyright battle that will likely lead to the entirety of copyright law being rewritten.

[–] gcheliotis@lemmy.world 10 points 1 year ago* (last edited 1 year ago) (1 children)

That machine is a commercial product. Quite unlike a human being, in essence, purpose and function. So I do not think the comparison is valid here unless it were perhaps a sentient artificial being, free to act of its own accord. But that is not what we’re talking about here. We must not be carried away by our imaginations, these language models are (often proprietary and for profit) products.

[–] Falmarri@lemmy.world 7 points 1 year ago (2 children)

I don't see how that's relevant. A company can pay someone to read copyrighted work, learn from it, and then perform a task for the benefit of the company related to the learning.

[–] ArmokGoB@lemmy.dbzer0.com 13 points 1 year ago (4 children)

I disagree. I think that there should be zero regulation of the datasets as long as the produced content is noticeably derivative, in the same way that humans can produce derivative works using other tools.

[–] Hangglide@lemmy.world 13 points 1 year ago (9 children)

Bullshit. If I learn engineering from a textbook, or a website, and then go on to design a cool new widget that makes millions, the copyright holder of the textbook or website should get zero dollars from me.

It should be no different for an AI.

[–] TheDarkKnight@lemmy.world 11 points 1 year ago (3 children)

I understand the sentiment (and agree on moral grounds), but I think this would put us at an extreme disadvantage in the development of this technology compared to competing nations. Unless you can get all countries to agree and somehow enforce this, I think it dramatically hinders our ability to push forward in this space.

[–] makyo@lemmy.world 11 points 1 year ago (1 children)

I think any LLM should be required to be free to use. They can pay for extra bells and whistles like document upload but the core model must be free. They're free to make their billions, but it shouldn't be on a model built by scraping all the information of humanity for free.

[–] DavyJones@lemmy.dbzer0.com 127 points 1 year ago (4 children)

When OpenAI commits copyright infringement no one bats an eye, but when I do it everyone downvotes me

[–] DrM@feddit.de 60 points 1 year ago (3 children)

Yeah, I don't get it. ChatGPT is not "fair use" and there's no credit given to anyone; it's a solid case against them.

[–] makyo@lemmy.world 19 points 1 year ago (2 children)

I just wonder if they'll get out of it because LLMs do reword the information instead of spitting it back out verbatim. It's the same reason I think the image generators are safe from copyright law - it's just different enough that they could plausibly convince a judge with a fair use argument.

What bothers me even more is all the text they had to scrape to create ChatGPT... That seems like a novel problem for the legal system because you know there's no way they paid for all of it.

[–] DrM@feddit.de 8 points 1 year ago* (last edited 1 year ago) (2 children)

It doesn't matter. For it to be fair use under American law they would need to give full credit, which they obviously don't.

[–] FiskFisk33@startrek.website 6 points 1 year ago

I'm not 100% sure where I stand, but for argument's sake: are you sure about that? It sure is transformative!

[–] troyunrau@lemmy.ca 17 points 1 year ago

Classic joke, something like: if you owe the bank $100, it's your problem; if you owe them a million, it's their problem.

[–] ArmokGoB@lemmy.dbzer0.com 6 points 1 year ago

Only on lemmy.world

[–] Steeve@lemmy.ca 69 points 1 year ago

I'll take things that won't happen for $200

[–] errer@lemmy.world 41 points 1 year ago (4 children)

Could this headline possibly have any more weasel word qualifiers? Lots of things “could” happen.

[–] BURN@lemmy.world 7 points 1 year ago

Modern media is full of them so nobody can be sued over anything. And there’s probably little fact in at least a few parts of the article, so they’re presenting it as “speculative.”

[–] Silverseren@kbin.social 35 points 1 year ago

Too many people have copies of the full dataset at this point for such a thing to mean much. And too many other versions have been produced since for it to matter either.

[–] Stinkywinks@lemmy.world 22 points 1 year ago (1 children)

Oh God, I better not learn anything from a book or I'm fucked.

Fuck man I’ve watched sooooo many movies… the MPAA is gonna be on my ass…

[–] nutsack@lemmy.world 15 points 1 year ago* (last edited 1 year ago) (2 children)

can't they just move to a country where copyright doesn't exist?

[–] golli@lemm.ee 18 points 1 year ago (1 children)

They could, but presumably they want to do business and sell their products in countries that do have those copyright protections, or to other companies based there.

[–] autotldr@lemmings.world 8 points 1 year ago (1 children)

This is the best summary I could come up with:


The result, experts speculate, could be devastating to OpenAI, including the destruction of ChatGPT's dataset and fines up to $150,000 per infringing piece of content.

If the Times were to follow through and sue ChatGPT-maker OpenAI, NPR suggested that the lawsuit could become "the most high-profile" legal battle yet over copyright protection since ChatGPT's explosively popular launch.

This speculation comes a month after Sarah Silverman joined other popular authors suing OpenAI over similar concerns, seeking to protect the copyright of their books.

As of this month, the Times' TOS prohibits any use of its content for "the development of any software program, including, but not limited to, training a machine learning or artificial intelligence (AI) system."

In the memo, the Times' chief product officer, Alex Hardiman, and deputy managing editor Sam Dolnick said a top "fear" for the company was "protecting our rights" against generative AI tools.

the memo asked, echoing a question being raised in newsrooms that are beginning to weigh the benefits and risks of generative AI.


I'm a bot and I'm open source!
