this post was submitted on 10 Jul 2023
164 points (94.1% liked)
Technology
you are viewing a single comment's thread
If I'm understanding this correctly, you can be held liable for whatever ChatGPT produces in response to your queries if any of it turns out to be damaging.
It makes sense, right?
They produced a language model. It does nothing more than predict the next word. It will lie all the time; that's part of how it works. It makes stuff up from the input it gets.
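To make the "predict the next word" point concrete, here's a toy sketch (my own assumption: a simple bigram counter, nothing like how ChatGPT is actually built): score candidate next words, pick one, append it, and repeat. Notice that the output sounds fluent without there being any notion of "true" or "false" anywhere in the loop.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" that picks the most frequent
# next word seen in a tiny made-up corpus. Real LLMs score tokens with
# a neural network, but the core generation loop is the same idea:
# rank candidates for the next word, pick one, append, repeat.
corpus = "the model predicts the next word and the next word after that".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def continue_text(prompt_word: str, steps: int = 5) -> str:
    words = [prompt_word]
    for _ in range(steps):
        candidates = next_counts.get(words[-1])
        if not candidates:
            break  # never saw anything after this word; stop
        words.append(candidates.most_common(1)[0][0])  # greedy choice
    return " ".join(words)

print(continue_text("the"))  # e.g. "the next word and the next"
```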
If you post that stuff online without checking it and it contains lies about people, you absolutely should be liable for that. I don't see a problem with that.
It's a tool. Can't sue the manufacturer if you injure someone with it.
This isn't true in the least. Purchase a tool and look through the manual: every section marked "danger", "warning", or "caution" is in there because someone sued the company after a user or a bystander got hurt.
You are right. Seems I confused common sense with reality.