this post was submitted on 15 Aug 2023
148 points (78.9% liked)
Biased in what way?
They seem to be patching it whenever something comes up, which is still not an acceptable solution because things keep coming up. One great example that I witnessed myself (but has since been patched): if you asked it for a joke about men, it would come up with a joke that degraded men, but if you asked it for a joke about women, it would chastise you for being insensitive to protected groups.
Now, it just comes up with a random joke and assigns the genders of the characters in the joke accordingly, but there are certainly still numerous other biases that either haven't been patched or won't be patched because they fit OpenAI's worldview. I know it's impossible to create a fully unbiased... anything (highly recommend "There is No Algorithm for Truth" by Tom Scott if you have the interest and free time), but LLMs trained on our speech have learned our biases and can behave in appalling ways at times.
Worse, the majority of the data used to train LLMs comes from the internet, a place that often brings out the worst and most polarized sides of us.
That's also very true. It's a big problem.