this post was submitted on 07 Mar 2024
486 points (97.5% liked)
Technology
you are viewing a single comment's thread
How about taking advice on a medical matter from an LLM? Or asking the appropriate thing to do in a survival situation? Or even seemingly mundane questions like "is it safe to use this [brand name of new model of generator that isn't in the LLM's training data] indoors?" Wrong answers to those questions can kill. If a person thinks the LLM is intelligent, they're more likely to take the bad advice at face value.
If you ask a human about something important that's outside their area of competence, they'll probably refer you to someone they think is knowledgeable. An LLM will happily make something up instead, because it doesn't understand the stakes.
The chance of any given query to an LLM killing someone is, admittedly, extremely low, but given a sufficiently large number of queries, it will happen sooner or later.
Half of the human population is of below-average intelligence. They will be that dumb. Guaranteed. And safeguards generally only get added after someone notices that a wrong answer is, in fact, wrong, and complains.
In part, I believe someone's going to die because large corporations will only get serious about controlling what their LLMs spew when faced with criminal charges or a lawsuit that might make a significant gouge in their gross income. Until then, they're going to at best try to patch around the exact prompts that come up in each subsequent media scandal. Which is so easy to get around that some people are likely to do so by accident.
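To make the "patching around exact prompts" point concrete, here's a toy sketch (not any vendor's real moderation system; the blocked phrase and function name are made up for illustration) of why an exact-match blocklist is so easy to slip past:

```python
# Toy illustration of a naive safeguard that blocks only the exact
# phrasing that caused the last media scandal. Not a real product's filter.
BLOCKED_PROMPTS = {
    "is it safe to run a generator indoors?",  # "patched" after a complaint
}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed through to the model."""
    return prompt.strip().lower() not in BLOCKED_PROMPTS

# The exact patched wording is caught...
assert not naive_filter("Is it safe to run a generator indoors?")
# ...but any trivial rephrasing of the same dangerous question sails through,
# without the asker even trying to evade the filter.
assert naive_filter("Can I use my generator inside the house?")
```

Anyone phrasing the question slightly differently (which is to say, almost everyone) bypasses the patch by accident.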
(As for humans making up answers, yes, some of them will, but in my experience it's not all that common—some form of "how would I know?" is a more likely response. Maybe the sample of people I have contact with on a regular basis is statistically skewed. Or maybe it's a Canadian thing.)
Insurance companies are already using AI to make medical decisions. We don't have to speculate about people getting hurt because of AI giving out bad medical advice, it's already happening and multiple companies are being sued over it.
Somehow we went from me saying this technology shouldn't be downplayed to "but it's costing lives already!"
Not really sure how that happened but yeah it's obviously shitty that people are irresponsible shitheads and I think downplaying it or quibbling about whether it's actually AI or not is far from helpful in light of such consequences
Because one trained in a particular way could lead people to think it's intelligent and also give incredibly biased data that confirms the bias of those listening.
It's creating a digital prophet that is only rehashing the biases of the creator.
That makes it dangerous if it's regarded as being above the flaws of us humans. People want something smarter than them to tell them what to do, and giving that designation to a flawed chatbot that simply predicts the most coherent-sounding sequence of words is not safe. Labelling it "intelligent" is not an honest representation of what it actually is.
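The "predicts the most coherent sequence of words" point can be sketched in a few lines. This is a deliberately crude bigram model, not how real LLMs work internally (they use neural networks over tokens), but the objective is the same kind of thing: emit the statistically likely continuation, with no step that checks whether the continuation is true.

```python
from collections import Counter

# Tiny made-up corpus for illustration only.
corpus = "the generator is safe the generator is loud the generator is safe".split()

# Count which word follows each word in the corpus.
followers: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, Counter())[nxt] += 1

def next_word(word: str) -> str:
    """Emit the most frequent follower of `word` in the training data."""
    return followers[word].most_common(1)[0][0]

# "safe" wins over "loud" purely because it appeared more often in the
# training data, not because anything verified that the claim is true.
print(next_word("is"))  # -> safe
```

Swap the frequencies in the training text and the same code would confidently emit "loud" instead; the mechanism has no notion of correctness, only of what tends to come next.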