That’s literally all AI is designed to do. Given an input, it just tries to output an expected response.
Yeah, no. LLMs predict what comes next, not what someone wants to hear.
Try asking it if it likes you.
Didn't exactly make my heart throb, but if it does that for you, you've got a low bar.
That… isn’t telling you what you want to hear.
LLMs are literally just complex autocorrect. They don't weight their responses based on what a user wants to hear (unless explicitly instructed to); they simply return the most statistically likely response they can generate.
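Here's a toy sketch of what "complex autocorrect" means (made-up corpus, count tables instead of a neural network, so purely illustrative): the output is just whichever token most often followed the previous one in the training data.

```python
# Toy "complex autocorrect": a bigram model over a made-up corpus.
# The corpus is illustrative; real LLMs do the same kind of next-token
# prediction at vastly larger scale with neural networks, not count tables.
from collections import Counter, defaultdict

corpus = "the earth is an oblate spheroid . the earth is round .".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    # Greedy pick: the statistically most likely follower, with no
    # notion of what the user would prefer to hear.
    return follows[prev].most_common(1)[0][0]

print(next_token("earth"))  # -> "is", no matter what you want it to say
```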
Tell it to talk like a pirate and it will pattern-match to pirate talk. It's not doing it because you want it to, but because you gave it a "pre-prompt" to talk like a pirate, and it produced the most likely continuation.
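A minimal sketch of that "pre-prompt", assuming the OpenAI Python client (the model name and the printed reply are illustrative): the pirate voice comes from the system message sitting in the context window, not from the model guessing what you want.

```python
# Sketch of a "pre-prompt" (system message), assuming the OpenAI Python
# client with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[
        # The "pre-prompt": the model pattern-matches to this instruction.
        {"role": "system", "content": "Talk like a pirate."},
        {"role": "user", "content": "What shape is the Earth?"},
    ],
)
print(reply.choices[0].message.content)  # e.g. "Arr, she be an oblate spheroid!"
```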
Yes, this can seem like it's telling you what you want to hear, but go ask it what shape the Earth is. Then tell it you want the Earth to be flat and ask again. Both times the answer will be an oblate spheroid, because it neither knows nor cares what you want.
Now, if you say “Imagine the world is flat” first, yeah it’ll tell you it’s flat. Not because you want it to, but because you’re explicitly handing it “new information” that you want it to incorporate into its response.
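Same assumed OpenAI-style client as above: the only difference between the two calls below is the premise placed into the context, and the completion follows that context, not your wishes.

```python
# Sketch of in-context conditioning, again assuming the OpenAI Python
# client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(ask("What shape is the world?"))  # an oblate spheroid
print(ask("Imagine the world is flat. What shape is the world?"))  # flat, per the premise
```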
Claude nails it again.
Not really "wants" so much as "expects", but that's what AI is designed to do.
What you're saying is not factual. LLMs predict what comes next based on the parameters set during the training process. It might at times say what you're expecting, but try contradicting information it knows to be factual and see how far that gets you.
I think you're confusing agreeableness for a validation buddy. For a product like this to work, it has to be inviting.
Now you’re just splitting hairs.