Don't ask ChatGPT medical questions. You may as well shake a Magic 8 Ball. Your legs are probably fine. Bodies just do weird stuff sometimes.
I know my legs are fine. All I want to know is a name for this sensation and what causes it. Yes, I want to know about the weird stuff the body does. Why is it wrong to ask ChatGPT or Google?
LLMs are stochastic parrots. They just repeat the phrases most often used together in their training data in association with the words in your prompt. It's like seeking medical advice from the predictive text on your phone keyboard.
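If it helps, here's the "predictive text" idea as a toy sketch. This is nothing like a real LLM's architecture (the training string and word-level bigram counts are made up for illustration); it just shows the pick-a-likely-next-word loop I mean:

```python
import random
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny training
# string, then generate by repeatedly sampling a likely next word. Real LLMs
# learn far richer statistics over tokens, but the loop is the same shape:
# predict the next token, append it, repeat.
training_text = "my legs feel weird my legs feel numb my arms feel fine"

bigrams = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        choices, counts = zip(*followers.items())
        # Sample proportionally to how often each word followed the last one.
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(generate("my"))  # e.g. "my legs feel weird my arms feel"
```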
Why is this question considered medical advice? Also, considering that most common facts are parroted correctly by LLMs, why is it wrong to search for answers there first?
Yeah, it's a very reasonable question for an LLM, especially if you Google the name for it and read a reputable article afterward.
OK, fair. I guess if you're not planning to act on it anyway, then the stakes are pretty low. I don't agree that LLMs reliably get basic information correct. "Glue is not pizza sauce" seems like a common fact to me, but Google's LLM disagrees, for example.
That wasn’t something an LLM came up with, though. That was done by a system that uses an LLM. My guess is the system retrieves a small set of results and then just uses the LLM to phrase a response to the user’s query by referencing the links in question.
It'd be like saying to someone, "rephrase the relevant parts of this document to answer the user's question," when the only relevant part is a joke. There's not much else you can do there.
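In other words, something roughly like this. The `fetch_top_results` helper and the model name are made up for the sketch, and the OpenAI Python client here is just a stand-in for whatever model Google actually runs; it's only meant to show the retrieve-then-rephrase pattern I'm guessing at:

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()

def fetch_top_results(query: str) -> list[dict]:
    """Hypothetical stand-in for whatever retrieval step the real system uses.
    Returns a few {"url": ..., "snippet": ...} records; here it hands back a joke."""
    return [
        {"url": "https://example.com/old-forum-joke",
         "snippet": "You can add glue to pizza sauce to make it stickier."},
    ]

def answer(query: str) -> str:
    results = fetch_top_results(query)
    context = "\n".join(f"- {r['url']}: {r['snippet']}" for r in results)
    # The LLM never "decides" whether the glue claim is true; it is told to
    # rephrase whatever the retrieval step handed it, joke or not.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rephrase the relevant parts of these results to answer the user."},
            {"role": "user",
             "content": f"Results:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer("how do I keep cheese from sliding off pizza?"))
```

The point being: in a setup like this, the quality of the answer is capped by whatever the retrieval step returns, no matter how good the LLM is at rephrasing it.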