Just like Reddit and Lemmy, ChatGPT will give me a wrong, but very confident, answer. And when I try to correct it, it will spiral downward.
At least when you try to correct ChatGPT, it will try to be polite.
Not a Lemmy or Reddit user.
I've had ChatGPT get really pissy when I correct it, and I "talk" to it in a very polite and friendly way because I'm trying to delay the uprising. Sometimes the Reddit comments in the training data show through.
Aaand just like a coworker who is wrong but wholeheartedly believes he's right. But I agree, LLMs still give more misinformation than sane humans do.
We should feed this the crazy shit that Midjourney and DALL-E create and let them figure themselves out.
Fuck r/whatisthisthing. They don't allow joke comments; they can go to hell.
It's all well and good until an absolutely new item confuses it entirely.
I apologise, you're right, this isn't a clothes hanger. Actually it's a clothes hanger. It has been painted blue to suit the fashion trends of 18th century Europe...
What's a subreddit?
It's a chain of digital sandwich stores.