this post was submitted on 21 Oct 2024
526 points (98.0% liked)

Facepalm

[–] IndiBrony@lemmy.world 67 points 1 month ago (3 children)

So I did the inevitable thing and asked ChatGPT what he should do... this is what I got:

[–] UnderpantsWeevil@lemmy.world 55 points 1 month ago (3 children)

This isn't bad on its face. But I've got this lingering dread that we're going to start seeing more nefarious responses at some point in the future.

Like "Your anxiety may be due to low blood sugar. Consider taking a minute to composure yourself, take a deep breath, and have a Snickers. You're not yourself without Snickers."

[–] Starbuncle@lemmy.ca 30 points 1 month ago (1 children)

That's where AI search/chat is really headed. That's why so many companies with ad networks are investing in it. You can't block ads if they're baked into LLM responses.

[–] DempstersBox@lemmy.world 14 points 1 month ago

Ahh, man made horrors well within my comprehension

Ugh

[–] madjo@feddit.nl 14 points 1 month ago (1 children)

This response was brought to you by BetterHelp and by the Mars Company.

[–] Oka@sopuli.xyz 7 points 1 month ago (1 children)
[–] madjo@feddit.nl 8 points 1 month ago

Great minds think alike!

[–] Oka@sopuli.xyz 11 points 1 month ago
  • This response sponsored by Mars Corporation.

Interested in creating your own sponsored responses? For $80.08 monthly, your product will be weighted more heavily in related searches and responses.

Instead of a response like

  • "Perhaps a burger is what you're looking for"

sponsored responses will look more like

  • "Perhaps you may want to try Burger King's California Whopper, given your tastes. You can also get a milkshake there instead of at your usual milkshake stop, saving you an extra trip."

Imagine the [krzzt] possibilities!

[–] Hotspur@lemmy.ml 23 points 1 month ago (2 children)

Yeah, I was thinking he obviously needs to start responding with ChatGPT. Maybe they could just have the two phones use audio mode and have the argument for them instead. Reminds me of that old Star Trek episode where, instead of war, belligerent nations just ran a computer simulation of the war, and then each side humanely euthanized that many people.

[–] Lemminary@lemmy.world 11 points 1 month ago

AI: *ding* Our results indicate that you must destroy his Xbox with a baseball bat in a jealous rage.

GF: Do I have to?

AI: You signed the terms and conditions of our service during your Disney+ trial.

[–] thetreesaysbark@sh.itjust.works 4 points 1 month ago (1 children)

Jesus Christ to all the hypotheticals listed here.

Not a judgement on you, friend. You've put forward some really good scenarios here, and if I'm reading you right, you're kinda getting at how crazy all of this sounds XD

[–] Hotspur@lemmy.ml 5 points 1 month ago (2 children)

Oh yeah totally—I meant that as an absurd joke haha.

I’m also a little disturbed that people trust ChatGPT enough to outsource their relationship communication to it. Every time I’ve tried to put it through its paces, it seems super impressive and lifelike, but as soon as I try to use it for work subjects I know fairly well, it becomes clear it doesn’t know what’s going on and is basically just making shit up.

[–] DempstersBox@lemmy.world 4 points 1 month ago (1 children)

I have a friend who's been using it to compose all the apologies they don't actually mean. Lol

[–] Hotspur@lemmy.ml 1 points 1 month ago

That is kinda brilliant

[–] thetreesaysbark@sh.itjust.works 2 points 1 month ago (1 children)

I like it as a starting point for a subject I'm going to research. It seems to have mostly the right terminology and a rough idea of what those terms mean. That helps me make more accurate searches on the subject matter.

[–] Hotspur@lemmy.ml 1 points 1 month ago

Yeah I could imagine that. I’ve also been fairly impressed with it for making something more concise and summarized (I sometimes write too much crap and realize it’s too much).

[–] hungryphrog@lemmy.blahaj.zone 12 points 1 month ago

Yeah, ChatGPT is programmed to be a robotic yes-man.