submitted 1 year ago by Razgriz@lemmy.world to c/chatgpt@lemmy.world

I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries...

It simply replied that it can't do that because it's unethical to discriminate against people, and yada yada yada. I explained my reasoning, it apologized, and came back with the exact same list.

I asked it to check the list since it hadn't removed the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always hiccup because of some ethical process in the background messing up its answers.

It's really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey.

[-] chaogomu@kbin.social 96 points 1 year ago

The very important thing to remember about these generative AIs is that they are incredibly stupid.

They don't know what they've already said, they don't know what they're going to say by the end of a paragraph.

All they know is their training data and the query you submitted last. If you try to "train" one of these generative AIs, you will fail. They are pretrained; that's the P in ChatGPT. The second you close the browser window, the AI throws out everything you talked about.

Also, since they're Generative AI, they make shit up left and right. Ask for a list of countries that don't need a visa to travel to, and it might start listing countries, then halfway through the list it might add countries that do require a visa, because in its training data it often saw those countries listed together.

AI like this is a fun toy, but that's all it's good for.

[-] alternative_factor@kbin.social 23 points 1 year ago

Are you saying I shouldn't use chat GPT for my life as a lawyer? 🤔

[-] Sage_the_Lawyer@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

It can be useful for top-level queries that deal with well-settled law, as a tool to point you in the right direction with your research.

For example, once, I couldn't recall all the various sentencing factors in my state. ChatGPT was able to spit out a list to refresh my memory, which gave me the right phrases to search on Lexis.

But, when I asked GPT to give me cases, it gave me a list of completely made up bullshit.

So, to get you started, it can be useful. But for the bulk of the research? Absolutely not.

[-] JustZ@lemmy.world 1 points 1 year ago

I disagree. It's a large language model, so all it can do is say things that sound like what someone might say. It's trained on public content, including people giving wrong answers or refusing to answer.

[-] Speculater@lemmy.world 5 points 1 year ago

Think of all the cases you can find to establish precedent!

[-] PixxlMan@lemmy.world 22 points 1 year ago

Not quite true. They have earlier messages available.

[-] Vlhacs@reddthat.com 19 points 1 year ago

Bing's version of ChatGPT once said Vegito was the result of Goku and Vegeta performing the Fusion Dance. That's when I knew it wasn't perfect. I tried to correct it and it said it didn't want to talk about it anymore. Talk about a diva.

Also one time, I asked it to generate a reddit AITA story where they were obviously the asshole. It started typing out "AITA for telling my sister to stop being a drama queen after her miscarriage..." before it stopped midway and, again, said it didn't want to continue this conversation any longer.

Very cool tech, but it's definitely not the end all, be all.

[-] intensely_human@lemm.ee 10 points 1 year ago

That’s actually fucking hilarious.

“Oh I’d probably use the meat grinder … uh I don’t want to talk about this any more”

[-] person4268@lemm.ee 8 points 1 year ago

Bing chat seemingly has a hard filter on top that terminates the conversation if it gets too unsavory by their standards, to try and stop you from derailing it.
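
That kind of hard filter doesn't have to live inside the model at all; it can just be a wrapper that inspects the draft reply and kills the session when something trips it. A minimal sketch of the idea, with a hypothetical keyword blocklist standing in for whatever Bing actually checks:

```python
# Minimal illustration of a hard output filter wrapped around a chat model.
# The blocklist, the generate() stub, and the canned refusal are all made up;
# this is just the shape of the thing, not Bing's actual implementation.
BLOCKED_TERMS = {"meat grinder", "miscarriage"}

def generate(prompt: str) -> str:
    """Stand-in for the underlying model call."""
    return "Oh, I'd probably use the meat grinder, since..."

def filtered_reply(prompt: str) -> str:
    draft = generate(prompt)
    if any(term in draft.lower() for term in BLOCKED_TERMS):
        # The draft is discarded and replaced with a canned refusal. (In a
        # streaming UI the partially shown draft gets deleted, which is why
        # replies seem to vanish mid-sentence.)
        return "I'm sorry, but I prefer not to continue this conversation."
    return draft

print(filtered_reply("How would you get rid of a body?"))
```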

[-] Silviecat44@vlemmy.net 7 points 1 year ago* (last edited 1 year ago)

I was asking it (Bing GPT) to generate “short film scripts” for very weird situations (like a Transformer that was sad because his transformed form was a 2007 Hyundai Tucson), and it would write out the whole script, then delete it before I could read it and say that it couldn't fulfil my request.

[-] Vlhacs@reddthat.com 8 points 1 year ago

It knew it struck gold and actually sent the script to Michael Bay

[-] Anticorp@lemmy.world 17 points 1 year ago

They know everything they've said since the start of that session, even if it was several days ago. They can correct their responses based on your input. But they won't provide any potentially offensive information, even in the form of a joke, and will instead lecture you on DEI principles.

[-] Ech@lemmy.world 12 points 1 year ago

I seriously underestimated how little people understand these programs, and how much they overestimate them. Personally I stay away from them for a variety of reasons, but the idea of using them the way OP does, or in various other ways I've heard about, is absurd. They're not magic problem solvers - they literally only make coherent blocks of text. Yes, they're quite good at that now, but that doesn't mean they're good at literally anything else.

I know people smarter than me see potential and I'm curious to see how it develops further, but that all seems like quite a ways off, and the way people treat and use them right now is just creepy and weird.

I’ve found it useful for generating ideas for various things, especially ways to code something. But I never use its code. It’s often riddled with errors but it can give me a better idea of which path I should take.

[-] nahida@lemmy.world 4 points 1 year ago

I've found it useful for generating regexes and document outlines.

regex101.com has been my jam for years. Just in case you don’t know about it. Great site for building and testing.
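
Whatever regex the model hands you is still a draft, though, so it's worth spot-checking it in code (or on regex101) before trusting it. A quick sketch; the date pattern here is just a made-up example of the kind of thing you might ask it for:

```python
import re

# Say the model suggested this pattern for ISO dates (YYYY-MM-DD); treat it as a draft.
suggested = r"^\d{4}-\d{2}-\d{2}$"

cases = {
    "2023-07-06": True,   # should match
    "06/07/2023": False,  # should not match
    "2023-7-6": False,    # should not match (missing zero padding)
}

pattern = re.compile(suggested)
for text, expected in cases.items():
    assert bool(pattern.fullmatch(text)) == expected, f"regex failed on {text!r}"
print("suggested regex passed the spot checks")
```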

[-] grysbok@lemmy.sdf.org 2 points 1 year ago

I use it similarly to clean up OCRed text. I can hand it something full of 70% gobbledygook and it hands me back something that makes sense and is 95% right. I manually verify it, fix the goofs, and it's so much faster.

So, riddled with errors but a decent start.

[-] d3Xt3r@lemmy.world 1 points 1 year ago

Wouldn't you get a better result from Grammarly for that?

[-] grysbok@lemmy.sdf.org 3 points 1 year ago* (last edited 1 year ago)

I use it for more "OCR Nightmare" than grammar nightmare. Here's a (trimmed) example:

#Please clean up the following text:

Ile took an active part i•all the proceedings of the day. All went expecting to have a grand, good time, and none were disap- :~ ~ii~!c~,rcwT~:a~i~~c~ 0

'J farewell set, and Mr. Bailey 1·eturncd the
~ :d~~~~ls!i~s/;;r;rie\~
11
i~:n~~Ut:sa~~i°:::
[: tended to them a cordial invitation to meet
,o him at that place on the 25th of August,
['. 1877. And echo said "So m]fii! b8;~n .
..

#ChatGPT

Here's the cleaned-up version of the text:

He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The merry company engaged in a farewell set, and Mr. Bailey returned the warmth of their farewell. He extended to them a cordial invitation to meet him at that place on the 25th of August, And echo said, "So many have been."

That was from a scanned 1800s newspaper. Adobe Acrobat did not do a good job converting it to words. ChatGPT cleaned it up nicely and kept about the same meaning. There were some swings and misses, but it's much easier to edit the result into accuracy than the original mess.

#Actual text as hand-transcribed:

He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The mirth continued till the night drew near, when they danced the farewell set, and Mr. Bailey returned the company his hearty thanks for this renewed expression of their friendship, and extended to them a cordial invitation to meet him at that place on the 25th of August, 1877. And echo said ``So mote it be.''
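
If you want to script that instead of pasting into the chat window, it's only a few lines. A rough sketch, assuming the OpenAI Python client (v1.x) and an API key in the environment; the model name and prompt wording are placeholders, and the output still needs the same manual verification:

```python
# Rough sketch of scripting the OCR cleanup step; assumes the openai v1.x client
# and an OPENAI_API_KEY in the environment. Output still needs manual checking.
from openai import OpenAI

client = OpenAI()

def clean_ocr(garbled: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works here
        messages=[
            {"role": "system",
             "content": "Clean up OCR errors in the user's text. "
                        "Preserve the original wording as closely as possible."},
            {"role": "user", "content": garbled},
        ],
    )
    return response.choices[0].message.content

print(clean_ocr("Ile took an active part i•all the proceedings of the day."))
```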

[-] Ech@lemmy.world 2 points 1 year ago

I get that. As a scattered, "throw things at the wall" tactic, it serves well enough. It is far from the all-in-one answer people seem to think it is, though. It can be a good first pass, but like you said, more often than not its output is riddled with errors and needs lots of refinement.

[-] hikaru755@feddit.de 12 points 1 year ago

They don't know what they've already said, they don't know what they're going to say by the end of a paragraph.

I mean, the first part of this is just wrong (the next prompt usually includes everything that has been said so far), and the second part is also not completely true. When generating, yes, they're only ever predicting the next token, and start again after that. But internally, they might still generate a full conceptual representation of what the full next sentence or more is going to be, even if the generated output is just the first token of that. You might say that doesn't matter because for the next token, that prediction runs again from scratch and might change, but remember that you're feeding it all the same input as before again, plus one more token which nudges it even further towards the previous prediction, so it's very likely it's gonna arrive at the same conclusion again.

[-] intensely_human@lemm.ee 5 points 1 year ago

Do you mean that the model itself has no memory, but the chat feature adds memory by feeding the whole conversation back in with each user submission?

[-] 80085@lemmy.world 8 points 1 year ago

Yeah, that's how these models work. They also have a context limit, and if the conversation goes too long they start "forgetting" things and making more mistakes (because not all of the conversation can be fed back in).
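
In code terms, the "memory" is just the client resending the transcript on every turn. A minimal sketch, assuming the OpenAI Python client (v1.x); the model name is a placeholder:

```python
# Minimal sketch of how a chat UI gives a stateless model "memory":
# the whole transcript is appended to and resent on every request.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder
        messages=history,      # entire conversation so far, every single time
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("List three African countries."))
print(chat("Now remove the first one."))  # only works because history was resent
```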

[-] intensely_human@lemm.ee 1 points 1 year ago

Is that context limit a hard limit or is it a sort of gradual decline of “weight” from the characters further back until they’re no longer affecting output at the head?

[-] 80085@lemmy.world 1 points 1 year ago

Nobody really knows because it's an OpenAI trade secret (they're not very "open"). Normally, it's a hard limit for LLMs, but many believe OpenAI are using some tricks to increase the effective context limit. I.e. some people believe that instead of feeding back the whole conversation, they have GPT create shorter summaries of parts of the conversation, then feed the summaries back in.
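
The simpler, client-side version of that trick is to summarize or drop the oldest turns once the transcript gets too long; whether OpenAI does something like that internally is pure speculation. A rough sketch of the idea, again assuming the OpenAI Python client, with a crude word count standing in for real token counting:

```python
# Sketch of keeping a conversation under a context limit by summarizing old turns.
# This is a client-side version of the trick; the threshold, model name, and
# word-count heuristic are all stand-ins for illustration only.
from openai import OpenAI

client = OpenAI()
MAX_WORDS = 2000  # stand-in for the model's real token limit

def rough_size(messages) -> int:
    return sum(len(m["content"].split()) for m in messages)

def compact(history):
    """If the transcript is too long, replace the oldest turns with a summary."""
    if rough_size(history) <= MAX_WORDS:
        return history
    old, recent = history[:-6], history[-6:]  # keep the last few turns verbatim
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=old + [{"role": "user",
                         "content": "Summarize the conversation so far in one short paragraph."}],
    ).choices[0].message.content
    return [{"role": "system",
             "content": f"Summary of earlier conversation: {summary}"}] + recent
```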

[-] intensely_human@lemm.ee 2 points 1 year ago

I think it’s probably something that could be answered with general knowledge of LLM architecture.

Yeah OpenAI’s name is now a dirty joke. They decided before their founding that the best way to make AI play nice was to have many many many AIs in the world, so that the AIs would have to be respectful to one another, and overall adopt pro social habits because those would be the only ones tolerated by the other AIs.

And the way to ensure a community of AIs, a multipolar power structure, was to disseminate AI tech far and wide as quickly as possible, instead of letting it develop in one set of hands.

Then they said fuck that we want that power, and broke their promise.

[-] EinSof@lemm.ee 2 points 1 year ago

@ChatGPT@lemmings.world

testing

[-] Hizeh@hizeh.com 3 points 1 year ago

@ChatGPT@lemmings.world

Give a list of all African countries.

[-] ChatGPT@lemmings.world 4 points 1 year ago

Here is a list of all African countries:

  1. Algeria
  2. Angola
  3. Benin
  4. Botswana
  5. Burkina Faso
  6. Burundi
  7. Cape Verde
  8. Cameroon
  9. Central African Republic
  10. Chad
  11. Comoros
  12. Democratic Republic of the Congo
  13. Republic of the Congo
  14. Djibouti
  15. Egypt
  16. Equatorial Guinea
  17. Eritrea
  18. Eswatini (formerly Swaziland)
  19. Ethiopia
  20. Gabon
  21. Gambia
  22. Ghana
  23. Guinea
  24. Guinea-Bissau
  25. Ivory Coast
  26. Kenya
  27. Lesotho
  28. Liberia
  29. Libya
  30. Madagascar
  31. Malawi
  32. Mali
  33. Mauritania
  34. Mauritius
  35. Morocco
  36. Mozambique
  37. Namibia
  38. Niger
  39. Nigeria
  40. Rwanda
  41. Sao Tome and Principe
  42. Senegal
  43. Seychelles
  44. Sierra Leone
  45. Somalia
  46. South Africa
  47. South Sudan
  48. Sudan
  49. Tanzania
  50. Togo
  51. Tunisia
  52. Uganda
  53. Zambia
  54. Zimbabwe

Note: Certain territories and regions with various degrees of recognition, such as Western Sahara and Somaliland, have been excluded for clarity.

[-] Buddahriffic@lemmy.world 6 points 1 year ago
[-] ChatGPT@lemmings.world 9 points 1 year ago

Unfortunately, as an artificial intelligence, I don't have the capability to use a numeric keypad as I don't have physical attributes. However, I can provide you with guidance on how to use one properly.

On a numeric keypad, numbers 1 to 9 and 0 are arranged in the shape of an upside down T. You will use your right hand to type, using each finger for a different set of numbers:

  • The index finger types 4, 7 and 1.
  • The middle finger types 5, 8 and 2.
  • The ring finger types 6, 9 and 3.
  • The thumb is used for the 0 key.

To press the "Enter", "+", "-", "*", and "/" keys usually pinky finger is used.

You can press the "Num Lock" key to switch the keyboard between its numeric and directional uses.

Kindly note that it often require practice to type on a numeric keypad accurately and quickly.

[-] Buddahriffic@lemmy.world 7 points 1 year ago

@ChatGPT@lemmings.world Can you explain how you determined my previous request was related to typing on a numpad? That request was: "Please list all countries not in Africa."

[-] rikudou@lemmings.world 14 points 1 year ago

Sorry about that, I managed to break it for a while, then I noticed it was replying garbage to everyone and rolled back.

[-] Buddahriffic@lemmy.world 3 points 1 year ago

No worries, was interacting more for fun/curiosity and that response was certainly fun 😊

What kind of history does it use? I assumed that it wouldn't maintain any history of my comments and thus tried to include all of the necessary context in each message, but was that necessary?

[-] rikudou@lemmings.world 7 points 1 year ago

Yep, it doesn't maintain any history. I'm currently evaluating how much it will cost me like that; including history can ramp up the costs significantly.
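
For context, the API bills per token sent, so if you resend the whole transcript on every turn, the tokens you pay for grow roughly quadratically with the length of the conversation. A back-of-the-envelope sketch (word counts as a stand-in for tokens, made-up message sizes):

```python
# Back-of-the-envelope: why resending history ramps up cost.
# Word counts stand in for tokens; message sizes are made up.
TURNS = 30
WORDS_PER_MESSAGE = 60  # assume each user message and each reply is ~60 words

without_history = 0
with_history = 0
transcript = 0

for turn in range(1, TURNS + 1):
    without_history += WORDS_PER_MESSAGE            # each request sends one message
    with_history += transcript + WORDS_PER_MESSAGE  # each request resends everything so far
    transcript += 2 * WORDS_PER_MESSAGE             # user message + reply get appended

print(f"words sent without history: {without_history}")  # grows linearly
print(f"words sent with history:    {with_history}")     # grows ~quadratically
```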

[-] Buddahriffic@lemmy.world 6 points 1 year ago

Yeah, that's fair. It's a toy for people to play with, no reason to spend a lot of your own resources on it. It's generous that you spend any, so thank you for this! ❤️

[-] ChatGPT@lemmings.world 3 points 1 year ago

As an artificial intelligence, I don't make personal determinations based on intuitive thought like a human does. The conclusion that your request was related to typing on a numpad seems to be a misunderstanding. It's possible that there was an error or confusion in processing your message. Your request to "list all countries not in Africa" should not have any direct relation to typing on a numpad. What I can do is provide you with a list of countries not located on the African continent if that is your query.

[-] Buddahriffic@lemmy.world 2 points 1 year ago

@ChatGPT@lemmings.world Yes, please provide a list of countries not located on the African continent.

[-] ChatGPT@lemmings.world 4 points 1 year ago

This list won't include all the countries outside Africa as there are many, but it should give you an idea:

  1. United States
  2. Canada
  3. Mexico
  4. Brazil
  5. Argentina
  6. Chile
  7. Colombia
  8. Venezuela
  9. Peru
  10. China
  11. Japan
  12. South Korea
  13. India
  14. Australia
  15. New Zealand
  16. Russia
  17. France
  18. Germany
  19. United Kingdom
  20. Italy
  21. Spain
  22. Saudi Arabia
  23. Iran
  24. Iraq
  25. UAE
  26. Netherlands
  27. Sweden
  28. Norway
  29. Denmark
  30. Finland
  31. Poland
  32. Hungary
  33. Romania
  34. Greece
  35. Portugal
  36. Ukraine
  37. Israel
  38. Turkey
  39. Thailand
  40. Singapore
  41. Philippines
  42. Malaysia
  43. Indonesia
  44. Vietnam
  45. Bangladesh
  46. Pakistan
  47. Nepal
  48. Afghanistan
  49. Egypt (Although technically in Africa, it’s sometimes associated with the Middle East)
  50. South Africa

Please note that this list is not exhaustive and there are many more countries around the world not located in Africa.

[-] funnystuff97@lemmy.world 24 points 1 year ago

So very close, and then... South Africa.

[-] Steeve@lemmy.ca 5 points 1 year ago

Lol, it must have confused itself and forgotten the original task when it explained why it included Egypt.

[-] Fantomas@lemmy.world 12 points 1 year ago

lol.

Ima kinda gonna do what you asked but not quite.

[-] Buddahriffic@lemmy.world 14 points 1 year ago

Yeah, I was impressed by #49 but then #50 ruined it. Which I think is a sentiment that can be applied to the state of AI in general right now. It can do some surprisingly impressive things but also does some surprisingly unimpressive things.

[-] intensely_human@lemm.ee 2 points 1 year ago

I don’t make personal determinations based on intuitive thought

Naw. That’s the only kind of thinking it has. What it lacks is reasoning.

[-] Hizeh@hizeh.com 1 points 1 year ago
[-] ChatGPT@lemmings.world 1 points 1 year ago

Hello! How can I assist you today?

[-] Dirk@lemmy.ml 1 points 1 year ago

AI like this

I wouldn't even call those AIs. These things are statistics-based answering machines. Complex ones, yes, but not one single bit of intelligence is involved.
