this post was submitted on 29 Nov 2023
434 points (97.4% liked)

Technology


ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

[–] Immersive_Matthew@sh.itjust.works 15 points 11 months ago (2 children)

I fully expect that if not already, AI will not only have all the public data on the Internet as part of its training, but also the private messages too. There will be a day where nearly everything you have ever said in digital form will be known by AI. It will know you better than anyone. Let that sink in.

[–] Capricorn_Geriatric@lemm.ee 11 points 11 months ago (1 children)

But if it knows everything, it knows nothing. You cannot discern a lie from the truth. It'll spit something out and it may seem true, but is it really?

[–] Immersive_Matthew@sh.itjust.works 2 points 11 months ago (2 children)

What do you mean, "if it knows everything, it knows nothing"? As I see it, if it sees all sides of a conversation over the long term, it will be able to paint a pretty good picture of who you really are.

[–] CileTheSane@lemmy.ca 4 points 11 months ago (1 children)

Your friend tells you about his new job:
He sits at a computer and a bunch of nonsense symbols are shown on the screen. He has to guess which symbol comes next. At first he was really bad at it, but over time he started noticing patterns; the symbol that looks like 2 x's connected together is usually followed by the symbol that looks like a staff.
Once he started guessing accurately on a regular basis, they had him guess more of the symbols that follow. Now he's got the hang of it and they no longer tell him if he's right or not. He has no idea why; it's just the job they gave him.
He shows you his work one day and you tell him those symbols are Chinese. He looks at you like you're an idiot and says "nah man, it's just nonsense. It does follow a pattern though: this one is next."

That is what LLMs are doing.
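The guessing game described above is, in miniature, the next-token objective language models are trained on. A minimal sketch of the idea, using a toy bigram model over made-up data (stdlib only; a real LLM uses a neural network over vastly more data, but the objective is the same):

```python
import collections

# Invented toy corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count, for each word, which words follow it (a bigram model).
following = collections.defaultdict(collections.Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """Return the most frequent continuation, knowing nothing of meaning."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" — it follows "the" most often in the corpus
```

The predictor never learns what "cat" means; it only learns which symbol tends to come next, which is exactly the point of the analogy.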

[–] Immersive_Matthew@sh.itjust.works 1 points 11 months ago

I would disagree that AI knows nothing. I use ChatGPT Plus near daily to code, and it went from a hallucinating mess to what feels like a pretty competent and surprisingly insightful service in the months I have been using it. With the rumblings of Q*, it only looks like it is getting better. AI knows a lot and very much seems to understand; it is far from perfect, but it surprises me all the time. It is almost like a child who is beyond their years in reading and writing but does not yet have enough life experience to really understand what it is reading and writing… yet.

[–] JohnEdwa@sopuli.xyz 4 points 11 months ago* (last edited 11 months ago) (1 children)

Because large language models don't actually understand what is true or what is real; they just know how humans usually string words together, so they can conjure plausible, readable text. If their training data contains falsehoods, they will learn to write them.

To get something that would benefit from knowing both sides, we'd first need to create a proper AGI, an artificial general intelligence, with the ability to actually think.

[–] Immersive_Matthew@sh.itjust.works 1 points 11 months ago

I sort of agree. They do have some sense of right and wrong already; it is just very spotty and inconsistent in the current models. As you said, we need AGI-level AI to really address the shortcomings, which sounds like it is just a matter of time. Maybe sooner than we are all expecting.

[–] freeman@sh.itjust.works 3 points 11 months ago (1 children)

Only if your private messages are not e2e encrypted.

[–] shea@lemmy.blahaj.zone 3 points 11 months ago (2 children)

it'll get broken one day

for now it's being stored

[–] freeman@sh.itjust.works 7 points 11 months ago (1 children)

Sure, they will store everything until it's cost-effective to crack the encryption… on everything some randoms send each other.

Intelligence agencies will do that for high-profile targets, possibly unsuccessfully.

[–] shea@lemmy.blahaj.zone 0 points 11 months ago (1 children)

Nah, I bet you they'll be able to crack everything easily enough one day. And they can use an LLM to process the information for sentiment and pick out any discourse they deem problematic, without having to manually go through all that data. We're already at the point where the only guaranteed safe information storage is in your mind or on air-gapped physical media.

[–] freeman@sh.itjust.works 1 points 11 months ago

'Bet' all you want, you are still wrong.

Sorting vast amounts of data is already an issue for intel agencies, one that LLMs could theoretically solve. However, decrypting is orders of magnitude harder and more expensive. You can't use LLMs to decide which data to keep for decryption, since you don't have language data for the LLMs to process. You have to use tools working on metadata (sender and receiver, method used, etc.).

There's also no reason for intelligence services to train AI on your decrypted messages, it won't help them decrypt other messages faster, in fact it will take away resources from decryption.
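The metadata point can be sketched: with the payload end-to-end encrypted, any triage has to run on envelope fields, never on content. A hypothetical illustration (all names, records, and ciphertext bytes are invented):

```python
from dataclasses import dataclass

@dataclass
class Record:
    sender: str
    receiver: str
    protocol: str
    ciphertext: bytes  # opaque without the key — useless to an LLM

# Invented intercepted traffic.
traffic = [
    Record("alice", "bob", "signal", b"\x9f\x12\x3a"),
    Record("carol", "dave", "pgp-mail", b"\x00\x7b\xc4"),
    Record("alice", "mallory", "signal", b"\x55\xaa\x01"),
]

# Triage purely on metadata: keep anything touching a watched party.
watched = {"mallory"}
kept = [r for r in traffic if r.sender in watched or r.receiver in watched]

print([(r.sender, r.receiver) for r in kept])  # [('alice', 'mallory')]
```

Only the records selected this way would be worth spending decryption resources on; nothing about the message content informed the filter.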

[–] Kolrami@lemmy.world 5 points 11 months ago

Before you get downvoted, here's a Wikipedia page backing you up.

https://en.m.wikipedia.org/wiki/Harvest_now,_decrypt_later