Software engineer Vishnu Mohandas decided he would quit Google in more ways than one when he learned that the tech giant had briefly helped the US military develop AI to study drone footage. In 2020 he left his job working on Google Assistant and also stopped backing up all of his images to Google Photos. He feared that his content could be used to train AI systems, even if they weren’t specifically ones tied to the Pentagon project. “I don't control any of the future outcomes that this will enable,” Mohandas thought. “So now, shouldn't I be more responsible?”

The site (TheySeeYourPhotos) returns what Google Vision is able to discern from photos. You can test it with any image you want, or use one of the sample images provided.
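For anyone curious what a call like the site's looks like under the hood, here is a minimal sketch using Google's official google-cloud-vision Python client. The filename and credentials setup are assumptions for illustration, not details from the post:

```python
# Sketch: label detection with the official google-cloud-vision client.
# Assumes `pip install google-cloud-vision` and GOOGLE_APPLICATION_CREDENTIALS
# pointing at a service-account key; "photo.jpg" is a placeholder filename.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the API what it can discern in the photo.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```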

[–] EncryptKeeper@lemmy.world 7 points 2 weeks ago (last edited 2 weeks ago)

That’s literally all AI is designed to do. Given an input, it just tries to output an expected response.

[–] synnny@lemmynsfw.com 1 point 2 weeks ago

Yeah, no. LLMs predict what comes next, not what someone wants to hear.

[–] EncryptKeeper@lemmy.world 1 point 2 weeks ago

Not so much "wants" as "expects", but that's what AI is designed to do.

[–] synnny@lemmynsfw.com 3 points 2 weeks ago

What you're saying is not factual. LLMs predict what comes next based on the parameters set during the learning process (see the sketch below). It might at times say what you're expecting, but try contradicting information it knows to be factual and see how far that gets you.

I think you're confusing agreeableness with being a validation buddy. For a product like this to work, it has to be inviting.
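To make the "predict what comes next" point concrete, here is a minimal sketch of next-token prediction with the Hugging Face transformers library. The choice of GPT-2 and the prompt are illustrative assumptions; any causal LM shows the same mechanism:

```python
# Sketch: next-token prediction with a small causal language model.
# GPT-2 is an arbitrary example choice; assumes `pip install transformers torch`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's output is a probability distribution over the next token,
# fixed by its learned parameters -- not by what the user hopes to hear.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")
```

Whether the most likely continuation happens to agree with the user depends on the training data and any later fine-tuning, not on a goal of pleasing anyone.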

[–] EncryptKeeper@lemmy.world 0 points 2 weeks ago

> LLMs predict what comes next based on the parameters set during the learning process.

Now you’re just splitting hairs.
