this post was submitted on 21 Nov 2024
21 points (86.2% liked)


For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM remained a bigger challenge for platforms as new victims continued to be victimized. Now, AI may be ready to change that.
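To illustrate why hash matching only catches *known* material, here is a minimal, purely hypothetical sketch using exact digests. (Real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; the hash set and lookup here are stand-ins, not any real system's API.)

```python
import hashlib

# Hypothetical set of digests of previously identified material.
# A real deployment would use perceptual hashes, not exact SHA-256,
# so that re-encoded or resized copies still match.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_content(data: bytes) -> bool:
    """Exact-match lookup: flags only byte-identical copies of known files."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES
```

The limitation follows directly: any image not already in the hash set scores as clean, which is exactly the gap a classifier for *new* material tries to close.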

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It appears to be the first use of AI aimed at exposing unreported CSAM at scale.

An expansion of Thorn's CSAM detection tool, Safer, the new "Predict" feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster."

The model was trained in part using data from the National Center for Missing and Exploited Children (NCMEC) CyberTipline, relying on real CSAM data to detect patterns in harmful images and videos. Once suspected CSAM is flagged, a human reviewer remains in the loop to ensure oversight. It could potentially be used to probe suspected CSAM rings proliferating online.

[–] schizo@forum.uncomfortable.business 1 points 6 hours ago (2 children)

> AI model of that type is safe to deploy anywhere

Yeah, I think you've made a mistake in thinking that this is going to be usable as generative AI.

I'd bet $5 this is just a fancy machine learning algorithm that takes a submitted image, does machine learning nonsense with it, and returns a 'there is a high probability this is an illicit image of a child', and not something you could use to actually generate CSAM with.

You want something that's capable of assessing the similarities between a submitted image and a group of known bad images, but that doesn't mean the dataset is in any way usable for anything other than that one specific task. AI/ML in use cases like this is super broad and has been a thing for decades, long before 'AI == generative AI' became the default assumption.
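For what it's worth, the system described above is classification plus triage, nothing more. A toy sketch of that shape (this has nothing to do with Thorn's actual Safer API; the threshold value and function names are made up):

```python
# Hypothetical triage step downstream of an ML classifier.
# The classifier itself would emit a risk score in [0, 1];
# this code only decides what happens with that score.

REVIEW_THRESHOLD = 0.8  # assumed cutoff; a real value would be tuned

def triage(risk_score: float) -> str:
    """Route a submitted image by classifier risk score."""
    if risk_score >= REVIEW_THRESHOLD:
        # A human reviewer stays in the loop; nothing is
        # auto-actioned on model output alone.
        return "queue_for_human_review"
    return "no_action"
```

The point being: the deliverable is a risk score feeding a human review queue, not anything generative.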

But, in any case: the PhotoDNA database is in one place and access to it is scaled by the merit of uh, lots of money?

And of course, any 'unscrupulous engineer' who has plans for doing anything with this is probably not a complete idiot, even if a pedo: deployments like this are going to have shockingly good access controls and logging. And, at least in the US, if the dude takes this database and generates a couple of CSAM images with it, the penalty for most people is spending the rest of their life in prison.

Feds don't fuck around with creation or distribution charges.

[–] barsoap@lemm.ee 1 points 6 hours ago

> Yeah, I think you’ve made a mistake in thinking that this is going to be usable as generative AI.

Possibly not on its own, but that's not really the issue: once you have a classifier, you can use its judgements to train a generator. PhotoDNA faces the same problem, which is why it's not available to the general public.