this post was submitted on 21 Nov 2024
12 points (87.5% liked)

Technology


For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM remained a bigger challenge for platforms as new victims continued to be victimized. Now, AI may be ready to change that.
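
As a rough sketch of what that hash matching looks like in practice (the digest value, set name, and functions below are invented for illustration; real deployments use perceptual hashes such as PhotoDNA rather than plain SHA-256, but the lookup idea is the same):

```python
import hashlib

# Hypothetical set of hex digests of previously verified material,
# distributed by a clearinghouse (the digest below is made up).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def file_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_known_match(path: str) -> bool:
    """True if the upload's digest appears in the known-hash set.

    Production systems use perceptual hashes so re-encoded or resized
    copies still match; only the hash function differs, not the lookup.
    """
    return file_digest(path) in KNOWN_HASHES
```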

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the earliest use of AI technology striving to expose unreported CSAM at scale.

An expansion of Thorn's CSAM detection tool, Safer, the new "Predict" feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster."

The model was trained in part using data from the National Center for Missing and Exploited Children (NCMEC) CyberTipline, relying on real CSAM data to detect patterns in harmful images and videos. Once suspected CSAM is flagged, a human reviewer remains in the loop to ensure oversight. It could potentially be used to probe suspected CSAM rings proliferating online.
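
Thorn hasn't published Predict's interface, but the flow described here (a classifier assigns a risk score, and high-scoring uploads are routed to a human reviewer) could be sketched roughly as below; the threshold, class names, and queue are placeholders, not the real system:

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # illustrative cut-off, not Thorn's


@dataclass
class Upload:
    upload_id: str
    content: bytes


def classify(upload: Upload) -> float:
    """Stand-in for the ML classifier; returns a risk score in [0, 1]."""
    raise NotImplementedError("placeholder for the actual model")


def triage(upload: Upload, review_queue: list) -> None:
    """Score an upload and queue it for human review above the threshold.

    The score only prioritises; a human reviewer still makes the final
    determination, matching the human-in-the-loop design described above.
    """
    score = classify(upload)
    if score >= REVIEW_THRESHOLD:
        review_queue.append((score, upload.upload_id))
```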

top 5 comments
[–] darkevilmac@lemmy.zip 15 points 4 hours ago

This sounds like a bad idea; there are already cases of people getting flagged for CSAM after sending photos of their children to doctors.

[–] FourPacketsOfPeanuts@lemmy.world 7 points 4 hours ago* (last edited 4 hours ago) (1 children)

This seems like a lot of risky effort for something that would be defeated by even rudimentary encryption before sending?

Mind you, if there were people insane enough to be sharing CSAM "in the clear", then it would be better to catch them than not. I just suspect most of what's going to be flagged by this will be kids making inappropriate images of their classmates.

[–] schizo@forum.uncomfortable.business 5 points 3 hours ago (1 children)

First: you'd probably be shocked how many pedos have zero opsec and just post shit/upload shit in the plain.

By which I mean most of them, because those pieces of crap don't know shit about shit and don't encrypt anything and just assume crap is private.

And second, yeah, it'll catch kids generating CSAM, but it'll catch everyone else too, so that's probably a fair trade.

[–] FourPacketsOfPeanuts@lemmy.world 2 points 2 hours ago (1 children)

That's kind of what I was alluding to. If they have zero opsec, they're almost certainly sharing known CSAM too, and that's the kind of stuff where just the hashes can be used to catch them. But the hashes can be safely shared with any messaging service or even OS developer, because the hashes aren't CSAM themselves.

What I was calling "risky" about the above is that it sounds like the first time law enforcement are sharing actual CSAM with a technology company so that the company can train an AI model on it.

Law enforcement have very well-developed processes and safeguards around who can access CSAM and why; access is thoroughly logged and scrutinised, and staff are supported with therapy and so on.

Call me skeptical that these data companies putting in tenders to receive CSAM and develop models are going to have anywhere near a suitable level of safeguards and checks. Lowest bidder and all that.

So it all seems like a risky endeavour, and really it's only going to catch, as you say, the zero-opsec paedo, but those people were going to get caught anyway, sharing regular CSAM detected by hashes.

So it seems to have a really narrow target, and to be undertaken with significant risk. It seems like someone just wants to show "they're doing something", or some data company made a reeeally glossy brochure.

"first time law enforcement are sharing actual CSAM with a technology company"

It's very much not: PhotoDNA, which is/was the gold standard for content identification, is a collaboration between a whole bunch of LEOs and Microsoft. The end user is only going to get a 'yes' or 'no idea' result from a hash lookup, but that database was built on real content, working with Microsoft.

Disclaimer: below is my experience dealing with this shit from ~2015-2020, so ymmv, take it with some salt, etc.

Law enforcement is also rarely the first responder to these issues: in the US, at least, reports come to the hosting/service provider first for validation and THEN go to NCMEC and LEOs, if the hosting provider confirms what the content is. Even reports that are sent from NCMEC to the provider usually aren't being handled by law enforcement as the first step.

And as for validating reports, that's done by looking at the content, without all the 'access controls and safeguards' you think there are, other than a very thin layer of CYA on the part of the company involved. You get a report, and once PhotoDNA says 'no fucking clue, you figure it out' (which, IME, was basically 90% of the time), a human is going to look at it and make a determination, and then file a report with NCMEC or whatever if it turns out to be CSAM.
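
The validation flow described here, sketched very roughly (every helper below is a placeholder stub, not a real API, and actual provider pipelines vary):

```python
from enum import Enum, auto


class Verdict(Enum):
    KNOWN_MATCH = auto()
    CONFIRMED_BY_HUMAN = auto()
    NOT_CSAM = auto()


# All three helpers are illustrative stubs standing in for real systems.

def hash_db_match(content: bytes) -> bool:
    """Stand-in for a PhotoDNA-style lookup against known hashes."""
    return False


def human_review_confirms(content: bytes) -> bool:
    """Stand-in for a trained reviewer's determination."""
    return False


def file_ncmec_report(content: bytes) -> None:
    """Stand-in for filing a CyberTipline report with NCMEC."""
    pass


def handle_report(content: bytes) -> Verdict:
    """Triage order described in the comment: hash check first; when that
    is inconclusive (most of the time, per the comment), a human looks at
    it; confirmed material gets reported to NCMEC."""
    if hash_db_match(content):
        file_ncmec_report(content)
        return Verdict.KNOWN_MATCH
    if human_review_confirms(content):
        file_ncmec_report(content)
        return Verdict.CONFIRMED_BY_HUMAN
    return Verdict.NOT_CSAM
```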

Frankly, after having done that for far too fucking long, if this AI tool can reduce the amount of horrible shit someone doing the reviews has to look at, I'm 100% for it.

CSAM is (grossly) a big business, and the 'new content' funnel is fucking enormous. The reason an extremely delayed and reactive thing like PhotoDNA isn't all that effective is that, well, there's a fuckload of children being abused and a fuckload of abusers escaping being caught simply because there's too much shit to look at and handle effectively, and thus any response to anything is super super slow.

This looks like a solution that means fewer people have to be involved in validation, and it could be damn near instant in responding to suspected material that does need validation. That will do a good job of at least pushing the shit out of easy(ier?) availability and out of more public spaces, which, honestly, is probably the best outcome that is going to be managed unless the countries producing this shit start caring and going after the producers, which I'm not holding my breath on.