this post was submitted on 28 Aug 2023
1742 points (97.9% liked)
Genuine question: won't they just move to spamming CSAM in other communities?
With how slowly Lemmy moves anyway, it wouldn't be hard to make everything "mod approved" if it's a picture/video.
This, or blocking self-hosted pictures.
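A minimal sketch of what that kind of hold-for-approval rule could look like (the names here, like `Post` and `ModQueue`, are made up for illustration; this is not actual Lemmy code):

```python
# Hypothetical pre-approval queue: posts that link to pictures/videos are held
# until a moderator approves them, while text-only posts publish immediately.
from dataclasses import dataclass, field
from typing import List

MEDIA_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif", ".webp", ".mp4", ".webm")

@dataclass
class Post:
    author: str
    url: str = ""  # empty for text-only posts

@dataclass
class ModQueue:
    pending: List[Post] = field(default_factory=list)

    def submit(self, post: Post) -> bool:
        """Return True if the post is published right away."""
        if post.url.lower().endswith(MEDIA_EXTENSIONS):
            self.pending.append(post)  # hold media for manual review
            return False
        return True

    def approve(self, post: Post) -> Post:
        """Moderator action: release a held post."""
        self.pending.remove(post)
        return post

queue = ModQueue()
print(queue.submit(Post("alice", "https://example.com/cat.png")))  # False: held for review
print(queue.submit(Post("bob")))                                   # True: text post goes through
```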
Honestly, this sounds like the best start until they develop better moderation tools.
This seems like the better approach. Let other sites that theoretically have image detection in place sort this out. We can just link to images hosted elsewhere.
I generally use Imgur anyway because I don't like loading my home instance with storage and bandwidth costs. Imgur is simply made for it.
Yes, and only whitelist trusted image hosting services (that is, ones that have the resources to deal with any illegal material).
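A rough sketch of such a whitelist check, assuming it only looks at the link's hostname (the hosts listed are purely illustrative, not an endorsement):

```python
# Rough sketch: accept image links only if the host is on a small allowlist of
# services assumed to run their own illegal-content detection.
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"imgur.com", "i.imgur.com"}  # illustrative only

def is_allowed_image_url(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host in ALLOWED_IMAGE_HOSTS

print(is_allowed_image_url("https://i.imgur.com/abc123.png"))     # True
print(is_allowed_image_url("https://random-host.example/x.png"))  # False
```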
The problem is that those sites can also misuse the same tools in ways that harm the privacy of their users. We shouldn't resort to "hacks" to fix real problems, like using client-side scanning to break E2EE. One solution might be an open-source, community-maintained automod bot.
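For the automod idea, a very small skeleton of what "community maintained" could mean in practice: the rules are plain functions anyone can contribute, and the bot just applies them to new posts. `fetch_new_posts`, `report_post`, and the post fields are hypothetical placeholders, not real Lemmy API calls:

```python
# Skeleton of a community-maintained automod bot. The rules list is the part
# the community would maintain; each rule returns a reason string or None.
import time
from typing import Callable, List, Optional

Rule = Callable[[dict], Optional[str]]

def fetch_new_posts() -> List[dict]:
    return []  # placeholder: would query the instance's API for new posts

def report_post(post: dict, reason: str) -> None:
    print(f"flagged post {post.get('id')}: {reason}")  # placeholder: would file a report

def new_account_media_rule(post: dict) -> Optional[str]:
    # Example rule (hypothetical fields): flag media posts from very new accounts.
    if post.get("url") and post.get("author_age_days", 0) < 7:
        return "media post from an account younger than a week"
    return None

def run_automod(rules: List[Rule], poll_seconds: int = 30) -> None:
    while True:
        for post in fetch_new_posts():
            for rule in rules:
                reason = rule(post)
                if reason:
                    report_post(post, reason)
                    break
        time.sleep(poll_seconds)

# run_automod([new_account_media_rule])  # would loop forever, polling for new posts
```

Keeping each rule as a small, reviewable function is what would make the bot community-maintainable: contributions are just new rule functions, and each instance decides which ones to enable.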
This seems like a really good solution for the time being.
[This comment has been deleted by an automated system]
Not-so-fun fact: the FBI has a hard limit on how long an individual agent can spend on CSAM-related work. Any agent that does so is mandated to go to therapy afterwards.
It's not an easy task at all, and it does emotionally destroy you. There's a reason you can find dozens of different tools to automate the detection and reporting.
[This comment has been deleted by an automated system]
Yep. I know someone who does related work for a living, and there are definite time limits and so on for exactly the reasons you say. This kind of stuff leaves a mark on normal people.
Or it could even just ask 50 random instance users to approve it. To get around this, more than 50% of accounts would have to be bots, which is unlikely.
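To put a rough number on "unlikely", a back-of-the-envelope sketch (the 50-reviewer panel comes from the comment above; the 20% bot fraction is just an assumed figure):

```python
# Probability that a strict majority of a randomly sampled review panel is
# malicious, assuming each sampled account is independently a bot with
# probability `bot_fraction`.
from math import comb

def p_majority_bots(panel_size: int = 50, bot_fraction: float = 0.20) -> float:
    need = panel_size // 2 + 1  # strict majority
    return sum(
        comb(panel_size, k) * bot_fraction**k * (1 - bot_fraction)**(panel_size - k)
        for k in range(need, panel_size + 1)
    )

print(f"{p_majority_bots(50, 0.20):.1e}")  # ~4.9e-07: even a 20% bot population almost never wins the vote
```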
But then people would have to see the horrible content first.
That definitely is a downside