That doesn't work for vulnerable minorities. Manually filtering each shitty person after you step in their shit gets old, and not shutting down shitty people just means more shitty people are likely to turn up.
It's not sustainable.
I think in this context it's meant on a technical level: as far as the fediverse is concerned, there's not a whole lot instances can do. Anyone can just spin up an instance and bypass blocks unless it works on an allowlist basis, which is kind of incompatible with the fediverse if we really want to achieve a reasonable amount of decentralization.
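To make that denylist-versus-allowlist trade-off concrete, here's a minimal sketch. All names here are hypothetical, not any real Lemmy or Mastodon API:

```python
# Hypothetical sketch of the two federation models; not real fediverse code.

DENYLIST = {"bad.example"}        # block known-bad instances, admit the rest
ALLOWLIST = {"friendly.example"}  # admit pre-approved instances, block the rest

def federates_with(domain: str, use_allowlist: bool) -> bool:
    if use_allowlist:
        return domain in ALLOWLIST   # new instances rejected by default
    return domain not in DENYLIST    # new instances accepted by default

# A harasser who spins up fresh.example trivially bypasses the denylist...
assert federates_with("fresh.example", use_allowlist=False)
# ...while an allowlist keeps them out, but also keeps out every legitimate
# new instance, which is the cost to decentralization.
assert not federates_with("fresh.example", use_allowlist=True)
```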
I agree that we shouldn't pretend it's safe for minorities: it's not. If you're a minority joining Mastodon or Lemmy or Mbin, you need to be aware that blocking people and instances has limitations. You can't make your profile entirely private like one would do on Twitter or any of Meta's products. It's all public.
You can hide the bad people from the users but you can't really hide the users from the bad people. You can't even stop people from replying to you on another instance. You can refuse to accept the message on the user's instance, but the other instance can still add comments that don't federate out. Which is in some ways worse, because it can lead to side discussions you have no way of seeing or participating in to defend yourself, and they can be saying a lot of awful things.
Even those are not private.
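As a rough sketch of that reply asymmetry (hypothetical structures, not real ActivityPub library code):

```python
# Hypothetical inbox handler; not a real ActivityPub implementation.

blocked_actors = {"https://bad.example/users/troll"}

def handle_inbox(activity: dict) -> bool:
    """Return True if the activity is accepted into the local database."""
    if activity.get("actor") in blocked_actors:
        return False  # our instance drops the reply here...
    return True

reply = {
    "type": "Create",
    "actor": "https://bad.example/users/troll",
    "object": {"inReplyTo": "https://home.example/posts/1"},
}

assert handle_inbox(reply) is False
# ...but bad.example still stores the reply locally and shows it to its
# own users, producing a side discussion the original poster never sees.
```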
It's the unfortunate reality. Social networks simply cannot offer privacy. If they were upfront about it, then people could make rational decisions about what they share.
But instead they (including Mastodon) pretend that they can offer privacy when they in fact cannot, resulting in people sharing things that they would not otherwise share.
It's not as black and white as you make it. The options aren't "perfect security" and "no security".
The option that most people who experience regular harassment want is "enough security to minimise the shit we have to deal with to a level that is manageable, even if it's imperfect".
While you're theoretically right, we've seen in practice that nobody really offers even the imperfect privacy you describe, and on decentralized systems it only becomes harder to solve.
A Facebook-style centralized network, where you explicitly grant access to every single person who can see your content, is as close as we can get. But nobody is trying to make that kind of social network anymore, because there isn't much demand for it.
If you want a soapbox (Twitter/Mastodon/Bluesky, Reddit/Lemmy/Kbin, Instagram/Pixelfed, YouTube/TikTok/PeerTube), then privacy is going to be a dream, especially if decentralized.
Vulnerable folk are looking for community, not a soap box. The goal is to connect with other folk whilst being as free as possible from harassment.
It's absolutely possible to achieve that without perfect privacy controls.
Privacy and being free of (in-context) harassment aren't the same thing. Your posts can all be public but your client can filter out any harassment, for example.
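A toy illustration of that split, with made-up filter rules (real clients implement far richer versions of the same idea):

```python
# Made-up filter rules; the posts stay public for everyone else,
# they're only hidden from *you*.

muted_words = {"slur1", "slur2"}  # placeholder strings
muted_actors = {"https://bad.example/users/troll"}

def visible(post: dict) -> bool:
    if post["actor"] in muted_actors:
        return False
    return not any(word in post["content"].lower() for word in muted_words)

timeline = [
    {"actor": "https://ok.example/u/friend", "content": "Hello!"},
    {"actor": "https://bad.example/users/troll", "content": "slur1 etc."},
]
print([p["content"] for p in timeline if visible(p)])  # ['Hello!']
```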
If the goal is privacy so that people who aren't in the community don't know that you're in the community, and don't know what the community is even talking about, I'm skeptical that it's practical. Especially for a decentralized network, I think that the sacrifices needed to make this happen would make the social network unappealing to users. For example, you'd need to make it invite only and restrict who can invite, or turn off any kind of discovery so that you can't find people who aren't already in your circle. At that point you might as well just use a group chat.
They're related. Often, the ability to limit your audience is about making it non-trivial for harassers to access your content, rather than impossible.
That's not the goal. The goal is to make a community that lets vulnerable folk communicate whilst keeping the harassment to a manageable level and making the sensitive content non-trivial to access for random trolls and harassers.
It's not about stopping dedicated individuals, because they can't be stopped in this sort of environment for all the reasons you point out. It's about minimising harassment from the random drive-by bigots.
Hmmm I think I understand the intent. I'll have to think on it some more.
My gut tells me that protecting people from drive-by bigotry is antithetical to content/community discovery. And what is a social network without the ability to find new communities to join or new content to see?
Perhaps something like reddit where they can raise the bar for commenting/posting until you've built up karma within the community? That's not a privacy thing though.
What would this look like to you, and how does it relate to privacy? I've got my own biases that affect how I'm looking at the problem, so I'd be interested in getting another perspective.
You're thinking about this in an all-or-nothing way. A community in which everyone and everything they post is open to everyone isn't safe.
A community in which no one can find members or content unless they're already connected to that community stagnates and dies.
A community where some content and some people are public, and where some content and some people are locked down, is what we need. Though it's imperfect, things like authorised fetch bring us closer to that, and that's the niche that future security improvements on the Fediverse need to address.
No one is looking for perfect, at least not in this space.
I don't think I'm looking for perfect, I'm looking for "good enough" and while authorized fetch is better than nothing, it's nowhere near "good enough" to be calling anything "private".
I'm thinking that maybe we need to reevaluate or reconsider what it looks like to protect people from harassment, in the context of social media. Compare that to how we're currently using half-functional privacy tools to accomplish it.
I'm not saying existing features are good enough.
I'm saying that they're better than the alternative that started this conversation.
"Just loudly proclaim that everything is public but clients can filter out shit you don't wanna see"
That's what Twitter does right now. It's also a hate filled cesspit.
The Fediverse though, even though it has hate filled cesspits, gives us tools that put barriers between vulnerable groups and those spaces. The barriers are imperfect: they have holes poked in them and can be climbed over by people who put the effort in, but they still block the worst of it.
Right, but what I'm saying is that the problem of privacy is different from the problem of harassment.
I'm not saying that we should give up on anti-harassment tools, just that I think that anti-harassment tools that are bolted onto privacy tools cannot work because those privacy tools will be hamstrung by necessity, and I think there must be better solutions.
Having people think that there is privacy on a social network causes harm, because people change their behavior based on the unfulfilled expectation of privacy. I suspect there is a way to give up privacy and also solve the problem of harassment. That solution doesn't have to look like Twitter, but I have my own biases that may negatively affect how my ideas would work in practice.
I'm asking you
There's no such thing. They are mutually exclusive. Take queer folk for example. We need privacy to be able to talk about our experiences without outing ourselves to the world. It's especially important for queer kids, and folk that are still in the closet. If they don't have privacy, they can't be part of the community, because they open themselves to recognition and harassment in offline spaces.
With privacy, they can exist in those spaces. It won't stop a dedicated harasser, but it provides a barrier and stops casual outing.
An "open network" where everyone can see everything, puts the onus on the minority person. Drive by harassers exist in greater numbers than a vulnerable person can cope with, and when their content is a simple search and a throw away account away from abuse, it means the vulnerable person won't be there. Blocking them after the fact means nothing.
But isn't this already the case?
You make a good point about people still in the closet. That's an excellent use case for privacy. But I still believe that's a different issue. And in fact this is my great concern: people think they have privacy when they don't, so they accidentally say things that out themselves (as any kind of minority), because they mistakenly relied on the network's privacy.
You're right though, it's not all-or-nothing, but I do think these are two separate problems that can and maybe should have different solutions.
The type of drive-by harassment you describe is by online randos, not in-person. For those situations, is it not enough that you remain oblivious to the attempted harassment? If a bigot harasses in a forest and nobody is around to hear it, did they really harass?
The problem is, there are plenty of other people around to hear it. Everyone else except the harassed person can see it, and on top of that, the fact that harassment is trivial to do, and not policed, ensures that more harassers will come along. Each one having to be blocked one by one by the people they're harassing, after the harassment has already taken place.
As I said earlier, this is how Twitter does things, and there is a reason that vulnerable folk don't use Twitter anymore.
No, it isn't, because right now, local-only posting, follower-only posting, authorised fetch, admin-level instance blocks, etc., all combine to make it non-trivial for harassers. If you're familiar with the "Swiss cheese defence model", that's basically what we have here. Every single one of those things can be worked around, especially by someone dedicated to harassing folk, but the casual trolls and bigots won't get through all of them. The more imperfect security, anti-harassment, and privacy options we have, the harder it is for casual bigots.
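If it helps, the layered model can be sketched like this, with hypothetical checks standing in for instance blocks, authorised fetch, and follower-only posting:

```python
# Each layer can be bypassed on its own; a casual harasser has to get
# through all of them. The check names here are illustrative stand-ins.
from typing import Callable

Layer = Callable[[dict], bool]  # True = the request passes this layer

def instance_not_blocked(req: dict) -> bool:
    return req["domain"] not in {"bad.example"}

def fetch_is_signed(req: dict) -> bool:        # stand-in for authorised fetch
    return req.get("signed", False)

def requester_is_follower(req: dict) -> bool:  # follower-only posting
    return req.get("is_follower", False)

LAYERS: list[Layer] = [instance_not_blocked, fetch_is_signed, requester_is_follower]

def allowed(req: dict) -> bool:
    return all(layer(req) for layer in LAYERS)

# A throwaway account on a fresh instance slips past the block list,
# but the other layers still stop it:
drive_by = {"domain": "fresh.example", "signed": False, "is_follower": False}
print(allowed(drive_by))  # False
```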
I'm familiar with the Swiss cheese model and you make a good point.
But even still, I think what we have now is insufficient, has other negative side effects too, and I don't see a good path to make it sufficient.
I was initially lamenting that social networks currently do a terrible job (dangerously negligent job) setting user expectations wrt privacy (or lack thereof). It'd be nice if social networks were upfront about the lack of privacy, and made the limitations of their tools inherently obvious. Sorry if it seems like I keep shifting goalposts, I keep changing the direction of the conversation as you give me interesting things to think about and discuss.
I'm not suggesting that we copy Twitter's model for anti-harassment, especially since The Idiot took it over.
I'm suggesting that, rather than just double down on what exists now, you do a thought experiment with me where we explore a radical rethink of anti-harassment, and what it might look like if we don't try to use privacy tools to accomplish it. I'm not convinced that there is no reasonable solution possible. Although the details would probably depend significantly on the type of social network (for example: microblogging vs reddit-like).
reasons why I love blahaj.zone 🥹
It's about the nature of the network. If it's just a little bubble where you only see and interact with your friends, it's probably doable. But nobody seems to want that anymore.
People want soapboxes like Twitter or Reddit or TikTok or YouTube. Privacy there is a lot more complicated and dubious.
In this case specifically, I think that the bad servers are spoofing as good servers. That seems solvable (otherwise cryptographically signing things wouldn't work), but still.
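For what it's worth, the signing part is roughly this. A simplified sketch using plain RSA signatures via Python's `cryptography` package; real ActivityPub servers sign HTTP request headers (HTTP Signatures), but the verification idea is the same:

```python
# Simplified sketch: a spoofed server can't produce a signature that
# matches the claimed server's published public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = server_key.public_key()  # published by the real server

request = b"POST /inbox from good.example"
signature = server_key.sign(request, padding.PKCS1v15(), hashes.SHA256())

try:
    public_key.verify(signature, request, padding.PKCS1v15(), hashes.SHA256())
    print("accepted: the sender holds good.example's private key")
except InvalidSignature:
    print("rejected: spoofed or tampered request")
```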
It’s not sustainable to keep offering poorly designed solutions. People need to understand some basic things about the system they're using. The fediverse isn't a private space and fediverse developers shouldn't be advertising pseudo-private features as private or secure.
A private forum may be useful in that case.