Asklemmy

Just chilling and sharing a stream of thought...

So how would a credibility system work, and how would it be implemented? What I envision is something similar to upvotes...

You have a credibility score; it starts at 0, neutral. You post something. People don’t vote on whether they like it; the votes are for “good faith”.

“Good faith” covers things like:

  • You posted according to the rules and started a discussion
  • You argued in good faith and can part ways respectfully with opposing opinions
  • You clarified a topic for someone
  • Someone holds an opinion polar to yours and is being downvoted because people don’t understand the system
  • Etc.

It is tied to the user, not the post.

Good, bad, indifferent…?

Perfect the system
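To make the proposal concrete, here is a minimal sketch in Python of a score that is keyed by user rather than by post, starting at 0 and moved only by good-faith votes (all names are hypothetical illustrations, not an existing Lemmy API):

```python
from collections import defaultdict

# Credibility is tied to the user, not the post; everyone starts at 0 (neutral).
credibility: dict[str, int] = defaultdict(int)

def good_faith_vote(voter: str, author: str, positive: bool = True) -> None:
    """Record a good-faith vote: the score change lands on the author."""
    if voter == author:
        raise ValueError("no voting for yourself")
    credibility[author] += 1 if positive else -1
```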

[–] JohnDClay@sh.itjust.works 42 points 1 month ago

People will vote for what they like, not what's good faith.

[–] felsiq@lemmy.zip 28 points 1 month ago

I love the concept, but the ugly reality is that anyone can spin up an instance and pour in an arbitrary number of votes for themselves or anyone else. I think the credibility score would give people false confidence and honestly do more harm than good, unfortunately.

Your attempt at convincing people to use a button a certain way will fail; they will do what they want. Technical solutions for human behaviors are difficult because humans generally do not like being told what to do.

mbin already has 'reputation' exposed


[–] RagnarokOnline@programming.dev 16 points 1 month ago* (last edited 1 month ago)

There was a great DEF CON talk recently about how a guy gained credibility on the dark web over the course of a few years, and how easy it was to do just by being helpful to others. People tend to trust those who are helpful.

After a while, he got busted, and the feds took over his Tor identity and used his credibility to bust some criminals on the dark web.

I recommend being suspicious of everyone you interact with online.

[–] bamfic@lemmy.world 2 points 1 month ago

Exactly the same way they do it IRL, and have forever: bust someone trusted, make them wear a wire, then bust someone higher up that way.

[–] grue@lemmy.world 12 points 1 month ago

I think we should take another look at Slashdot's moderation and meta-moderation system:

  • Users couldn't just vote on everything; "modpoints" (upvotes/downvotes, but also with a reason attached) were a limited resource.
  • Comment scores were bounded to [-1, 5] instead of being unbounded.
  • Most importantly, meta-moderation wasn't limited: users would be shown a set of moderation actions and asked to give a 👍 or 👎 based on whether they agreed with the modpoint usage or not.
  • Users would be awarded modpoints based on their karma (how their own comments had been modded by others) and their judgement (whether people agreed or not with their modpoint usage).

Admittedly the exact formula Slashdot used for awarding modpoints was secret to prevent people from gaming it, which doesn't exactly work for Lemmy, but the point is that I think the idea of using more than one kind of signal to determine reputation is a good one.
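A rough sketch of those mechanics in Python; since the real formula was secret, the weighting below is purely an illustrative assumption:

```python
def clamp_score(score: int) -> int:
    """Comment scores were bounded to [-1, 5] rather than unbounded."""
    return max(-1, min(5, score))

def award_modpoints(karma: int, agreed: int, disagreed: int) -> int:
    """Combine two signals: karma and meta-moderation judgement."""
    total = agreed + disagreed
    judgement = agreed / total if total else 0.5  # fraction who agreed
    if karma <= 0 or judgement < 0.5:
        return 0  # poor karma or poor judgement earns no modpoints
    return min(5, 1 + karma // 10)  # modpoints stay a limited resource
```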

[–] ininewcrow@lemmy.ca 9 points 1 month ago

Most people (including myself) would like to agree with you on building some sort of system to create credibility or honesty or reliability among people on a social media platform. I think the majority of people that use any social media (including Lemmy) would probably agree and more than likely would participate in it.

Unfortunately, it only takes a small group of people to upset, game, or play with the system, or to create schemes of their own to manipulate everything, either to fight against others or to generate some sort of power or control of their own. All it would take is this small group to change everything and make the whole thing difficult and nonfunctional.

It's a lot like the democratic system of government. When you think about it, the majority of everyone would like to participate in it and make it work... unfortunately, it's only a small group of powerful individuals who have gamed the system to give themselves and their friends power over everyone else.

[–] Vanth@reddthat.com 8 points 1 month ago

I didn't read your post, I just downvoted because I don't like your username. Whatcha' going to do about it?

(Jk, I picked the instance I joined based on the fact that it doesn't do downvotes. I think downvotes drive perverse incentives)

[–] toototabon@lemmy.ml 1 points 1 month ago

(Thanks! Do you happen to know other instances that have downvotes disabled? Up until now, I just knew of BeeHaw. Choosing between an upvote or engaging in conversation is more enticing when you can't just give a thumbs down and leave the room.)

[–] Vanth@reddthat.com 1 points 1 month ago

I don't. And because it's an admin setting that can be toggled easily, any web search you do to find other people talking about instances that don't downvote should probably be double-checked with the instance itself. Even mine had a brief discussion about changing course and enabling downvotes.

There's a GitHub project to compare instances. I don't think it includes the downvote setting, but maybe the other factors will at least help you narrow things down. https://github.com/maltfield/awesome-lemmy-instances?tab=readme-ov-file

[–] BradleyUffner@lemmy.world 7 points 1 month ago

I award you 2 MeowMeowBeenz

[–] CarbonIceDragon@pawb.social 6 points 1 month ago

The issue is that people will use votes to mark whether they like the thing, not whether it's in good faith, even if you tell them not to: both on purpose, to harm opposing views, and unintentionally, because they're more likely to notice a bad-faith tactic coming from someone who disagrees with them than from someone who agrees.

[–] tetris11@lemmy.ml 5 points 1 month ago

I think mob rule as a moderation system is bad, and having a few power-users in charge is not the worst answer to that.

In my head: you'd have small webs of trust (I can vouch for you, you can vouch for your friend, your friend can vouch for me, so I must be somewhat trustworthy), and these webs would have some kind of voting power over flagged comments. Of course, that can be gamed...
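A minimal sketch of the vouching idea, with trust read as hop distance in the vouch graph (the names and the three-hop cutoff are arbitrary assumptions):

```python
from collections import deque

# voucher -> the people they vouch for
vouches: dict[str, set[str]] = {
    "me": {"you"},
    "you": {"your_friend"},
    "your_friend": {"me"},
}

def trust_distance(source: str, target: str, max_hops: int = 3) -> int | None:
    """BFS over the vouch graph: fewer hops means more direct trust."""
    if source == target:
        return 0
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops == max_hops:
            continue
        for friend in vouches.get(user, set()):
            if friend == target:
                return hops + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None  # unreachable: no basis for trust

print(trust_distance("me", "your_friend"))  # 2: I vouch for you, you for them
```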

[–] sylver_dragon@lemmy.world 5 points 1 month ago

While I would never support it, the main way to improve online discussion is removing anonymity. Allow me to go back a couple decades and point to John Gabriel's Greater Internet Fuckwad Theory: people with a reasonable expectation of anonymity turn into complete assholes. The common solution is to link accounts to a real identity in some way, such that online actions have negative consequences for the person taking them. Google famously tried this by forcing people to use their real names on accounts. And it was a privacy nightmare. Ultimately though, it's the only functional solution. If anti-social actions have no negative social consequences, there is no disincentive against taking them, and people can just keep spinning up new accounts and repeating the same anti-social behavior. This can also be automated, resulting in the bot farms which troll and brigade online forums.

On the privacy-nightmare side of the coin, linking identities makes it much easier to target people for legitimate, though unpopular, opinions. There are some in-the-middle options, which make account creation somewhat costlier and slower without exposing peoples' real identities in quite the same way. But every system has its pros and cons, and the trade-off always comes back to how tightly identities are linked to accounts.

Voting systems and the like will always be a kludge, which is easy to work around. Any attempt to predicate the voting on trusting users to "do the right thing" is doomed to fail. People suck; they will do what they want and ignore the rules when they feel justified in doing so. Or some people will do it just to be dicks. At the same time, it also promotes herding and bubbles. If everyone in a community chooses to downvote puppies and upvote cats, eventually the puppy people will be drowned out and forced to go off and found their own community which does the opposite. And those communities, both now stuck in a bias-reinforcing echo chamber, will continue to drift further apart and possibly radicalize against each other. This isn't even limited to online discussions. People often choose their meat-space friends based on similar beliefs, which leads to people living in bubbles which may not be representative of the wider world.

Despite the limitations of the kludge, I do think voting systems are the best we're going to get. I'd agree with @grue that the Slashdot system had a lot of merit. Allowing the community to both vote on articles/comments and then later have those votes voted on by a random selection of users seems like a reasonable way to try to enforce some of the "good faith" voting you're looking for. Though even that will likely get gamed and lead to herding. It's also a lot more cumbersome and relies on the user community taking on a greater role in maintaining the community. But, as I have implied, I don't think there is a "good" solution, only a lot of "less bad" ones.

[–] hedgehog@ttrpg.network 4 points 1 month ago

Are you thinking of something like Stack Overflow’s reputation system? See https://stackoverflow.com/help/whats-reputation for a basic overview. See https://stackoverflow.com/help/privileges for some examples of privileges unlocked by hitting a particular reputation level.

That system is better optimized for reputation than the threaded discussions we participate in here, and it has its own problems, but we could at minimum learn from the things it does right (a rough sketch follows the list):

  • You need site (or community) staff, who are not constrained by reputation limits, to police the system
  • Upvoting is disabled until you have at least a little reputation
  • Downvoting is disabled until you have a decent amount of reputation and costs you reputation
  • Upvotes grant more reputation than downvotes take away
  • Voting fraud is a bannable offense and there are methods in place to detect it
  • The system is designed to discourage reuse of content
  • Not all activities can be upvoted or downvoted. For example, commenting on SO requires a minimum amount of reputation, but comments don't impact your reputation even if upvoted, unless they're reported as spam, offensive, fraudulent, etc. (reporting also requires a minimum reputation).
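Here is that gating sketched in Python; the thresholds below are the ones the linked help pages describe, but treat the exact numbers as assumptions, since they change over time:

```python
UPVOTE_MIN_REP = 15     # upvoting is disabled until you have a little rep
DOWNVOTE_MIN_REP = 125  # downvoting needs a decent amount of rep
UPVOTE_GAIN = 10        # upvotes grant more than downvotes take away
DOWNVOTE_LOSS = 2
DOWNVOTE_COST = 1       # and downvoting costs the voter a little rep

def cast_vote(voter_rep: int, author_rep: int, up: bool) -> tuple[int, int]:
    """Return updated (voter_rep, author_rep), enforcing the rep gates."""
    if up:
        if voter_rep < UPVOTE_MIN_REP:
            raise PermissionError("not enough reputation to upvote")
        return voter_rep, author_rep + UPVOTE_GAIN
    if voter_rep < DOWNVOTE_MIN_REP:
        raise PermissionError("not enough reputation to downvote")
    return voter_rep - DOWNVOTE_COST, max(1, author_rep - DOWNVOTE_LOSS)
```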

If you wanted to have upvoted and downvoted discourse, you could also allow people to comment on a given piece of discourse without their comment itself being part of the discourse. For example, someone might just want to say “I’m lost, can someone explain this to me?” “Nice hat,” “Where did you get that?” or something entirely off topic that they thought about in response to a topic.

You could also limit the total amount of reputation a person can bestow upon another person, and maybe increase that limit as their reputation increases. Alternatively or additionally, you could enable high rep users to grant more reputation with their upvotes (either every time or occasionally) or to transfer a portion of their rep to a user who made a comment they really liked. It makes sense that Joe Schmo endorsing me doesn’t mean much, but King Joe’s endorsement is a much bigger deal.

Reputation also makes sense to be topic-specific. I could be an expert on software development but completely misinformed about hedgehogs while still thinking I'm an expert. If I have a high reputation from software development discussions, it would be misleading when I start telling someone about hedgehog diets.

Yet another thing to consider, especially if you’re federating, is server-specific reputations with overlapping topics. Assuming you allow users to say “Don’t show this / any of my content there at all” (e.g., if you know something is against the rules over there or is likely to be downvoted, but in your community it’s generally upvoted), there isn’t much reason not to allow a discussion to appear on two or more servers. Then users could accrue reputation on that topic from users of both servers. The staff, and later the high-reputation users, of one server could handle moderation of topics differently than the moderators of another, by design. This could solve disagreements about moderation style, voting etiquette, etc., by giving users alternatives to choose from.
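A minimal sketch combining the last three ideas: reputation scoped per (server, topic), with an upvote's weight growing with the voter's own reputation (the log weighting and all names are purely illustrative assumptions):

```python
import math
from collections import defaultdict

# (server, topic, user) -> rep, so software-dev expertise says nothing about
# hedgehogs, and each server keeps its own ledger.
rep: dict[tuple[str, str, str], float] = defaultdict(float)

def vote_weight(voter_rep: float) -> float:
    """King Joe's endorsement counts for more than Joe Schmo's."""
    return 1.0 + math.log10(max(voter_rep, 1.0))

def upvote(server: str, topic: str, voter: str, author: str) -> None:
    rep[(server, topic, author)] += vote_weight(rep[(server, topic, voter)])

upvote("ttrpg.network", "software-dev", "joe_schmo", "me")
print(rep[("ttrpg.network", "software-dev", "me")])  # 1.0
print(rep[("ttrpg.network", "hedgehogs", "me")])     # 0.0: separate topic
```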

[–] toototabon@lemmy.ml 4 points 1 month ago

Is this for an online community like Lemmy, or more oriented towards fixing the credit institutions?

in any case, a credibility metric would soon turn into a goal to achieve ^(karmafarming says what?)^

A metric ceases to be useful when it becomes a goal.

[–] Nemo@slrpnk.net 4 points 1 month ago

You know that the current voting system isn't like/dislike, right? Or it's not supposed to be. Your proposed system would have the same problem: users would use it as like/dislike buttons.

[–] AbouBenAdhem@lemmy.world 3 points 1 month ago

One issue specific to the Fediverse is that each instance and each community might have its own standard for what it considers “credible”—and part of another user’s credibility score might come from users on instances with which yours isn’t federated and doesn’t share information.

[–] Max_P@lemmy.max-p.me 3 points 1 month ago

It's just not that good of a metric overall. Not just because it would be easy to fake, but also because it would inevitably divide into tribes that unconditionally upvote each other. See: politics in Western countries.

You can pile up a ton of reputation and still be an asshole and still get a ton of support from like-minded people.

The best measure of someone's reputation is a quick glance at their post history.

[–] Today@lemmy.world 3 points 1 month ago

Why do we need to know how many up or down votes a user has? Assholes usually make themselves known pretty quickly.

I've found they help identify spammers pretty quickly.

[–] znonymous@hexbear.net 3 points 1 month ago* (last edited 1 month ago)

I have an idea. Have every single article or comment posted by a user scanned by an LLM. Prompt the LLM to identify logical fallacies in the post or comment. Post each user's logical-fallacy count on a public scoreboard hosted on each federated instance. Now, each quarter, ban the top 10% of scorers whose fallacy ratio surpasses some reasonable good-faith objective.

Pros: Everyone is judged by the same impassive standard.

Cons: 1) A fucking LLM has to burn coal for every stupid post we make. 2) LLM prompt injection/hijacking vulnerability.
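A sketch of the scoreboard mechanics, with a trivial placeholder standing in for the LLM call (everything here is hypothetical):

```python
from collections import defaultdict

fallacy_counts: dict[str, int] = defaultdict(int)
post_counts: dict[str, int] = defaultdict(int)

def detect_fallacies(text: str) -> int:
    """Stand-in for the LLM call; a real instance would prompt a model here."""
    return text.lower().count("everyone knows")  # placeholder, not a real model

def scan(user: str, text: str) -> None:
    """Scan one post or comment and update the public scoreboard."""
    fallacy_counts[user] += detect_fallacies(text)
    post_counts[user] += 1

def quarterly_bans(threshold: float, top_fraction: float = 0.10) -> list[str]:
    """Among users whose fallacy ratio exceeds the threshold, ban the worst slice."""
    ratios = {u: fallacy_counts[u] / post_counts[u] for u in post_counts}
    offenders = sorted((u for u, r in ratios.items() if r > threshold),
                       key=lambda u: ratios[u], reverse=True)
    cutoff = max(1, int(len(offenders) * top_fraction)) if offenders else 0
    return offenders[:cutoff]
```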

[–] j4k3@lemmy.world 2 points 1 month ago

You will likely find such a system is ineffective, because what's popular still reflects only a very limited niche of the total audience. Most people do not vote or actively participate.

Demographics are way more complicated than they first appear. When I was a buyer for a bike shop, the numbers were surprising: around 65% of my business was entry-level stuff, even though all three shops were high-end road-race and XC. It is easy to believe one understands the audience, but in my experience I only really trust solid numbers and data.

That said, a reputation based system of social hierarchy exists already in academia.

You would need to assess the compromises involved too: who is not going to post what because of this form of bias? I'm one of those people that will post lots of oddball stuff that piques my curiosity. I would like some engagement, but I don't care about or focus on posting stuff that everyone will like. If some bias takes away all of my engagement for the sake of a popularity metric, I'll migrate somewhere else. I find most popular content humdrum and uninteresting.

[–] LovableSidekick@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

I think the practical result would be the same as any existing upvote/downvote system, because people don't objectively evaluate content for being well researched, well thought out, or expressed in good faith; they upvote what they like or agree with and downvote what they don't. They're going to do that no matter what you tell them to do.

[–] electric_nan@lemmy.ml 1 points 1 month ago

Just disregard 'votes' entirely. What exactly are you hoping to achieve? Do you want "low-credibility" users highlighted in red so you don't have to bother reading their comments? Have them hidden entirely? Seems like existing tools like blocking and banning already accomplish these goals.

[–] Siathes@lemmy.dbzer0.com 1 points 1 month ago

Thank you all for the discussion! I have read all the comments, enjoyed each response, and will continue to do so. I came out with pretty much the same feelings as the rest of you... In an ideal world...

Once again, thank you and good luck to everyone out there…we got this!

[–] lemonmelon@lemmy.world 1 points 1 month ago

You'd need to limit the capacity to vote on credibility to people who are members of the community. If you haven't joined, you can't make a judgment about what is or isn't a good-faith post, but your own posts can be voted on by members. Rather than being attached to just the user, it would probably be better if it were tracked per user, per community (a rough sketch follows below). Even so, it's essentially karma, and could probably be gamed.

Otherwise, you've just reinvented upvotes.
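A minimal sketch of that membership-gated, per-community version (all names hypothetical):

```python
from collections import defaultdict

members: dict[str, set[str]] = defaultdict(set)             # community -> users
credibility: dict[tuple[str, str], int] = defaultdict(int)  # (community, user)

def vote_good_faith(community: str, voter: str, author: str, up: bool) -> None:
    """Only members may judge, and the score lives per user, per community."""
    if voter not in members[community]:
        raise PermissionError("join the community before judging good faith")
    credibility[(community, author)] += 1 if up else -1
```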