Trust-based Moderation Systems

Alexander has written a thesis about trust-based moderation. Check this Twitter thread for a summary and a link to the document:

https://twitter.com/cblgh/status/1277655840909115397?s=20

Here’s the intro:

Trust-based Moderation Systems

How do you remove malicious participants from a chat? For a set of participants, what are the steps needed such that the malicious participant is no longer visible to anyone in the set?

In the centralized context, removing a malicious participant is the action of a moderator. Usually it takes one or two clicks, and the malicious participant is removed for all other participants.

In a distributed context, there are many possible answers to this problem. The first, naive solution is to delegate the responsibility of removing the malicious participant to each individual participant. Thus everyone participating has to individually hide offenders. Viewed as an isolated case this works, but repeated instances risk placing an outsize burden on the participants, as sketched below.
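As a rough illustration of that naive approach (my own sketch, not from the thesis; the `Participant` class and `PeerId` type are made-up names), every participant maintains their own blocklist and has to act on each offender themselves:

```typescript
// Hypothetical sketch of the naive approach: every participant keeps
// their own blocklist and must block each offender individually.
type PeerId = string;

class Participant {
  private blocked = new Set<PeerId>();

  // Each participant must take this action separately, for every offender.
  block(peer: PeerId): void {
    this.blocked.add(peer);
  }

  // Messages from blocked peers are hidden only for this participant.
  isVisible(author: PeerId): boolean {
    return !this.blocked.has(author);
  }
}
```

The burden grows with every new offender: nothing one participant does here helps anyone else in the set.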

Traditional moderation systems grant a special privilege to the initiator of the chat group. This can become problematic for many reasons. The initiator may disappear, leaving the group without moderation. Similarly, the initiator may be adept at starting new contexts, but lacking in skill concerning matters of moderation (e.g. assigning new moderators). Finally, issues may arise where previously good moderators have a falling out and start banning people, as demonstrated with increasing frequency by large community-driven Mastodon instances.

A subjective system, where participants can themselves decide who moderates on their behalf, sidesteps the problems mentioned above. Additionally, freely allowing multiple people to moderate spreads out the invisible care-giving labour required to keep a community free from abuse.

This work explores an approach to implementing a subjective, trust-based system.

What if participants could automatically block a malicious peer upon discovering that the peer has been blocked by someone they trust? This is similar to the administrator from the centralized context, but more flexible. In the centralized context, if the administrator is misbehaving and a participant loses trust in them, their only options are to live with it or to leave the group. In a system where you effectively choose who can moderate for you, you can also choose to revert that decision if your trust later proves to have been misplaced.
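A minimal sketch of that idea (again my own illustration, not TrustNet's actual API; `TrustingParticipant`, `observeBlock`, and the other names are hypothetical): a participant names a set of trusted peers, automatically hides anyone those peers have blocked, and can revoke that trust at any time.

```typescript
// Hypothetical sketch: blocks issued by peers I trust are applied
// automatically on my behalf; trust can be revoked at any time.
type PeerId = string;

class TrustingParticipant {
  private trusted = new Set<PeerId>();                       // peers allowed to moderate for me
  private ownBlocks = new Set<PeerId>();                     // blocks I issued myself
  private observedBlocks = new Map<PeerId, Set<PeerId>>();   // blocker -> peers they blocked

  trust(peer: PeerId): void {
    this.trusted.add(peer);
  }

  // Revoke a moderation delegation if trust turns out to be misplaced.
  revokeTrust(peer: PeerId): void {
    this.trusted.delete(peer);
  }

  // Record a block message observed on the network, from any peer.
  observeBlock(blocker: PeerId, blocked: PeerId): void {
    if (!this.observedBlocks.has(blocker)) {
      this.observedBlocks.set(blocker, new Set());
    }
    this.observedBlocks.get(blocker)!.add(blocked);
  }

  // A peer is hidden if I blocked them, or if someone I trust blocked them.
  isVisible(author: PeerId): boolean {
    if (this.ownBlocks.has(author)) return false;
    for (const moderator of this.trusted) {
      if (this.observedBlocks.get(moderator)?.has(author)) return false;
    }
    return true;
  }
}
```

Because the delegation is per-participant and revocable, removing a misbehaving moderator from your trusted set immediately stops their blocks from applying to you, which is exactly the flexibility the centralized administrator lacks.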

How this can be achieved will be explored in the rest of this post in the form of a new system for managing and interacting with trust, TrustNet.

Source

Code


Very cool idea! But does it really solve this problem fully:

So every time I explore some new territory I have to figure out who the trusted elder of that place is? And while I am doing that I am exposed to all the dirty content. Or maybe I just hurry up and pick someone? Well then I might not have considered that some content I like gets censored by them, and I miss out on it. Maybe I just do what the majority does? Well then it is basically a centralized system again.
