Decentralised fact-checking dapp - Sentinel


I would like to propose a decentralised fact-checking platform built on MaidSafe. I would call it Sentinel: "Quis custodiet ipsos custodes?", or "Who guards the guards themselves?". I’m fed up with so-called fact-checking websites that are themselves captured and funded by those with an agenda. Sentinel would be a bit like Gab, which allows commenting on any content on the web, but for fact-checking any content on the web instead.

The idea would be to enable fact-checking users to snapshot any text content on the internet into Maid storage and then mark up statements as opinion, logical fallacies (List of fallacies - Wikipedia), or backed by evidence. Users would then also be able to attach links to supporting/contradicting evidence, along with the evidence type and quality (e.g. anecdote, randomised controlled trial, meta-analysis of multiple RCTs).
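To make the data model concrete, here is a rough sketch of what an annotation record might look like. Everything here is hypothetical illustration, not a real Safe Network API; the field names and categories are just one way to carve it up:

```python
# Hypothetical sketch of a Sentinel annotation record. None of these names
# come from a real Safe Network API -- they are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Claim(Enum):
    OPINION = "opinion"
    LOGICAL_FALLACY = "logical_fallacy"
    EVIDENCE_BACKED = "evidence_backed"

class EvidenceType(Enum):
    ANECDOTE = "anecdote"
    RCT = "randomised_controlled_trial"
    META_ANALYSIS = "meta_analysis"

@dataclass
class Evidence:
    url: str                # link to the supporting/contradicting source
    kind: EvidenceType      # quality tier of the evidence
    supports: bool          # True = supporting, False = contradicting

@dataclass
class Annotation:
    snapshot_id: str        # content address of the immutable page snapshot
    start: int              # character offsets into the snapshot text
    end: int
    claim: Claim
    fallacy: Optional[str] = None   # e.g. "ad hominem", when claim is LOGICAL_FALLACY
    evidence: list = field(default_factory=list)
```

Because each annotation points at a content-addressed snapshot rather than a live URL, the marked-up text can never drift out from under the fact-check.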

Fact-checking consumers would be able to run a browser plug-in that auto-highlights text in different colours based on the fact-checking data: opinions, evidence-backed statements, and arguments containing logical fallacies. Evidence-backed statements would be expandable so that users could explore the evidence stack on both sides of the argument for themselves.
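The consumer-side highlighting rule itself is trivial; the plug-in just needs a mapping from annotation category to colour. A toy sketch (the colour choices are arbitrary, and a real extension would do this in its content script):

```python
# Toy sketch of the highlighting rule: map each annotation category to a
# highlight colour. Colour assignments are illustrative, not a spec.
def highlight_colour(claim: str) -> str:
    colours = {
        "opinion": "yellow",
        "logical_fallacy": "red",
        "evidence_backed": "green",
    }
    # Text with no annotation stays unhighlighted.
    return colours.get(claim, "none")
```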

Each snapshot could then be assessed for the truth of its statements both at the time of publication and currently, in light of newer evidence that was unavailable when it was first published.

The Safe Network would be a really great backing layer to make this indelible and hard to shut down.

Please share your thoughts, comments and suggestions for this idea. Would you like to see it built? What would we need to do to create an MVP?



Unlikely you would be able to achieve the type of discussion you’re hoping for. I’ve seen message boards which try to do similar things, and nothing really changes. Nobody changes their mind, people get frustrated, and it devolves from there.

And typically it becomes one-sided. Whichever side latches on the most winds up driving the other side away, then pounces when a random newbie shows up, overwhelming them.

Most people don’t want to hear both sides, anyway. They want to find things which support their preconceived ideas. It’s a difficult thing for most people to allow for the possibility they might be wrong. Particularly if the topic is important to them.

Not saying it CAN’T be done. Would just be extremely difficult, imo.


Yes, individual biases would probably short-circuit this kind of much-needed service… unless it was AI-based… maybe.


And what biases would an AI impose?

The problem with “fact checkers” as they manifest today is that people think that there is such a thing as being unbiased. Truth is ALWAYS contextual, and we all bring our own view of the world, our context, to the table, including what we are trying to do within that context.

Seeking absolute truth is worth doing, but that is a religious/spiritual/philosophical pursuit, and can’t be enforced per se.

The reason that something like Sentinel, proposed here, is workable is that individuals are somewhat forced to realize that it is their choice as to what and who to believe. On SAFE, Sentinel would leave all avenues open to one’s own discernment, and obviously so.

The promise of the internet was never that it contained only truth. It’s that, with unrestricted communication, truth is there to be determined, to the degree one chooses to seek and be responsible for on what or whom they bestow their belief.

The current internet takes historic steps in that direction, though, as we can agree, centralized control is still too easy for the “powerful”. SAFE will boost the experience by orders of magnitude, and enable something like Sentinel to be very useful, for those who can be responsible for the fact that, ultimately, we each choose what to believe, right or wrong. That’s the next step in evolution, by my lights.


Ideally, the aim would not be for absolute truth, but something like:

  1. The probability of so-and-so being primarily true is --%
  2. The probability of so-and-so being partially true is --%

That might be the best we can accomplish and much more suitable for AI (with its own implicit biases) than actual humans doing the real-time estimating.


I love the idea. I have daydreamed about it before, and sketched out some rough ideas, and I think it is definitely worth a shot. When I’ve brought it up it’s been met with (well-meaning) philosophy-101 type responses about truth, which in itself could be telling. People do have strong feelings about it, at least.

Two points: firstly, making the idea both easily reproducible and heavily community-based is almost more important than the implementation details themselves.

There are plenty of interesting variations here to try in the implementation; it’ll depend on what sort of stuff we can do in the end with linked data, I reckon. The permanent aspect of public data should already give lots of new possibilities. There’s hopefully micro-tipping, and lots of other stuff.

Given the difficulty of the problem, though, it’s essential that plenty of variations on the idea are encouraged, and that it captures the imagination of Safe Netizens. For this reason, I wouldn’t be sold on A.I. playing a major role unless it were absolutely necessary… or at least, anything to the detriment of this community-involvement aspect should be adopted cautiously. There’s no point in having A.I. truths that no one reads, surely.

A second, simple enough idea would be to start small and solid, accepting only very, very high standards of ‘evidence’, perhaps with a vetting/voting/review process of some kind. Clever methods for utilising the power of the community could be employed here. Again, it’s been said, but getting these implementation details right, and avoiding spam, will be very, very hard problems, I think.
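Just to illustrate what I mean by a vetting rule, here is one possible shape for it: a submission is only accepted once enough reviewers have weighed in and approvals form a clear supermajority. The thresholds are arbitrary placeholders, not a worked design, and a real system would also need to weigh reviewer reputation:

```python
# Hypothetical sketch of one vetting rule: accept a submission only when a
# review quorum is met AND approvals form a supermajority. Thresholds are
# arbitrary illustrations, not a recommendation.
def is_accepted(approvals: int, rejections: int,
                quorum: int = 5, ratio: float = 0.8) -> bool:
    total = approvals + rejections
    if total < quorum:
        return False            # not enough reviews yet to decide
    return approvals / total >= ratio
```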

So in summary, yes, should be tried. It needs community, maybe some micro-payments to grease the wheel, maybe a very small area of focus to begin with to avoid grey(er) areas, and ideally would be simple and copyable.


I love the concept, and while I think it’ll be tricky to get working well, it has to be worth a go.

In some cases, just making it easy for people to find out whether there’s decent research / evidence available to back up a statement will help people figure out whether it’s sound, or is just someone with a strong view overplaying their hand.

I expect there would be different ways of doing it, and different methods may work better for different kinds of truth claim, e.g. scientific vs historical vs logical etc.

I’d also love it if a topic of debate could be expanded and visualised with key positions shown across a spectrum, with the best arguments for each position up voted by proponents so the most commonly held views around a debate can easily be found, compared, and understood.

I look forward to seeing people take a stab at this kind of thing despite the challenges. The Safe Network would indeed be a great place to host it.