An additional layer of security for mutable data

A couple of days ago, I asked in the SAFE forum what happens if a Section is taken over, and the responses mainly indicated that this would be a disaster for all data in the section. However, it doesn’t have to be that way.

I propose a layer that you can add on top of the SAFE Network to make it even SAFER :slight_smile: I wrote in the subject that this is for crypto-currency, but it can really apply to any mutable data.

First, a few definitions:

  1. A Merkle DAG (Directed Acyclic Graph) is exactly what it sounds like. It is used in Git and other places to verify the integrity of revisions, with data pointing backwards in time. Note that the existence of a hash only proves that someone generated the previous hashes earlier. You can verify membership in a DAG simply by having a Merkle Branch (a path in the Merkle DAG from the root, including all immediate parents at each step); see the sketch after these definitions.

  2. A Signed Merkle DAG has, among the parents of a node, a cryptographic signature of the other parents (i.e. signed data) using public key cryptography. This is proof (vulnerable to quantum computer attacks in the future) that a given participant signed off on membership in the Merkle DAG, presumably after verifying some rules.

  3. A “Merkle Head” is a hash in the DAG that doesn’t have any immediate children.

  4. A “Merkle Stream” is a Merkle DAG which has only one “Head”. (This is the terminology we use at Qbix. In effect, it is the dual of a Tree)
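
To make these definitions concrete, here is a minimal sketch (not SAFE or Qbix code; the names `node_id` and `verify_branch`, and the SHA-256/JSON encoding, are assumptions made purely for illustration) of how a node’s hash commits to its parents and how a Merkle Branch can be checked:

```python
import hashlib
import json

def node_id(parent_ids, payload):
    """A node's id commits to its payload and to all of its parents' ids."""
    blob = json.dumps({"parents": sorted(parent_ids), "payload": payload},
                      sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify_branch(branch):
    """Check a Merkle Branch: a list of (parent_ids, payload) steps from the
    root to the head, where every step must name the previous step's id
    among its parents."""
    prev_id = None
    for parent_ids, payload in branch:
        if prev_id is not None and prev_id not in parent_ids:
            return False
        prev_id = node_id(parent_ids, payload)
    return True

# A tiny Merkle Stream for one coin: genesis -> transfer 1 -> transfer 2 (head).
genesis = ([], {"coin": "c1", "owner": "A"})
t1 = ([node_id(*genesis)], {"coin": "c1", "owner": "B"})
t2 = ([node_id(*t1)], {"coin": "c1", "owner": "C"})
assert verify_branch([genesis, t1, t2])
print("head:", node_id(*t2))
```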

Now, every Merkle Stream has a history going all the way back to its first state. Let’s represent coins as Merkle Streams, and let’s put the responsibility on the party holding a coin to store the entire history of the coin, up until the point it was transferred to them.

  5. “Notaries” are participants in the network which approve transactions.

Third parties – in your case, Sections in the SAFE Network running a consensus algorithm – approve transactions transferring a coin from A to B. More generally, this could be any type of stream, and approval would consist of validating certain rules for that type of stream (such as moves in a chess game).

Once the consensus process completes and a transaction is approved, it is cryptographically signed as in definition 2, and the Section now holds the latest information. The Section should store at least a little bit of history for this Merkle Stream, so that others can update their copy when they come online. Who are these others?
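
(Before answering that, here is a rough sketch of the approval-and-signing step just described, assuming a single Ed25519 key as a stand-in for the Section’s group signature and using the third-party `cryptography` package; the record fields are illustrative, not actual SAFE data types.)

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def record_hash(record: dict) -> bytes:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).digest()

# Stand-in for the Section's signing key (hypothetical single key).
section_key = Ed25519PrivateKey.generate()
section_pub = section_key.public_key()

# An approved transfer extending the coin's current head (illustrative fields).
transfer = {
    "stream": "coin-c1",
    "parent": "hash-of-current-head",
    "from": "A",
    "to": "B",
}
signature = section_key.sign(record_hash(transfer))

def approved(record: dict, sig: bytes, pub) -> bool:
    """Anyone holding the Section's public key can check the approval later."""
    try:
        pub.verify(sig, record_hash(record))
        return True
    except InvalidSignature:
        return False

print(approved(transfer, signature, section_pub))  # True
```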

  6. “Watchers” are nodes which watch the transactions approved and signed by the Notaries as they are appended to a given Stream. They likewise know the stream type and its rules, and make sure that each transaction didn’t violate any of them.

In the case of a crypto-currency, one of the rules is that it can’t be double-spent, so the same Section MUST NOT suddenly switch to publishing a different fork than before. In the case of more complex Stream types, there may be more rules than this.

Watchers can be chosen by “close distance” from the transaction’s id. To make this group of watchers unpredictable by any one party in advance (so they can’t all be bribed), the transaction can contain random nonces contributed by the various parties in the transaction.
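
A minimal sketch of that selection, assuming XOR distance (as in Kademlia-style addressing) stands in for the network’s notion of “close”; the function names are illustrative:

```python
import hashlib
import os

def transaction_id(payload: bytes, nonces: list) -> bytes:
    """The id commits to a nonce from every party, so no single party can
    predict (and bribe) the watchers that will end up close to it."""
    h = hashlib.sha256(payload)
    for nonce in nonces:
        h.update(nonce)
    return h.digest()

def xor_distance(a: bytes, b: bytes) -> int:
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def closest_watchers(tx_id: bytes, watcher_ids: list, k: int = 3) -> list:
    return sorted(watcher_ids, key=lambda w: xor_distance(tx_id, w))[:k]

# Sender and recipient each contribute a random nonce.
tx = transaction_id(b"coin-c1: A -> B", [os.urandom(16), os.urandom(16)])
watchers = [hashlib.sha256(f"watcher-{i}".encode()).digest() for i in range(10)]
print([w.hex()[:8] for w in closest_watchers(tx, watchers)])
```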

  7. A “Claim of Violation” is a claim made by a Watcher that they have cryptographic proof of a certain violation by a Notary. Perhaps a transaction was approved by a Notary but the Watcher claims it violated a rule. Or perhaps the Notary suddenly switched to publishing another fork of the Merkle Stream, e.g. after a payment was accepted.

Watchers are supposed to gossip a Claim of Violation, verify it, and store it (as long as they are honest). Ultimately, the Recipient of a Coin (or anyone loading the latest head of a Merkle Stream) is responsible for waiting and checking whether any Watchers report a Claim of Violation about this transaction. The costs of this should be covered by the Recipient of the Coin.
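
As a sketch, a fork claim could simply be two records signed by the same Section that extend the same parent of the same stream yet disagree; anyone can check it with nothing more than the Section’s public key. (Again, an Ed25519 key stands in for the Section’s actual signature scheme, and the record fields are illustrative.)

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def record_hash(record: dict) -> bytes:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).digest()

def signed_by(pub, sig: bytes, record: dict) -> bool:
    try:
        pub.verify(sig, record_hash(record))
        return True
    except InvalidSignature:
        return False

def is_fork_claim(claim, section_pub) -> bool:
    """Two records, both signed by the Section, that extend the same parent of
    the same stream yet disagree: cryptographic proof of a fork."""
    (rec_a, sig_a), (rec_b, sig_b) = claim
    return (signed_by(section_pub, sig_a, rec_a)
            and signed_by(section_pub, sig_b, rec_b)
            and rec_a["stream"] == rec_b["stream"]
            and rec_a["parent"] == rec_b["parent"]
            and rec_a != rec_b)

# A compromised Section signs two conflicting successors of the same head.
key = Ed25519PrivateKey.generate()
a = {"stream": "coin-c1", "parent": "head-7", "to": "B"}
b = {"stream": "coin-c1", "parent": "head-7", "to": "C"}
claim = [(a, key.sign(record_hash(a))), (b, key.sign(record_hash(b)))]
print(is_fork_claim(claim, key.public_key()))  # True: a provable double-spend
```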

The Recipient checks the Claim of Violation against the Signed Merkle DAG cryptographically signed by the Section, and may agree that the Claim is legitimate. They then submit their version of the history to the Network, which may choose to have another Section become the custodian of the data. It may be that several Recipients will have competing histories that diverge at some point.

Meanwhile, the “Watchers” in the Network are responsible for gossiping the Claims of Violation, and for helping to reassign all the data from the provably compromised Section to a new Section.

All participants in the network who come across these Claims are supposed to check them, and if the claims are true, they start re-routing all requests to the new Section, not the compromised one.

  8. A “Permissionless Timestamp Network” is a network where participants form a binary Merkle Tree (at most two parents per child) by timestamping their neighbors in order, whenever their head hashes have changed, and propagating hash chains to prove timestamps relative to a given stream. This allows anyone to determine which events happened first, relative to a given Stream/Watcher.

  9. Given several Merkle DAGs that share a common history, they can be sorted relative to a Watcher as X < Y whenever X diverges from Y at a transaction X1 vs Y1, and X1 happened before Y1 relative to that Watcher. Thus we can find the “Earliest History”, relative to a Watcher, of a set of Merkle DAGs sharing a common history; a sketch follows below.
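
Here is that sketch, simplified to a single Watcher keeping a local hash chain of observed heads instead of the full binary timestamp tree; all names are illustrative assumptions:

```python
import hashlib

class WatcherClock:
    """A single Watcher's view: every head it observes is chained onto the
    previous entry, giving a tamper-evident relative order of observations."""
    def __init__(self):
        self.chain = []      # list of (chain_hash, observed_head)
        self.position = {}   # observed_head -> index of first observation

    def observe(self, head: str):
        prev = self.chain[-1][0] if self.chain else ""
        link = hashlib.sha256((prev + head).encode()).hexdigest()
        self.position.setdefault(head, len(self.chain))
        self.chain.append((link, head))

def earliest_history(histories, watcher: WatcherClock):
    """Pick the history whose first divergent head the Watcher saw earliest.
    Each history is a list of head hashes sharing a common prefix."""
    def first_divergence(history):
        others = set().union(*[set(h) for h in histories if h is not history])
        for head in history:
            if head not in others:
                return watcher.position.get(head, float("inf"))
        return float("inf")
    return min(histories, key=first_divergence)

w = WatcherClock()
for head in ["h0", "h1", "h2-honest", "h2-fork"]:
    w.observe(head)
print(earliest_history([["h0", "h1", "h2-fork"], ["h0", "h1", "h2-honest"]], w))
```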

Recipients still holding a Coin are responsible for listening for Claims of Violation by Watchers, and for verifying these claims. False claims can be submitted to the network to penalize the Watcher and downgrade its reputation. Claims of Violation that verify as true are submitted to the new Section along with the Recipient’s history of the coin.

The earliest history is stored by the new Section.

The old Section is either decommissioned or deranked by the Network.

1 Like

Really, if people hold their own files with histories, the source of truth is the collective holdings of these people. The network is basically holding a “cache” of this information. If even one party can prove that the network has been corrupted, then a new section should be set up. At that point, all the end-users holding their histories submit them to the new section, and the new section decides which history is the one to take, depending on the type of thing (for a coin, probably the first spending should take precedence). Of course, it could be done by some other rule, but the point is that we don’t rely on voting for consensus, only on proofs. Like in mathematics.

I can probably code a proof of concept with single computers instead of sections, to illustrate the above. But it would take weeks.

Did you read the Datachain RFC? Because many of your ideas are already defined in their design.

2 Likes

I didn’t read all of it yet. The Watchers are different from Sections in that they:

  • Do not operate by consensus
  • Do not store the actual history, just the head hash and maybe tail hash
  • Gossip and store violations
  • Are able to cause the network to reroute stuff to new sections

If you do already have this, then they must be consulted every time someone needs to check the integrity of the data. It’s an optional addition to Sections, but it helps verify the data and initiate a process to re-supply the entire history of a mutable file to another Section; a minimal sketch of the per-stream state a Watcher would keep is below.
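
(For concreteness only; this is an assumption about what such state might look like, not part of any existing design.)

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WatcherStreamState:
    """Per-stream state a Watcher keeps: no history, just the endpoints and
    any verified Claims of Violation it is gossiping (names are illustrative)."""
    head_hash: str                       # latest Section-signed head seen
    tail_hash: Optional[str] = None      # optionally, the stream's first hash
    violations: List[dict] = field(default_factory=list)
```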

1 Like

You know your sh*t !!!

1 Like

So now we need another layer to make sure the watchers are honest and to verify them. The “who watches the watchers?” recursive problem.

Often the recipient will be an APP or person who is not online at the time of the transfer and may not be for days.

If the watcher calls foul, then what happens? The recipient makes a “claim”, and if the network agrees then the transaction is reassigned? This reminds me of the ETH rewrite of history. What if the claim is false because of a bad watcher? Then the coin or data is reassigned anyway.

This would then mean we need watchers to watch the watchers, and then who watches them?

Rewriting history is a bad thing, and whenever we can get away from that we can have real security.

Also, what is to stop a data thief from replacing the whole DAG with a new set of sigs and transactions? We need the data to be secured in the first place before adding a new layer. EDIT: So then the new layer is just a security blanket like a child has; it does nothing useful but comfort the user.

Datachains is an implementation that is closely tied into the operations of the section; it requires no watchers, no history rewrites, and no involvement by the recipient, and it essentially provides the same functionality at the core level rather than as a layer on top.

My overview, and sorry it’s negative since you put a lot of good work into this, is that it’s trying to merge other systems into a system that is already handling this.

“Complexity is the enemy of security” Quote of cryptographers

4 Likes

Yep that’s why I called them Watchers, because I like to ask “Who watches the watchers?” The answer is: each other. This layer is supposed to only help, not hurt. If it hurts somehow, then that’s not good.

Also what is to stop a data thief from replacing the whole DAG with a new set of sigs and transactions.

The close consensus can replace the whole DAG with a new DAG, but the watchers will catch them. What stops anyone else from doing it is that all the members of the close consensus sign every one of its results. So the only way someone could obtain a fake signed consensus result is either by compromising every one of the close consensus members (but then the watchers would flag this) or by using quantum computers to fake all those signatures (not feasible, and when quantum computers appear there will be new types of PKI signatures, which are already being vetted by NIST).
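
So a client accepting a consensus result would, in this design, require a valid signature from every member of the close group. A rough sketch, with individual Ed25519 keys standing in for whatever signature scheme the network actually uses:

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def record_hash(record: dict) -> bytes:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).digest()

def verify_group_result(record: dict, signatures, member_pubs) -> bool:
    """Accept a consensus result only if every close-group member signed it."""
    if len(signatures) != len(member_pubs):
        return False
    for pub, sig in zip(member_pubs, signatures):
        try:
            pub.verify(sig, record_hash(record))
        except InvalidSignature:
            return False
    return True

members = [Ed25519PrivateKey.generate() for _ in range(4)]
result = {"stream": "coin-c1", "parent": "head-7", "to": "B"}
sigs = [m.sign(record_hash(result)) for m in members]
print(verify_group_result(result, sigs, [m.public_key() for m in members]))  # True
```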

We need the data to be secured in the first place before a new layer. EDIT: So then the new layer is just a security blanket like a child has, it does nothing useful but comfort the user.

If the watcher calls foul then what happens. The recipient makes a “claim” and if the network agrees then transactions is reassigned? This reminds me of the ETH rewrite of history. What if the claim is false based on a bad watcher, then the coin or data is reassigned.

The watchers are an additional layer, which doesn’t store the actual data – only the head hashes – and doesn’t operate by consensus. They act strictly as a way to detect and report shenanigans, and supply cryptographic proof that it happened. There is no way they can prove that a close-consensus misbehaved, unless it actually did (not even with a quantum computer, as far as we know for now, because of the cryptographic hashes). All they can do is maybe create a false claim. These claims are cheap to check, and in order to defeat this combined system you’d need to not only take over a section but also the majority of watchers.

Yes, the watchers would have the ability to reroute requests to another section, but ONLY if the clients / routers receiving the requests verify the claim and agree it’s actually proving the sections misbehaved. A watcher that just fools around with even one frivolous claim will be quickly ignored.

1 Like

I want to also just remark about this. There is the “real” history that happened in reality, and then there is a record of that history. If the record gets corrupted, AND people can prove it to me, then I would trust the signed cryptographic proof generated at the time the event happened, over some later consensus. In real life, wouldn’t you?

If 95% of people today tell you, without proof, what Winston Churchill said in a speech, but you have the actual video which you are sure wasn’t tampered with, wouldn’t you trust the video?

So the same way, the danger is that the consensus can “forget” that A paid B, and instead record that A paid C, but in reality, it once reported that A paid B, and B relied on this in order to release some goods. Now, B’s tokens are worthless because the section colluded with A and C against B.

However, the watchers would catch this switcheroo by the section, and together with B they launch a campaign to make things right. This is what happens in real life also, when the bank ledger (the third party in a triple entry accounting system) gets corrupted by some admin, but you have a receipt cryptographically signed by the bank. Someone detects a discrepancy, and you show up with the receipt, and they figure out who got paid first. That’s what I’m talking about.

The main question is: is having watchers strictly better or can it sometimes make things worse than without it?

But why replace datachains, which has the oversight mechanisms built in, with a layer above that then also requires other watchers and people to control things?

Also, the method destroys the autonomous network, because you now have to have people reviewing the processes and, if something is wrong, making an application to the network to fix it. And this replaces a mechanism that does this at the core level without requiring human intervention.

IIRC MaidSafe had a layer like that early on called “sentinel”. They scrapped it.

1 Like

Can you give me more info on this? What did the sentinel layer do exactly? Why was it scrapped?

I never really took the time to look into it all that much because I thought I saw somewhere on the forum that it is no longer applicable to the current network design, considering data chains and all. It may or may not be similar to what you were thinking.