SAFE Network Dev Update - January 30, 2020

One paper I came across while trying to understand CRDTs, after they were first mentioned here last week, is this one, where CRDTs were leveraged to improve Hyperledger Fabric’s latency and error handling. Others might find it helpful.


And this here has a lot of different links. Could be updated with newer stuff as well:


Awesome update on the update. I feel I almost understand large chunks of it now :slight_smile:
It’s Friday night, so I WILL check out the links from everyone, but not tonight.


A key thing to note here for us is clearing up consensus, authority and consistency. Where are we looking right now?

So when you read the CRDT stuff you see a bunch of happy honest nodes, as @urrtag points out (correctly and very importantly). So I swap out consensus for authority.

Then take our data types (mutating data). They have a smashing thing associated: a public key, which can be a user’s PublicKey or a section::PublicKey (we get those from ABFT/PARSEC etc., for now I would say). Those keys give us authority if they signed the request.

Now a change is requested to some data, probably coming in very fast and from all over the place. We want that, we want to handle that.

So the flow is - the authority (client / section) sends the operation they want to the receiving section (possibly directly). The receiving Elders then check 2 things:

  1. Does the operation have a valid causal predecessor?
  2. Does the operation have authority (is it signed)?

If both checks pass, each elder applies the update, all elders at different times (no PARSEC). This happens fast, as fast as the network can deliver. At some point (say on read) the authority sends a merge to all Elders (getting them up to date).
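A rough sketch of those two elder-side checks might look like this. All the type and field names here are hypothetical, invented for illustration, not taken from the actual SAFE codebase, and the signature check is a stand-in boolean rather than real cryptography:

```rust
use std::collections::HashSet;

/// An operation on a mutable data type, tagged with its causal context.
/// (Illustrative only; not the real SAFE types.)
struct Op {
    id: u64,
    /// Ids of operations this op causally depends on.
    deps: Vec<u64>,
    /// Stand-in for a real signature check against the client or
    /// section public key (BLS in the actual design).
    signed: bool,
}

struct Elder {
    /// Operation ids this elder has already applied.
    applied: HashSet<u64>,
}

impl Elder {
    /// The two checks from the post: valid causal predecessor + authority.
    fn accept(&mut self, op: &Op) -> bool {
        let causally_ready = op.deps.iter().all(|d| self.applied.contains(d));
        if causally_ready && op.signed {
            self.applied.insert(op.id);
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut elder = Elder { applied: HashSet::new() };
    let a = Op { id: 1, deps: vec![], signed: true };
    let b = Op { id: 2, deps: vec![1], signed: true };
    assert!(elder.accept(&a)); // root op, signed: accepted
    assert!(elder.accept(&b)); // dep 1 already applied: accepted
    let bad = Op { id: 3, deps: vec![99], signed: true };
    assert!(!elder.accept(&bad)); // missing causal predecessor: not applied
}
```

Note that neither check requires elders to coordinate with each other, which is why each elder can apply updates at its own pace.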

Any elder with any valid state can only merge correct stuff; it cannot merge incorrect stuff, that simply does not work. So all the bad guys can do is send nothing. We can detect that, and if they send rubbish we detect it even more easily. So bad guys are pushed out, but the network runs at network speed (no waiting on the world halting for us).

This is how we can mix consensus, authority and correctness into fast changing data.

There is much more, but with this, when we design data and ops we check them for CRDT properties and then know they work. Those checks give us design considerations that are formal. So it’s much faster to know we are correct, and faster to get updates to SAFE for everyone in a provably correct manner.


Some finality on older changes could be reached though, right? Maybe once all elders agree on a merged subset of the data?



Would it be accurate to understand SAFE’s basic consensus mechanism as correctness, i.e. a node providing the expected data for a read request, building credibility that over time sees it elected to authority in determining correctness and electing others to authority?



Assuming absolute finality and not relative or probabilistic, how do you envision those not being contradictory? Specifically, wouldn’t current be a product of absolute finality?


A data type with finality would be current if it was in a final state, but mostly there is no final state with mutable data. i.e. a data type that has several defined states could be seen in its final state and that would be current, but nothing else. An Immutable Data Put is a single and final state in those terms.

Yes, I think you can say that, but this is for nodes on the network. Client actions have the authority they need when they sign a mutation event. Elders gather authority via message passing to aggregate a BLS section key (each has its own key share), and that is reaching consensus/providing authority on events, but not providing order to those events. The data types being CRDTs will provide the order as part of the type itself.



Could that be modeled as an enum in Rust?

And for my clarity please, is it true that the statement I quoted is referring to immutable data?

Does all Elder authority originate from client actions, or is authority also generated by the network?

Thank you for being here and answering questions.


Elder authority is used for network infrastructure (so adding nodes/promoting etc.) and also in some client actions. The latter depends on design, so say a client pays to do something. The elders of the client could take payment and then authorise the action by applying their signature as well as the client’s.

It could be, but it’s more a state machine where the state transitions are one-way only.

enum Thing {
    Start,
    Second,
    Third,
    Final,
}

So that enum would only allow state changes or actions that increment the state to a final state, if that makes sense?
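To illustrate that one-way property (the names here are invented for the example, not real SAFE types), the transitions could be encoded so that the only possible move is forward, with the final state mapping to itself:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Thing {
    Start,
    Second,
    Third,
    Final,
}

impl Thing {
    /// Advance one step; transitions only ever move forward.
    fn next(self) -> Thing {
        match self {
            Thing::Start => Thing::Second,
            Thing::Second => Thing::Third,
            // Once final, the state can never change again.
            Thing::Third | Thing::Final => Thing::Final,
        }
    }

    fn is_final(self) -> bool {
        self == Thing::Final
    }
}

fn main() {
    let mut state = Thing::Start;
    while !state.is_final() {
        state = state.next();
    }
    assert_eq!(state, Thing::Final);
}
```

Because `next()` consumes the current state and only returns the same or a later variant, no sequence of calls can ever move the state backwards, which is exactly the one-way guarantee described.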

The other/general finality thing in distributed networks is kinda weird. It assumes a finality to a decision/action in many cases, and can mean getting all Elders to agree to sign something and saying the finality is that they will agree, and by X time/period.

So we have finality of state transitions of “stuff” but also the finality of a network action or transaction (i.e. blockchain finality after 6 blocks). With the correct data type the finality you get is weird, so you can say you can see a transaction (you spent cash) when you see your balance has spent that cash (so that transaction was finalised), but we also recognise your account/balance is never final unless the world stops :slight_smile:


Thank you @dirvine

@drehb would your definition of finality be compatible with that? I hope also that I’m not stepping on your question.


This part is where digital bearer certs are really interesting. Rather than a client waiting on Elders authorising an action payment, the client can bypass them and send the payment with the action. @oetyng and @danda have some really nice findings in DBC so far. It would be a significant speed-up. More than that, though, it opens up the design space to give us a ton of huge opportunities, and I think it makes SAFE easier to work offline for a while if needed, or lets clients go into super-sleuth mode or even use different networks and so on ;-)


That’s what I was thinking.
I was wondering how this eventual consistency would relate to tokens and payments.
In Bitcoin we don’t actually have finality, just probabilistic finality, i.e. the deeper into the chain the transaction goes, the more improbable (economically) it is that the transaction would be reversed. (A wipeout attack is one in which a large number of blocks are ‘overwritten’ by a newly published chain with greater work.)
In Ethereum 2 I think there is supposed to be real finality. Blockchains with checkpoints I think would also be final up to the checkpoint.


Another huge change. Wanting to trade consistency and finality for availability. I recognise that, for a system like Safe, it seems the right choice, but I wonder how it could affect the launch time.

In the end it will be necessary to decide the basics of Safe and stick to that plan at once. Otherwise it will never come to a launch.

Good video about this subject


I get your concern, I would have the same concern, but it’s not as clear cut as a change here actually. I will try and explain.

We have strong consistency in routing wrt Elders/age etc., and that’s what we go with. I have not been fully comfortable with it at all (as our team has known for many years now), but it can be forced to work, albeit slowly; however, it’s fast enough for now. Engineers can be very scared of CRDTs, but then you realise they have never really read up on them. The team now is different: they do look at the state of the art in decentralised networks, and we are making use of that now. (A nice thing we found this week is our internal Slack direct messages dropped from over 40% to 10%, so now we are much more a team that shares openly.)

Our data types have looser design considerations, and that leads to falling back loosely into CP territory. This is more concerning, as we really need data types that can be concurrently updated. Also I want to make sure offline clients can keep going. More so, we want to do this in a hostile network that suffers from significant numbers of nodes coming and going.

So this should not slow launch at all, but make things more concrete by following the requirements of CRDT types. What we are talking about here is being able to enforce a formal set of rules that we can tweak our data types to follow. Many already follow some of these, but are loose around the edges. This cleans up those edges and allows the Engineers to know something will work if it obeys the CRDT rules.

I hope that makes sense, not a design change/swap out but more a finalising things with rules.

Who knows what the future brings; data chains etc. were based on this, but they had too much resistance in the team at that stage. Life and tech have moved on to show clearly provable & mergeable types. Therefore, as we finalise the data types, as you can see happening now, we will try to enforce these patterns; otherwise we shove everything through PARSEC for the short term.


I suggested a global counter a while ago that may be somewhat related:

It was about a counter that’s maintained by each node:

  • Its current value is sent in each network message.
  • It’s incremented once almost all neighbors have caught up.

I expect (though can’t prove) that these simple rules would result in a global point of reference whose value would be within a small margin across the entire network.

I’m not sure if this is really connected to the question at hand but I thought it may be useful to bring it up again in case it can help. It’s also simple enough to run some test on if somebody is interested.
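Those two rules are simple enough to sketch in a toy simulation. The model below is my own assumption, not the original code: a synchronous round model on a fixed ring topology, where in each round every node adopts the maximum value it hears from its neighbours and increments once all neighbours have caught up to it:

```rust
/// One synchronous round of the counter rules:
/// adopt the max heard from neighbours, then increment if all
/// neighbours have caught up to this node's value.
fn step(values: &mut Vec<u64>, neighbours: &Vec<Vec<usize>>) {
    // Values advertised at the start of the round.
    let snapshot = values.clone();
    for (i, ns) in neighbours.iter().enumerate() {
        // Adopt the highest value heard from any neighbour.
        let heard = ns.iter().map(|&n| snapshot[n]).max().unwrap_or(0);
        if heard > values[i] {
            values[i] = heard;
        }
        // Increment once every neighbour has caught up.
        if ns.iter().all(|&n| snapshot[n] >= snapshot[i]) {
            values[i] += 1;
        }
    }
}

fn main() {
    // A ring of 5 nodes, each connected to its two neighbours.
    let neighbours: Vec<Vec<usize>> = (0..5)
        .map(|i| vec![(i + 4) % 5, (i + 1) % 5])
        .collect();
    let mut values = vec![0u64; 5];
    for _ in 0..100 {
        step(&mut values, &neighbours);
    }
    let min = *values.iter().min().unwrap();
    let max = *values.iter().max().unwrap();
    // The counters advance together and stay within a small spread.
    assert!(max - min <= 2);
    assert!(min > 0);
}
```

In this lockstep model the values advance in perfect unison; with realistic message delays and churn the interesting measurement would be the spread between the fastest and slowest node, which is the "small margin" the post conjectures.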


Hi @JoeSmithJr,

I would be very interested to see such tests, so if I’m reading this right and you are offering to do it, then it would be fantastic to see. Much appreciated :slight_smile:

I personally consider a global counter for some sort of “time” reference to be useful to clients, even though the network core itself is not in any way (and preferably should not be) aware of or dependent on such.

The specific feature you suggest is not exactly connected to the question. But the implementation of it very much is, and I think it would be a perfect case for using it.


I already did do some testing back then; the code is available, as you can see from my original post. It performed as expected, so maybe you want to look more into it if it sounds like a good idea. I abandoned it at the time because I was told it was unnecessary with PARSEC, but now it seems you’re looking into simpler and faster forms of eventual consistency as well, so I thought maybe I could revive the idea.


OK, yeah now that you mention it I remember it actually. Great, I will have a look again.

For example, I would see this as a CRDT for converging on how many seconds have passed since genesis (if we drop 1970, and define SAFENetwork launch as the new point 0 :slight_smile: )

So, if you’re in a cave, or out in space, or some other kind of disposition/inconvenience, and just want to know what the current approximate time is (you yourself adjusting for any latency you have), then you could just query that counter.

Just as with data, it will never be wrong; it’s correct for some point in time not too far away, and you’d have to figure out yourself the confidence interval levels etc., and to what degree you can have your tools rely on it.
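For what it’s worth, the classic state-based CRDT that such a converging counter resembles is the grow-only counter, where each node tracks its own contribution and merge takes the per-node maximum. A minimal sketch (names illustrative, not from any SAFE crate):

```rust
use std::collections::BTreeMap;

/// Grow-only counter: a classic state-based CRDT.
#[derive(Clone, Default)]
struct GCounter {
    /// Per-node contribution; merge takes the per-node max.
    counts: BTreeMap<String, u64>,
}

impl GCounter {
    /// A node increments only its own entry.
    fn incr(&mut self, node: &str) {
        *self.counts.entry(node.to_string()).or_insert(0) += 1;
    }

    /// Commutative, associative, idempotent merge: per-node max.
    fn merge(&mut self, other: &GCounter) {
        for (node, &v) in &other.counts {
            let e = self.counts.entry(node.clone()).or_insert(0);
            if v > *e {
                *e = v;
            }
        }
    }

    /// The counter's value is the sum of all contributions.
    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
}

fn main() {
    let mut a = GCounter::default();
    let mut b = GCounter::default();
    a.incr("node-a");
    a.incr("node-a");
    b.incr("node-b");
    // Merging in either order converges to the same value.
    let mut m1 = a.clone();
    m1.merge(&b);
    let mut m2 = b.clone();
    m2.merge(&a);
    assert_eq!(m1.value(), 3);
    assert_eq!(m1.value(), m2.value());
}
```

Because merge is a per-node max, replicas can exchange state in any order, any number of times, and still converge, which is the "never wrong, just possibly slightly behind" property described above.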


A bit like that, though I wouldn’t expect a stable pace of ticks, definitely not on longer timescales. However, it could serve as an oracle-free timestamp for signatures as long as all we care about is establishing age and order with a coarseness and probability that the estimated frequency and global variance allows for.