Safe Network Dev Update - October 15, 2020


Here are some of the main things to highlight since the last dev update:

  • We’ve started running multiple internal testnets which bring together all the different components of the network. This is helping us track down issues, which we are working through.
  • The work to bring authd, the CLI and the APIs in line with the recent client & node changes continues apace.
  • We are experimenting with the metrics of a dynamic StoreCost value, making the cost of writing data to the network take multiple factors into account.
  • We are homing in on a novel use of a CRDT in a Byzantine setting on a permissionless network - read the Routing section for further details on this exciting development!

Safe Client, Nodes and qp2p

Safe Network Transfers Project Plan
Safe Client Project Plan
Safe Network Node Project Plan

The authd, CLI and API refactor continues. We now have basic communication working between authd and the CLI, with random keys being generated and passed between the two. The next step here is re-establishing the basic authenticator storage using our standard APIs (previously we had a bunch of specific APIs which added a lot of code complexity). We’re working towards this now, which also touches on the API’s use of various key types, making that more generic (so we can use either ed25519 or BLS keys, for example).

On the integration side, we’ve been pushing more fixes to sn_client and sn_node, continuing from last week. With section start-up more stable thanks to all the updates to routing, we’ve been able to put all the modules together and see them working as one. A wee bug in the Actor in sn_transfers was exposed during this internal testing; the fix will replace the aggregator we introduced last week as part of the AT2 transfers. Another major change on the way is the switch from a static write cost to a dynamic StoreCost value. This means the cost of writing data to the network will be dynamic, taking multiple factors into account such as the supply of and demand for storage on the network, the number of bytes to be written, and so on. Clients will query the network for this amount before writing a piece of data, check they have sufficient balance, and pay that amount. We are experimenting with the metrics for this to get started with a reasonable rate of change for the testnet.
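To make the dynamic StoreCost idea concrete, here is a minimal Rust sketch. The `StorageStats` struct, the `store_cost` function and the formula are all illustrative assumptions, not the actual implementation — as noted above, the real metrics are still being experimented with. The point is simply that the price of writing the same number of bytes rises as free capacity on the network shrinks:

```rust
/// Hypothetical network-wide storage statistics a section might track.
struct StorageStats {
    total_capacity: u64, // bytes offered by all storage nodes
    used: u64,           // bytes already stored
}

/// Toy dynamic StoreCost: price scales with bytes written and with
/// scarcity of remaining storage (supply/demand).
fn store_cost(stats: &StorageStats, bytes_to_write: u64) -> u64 {
    // Free capacity, floored at 1 to avoid division by zero.
    let free = stats.total_capacity.saturating_sub(stats.used).max(1);
    // Scarcity factor: the fuller the network, the higher the cost.
    let scarcity = stats.total_capacity / free; // >= 1
    bytes_to_write * scarcity
}

fn main() {
    let half_full = StorageStats { total_capacity: 1_000, used: 500 };
    let nearly_full = StorageStats { total_capacity: 1_000, used: 900 };
    // Writing the same 100 bytes costs more on a scarcer network.
    println!("half full:   {}", store_cost(&half_full, 100));   // 200
    println!("nearly full: {}", store_cost(&nearly_full, 100)); // 1000
    assert!(store_cost(&nearly_full, 100) > store_cost(&half_full, 100));
}
```

A client following the flow described above would call something like `store_cost` via a network query, compare the result against its balance, and attach the payment to the write.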

qp2p has been getting its share of love as well this week. The async UPnP API changes have been integrated with routing and sn_node. This helped us identify a bug in the UPnP / echo service, which wasn’t working as expected: the two were being run sequentially instead of in parallel, and since UPnP always fails on routers without UPnP support, start-up was always erroring out. We’ve got a fix in the pipeline for this. As mentioned above, we’ve started running internal testnets which include all of this week’s changes. This helps us put all the modules together in action and iron out any bugs that may have been missed in the unit / integration tests. If things go well (usual disclaimers apply!), we are on the right path to releasing a testnet for you guys to tinker with soon. :slightly_smiling_face:

We’re continuing a general transition to a messaging system, which we saw first in sn_node a while ago. One aim of this is to break up the code into smaller, more cohesive units of logic that are easier to reason about. Other aims are to improve testing and separation of concerns, and to lay the foundation for more natural parallel processing of internal tasks, depending less on shared mutable state. Similar steps are now being taken in sn_routing, and we are looking further into where and how we can use these techniques to our benefit. Additional iterations in sn_node to move ahead some less prioritized parts of this transition will commence after the coming testnet. Hopefully that work will also be helped by the current work in sn_routing to make the interfacing between these layers, as well as the general flow of the system, clearer to newcomers to the code base.


Project Plan

Continuing with the refactoring work, this week we first got some work merged to master that adds more unit tests. This brings back some of the unit functionality checks, and we intend to keep expanding these further. A small API change adding public functions for data signing and signature verification, as required by the upper layer, was also merged. To further improve the routing infrastructure, a change aiming to remove SharedState and introduce Node, Section and Network modules has been raised and is working its way through our code review process. This involves some renaming (such as Node to Routing) and breaking up a complex struct (SharedState into Section and Network), which improves the crate’s readability.

Regarding the CRDT side, we are homing in on a novel use of a CRDT in a Byzantine setting on a permissionless network. This requires the strict use and concrete definition of Authorities; for us, those are Node, Client, Section and Network. To ensure irrefutability and fraud prevention, we will digitally sign both the Dot<Actor, u64> and the Operation itself (Add(actor)). This allows us to converge under periods of insanely high churn, which is something we hope never happens, but it can.
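As a toy illustration of signing both the Dot<Actor, u64> and the operation, here is a hedged Rust sketch. The `toy_sign` function is a placeholder standing in for a real ed25519 or BLS signature, and every name here is an illustrative assumption rather than the actual sn_routing type; the point is that a tampered dot or op no longer verifies, which is what gives irrefutability:

```rust
type Actor = [u8; 4]; // stand-in for a node's public key

/// Causal dot: which actor produced the op, and its op counter.
struct Dot {
    actor: Actor,
    counter: u64,
}

enum Op {
    Add(Actor),
}

/// An operation bundled with its dot and a signature over both.
struct SignedOp {
    dot: Dot,
    op: Op,
    sig: u64, // placeholder for a real ed25519/BLS signature
}

// Fake "signing": a real implementation would sign the serialized
// (dot, op) bytes with the actor's private key.
fn toy_sign(dot: &Dot, op: &Op) -> u64 {
    let Op::Add(target) = op;
    dot.counter
        .wrapping_add(dot.actor.iter().map(|&b| b as u64).sum())
        .wrapping_add(target.iter().map(|&b| b as u64).sum())
}

fn verify(s: &SignedOp) -> bool {
    s.sig == toy_sign(&s.dot, &s.op)
}

fn main() {
    let alice: Actor = [1, 2, 3, 4];
    let bob: Actor = [5, 6, 7, 8];
    let dot = Dot { actor: alice, counter: 1 };
    let op = Op::Add(bob);
    let sig = toy_sign(&dot, &op);
    assert!(verify(&SignedOp { dot, op, sig }));

    // Tampering with the counter invalidates the signature.
    let forged = SignedOp {
        dot: Dot { actor: alice, counter: 2 },
        op: Op::Add(bob),
        sig,
    };
    assert!(!verify(&forged));
    println!("ok");
}
```

Because the signature covers the dot as well as the op, a Byzantine node cannot replay someone else’s operation under a different causal position.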

We also introduced a method akin to AT2, called DSB (Deterministically Secure Broadcast), which may allow us to do something quite different from what we do now: have the Adults run this DSB to get a majority to join the Elders’ CRDT. That offloads a lot of work from Routing. Additionally, we can see when a majority has agreed on a new set of Elders (Replicas) and set off a DKG round to update the SectionChain.

The code is very simple, but the thought process is pretty deep and property-based testing is essential to confirm invariants under a full range of potential inputs. The strength there though is we have CRDT invariants and all we need to do is prove them.

All in all this is beyond exciting and should give us an incredibly solid Routing layer.

A subset of the above means all data can be BFT, plus have NetworkAuthority applied to allow it to be republished (a user will pay for network authority).

Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!


First? Technically second IMO. Time to read!

Data republish/network restart and network resiliency already beginning to be addressed!? That is very promising indeed. The level of optimization and simplification of the code base will be really helpful to newcomers. The work being done with CRDTs seems like it will inspire many in that field, which also looks set to become an ever-growing and vibrant field now that people are starting to acknowledge its promise.

Rust, AT2 and CRDT’s seem to help put this project ahead of the curve yet again. Good job @maidsafe


So, toys soon?


ooh… it’s getting closer :smiley:

Something simple to understand and stable… even if that means it’s overpriced, like some 7-day average… I wonder if there’s any reason to spend too much time and effort navel-gazing over some exacting cost that might prove volatile and hard to predict.



Another solid and honest update

promises, promises :slight_smile: can’t wait

I will reread the new Routing news carefully another few dozen times and will soon convince myself I understand what is happening.

Thanks for all the hard work, folks as always


Thanks for another update and wish you great progress.


When you’re done, could you add a simple translation here? :wink:


Byzantine Fault Tolerant?

The code is very simple but the thought process is pretty deep… and I for one think it is beyond exciting that they are moving to “an incredibly solid Routing layer.”

Reads like another solid step forward… to a set of robust tools that we need.


Thanks so much to the entire Maidsafe team for all of your hard work! :racehorse:


Thx for another super update Maidsafe devs

A testnet soon is always welcome news, can’t wait to move fast and break things :crazy_face:


Maybe time to bump this thread and fill in some blanks Safe Network Glossary and Acronyms

Authorities =
Actor =
Replicas =
Deterministically Secure Broadcast =
NetworkAuthority =
SectionChain = SectionProofChain = a sequence of public keys, each signed by the previous one, that’s updated every time an Elder is changed and acts as proof that the Section is valid.
Dot<Actor, u64> = public key of the requesting node??
(Add(actor)) = ??
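To make the SectionChain / SectionProofChain definition in the glossary above concrete, here is a minimal sketch of validating such a chain: a sequence of keys where each new key carries a signature made by the previous one. The `toy_sign` stand-in replaces real BLS signatures, and all names are hypothetical:

```rust
type Key = u64; // stand-in for a BLS section public key

/// One link in the chain: a new section key plus a signature over it
/// made by the previous section key.
struct Link {
    key: Key,
    sig_by_prev: u64,
}

// Placeholder: real code would verify an actual BLS signature.
fn toy_sign(signer: Key, msg: Key) -> u64 {
    signer.wrapping_mul(31).wrapping_add(msg)
}

/// The chain is valid iff every link is signed by its predecessor,
/// starting from a trusted genesis key.
fn chain_is_valid(genesis: Key, links: &[Link]) -> bool {
    let mut prev = genesis;
    for link in links {
        if link.sig_by_prev != toy_sign(prev, link.key) {
            return false;
        }
        prev = link.key;
    }
    true
}

fn main() {
    let (genesis, k1, k2): (Key, Key, Key) = (7, 11, 13);
    let links = vec![
        Link { key: k1, sig_by_prev: toy_sign(genesis, k1) },
        Link { key: k2, sig_by_prev: toy_sign(k1, k2) },
    ];
    assert!(chain_is_valid(genesis, &links));

    // A link signed by the wrong key breaks the chain.
    let bad = vec![Link { key: k1, sig_by_prev: toy_sign(k2, k1) }];
    assert!(!chain_is_valid(genesis, &bad));
    println!("ok");
}
```

This is why each Elder change has to extend the chain: anyone holding the genesis key can walk the signatures forward and confirm the current section is legitimate.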


That reminds me of Ubuntu bug #1

that final bug that everyone is using the old internet…


I’d just like to underline and underscore the word “all” here!

I’m going to quote The General from the A-Team here: I love it when a plan comes together. Looking forward to seeing how so many disparate yet pivotal variables stitch together to constitute a working model.

Phenomenal update! From news on random key generation to CRDT in a Byzantine environment—so much exciting progress!! More grease to your elbow, MaidSafe crew :smile:



Proof of Authority. So a section needs a section key signature, a client a client signature, etc.

A section from anywhere in the network signed something. So we check: is it allowed to do that, and did it sign with a known key?

In CRDT talk, for us this is the Elders: the nodes that sync the data and check that updates/mutations have the relevant Authority.

Yes the Actor in this case of section elders is the pub key of the elder making the claim such as

So the add-new-member op is signed along with a monotonic counter of the replica (Elder) suggesting it.


an incredibly solid Routing layer.

The one sentence I have been so desperate to read for the last two years.

Eventually !


Yes seriously, this is really exciting progress! Beautiful work and thanks as always for these updates :ant: :tada:


I plan on dedicating most of an episode of my 21st podcast to SAFE Network :slight_smile:


Yes indeed. Amazing update! :star::star::star::star::+1::+1:


Can’t help but wonder if I’m overlooking this. Trying to fully grasp it so if anyone @maidsafe can provide more perspective that would be awesome.

The first part reads like, in a rare case of churn that the network couldn’t keep up with, this CRDT fail-safe will let the network recover and still be in full agreement.

The latter part I’m having a hard time making total sense of. Just my lack of understanding of course. Trying to understand the relationship between Adults and Elders in terms of CRDT. I think clients are Actors and Elders are Replicas, correct? What does that make Adults? Sounds like Adults running this secure broadcast can help ensure we get trustworthy Elders and set up a new section with keys faster? Adults can store data and route, so I have a hunch this is why it eases routing somehow. An update to the primer once there are more details here ought to be handy.


For all data except membership. Membership is a thing the Network handles. So that means Elders being responsible for agreeing who is an Elder, Adult, etc.

So this special Membership case means the Actors, i.e. the folk who can mutate membership, are also the Replicas, and both of these are in fact the Elders. This is very interesting as it also solves another CRDT “big win if we get it”: clients also being Replicas, for real participation and offline working. In any case, that’s a nice side effect.

So Elders are Actors and Replicas in this case; we also have group consensus, so a majority of Replicas needs to agree on an operation. However, this group of Elders is dynamic. When you think about it, this is the Safe Network: a Byzantine fault tolerant, dynamic, permissionless network. Quite a mouthful, but this is the case for a true decentralised network IMO.
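The “majority of Replicas must agree” rule described above can be sketched in a few lines of Rust. `has_majority` and `ElderId` are illustrative names, not the actual implementation; note that only votes from the *current* (dynamic) elder set count:

```rust
use std::collections::HashSet;

type ElderId = u8; // stand-in for an elder's public key

/// True iff strictly more than half of the current elders voted.
fn has_majority(voters: &HashSet<ElderId>, elders: &HashSet<ElderId>) -> bool {
    // Discard votes from nodes no longer in the elder set.
    let valid = voters.intersection(elders).count();
    valid * 2 > elders.len()
}

fn main() {
    let elders: HashSet<ElderId> = [1, 2, 3, 4, 5].into_iter().collect();
    let three: HashSet<ElderId> = [1, 2, 3].into_iter().collect();
    let two: HashSet<ElderId> = [1, 2].into_iter().collect();
    assert!(has_majority(&three, &elders));  // 3 of 5: majority
    assert!(!has_majority(&two, &elders));   // 2 of 5: not enough
    println!("ok");
}
```

The dynamic part is what makes this hard in practice: the elder set itself changes via the very operations being voted on, which is why the membership CRDT and the signed ops discussed earlier matter.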

Adults are just like nodes waiting to join, but they store data for us and do so in a well-behaved manner. If they keep doing that, they will one day get to become an Elder. So Adults are told about membership and Elders decide on membership.

Also Adults in a section are not known to the rest of the network.