SAFE Network Dev Update - February 7, 2019


Here are some of the main things to highlight this week:


Subscribers will have received the January edition of the SAFE Gossip newsletter (reminder: you can sign up in the footer at the bottom of the website). We’ve also published the video of the recent live Q&A on the SAFE Browser on YouTube for those of you who couldn’t join in on the day.

Part 2 of @dirvine and @viv’s SAFE Crossroads podcast with @fergish on the Fundamentals is now available (reminder: you can hear Part 1 here). In addition, @dugcampbell was on BBC Radio Scotland this week talking about the Quadriga Exchange cryptocurrency issues, whilst we all watched the first SOLID World monthly online get-together earlier this week. Other team members were also out and about at FOSDEM over the weekend. We also released a new post on the importance of open source software to the SAFE Network (all claps gratefully received as ever!) and experimented with another tweak to the video update format (all suggestions to @sarahpentland and @Cgray!).

Finally, @dgeddes has a proposal which you can read all about here. Basically, we’re suggesting that it may be a good idea to wrap the Dev Forum (and its history) into this Forum. There are a number of reasons behind this which you can read about but, in essence, we believe that having everyone in the same place may help us to build on what we have. Please do join in the poll if you have a view.

User Experience

Work continues apace on the next iteration of the website: atomic design, enhanced layouts, improved typography, and more efficient code. All part of the bread and butter of web design. It never stands still, does it?

We also continue user research and testing on the SAFE Ecosystem, now part of our weekly routine. On top of learning the specifics of how people interact with our designs, it’s a great opportunity to learn more about their needs, as well as the pleasure of explaining what SAFE is all about and what it will enable.

And on a similar note—as part of Safer Internet day—UX Designer @jimcollinson had the pleasure of presenting the SAFE Network to the Nottingham Green Party as part of a panel discussing Human Rights, Privacy and the Web. A fascinating evening, and one that hopefully goes on to influence their policy and activism.


The team is wrapping up the planning phase of the next safe_app_java milestone. Armed with the new project documentation that is in progress, @vigneshwara has wrapped his head around the code and will start the first sprint early next week. We have split the UI/UX enhancement milestone for the SafeAuthenticator mobile application into three sprints. The code from the first sprint has been merged into this branch and is currently under QA testing. The progress of the milestone can be tracked on the project board.

SAFE Client Libs

The SAFE Client Libs team has been attending calls with the frontend team to figure out how we can assist them in preparation for SAFE Fleming. While the rest of the backend team continues to work hard on the Fleming release, SCL will be aiding the frontend team in two ways. The first is by providing the building blocks for supporting the semantic web, including the ability to store RDF triples on the network as well as to query them with SPARQL. The goal is to eventually provide an API, or at least a preliminary design for it, so that the frontend can begin creating apps and demonstrations of how SAFE works with the semantic web.
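To make the triple model concrete, here is a tiny in-memory sketch. This is purely illustrative: none of these names come from SAFE Client Libs, and the wildcard matching below is only a simplified stand-in for what a real SPARQL engine would do over a network-backed store.

```python
# Each RDF fact is a (subject, predicate, object) triple; a query is a
# pattern match where None acts as a wildcard, loosely mirroring the
# variables in a SPARQL basic graph pattern.

def match(store, subject=None, predicate=None, obj=None):
    """Return all triples in the store matching the given pattern."""
    return [
        (s, p, o) for (s, p, o) in store
        if subject in (None, s)
        and predicate in (None, p)
        and obj in (None, o)
    ]

store = [
    ("safe://alice", "foaf:name", "Alice"),
    ("safe://alice", "foaf:knows", "safe://bob"),
    ("safe://bob", "foaf:name", "Bob"),
]

# Who does Alice know?
# (roughly: SELECT ?o WHERE { <safe://alice> foaf:knows ?o })
friends = match(store, subject="safe://alice", predicate="foaf:knows")
```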

We also attended the first Solid World meeting where we saw the work being done with Solid and how people are building applications with it in the real world. This will help us to understand the needs of Solid application developers when we begin designing the APIs. There are still interesting problems and unsolved questions in the Solid world, such as when to design your own ontology for a problem space and when to use an existing ontology, which may not support all the data you wish to represent but will make your data more portable.

The second thing we have started to think about is supporting a network where all data is immutable. One initial idea is to replace Mutable Data with Appendable Data, though this is yet to be decided on and the ultimate mechanism will be chosen for compatibility with the Network and its fundamentals, rather than an adherence to existing APIs. There are many questions to be decided with the move to immutable data, as it has implications even for things like encoding the latest version of a SAFE site in a URL. We will first need to gather requirements and only later will we begin thinking about designing an API, which we will likely design and test in parallel by migrating our current SAFE App examples away from Mutable Data (to e.g. Appendable Data).
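To illustrate the contrast, here is a toy append-only container. All names are hypothetical (the actual design is, as noted above, yet to be decided); the point is simply that old versions stay addressable, which matters for things like encoding a specific version of a SAFE site in a URL.

```python
# Toy contrast with mutable data: in a mutable structure, writes
# overwrite history; in an appendable one, every version is retained
# and the "current" value is simply the last entry.

class AppendableData:
    """Append-only container: entries can be added, never changed."""

    def __init__(self):
        self._entries = []

    def append(self, value):
        self._entries.append(value)
        return len(self._entries) - 1  # version index of the new entry

    def latest(self):
        return self._entries[-1]

    def version(self, index):
        # Old versions remain addressable forever.
        return self._entries[index]

site = AppendableData()
site.append("index-v1.html")
v = site.append("index-v2.html")
```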


Routing

This week a large part of the Routing team is taking some well-deserved time off, so the pace of work will slow down a bit for a week or two.

That being said, the Routing team continues to split their efforts between the Fleming design work and improving Parsec.

On the Fleming front, we are still investigating the influence of section sizes and other parameters on Sybil attacks. We have some preliminary results that seem to indicate that the section size is the main factor affecting Sybil resistance. Networks with sections of similar sizes, but wildly different numbers of elders per section, show similar levels of security in the simulations we are performing. The simulated networks were static, though (no nodes joining or leaving); we are currently working on including factors such as network growth in our results.

On the Parsec side, there is ongoing effort to improve the performance of the code. We have a few open PRs that will give us significant improvements in this aspect.

Last but not least, drawing inspiration from the POA Network project and their implementation of the Honey Badger BFT algorithm, we have figured out a way of making Parsec truly asynchronous and have started to implement this solution.

Ever since we published Parsec, one component has made it hard to define the true synchrony assumption it requires: the concrete coin. We decided to use it instead of a proper threshold-cryptography-based common coin because all the common coin solutions known to us were complex and required a synchronous setup phase with a trusted dealer every time the set of members participating in the protocol changed. We were aware of DKG (Distributed Key Generation) algorithms that eliminated at least the need for the trusted dealer, but other problems remained, such as computational cost.

@AndreasF, who currently works with both MaidSafe and the POA project, directed us towards an elegant solution to these problems that has been applied there. When the set of members participating in consensus changes, instead of running the DKG protocol normally, which would require the network to function synchronously for a while, the DKG protocol messages are fed to the existing instance of the consensus protocol as transactions. The consensus protocol outputs them in a common order, which de facto works as if the messages had been exchanged synchronously. This allows the keys for the next set of members to be generated in a truly asynchronous fashion. These keys are then used to construct a proper threshold-crypto-based common coin.
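The key observation can be sketched in the abstract: once consensus delivers the DKG messages in one agreed order, every node can derive identical key material deterministically, with no synchronous setup phase and no trusted dealer. The toy below stands in for that idea with plain hashing; it is not the real DKG or threshold scheme, and all names are made up for illustration.

```python
import hashlib

# Toy illustration: each node broadcasts a key-generation "contribution"
# as a transaction. Consensus guarantees every node sees the same
# contributions in the same order, so a deterministic function of that
# ordered list yields identical key material on every node.

def derive_group_key(ordered_contributions):
    h = hashlib.sha256()
    for contribution in ordered_contributions:
        h.update(contribution)
    return h.hexdigest()

# Nodes may *receive* contributions in different orders over the wire...
seen_by_node_a = [b"share-from-n1", b"share-from-n2", b"share-from-n3"]
seen_by_node_b = [b"share-from-n3", b"share-from-n1", b"share-from-n2"]

# ...but consensus outputs one common order for everyone (sorting here
# stands in for the agreed consensus ordering).
key_a = derive_group_key(sorted(seen_by_node_a))
key_b = derive_group_key(sorted(seen_by_node_b))
```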

As mentioned before, the POA Network project implemented this for Honey Badger BFT. We are currently in the process of porting this solution to Parsec. Because they have already implemented all the primitives we need, the code complexity cost is vastly reduced for us. The scheme they developed uses Boneh-Lynn-Shacham signatures, which practically removes the overhead of crypto primitives compared to other existing schemes. The POA Network people have been most helpful, having already made some improvements to their code to help us satisfy some requirements we had in Parsec.
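For intuition on what the resulting common coin gives you: every node holding the group key material computes the same unpredictable bit for each round. In the real scheme that bit comes from hashing a unique BLS threshold signature over the round number; in the sketch below an HMAC over a shared key merely stands in for that signature, so this is a conceptual illustration, not the actual cryptography.

```python
import hashlib
import hmac

# Toy common coin: "group_key" plays the role of the threshold key
# material produced by the (asynchronously run) DKG. Because the tag is
# a deterministic function of the key and the round number, every honest
# node derives the same bit, yet an outsider cannot predict it.

def common_coin(group_key: bytes, round_number: int) -> int:
    tag = hmac.new(group_key, str(round_number).encode(), hashlib.sha256).digest()
    return tag[0] & 1  # same bit on every node holding the key

coins = [common_coin(b"group-key-from-dkg", r) for r in range(8)]
```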

This is an exciting time! When this work is completed, Parsec will at last be truly asynchronous, no strings attached. We will, of course, have to test this solution first and confirm that it’s behaving as expected, but once this is done, we will have made a big step forward.


Crust

With @povilasb off at FOSDEM for two days, the pace has slowed down a little. Before diving into the roadmap, we wanted to finish the bootstrap cache implementation, and we are still refining our thoughts on how Crust should handle it. The bootstrap cache is crucial for seamless network operation, especially restarts, so we want to get it right. Besides holding contacts for up to the 200 most active peers, Crust will also periodically validate that those peers are still alive, that their encryption keys are still valid, and so on.

We had looked into a study of how Skype managed restarts in the past and have retained much of that, as it had proved successful for them, including the limit of 200 most active peers. In order not to exhaust open socket limits we were wary of making 200 parallel attempts, but early testing showed that OS limits are greater than 200, so we will continue with the implementation. We will also use our LRU time cache, as it will make sure the nodes at the top are the freshest and most probably the ones the upper layer (Routing in our case) contacted us with directly.
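A minimal sketch of such an LRU-bounded peer cache follows. All names here are hypothetical (nothing is taken from the Crust codebase), and the real implementation would also revalidate peer liveness and keys as described above; this only shows the recency ordering and the 200-entry cap.

```python
from collections import OrderedDict

# Toy bootstrap cache: keeps at most 200 peers ordered by recency of
# contact, so the entries at the front are the freshest candidates to
# try on a restart.

class BootstrapCache:
    MAX_PEERS = 200

    def __init__(self):
        self._peers = OrderedDict()  # peer address -> public key

    def touch(self, addr, public_key):
        """Record contact with a peer; evict the stalest if over the cap."""
        self._peers[addr] = public_key
        self._peers.move_to_end(addr, last=False)  # freshest first
        while len(self._peers) > self.MAX_PEERS:
            self._peers.popitem(last=True)  # drop the least recent

    def candidates(self):
        """Peer addresses, freshest first - the order to try on restart."""
        return list(self._peers)

cache = BootstrapCache()
for i in range(250):
    cache.touch(f"10.0.0.{i % 255}:5000#{i}", b"pk")
```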


Oh snap!!! That’s awesome and @AndreasF strikes again!!!

Also cool as hell :smiley:

Addon: I should also say huge props to those from the POA team for working in an open-source, collaborative fashion. It’s not seen enough in this space I think. If anyone else from POA reads this, thank you. :blush:


Second! :v: Now to read …


This is amazing! Great job team! The SAFE NET train is getting faster…


84 posts were split to a new topic: Appendable Data discussion

An impressive array of progress, but it is a little depressing to hear that Fleming will be losing some focus to improve Parsec. I understand the long term importance of this, but alpha 2 is getting rather long in the tooth.


This means we have BLS keys in play though. So a much better authority chain (much simpler) plus a powerful mechanism for multi-sig etc. So not all for Fleming, but a big win when we go down the Maxwell rollout. There are many new patterns this actually opens to us in routing and vaults.


I’m guessing this should help speed up PARSEC’s performance also?


No figures yet, but I hope so.


the holy grail :ok_hand:


Cool update! Not as hell (:hot_face:) I think; Maybe some special place there :wink:

Reminds me of the ELI5: ZFS Caching talk @FOSDEM last weekend (LRU explanation starting at minute 7). The Solid and Tor talks were also interesting.


Cool update,
There will be a special place in hell for those who do not plan ahead thoroughly…

What am I missing here?


Maxwell is the step after Fleming in the roadmap.


Ooops - I actually stopped looking at the roadmap cos I was telling myself “They are working as hard as they can. I’ll take the updates I see weekly and leave it at that.”

Thank you :slight_smile:


Come on Fleming…:muscle::+1:t2::innocent:


There is a feeling, after reading all this, that Fleming will be slowed. You (kind of) have control of your data but not entirely, and if certain steps are or aren’t taken, it could severely impact adoption by both individuals and companies based on cost. While a finer-tuned project is always great, from the layman’s side most of that does not sound like a positive outcome.

This is post Fleming; Fleming will have no data. It is a routing-only network to show sharding, ABFT, chains etc. Basically a secured autonomous network foundation. Maxwell will follow, where we can add in functionality we already have in code and tested. That is where the work with part of the team is going right now, to confirm all there will be smooth. So the Fleming work is very much backend driven, which allows the frontend guys to focus on what comes next and on whether we can do those next parts quickly and in a managed fashion. We are all looking forward to getting to a more timescale-driven roadmap. I hope Fleming answers the last of the big unknowns, except perhaps for upgrades. But I think we have some great ideas there as well.


Okay, this update is dividing me emotionally. There are some really exciting things in it but also some things that concern me. Let me iterate through these:

Very delighted to hear that! Also very exciting to hear that you have found a solution to potentially make PARSEC fully asynchronous. :slight_smile:

Maybe it’s just me but, as an excited 3+ year follower, after many emotional ups and downs I currently feel a little confused and lost about what’s left to be done in terms of really huge unknown problems to solve. I think this comes mainly from phrases like these, which might stir up the wrong expectations (from the introduction post of PARSEC):

Maybe let’s analyze the big milestones, mentioned on the timeline:

  • Integration of PARSEC in a dynamic permissionless network
  • Introduction of Disjoint groups with secure message relay
  • Enabling disjoint groups to merge and split whilst maintaining consensus
  • Secure Message Relay
  • Integration with SOLID

Is this a complete list of major items that need to be finished in order to launch FLEMING and if so, what is the level of “unknownness” of these items?

The second part that concerns me is the somewhat heated discussion about Appendable Data. I have the feeling that we are deciding here on very big structural changes although we have very little insight into how the network will evolve once it is launched and how it will be used. Even if you include the community here on this, we are still a very small group, right? As an alternative, I would suggest the following path:

  1. Put all resources behind launching a first iteration of the network
  2. Once it got some traction, introduce some kind of node/user voting technique (similar to bitcoin)
  3. Let the members of the network decide on such major design changes
  4. Iterate network versions through “hard forks”

I don’t know how feasible that is but it certainly would support the decentralized paradigm which, I think, in the end unites us all here.

That being said…maybe after all it turns out that I’m too impatient and we’ll have a more detailed timeline soon?

Still, until then it would be very interesting to hear your thoughts on my questions / suggestions.


These are all done (SOLID is ongoing). The last big thing we need to finalise is network upgrades. These are likely to start with a forceful mechanism and evolve to fully network-aware upgrades. That does not stop the launch, but makes it easier I feel.

These are not big structural changes, at least not from our perspective. It is cleaning up what we have and hopefully making that clear, which we have not made clear enough so far, as the discussion shows.

Hope that answers your questions?


Thanks David for the fast response!

With “done” you mean done, in terms of conceptually solved (no unknown left) but still needs to be implemented?

Okay, thanks, then I might have misinterpreted this as well.