SAFE Network Dev Update - July 25, 2019

My perspective is that there is a strong use case around personal data storage and consent management. This is against a backdrop of changing personal data protection regulations, shifting consumer attitudes and behaviours around data privacy, and more organisations taking a strategic perspective on bringing verifiably trustworthy products to market. Forward-looking companies are putting more focus on baking privacy and security into product architecture. As such, SAFE (as it is progressing and is anticipated to evolve) will cover many bases. It aligns with broader considerations I know are being factored into everything from national data sharing standards to new products being brought to market. At a high level, the points below are part of the consideration set, and SAFE is on the radar.

Technical enforcement
From this standpoint SAFE won’t solve all of the problems. However, more ambitious technical approaches like SAFE promote more effective data protection practices. They support compliance initiatives and enhance Data Ethics Frameworks.

Reducing cost for businesses
Through more effective data storage methods and consent management, businesses and application developers can reduce both the cost of data security and compliance associated with their data processing activities.

Reducing risk for individuals and organisations
Centralised data is attractive to bad actors. Most here in the forums know this. Decentralising data greatly reduces the risk of data breaches, whilst, as above, decreasing the cost of data management and compliance. This results in more available budget for higher value customer and business activities.

Lower cost of enforcement
By reducing the data management and compliance burden on businesses, and mitigating the risks associated with personal data (breaches, unethical and illegal use, etc.), organisations building on top of SAFE can significantly reduce the monitoring surface required by regulators. While policy decision makers and legal practitioners are not known for being forward thinkers, I can say from my experience working in this area that they are at least discussing it as part of the consideration set.

Depth of individual control
SAFE puts the individual at the centre of their data sharing ecosystem.
This aligns to the purpose of current and emerging data privacy and protection standards and frameworks by empowering people with their data.

Reducing information asymmetry
SAFE can limit the likelihood that information monopolies flourish. It can enhance democracy and support optimal market functioning. This is an area of significant relevance right now and is recognised as something that needs to be addressed.

Data portability and data mobility
Current technical implementations make such outcomes difficult and costly for individuals and organisations. Building on top of SAFE, these outcomes are embedded into the design. It solves many problems that arise in these areas in one fell swoop.

It can definitely be argued there is a plethora of other projects and well-established proprietary solutions out there (IOTA, Blockstack, Holochain, Microsoft's decentralised identity, ForgeRock with UMA, Digi.me, etc.). It is no doubt a long list. Last year's Privacy Tech Vendor Report from the International Association of Privacy Professionals ballooned compared to the previous year's. But I would assert they will not be able to compete.

The ease with which large organisations can address many of the technical data protection challenges with a simple SAFE API and some creative UX will quickly be realised once we are live, particularly with the clients I am working with. Think of the capabilities of SAFE as part of a repertoire of trustworthy open source tech that companies can start incorporating into discovery work.

Yes, it is still early, and this future many of us are actively working towards is not certain. Post-Fleming, when Maxwell makes a mark, I am confident more organisations will start looking at it seriously. And the confidence is growing with every product update :slight_smile:

In saying all this, there is still a long way to go before political, legal and business thinkers and decision makers come around. They are risk-averse in both the public and private sectors. But it may only take one bold strategic move from a large company (or country) to shift the market.


It works…


When do we get a sexy quic-p2p robot…I’m sick of this old one.


You don’t think CrustBot is sexy? Where’s your imagination, bro?


I prefer
“The Perpetual Web on the Impossible Network”


To the layman, I think that would be confusing and redundant, also too wordy. Need something simpler.

Wow, just wow. I've watched The Perpetual Web video over 50 times :exploding_head: This is my fav video of all the MaidSafe vids ever made. Had to mention this first…

Great update Maidsafe devs.

Just a small example: don't be surprised if a Tesla taxi refuses to give you service when you offer it $€x¥ fiat, and instead plays you this track.

Likely it will prefer SAFEcoin to buy storage (to create its fleet, a public ID for its tipping address, its service page, etc.). Also, don't be surprised if this software is coded up by a bored student; Tesla the company might be surprised that it can't control its own cars (no surprise they don't operate in the clearnet ecosystem/economy :crazy_face:). No need to get into SAFEcoin's distribution model or scarcity in comparison to fiat's infinite inflation model. People will find out eventually, when everything they do/say gets censored on the clearnet.



I really like the video @JimCollinson (anyone else involved I should tag?). It's got a chill and natural style while feeling absolutely professional. Just about the right length as well.

Looking forward to the coming ones (I heard you allude to them, did I not? :wink: ) showing wallet use / shopping cart, WebIDs, etc. I would love to see some real weight in them, like not being afraid of going into the dark zone of oppressive regimes or whatnot; well, if you feel like it, it could be heavy. If you can make a series covering these other aspects as well, that would be amazing material to share with people.


Applications need timing because they are business.

A software framework only needs overwhelming tech.

P.S. Google can't make Linux, and even Samsung can't make Android. The number of devs is not the core thing for a software framework.


I have big expectations with this project. From the heart is the internet that I want for myself and for future generations. :grin:


Thanks! Yeah, I do plan to make more. There is a lot to be said.

No schedule for them though, as they have to fit in around the ‘day job’ of building the thing! But yeah, I’d love to do more, and perhaps go a little deeper, but still keep things non-technical.


Hey folks, wasn’t sure where to put this but was wondering what’s up with merging? I’m assuming this is referring to section merges, correct?


The network will not merge sections now. With the design of Elders and many Adults per section, we would need to lose nearly 90% or more of the data for a merge to occur. So we don't need to merge sections now, unless we lost all that data, and that is the end of the network and breaks fundamentals. It also allows a linear progression of prefix management, whereas merge would require a more complex mechanism for holding section history and so on. So the merge code (a nightmare) is not required and will not be used. It allows us to move faster as well, not having to do that.

tl;dr the network can lose 80-90% or more of its nodes and not have to merge, but a loss of that proportion means significant data loss, and that would be the end of the network. Later, with archive nodes, this may alter a bit, but only if it can be altered and still protect data.


atm we are working with 200, but that is likely to change a bit in tests (I suspect 120). So 7 Elders and all the rest are Adults.


I’ll take the opportunity to ask a couple of things related to this, only if you have time of course.

  • With current information, the live network goal would be 200, and 120 for tests, or is the idea (right now) that 120 might be the aim for live as well?

  • 7 elders, is that with the 200 or 120 aim?

And now a couple of more difficult questions I believe, any sort of answer is appreciated.

  • Is there some rough idea what the median section size would be (with any of those aims)?

  • How does the elder ratio tend, is it constant throughout the section size growth, or any other type of behaviour?

  • Elder membership changes should be a function of the rate of change in elder population, which I assume also is a function of the rate of change of general population. But there is also churn, which is (possibly?) based on other events than membership, but in the end regardless of event type will be somehow related to growth rate as well as general activity, I guess. The question: Is there some way we can conceptualise the BLS key update rate? I try to figure out how it can be modelled, and any insight into how we can frame the variations would be awesome.

Greatly appreciate it.


120 for live as well; over 60 per section puts us in Sybil-safe territory.

Yes 7 in total.

Not sure I get this, the sections will go from 60 → 120 split → 60 → 120 etc.
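That 60 → 120 → split → 60 cycle can be sketched as a toy model (all names and the address routing here are illustrative, not from the real codebase; the real network places vaults by XOR address and splits on a bit of the section prefix, which this imitates crudely):

```python
# Toy model of section splitting: a section accumulates nodes until it
# reaches SPLIT_SIZE, then splits into two child sections whose prefixes
# extend the parent's prefix by one bit of the 256-bit address.
# Illustrative only -- names and sizes are not from the real codebase.
import random

SPLIT_SIZE = 120  # a section splits when it reaches this many nodes

def matches(prefix, addr):
    """True if the 256-bit address falls under the section's bit prefix."""
    return all((addr >> (255 - i)) & 1 == int(b) for i, b in enumerate(prefix))

def split(prefix, nodes):
    """Split a full section into two children on the next address bit."""
    bit = len(prefix)
    left = [n for n in nodes if not (n >> (255 - bit)) & 1]
    right = [n for n in nodes if (n >> (255 - bit)) & 1]
    return {prefix + "0": left, prefix + "1": right}

def add_node(sections, addr):
    """Route a joining node to its section; split if it hits SPLIT_SIZE."""
    prefix = next(p for p in sections if matches(p, addr))
    sections[prefix].append(addr)
    if len(sections[prefix]) >= SPLIT_SIZE:
        full = sections.pop(prefix)
        sections.update(split(prefix, full))

random.seed(1)
sections = {"": []}  # one section covering the whole address space
for _ in range(500):
    add_node(sections, random.getrandbits(256))
# Every surviving section now holds fewer than 120 nodes, and each split
# left its two children with roughly 60 each.
```

With random addresses the two halves of a split are only roughly equal, so real section sizes would oscillate around the 60-120 band rather than hit it exactly.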


Section keys will only be affected by Elder churn. Elders are the oldest, so they churn the least, if that helps. Every Elder churn requires a new BLS key signed by the last BLS key. There is a small chain of such keys. We only need to keep the longest chain that represents the oldest section we know of that we have not updated yet (see the Secure Message Delivery RFC for that).
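The chain structure described above can be sketched in a few lines. This is purely illustrative: a plain hash stands in for the BLS signature (real BLS is asymmetric and threshold-based, so only the outgoing elder quorum could produce it, while anyone could verify it), and all names are made up:

```python
# Toy model of the section key chain: each elder churn mints a new
# section key, "signed" by the previous one. A hash stands in for a real
# BLS threshold signature purely to show the chain structure.
import hashlib, os

def sign(prev_key, new_key):
    """Stand-in for a BLS signature by the outgoing elder set.
    Real BLS is asymmetric: only the old elders can sign, anyone can
    verify. A hash is used here just to show the linkage."""
    return hashlib.sha256(prev_key + new_key).digest()

def churn(chain):
    """Elder churn event: mint a new section key, signed by the current one."""
    new_key = os.urandom(32)
    prev_key = chain[-1]["key"]
    chain.append({"key": new_key, "sig": sign(prev_key, new_key)})

def verify(chain, trusted_index=0):
    """From a key we already trust, check every forward link in turn."""
    for i in range(trusted_index + 1, len(chain)):
        if chain[i]["sig"] != sign(chain[i - 1]["key"], chain[i]["key"]):
            return False
    return True

# Genesis section key, then five elder-churn events -> five chained keys.
chain = [{"key": os.urandom(32), "sig": b""}]
for _ in range(5):
    churn(chain)
```

A node holding an older trusted key only needs the suffix of the chain from that key forward, which matches the "keep the longest chain that covers the oldest section we know of" rule above.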

Hope that helps, it’s dev update day :slight_smile:


Of the roughly 60 vaults a section probably starts with (after a split), how many can be lost without replacement before there would be problems? E.g. can it go to 20?


Depends on how much data that section has, really. As vaults fill up, new ones are attracted to the network. This is why we want to push to beta ASAP and tweak/analyse this kind of thing. Also, archive nodes make a huge difference. For launch we should be OK though, if safecoin works as intended.


Any progress on handling a segmentation of the internet (e.g. a country goes "offline"), and what happens if there is a sudden loss of, say, 3 Elders?


Isn’t it more “beautiful” to have binary divisible section sizes? 256 instead of 200, 128 instead of 120, 64 instead of 60, etc…