SAFE Network Dev Update - August 30, 2018


Here are some of the main things to highlight this week:

The SAFE Network Fundamentals

Privacy. Security. Freedom.

Three words that mean something to everybody. Everything to some of us. Yet slightly different things to many of us.

Today, we’re releasing the SAFE Network Fundamentals. Collectively, these concepts have driven every stage of the design of the Network ever since the project started back in 2006. In the years that have followed, the SAFE community has grown and we’re delighted to see just how many people and projects have come to recognise the vital importance of a global, secure and private decentralised platform for storage and communication.

As the numbers who support the project grow, it’s crucial that every one of the core team, the SAFE community and the many who join the project moving forwards are all speaking the same language when discussing the goals for our future. In this way, we should be able to minimise confusion and help everybody to understand the scope of the vision, no matter what their background might be. We’ll be going into these points in more detail next week, adding some additional context for newer members of our community.

The SAFE Network Fundamentals

The network will:

  1. Store data in perpetuity.
  2. Allow a person to create an account and log in anonymously and without intervention (it may require a payment to do so, which may also address account creation spam).
  3. Allow users to associate multiple identities (key pairs) to their account.
  4. Allow users to share information and messages securely, with no controlling intermediaries.
  5. Ensure client-to-client direct messages are free. Client-to-client messages involving traversal through the SAFE Network will be charged.
  6. Allow users to anonymously create and share data worldwide.
  7. Enable anyone to browse content free of charge.
  8. Allow users to use any of their identities to send/receive safecoin.
  9. Allow any users on any machine to use the network, but leave no trace of the user on that machine.
  10. Allow transfer of safecoin to any other user free of transaction costs.
  11. Not use any clearnet service.
  12. NOT USE time as a network tool (nodes may use local durations).
  13. Allow real-time upgrades in a secure manner (i.e. the network will refuse upgrades that could break it). This requires significant effort and nodes may run upgrades in parallel to existing working code before upgrading.
  14. Require no passwords stored on the network or on the client machines that access it.
  15. Not have servers (in the usual definition of a server).
  16. Only use encrypted services (no clearnet, perhaps except for initial retrieval of core code (https)).
  17. Scrub all client IP addresses from hop 1 of our overlay network.
  18. Only accept more vaults when it requires them.
  19. Increase farming rewards when it needs more resources (Vaults) and decrease rewards when resources are plentiful.
  20. Rank nodes over time and increase trust in higher ranked nodes.
  21. Digitally sign all transactions.

Safecoin is the unique incentivisation mechanism built into SAFE, and on launch it will be distributed as follows:

  • Pay the creators of the network (MaidSafe shareholders) on launch (5% of total Safecoin)
  • Pay the crowdsale investors of the network on launch (10% of total Safecoin)

Safecoin will also be distributed on an ongoing basis:

  • Pay Vaults for providing service (85% of rewards)
  • Pay developers who produce apps that people use (10% of rewards)
  • Pay the maintainers of the network code (5% of rewards)


The main event this week for the Marketing Team was the SAFE Network: London meetup, which is back up and running, led by @opacey and the team at Cryptonomy. A small but perfectly-formed turnout listened to an introduction to the SAFE Network from @dugcampbell before @pierrechevalier83 dug into PARSEC. You can check out the video of Pierre’s presentation here. It’s fantastic to see the meetup going again after a short break - so if you’re in or around London on the last Wednesday of the month, please do sign up and get involved.

Whilst we’re on the topic of meetups, a quick heads-up that the date of the next SAFE Network: Chicago meetup has now been set by @Sotros25 - Saturday, September 15th with the topic of ‘Exploring the Front End’.

As mentioned in the last update, we’ve made a few changes to the PARSEC Graphs video, which has unfortunately delayed things by a few days, but it should now be released on Tuesday, September 4th. We’ve also been working on the next stages of our plans for the website iteration, and we’ll be feeding back on these to you all, hopefully before next week’s Dev Update.

The DigitalOcean/Network wipe issue has also required time this week. Moving forwards, we’re really keen to see what the community chooses to put on the Network, to highlight the involvement, enthusiasm and ingenuity out there. So, as the Network is populated once again, please do let us know what you’ve put up there - we’d love to help others see what’s out there in upcoming talks and presentations.

User Experience

This week has been all about process. It may not seem very glamorous, but it is all-important. How we build the right thing, and how we build the thing right.

What we’re now constructing are the systems, tools, and interfaces that will be your daily experience of using the network.

It’s a challenge no doubt. There are a lot of moving parts, multiple platforms, and also some brand new mental models for users to get to grips with.

It’s funny, we’re building an autonomous network, to give autonomy back to people. So the bridge between the machine and the human–the interface–is critical but also, paradoxically, entirely unimportant in and of itself. It’s what it enables people to do, to create, and what it allows them to be, that is the real prize.

Building the thing that allows users to be their best selves takes collaboration, understanding, hard work, failure, re-work, testing, learning, researching, rinsing, repeating … but above all else: 100% focus on the needs of people.

This is the design process. It’s something you build uniquely for each team. And it’s also something that is pretty exciting, especially when you think about the implications. It’s like the feeling you get taxiing to the runway before your first flight.

Buckle up, folks. It’s happening.

Oh, and while we’re here, a little shout out to @shankar as he jets off for his wedding next month. Have a blast. And don’t worry, we’ve got your code covered.


We’ve been chasing down a series of issues as we move towards the next Peruse browser release. An authentication hang on Ubuntu led us to fix a separate issue with opening devtools on that system, as well as to clarify some other issues related to the Peruse background processes and safe_app native library loading. In the end, the issue turned out to be related to opening the generated authentication URL in the browser on Linux, which we’ve now got a fix for.

We’re now moving on to setting up a more consistent release process for our frontend projects, and we’ll be treating this Peruse release as a test of it.

This week we finished the .NET developers’ getting started guide, and a pull request was raised to add it to the DevHub website. The pull request is currently under review and will be merged soon.

We also went head to head with the Android JNI bug that has been blocking the Android tasks. With the help of @nbaksalyar, we were able to set up a configuration that allows us to debug native code being executed on Android devices. With this in place, we should be able to track down the bug and put out a fix soon.

SAFE Client Libs

This week the Client Libs team finished upgrading all of our Rust crates to the most recent version of stable Rust, making some maintenance changes to the crates while we were at it. We also thought a lot about how to improve our processes so that they are less time-consuming in the future.


This week, we’ve started picking off the first few tasks towards Parsec milestone 2. This has allowed @jonhaggblad to start becoming more familiar with the actual Rust code and with how we manage Jira tasks, pull requests, etc.

We prioritized tasks that mostly had to do with code refactoring which, with the benefit of hindsight, we can see will help us solve subsequent tasks faster. So we covered MAID-2966, which allows us to use a more defensive style of programming; MAID-3042, which removes an opportunity for malicious actions by increasing type safety; and MAID-2991, which is a simple rename, but easier done now than in the midst of implementing many new features. By tackling MAID-3021, we looked into a neat way to measure performance, which will definitely come in handy when we start optimizing later in this milestone.

One of the tasks we’re looking forward to hitting is to provide a parser for dot files. These are the text files which our tests can already produce and which can be used to generate nice SVG files showing the gossip graphs. These visualisations are pretty much essential to understanding Parsec and debugging issues. However, we think it’d be a good idea to write a parser for these dot files which will allow us to easily initialise specific edge cases for tests.

Say for example we want to test the function strongly_see. The tedious approach would be to create a random network of test nodes and let them run for a while to build up a gossip graph. We could then pick one of these random events and base the strongly_see test on that event (check that it can strongly see a particular other event, and that it correctly can’t see some other one). The problem here (besides the tedium) is: how do we know which events are suitable? We’d be writing code to detect strongly-seeing inside the test, just so we could then test our production strongly_see function. That test code could be just as complex and error-prone as the production code!

A much better approach is to initialise a single Parsec instance with a hard-coded gossip graph which we already know in advance. We then know in advance which events can strongly-see others, and picking the events to run the tests against becomes completely trivial! So, the best way to provide that graph to Parsec is to write a dot file and have Parsec read it and turn it back into a gossip graph. An advantage of providing a dot file is that it is easy for humans to write, as it is practically plain text, but it can also be visualised clearly by generating an image from the file. This will allow us to create tests that we can easily reason about.
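To make the idea concrete, here is a minimal sketch (in Rust) of what reading such a dot file back into a graph might look like. The edge syntax, event names, and function are all hypothetical illustrations; Parsec’s actual dot format and parser will differ:

```rust
// Hypothetical sketch: parse a minimal dot-style edge list into an
// adjacency map. Event names like "a_0" (node a's first event) are
// illustrative only, not Parsec's real naming scheme.
use std::collections::HashMap;

/// Parse lines of the form `"a_0" -> "b_1";` into a map from each event
/// to the list of events it references (its gossip "parents").
fn parse_edges(dot: &str) -> HashMap<String, Vec<String>> {
    let mut graph: HashMap<String, Vec<String>> = HashMap::new();
    for line in dot.lines() {
        let line = line.trim().trim_end_matches(';');
        // Only edge statements matter for this sketch; skip everything else.
        if let Some((from, to)) = line.split_once("->") {
            let from = from.trim().trim_matches('"').to_string();
            let to = to.trim().trim_matches('"').to_string();
            graph.entry(to).or_default().push(from);
        }
    }
    graph
}

fn main() {
    // A hand-written graph: b_1 gossips from a_0, then c_1 from b_1.
    let dot = r#"
        "a_0" -> "b_1";
        "b_1" -> "c_1";
    "#;
    let graph = parse_edges(dot);
    assert_eq!(graph["b_1"], vec!["a_0"]);
    println!("parsed graph: {:?}", graph);
}
```

Because the input is plain text, a test author can write the exact edge-case graph they want by hand, then hand the parsed structure to the instance under test.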

This is just one example. Many tests will probably make use of this new functionality, and we can expect to create edge cases relatively easily which will be a far superior approach than just soak testing existing functional tests and hoping that eventually such edge cases occur in a test run. We’ll also be well placed to write simple regression tests. If a functional test fails, we can simply take the dot files generated by that failing test and use these to initialise Parsec instances in a new regression test.

In addition to working on all these goodies on the implementation side, we had a nice experience interacting with the community when @pierrechevalier83 met a few members on Wednesday at the SAFE Network: London meetup. He gave a presentation about PARSEC. The video is now on YouTube here in case you couldn’t attend but are still interested in seeing the talk.


This week we made some changes to our droplet deployer. First, you will now be able to use the droplet deployer with your own DigitalOcean account, if desired :slight_smile: (see this PR). Then, we updated the deployer to support the latest Crust, as it changes the way Vaults are deployed: they are now deployed sequentially, and after each Vault is started, its Crust encryption keys are extracted and put into the subsequent Vault’s hard-coded contacts.

Lately, there have been more changes in safe_crypto that altered the interface of the library, so we had to update Crust to adopt them. The next step before integrating Crust into Routing was to integrate safe_crypto into Routing. That’s almost done but still going through our review process, and should be finished within a couple of days. In the meantime, we started integrating the latest Crust into Routing and expect to be running end-to-end tests next week :slight_smile:


4 posts were split to a new topic: Are Safe Vaults servers or not: discuss

Juhuuu I am for the first time first :smiley: Have to read now

Edit: Amazing update as always! @pierrechevalier83 you will make it next time.


First time I tried to go for first. Second isn’t too bad, I guess :smiley:


Won’t believe it till you are strongly seen by at least a supermajority of validated observers :smiley: :smiley:


Thursday! Again already! You know… progress is getting faster and faster.


Great update and glad to hear the London meetup went well!! Also excited to see The SAFE Network Fundamentals laid out. This will definitely serve as a great communications guide for the community and lamp post for marketing efforts too :smile:


Really like this quote, and funnily enough just read a fascinating article on some of the dangers of taking too much autonomy away from people.

Another great update, feels like everything’s really starting to come into sharp focus!


Listened to Pierre’s talk on YouTube last night - really good!

I know this has been part of the plan all along … wondering though how it will be accomplished. We’ve talked on the forum a lot about farming and PtP/PtD, but I don’t recall a discussion on how to pay the network maintainers … I’m guessing some sort of mini-safe-git will have to be created in the Safe code itself, with accepted PRs then getting rewarded? Any thoughts on how this might work?


Great to see the “Fundamentals” laid out so clearly—thank you for that.


Are these going onto the new website? Would be great to be able to link them to noobs in a convenient location.


Well done. Perfect intro, well thought out. This is a great idea and a must-have for presentations.

Couple points.

Could there be “basic fundamentals” and “advanced fundamentals”? For many non-technical candidates, who would make up the greater part of the potential userbase, some of these descriptions are way above the average user’s pay grade.

As you introduce the fundamentals with “The network will:”, some are specific to users and some are specific to the network. Possibly separate/categorise them, or order them to flow from user to network, identifying each as user- or network-specific.

E.g. #2, 3, 4, 10 (and some others) are specific to users; #17, 18, 19, 20 (and others) are specific to the network.


That’s a nice idea. Organize it better and it will be easier for noobs to digest.

On another note, been thinking about that 5% for Safe Network core developers.

When Maidsafe first launched the project and the token, that seemed like a reasonable split … but now, I imagine that safecoin will end up with a multi-billion dollar valuation, which means that this 5% will be worth a super-huge amount.

While that may seem good on the surface as it will encourage many people to develop the network, we can also see by looking at other projects that money causes a lot of in-fighting and division … sometimes leading to hard forks.

To my mind (mental estimations on the value of the 5%) … this will be an unprecedented amount of money to fight over (my PR is better than your PR) … to do an in-sentence acronym swap … that could be bad PR for the project.

I wonder how people feel about this. The market has changed since the early days, and much could be gained by having a hard look at the safecoin allocation/breakdown again, IMO.


1000 Puts! thanks for the increase :grinning:


Great Update

Although one point that might need looking at, especially when you include PtP in the future (even if only a trial)

This is not what the RFC and previous discussions mention. It may be approximately right, but it is not correct.

  • Farmers are paid at 100% of farming rate
  • App devs are paid at 10% of farming rate
  • Maintainer devs are paid at 5% of farming rate

That gives a total of 115% of farming rate that is paid for “GET” rewards

This means that

  • Vaults are Paid 86.95% of all rewards (100/115)
  • App Developers are paid 8.695% of all rewards (10/115)
  • Maintainers are paid 4.348% of all rewards (5/115)
  • And none affect the amount the others get, so you could add more rewards and vaults still get 100% of farming rate

It is misleading to present the rewards as stated in the update, since it gives the impression that the APP and core devs are reducing the vault rewards by 15%, and that the vaults are reducing the developers’ rewards. The 85% and 86.95% figures are really meaningless; they fail to convey the true beauty of rewards based on the farming rate, with the network paying for services at the rates it determines.
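For anyone wanting to check the arithmetic, here is a small sketch of the split described above. The 100/10/5 rates are from the post; the function name and structure are mine:

```rust
// A sketch of the reward-split arithmetic. Each group is paid at its own
// percentage of the farming rate (FR), so the network pays out 115% of FR
// in total; a group's share of all rewards is its rate over that total.
fn share_of_total(rate_pct: f64) -> f64 {
    let total_pct = 100.0 + 10.0 + 5.0; // farmers + app devs + maintainers
    100.0 * rate_pct / total_pct
}

fn main() {
    println!("vaults:      {:.2}% of all rewards", share_of_total(100.0)); // ~86.96
    println!("app devs:    {:.3}% of all rewards", share_of_total(10.0)); // ~8.696
    println!("maintainers: {:.3}% of all rewards", share_of_total(5.0)); // ~4.348
}
```

Note that the three shares sum to 100% by construction, which is all the percentage view can really tell you; it says nothing about the absolute payout, which is set by the farming rate itself.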

Now when PtP (Pay the Provider) is introduced, using those misleading figures we would see vaults getting 75%, giving the impression that PtP is robbing the vaults of 10%, whereas the vaults receive no fewer rewards; the network is simply paying out more.


I don’t think I understand this … I had assumed that these are pools that are set aside … from which the various groups are paid, not that they are paid these percentages? @neo - could you clarify this?

edit: I do understand that coin is recycled into the overall pool to keep this whole ball of wax funded.

All coins are recycled when they are paid to the network in exchange for resources.

The rewards are determined purely by the farming rate and the associated proportions.

The farmers get 100% of farming rate for all GETs

The app developers get 10% of farming rate for the GETs associated with their APP (I forgot to mention that APP dev rewards are up to 10%)

The core developers (maintainers) of the SAFE Network will be paid at 5% of farming rate for all GETs

There is only one pool, and that is the non-existent coins within the 32-bit address space.

The %ages I supplied are just for those who cannot adjust their thinking to the fact that there is not a limited supply of coins, and that 10% or 100% is not from a pool of ~4 billion but potentially hundreds of billions, because of recycling.

There is a scarcity factor that limits any coin creation attempt; it is applied whenever an attempt is made to create a coin, no matter the type of reward.

So as an example, if the farming rate is 1SC per 100 gets, then:

  • farmers receive 1SC per 100 gets.
  • app devs get 0.1SC (for every 100 GETs related to their app)
  • core devs get 0.05SC …

So, that’s basically what I thought. Except I don’t understand how GETs are related to core dev rewards. I suspect this isn’t fully fleshed out yet, as no one has addressed my earlier question about it above.

edit: so it must be the case (and what you were getting at earlier) that the farming rate for GETs isn’t the same thing as the network rate for PUTs. Even though there is a pool of non-existent coins to take up slack, these two totals (total from the farming rate and total from the network PUT rate) need to be roughly equal over time so the network doesn’t go bust.


Yes that is correct. BUT (see below)

So if one has to use %ages, then farmers get 1/1.15 of the rewards == 86.95% of the rewards. But using %ages belittles the reality.

The BUT is that app developers will only get rewards when a GET is due to their APP. Since many GETs will simply be on files, or on supplied APPs with no reward set up, the app dev reward is only up to 10% of FR. Likely less than 1/4 of all GETs will result in APP dev rewards (i.e. under 2.5% of FR on average).

OK, there will have to be a procedure to actually pay the core devs, and that has not been explored yet. But there will be a wallet or something that collects the 5% core dev rewards. For every single GET, 5% of FR will be paid somehow to the core developers (maintainers).

Correct. The PUT rate of payment (collectively) has to cover all future GETs (collectively), which means some files (chunks) may see a million GETs while others see none at all. Then there are files (chunks) that are uploaded multiple times: dedup means the network is paid each time but only one copy is stored, so each such chunk is paid for multiple times, which helps offset that wide-ranging scope for #GETs per chunk.

Yes, it’s:

Payments —> pool of unissued coins —> via scarcity factor —> rewards

The scarcity factor is what will ensure that the pool for all practical purposes does not empty. If there are 30% of coins issued then every reward attempt (no matter the type) will only result in a coin being issued 70% of the time (3 out of 10 fail). If there are 60% of coins issued then every reward attempt will result in a coin being issued 40% of the time (6 out of 10 fail).

So for the 30% coins issued and 100 GETs per coin issue attempt (FR), then for 10,000 GETs:

  • farmers expect 70 coins
  • APP devs get up to 7 coins
  • core devs get 3 or 4 coins, depending on “luck”, since you cannot get half a coin
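A quick sketch of the expected-value arithmetic in this example. The rates and the 30%-issued figure come from the post; note the 70/7/3.5 coin figures correspond to 10,000 GETs at one issue attempt per 100 GETs. The function itself is illustrative only:

```rust
// Sketch of expected coin rewards under the scarcity factor described above.
// Parameters: total GETs, GETs per coin-issue attempt (the farming rate),
// a group's reward rate as % of FR, and the fraction of coins already issued.
fn expected_coins(gets: f64, gets_per_attempt: f64, rate_pct: f64, frac_issued: f64) -> f64 {
    let attempts = gets / gets_per_attempt; // coin-issue attempts triggered
    // The scarcity factor makes an attempt fail `frac_issued` of the time.
    attempts * (rate_pct / 100.0) * (1.0 - frac_issued)
}

fn main() {
    let gets = 10_000.0;
    println!("farmers:   ~{} coins", expected_coins(gets, 100.0, 100.0, 0.3));
    println!("app devs:  up to ~{} coins", expected_coins(gets, 100.0, 10.0, 0.3));
    println!("core devs: ~{} coins", expected_coins(gets, 100.0, 5.0, 0.3));
}
```

These are expected values: as more of the address space fills, `frac_issued` rises and every group’s payout shrinks proportionally, without changing the 100/10/5 ratios between groups.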

I guess there isn’t a nice diagram explaining all of this somewhere? If not then maybe I will work on one this week.