MaidSafe Dev Update - January 26, 2017

The end of a long road of implementing disjoint sections is at last here. We are now moving swiftly back on track and will soon release test vaults from home. This process has begun, and the team are testing and tweaking parameters specifically to exclude weak nodes from the network.

Current priorities

Vaults from home (soon): TEST 12 will be a short-lived test network. Its purpose is to test whether the network works with vaults from home using the old (existing) client APIs. We do not expect data retention on this network, but we are very keen to prove that the process of measuring nodes from the network is sound, albeit only a beginning. This first step will ensure that only vaults of at least a spot-checked capability can join. This may mean many community vaults will not be able to join, but that is expected behaviour for many nodes at this stage. As we progress with continuous checking and node ageing, the number of vaults that can join will increase as the network takes over setting the requirements.

New Client APIs: Our intention is that developers won’t have to wait for the full network to start building apps using the new client APIs. They’ll be able to run SAFE Core with mock-routing (a simulated network). The SAFE Client libs are ready as far as the public APIs are concerned and devs can already start integrating and testing against mock-routing and mock-vault (compiled inline if built with --features use-mock-routing as explained in the README there). We’ll release the Node.js SAFE API soon (developers will be able to use it with mock-routing).
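As a minimal sketch of that workflow (the repository URL and exact commands are assumptions based on a standard Cargo setup; see the README mentioned above for the authoritative steps), building the client libs against the simulated network looks something like:

```shell
# Fetch the client libs and build with the mock network enabled.
# The feature flag compiles mock-routing and mock-vault inline,
# so no live SAFE Network is needed while developing an app.
git clone https://github.com/maidsafe/safe_client_libs.git
cd safe_client_libs
cargo build --features use-mock-routing
cargo test --features use-mock-routing
```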

SAFE Authenticator & API

Team leader: Krishna

We have implemented the Node.js SAFE API and are currently testing it by integrating it with the demo application. During the implementation, we encountered various inconsistencies in the integration layers: Node FFI and its related libraries assume certain, not always obvious, underlying behaviours that our Rust code does not provide in the same way, along with unclear and undefined behaviours around ownership. As a result, we unexpectedly had to change the API and the Rust implementation in a few places. While that is ultimately a good thing, since we believe those changes will benefit future bindings too, it also means we aren't able to ship these just yet.

That said, we are confident that the higher level Node.js SAFE API will stay mostly the same as it is in the current version. We’ve completely redesigned that layer to feel more “native” to the environment: no more pointer handling, no need for explicit freeing for the developer. All this is abstracted away in a neat, encapsulated approach where you now only have to interact with objects and promises to achieve your goals.
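To illustrate the design pattern described above, here is a hypothetical sketch. The names (NativeLib, DataEntry) are illustrative stand-ins, not the actual SAFE API: a low-level, handle-based FFI layer is wrapped so that app developers only ever interact with objects and promises.

```javascript
// Stand-in for the raw FFI layer: handles must be created and freed.
const NativeLib = {
  _next: 1,
  _store: new Map(),
  createHandle(value) {
    const h = NativeLib._next++;
    NativeLib._store.set(h, value);
    return Promise.resolve(h);
  },
  read(handle) {
    return Promise.resolve(NativeLib._store.get(handle));
  },
  freeHandle(handle) {
    NativeLib._store.delete(handle);
    return Promise.resolve();
  },
};

// The wrapper: each operation acquires and releases its handle
// internally, so there is no pointer handling and no explicit freeing.
class DataEntry {
  constructor(value) {
    this._value = value;
  }
  async read() {
    const handle = await NativeLib.createHandle(this._value);
    try {
      return await NativeLib.read(handle);
    } finally {
      await NativeLib.freeHandle(handle); // freed on the caller's behalf
    }
  }
}

// Usage: only objects and promises, no handles in sight.
new DataEntry('hello safe').read().then((v) => console.log(v)); // prints "hello safe"
```

The design choice here is simply encapsulation: the error-prone acquire/release pairing lives in one place instead of being repeated in every app.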

SAFE Client Libs & Crust

Team leader: Spandan

We published a new version of Crust (0.20.0) with the crust_peer example.

We are now porting Crust to Mio v6, which introduces breaking API changes as well as some changes in usage. This will help us test Crust on mobile platforms in the future once Mio supports them. Carl Lerche (the author of Mio) has agreed to contract with us and is now being allocated mini projects in Crust. Initially, he'll start by porting Mio to be mobile compatible and making sure it's tested on mobile platforms (Android / iOS).

In addition, we are looking at improving TCP hole punching, adding uTP integration and UDP/uTP hole punching, enabling STUN and TURN capabilities, completing the bootstrap cache and adding further security measures. This should move Crust further towards the goal of a robust, efficient and secure P2P networking layer. Such a library is a missing component in the open source software community: we believe almost all decentralised projects will be able to make good use of it, and with Carl’s aid we should be able to promote it in the Rust community as a go-to crate for such projects. Of course, this library will also have a “C” wrapper (FFI) for easy inclusion in any language.

For safe_client_libs we are also planning to write a C test suite to test the FFI once we are done with Rust-side FFI tests. The libs are ready as far as the public APIs are concerned and devs can already start integrating and testing against mock-routing and mock-vault (compiled inline if built with --features use-mock-routing as explained previously). This is for developers who want to directly integrate with our Rust code.

Routing & Vault

Team leader: Andreas

The tests on the droplets with disjoint sections and resource proof are looking promising now: data can survive some amount of churn and the new message flow for joining nodes works, as do the launcher and demo app.

However, turning up the resource proof difficulty exposed a few issues. We are now implementing a workaround for a limitation of the underlying Crust library (which was not originally intended to be used as a tool for profiling a node’s bandwidth). With that workaround in place, we will run a few more experiments with vaults on droplets versus vaults from home to find the right difficulty setting: it needs to be hard enough to lock out vaults with too little bandwidth to be useful to the network, but easy enough for users with good connections to join.
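The trade-off in that difficulty setting can be sketched with some simple arithmetic. This is illustrative only: the linear model and all numbers are assumptions, not MaidSafe's actual resource-proof parameters (the 1 Mb/s figure is the approximate upload target mentioned elsewhere in this thread).

```javascript
// A joining node must deliver `proofMegabits` of data within
// `timeoutSecs`, so the difficulty implicitly sets a minimum
// upload bandwidth for admission.
function minBandwidthMbps(proofMegabits, timeoutSecs) {
  return proofMegabits / timeoutSecs;
}

// To target roughly 1 Mb/s, a 300-megabit proof with a
// 300-second window would do:
console.log(minBandwidthMbps(300, 300)); // prints 1

// A vault on a 0.5 Mb/s uplink would need 600 s and times out;
// a vault on a 2 Mb/s uplink finishes in 150 s and joins.
function passes(uplinkMbps, proofMegabits, timeoutSecs) {
  return proofMegabits / uplinkMbps <= timeoutSecs;
}
console.log(passes(0.5, 300, 300)); // prints false
console.log(passes(2, 300, 300));   // prints true
```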

If everything goes as planned, we will then do another in-house test, with simulated churn, client apps and vaults from home, and when we are happy with that, we will move on to the next test network involving the community.

Meanwhile, a part of the team is looking further ahead, at data chains, consensus algorithms and the remaining parts of node ageing, and working on putting it all together in more detail, as pseudo-code and exact message flows.


Can’t wait to try home vaults again. :grinning:


Great news about crust!!! With the different protocols and with Mio plus help from the head honcho. Also sounds like routing is shaping up nicely! Can’t wait to see data chains and node ageing next :slight_smile:


I’m glad to see that things are continuing to progress well. The expanding collaboration with others—especially the prospect of more community testing and involvement soon—is also welcome news. Thanks again for the continued efforts in this. I’ll be eagerly waiting to see how things unfold.


I think this is very good news. I personally believe that the confusion we heard about from developers using the API is probably related to this aspect.


What does this mean?

Mostly joyfulness - soon, very soon.


Currently you need to take care of creating/fetching data handles before you are able to manage the data entities, and then you also need to take care of destroying those handles after you have finished. With the new API you won’t need those extra steps: you just focus on reading/writing from/to a data entity, and the API layer takes care of managing the handles internally.

Also, IMHO, it makes a lot of sense from a design point of view, since handle management can be error prone; now there will be a single, well-tested design pattern that many developers would otherwise be writing redundantly.


Glad to hear we are getting there! Keep up the good work!


Great news on vaults!

Is this waiting for data chains or a more mature version of what is already implemented?


Disjoint Sections just got destroyed!!! (It took months, but….) Time to destroy all the new tasks with certainty.


Welcome Carl Lerche :+1:.

Data retention should be a bit better than in the last tests, shouldn’t it?? Chunk storage has gone from 3 Vaults to 8. So who knows, I wouldn’t be surprised if it’s already quite good, even without data chains etc. Slow nodes aren’t allowed as Vaults either.


Indeed. Anything simplifying this is a big step forward. It did seem to need quite a few calls before and left a lot of room for mistakes, so I’m stoked to get a look at this when the new API is finished!

Keep going, MaidSafe!


Thanks Maidsafe team,

Yet another week of hard work done.

Really can’t wait for vaults @ home.

Keep up the good work, while I set crazy goals for myself :stuck_out_tongue:


Are you able to give us any indication what initial minimum requirements may be @maidsafe?


We are looking at approx 1Mb/s upload. Still testing though.


I’m safe :smile: thank you!


1 Mb/s puts a lot of Australian connections on the borderline. ADSL2+ here for most is a max of 1 Mb/s with good lines. 1/2 Mb/s would allow most in Australia a chance to participate. Not enough people, including me, have the so-called NBN yet.

BTW, fortunately for me I have 2 Mb/s upload. Whew!




Florida. Does this seem right?