Recent updates have been relatively brief and focussed as the team worked through ongoing development issues. In this update we intend to provide a little more detail, so it might be a little longer than you've become accustomed to.
Firstly, as you all know, we have been trying to build the team. Finding network engineers continues to be a challenge, but this last week we were delighted to welcome @hunterlester to the team. Hunter has proven himself to be skilled, highly motivated and passionate about the network, and we think he will make a great addition. He is yet another developer we have been fortunate enough to recruit from our excellent community, and he'll be joining @Krishna_Kumar's team. He is also currently plotting to revive the San Francisco meetup, so it's likely that you'll be hearing a little more from him in the coming weeks and months.
For quite a few weeks now, the majority of the Routing team has been looking into Data Chains integration and the other pieces required to reach our next milestone: enabling Vaults from home, with data republish and network restarts. While the dev updates themselves have been quite brief, it's been quite the opposite within the team, with lots of proposals and discussions around these objectives. Two concepts stood out amongst the rest, and we're looking to summarise and publish both to the community, to showcase the work that has been ongoing in this part of the team and to give some detail on the thought process and the library components coming together to achieve these objectives. We're hoping to have these out next week, but a disclaimer first: these discussions have by no means concluded. However, they have matured over the past weeks and we feel discussion on the dev forum would be of benefit and help move the process along. As you'll see from the Routing section below, the team is working on simulations that can be used to test these concepts and confirm their reliability, while considering the various other factors that can be expected in a live network.
On the MutableData side, we're looking to start some internal network testing with clients early next week. The front-end team, together with the SAFE Core team, have brought the Vault implementation up to speed, and with a few pending PRs the test suite in the Vault library is looking good. Once we can confirm the same with some mini network tests via the SAFE Core modules, we can start testing the various new features brought in via MutableData exhaustively, before moving them forward to the next test network with the Authenticator paradigm.
It has also been great to see the community helping each other out with building mock routing. The MaidSafe team would love to have the time to help, but as you know we must prioritise the network and APIs. While we certainly wouldn't discourage the community from trying to build mock routing from scratch, it may make sense to use the [binaries](https://github.com/maidsafe/safe_client_libs/blob/dev/safe_app/README.md) provided. Not only will this save frustration and let you focus on your app, it would also give us some initial feedback on the APIs. For the die-hards, we will publish instructions in a future dev update. We also wonder if it would make sense to keep this type of work on the dev forum, as that would keep relevant discussions in the same place any new developer to the system might be looking for help.
SAFE Authenticator & API
@gabriel is refactoring a few API function names to keep the API consistent and friendly. The changes are also being implemented side by side in the DOM API. We are reviewing the documentation and examples in safe_app_nodejs, and will be updating the example applications based on the safe_app_nodejs API changes.
SAFE Client Libs & Crust
Vault tests have now been updated to work with a fake clock, just like the routing tests with mock-crust, so that there is precise and deterministic control over time manipulation, letting us make sure containers and cache artifacts are indeed what we expect. This has led to the discovery of a few further bugs, which are being worked on and squashed.
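To illustrate the idea, here is a minimal sketch of how a fake clock makes time-based assertions deterministic. This is not the actual crate or vault code; the `FakeClock` and `CacheEntry` types below are hypothetical, reduced to the essentials of the technique.

```rust
use std::cell::Cell;
use std::time::Duration;

// A minimal fake clock: time only moves when the test advances it,
// so expiry logic can be checked without sleeping or flaky timing.
#[derive(Default)]
struct FakeClock {
    now_ms: Cell<u64>,
}

impl FakeClock {
    fn now_ms(&self) -> u64 {
        self.now_ms.get()
    }
    fn advance(&self, d: Duration) {
        self.now_ms.set(self.now_ms.get() + d.as_millis() as u64);
    }
}

// A cache entry that expires after a time-to-live, measured on the clock.
struct CacheEntry {
    inserted_ms: u64,
    ttl_ms: u64,
}

impl CacheEntry {
    fn is_expired(&self, clock: &FakeClock) -> bool {
        clock.now_ms() - self.inserted_ms >= self.ttl_ms
    }
}

fn main() {
    let clock = FakeClock::default();
    let entry = CacheEntry { inserted_ms: clock.now_ms(), ttl_ms: 5_000 };

    assert!(!entry.is_expired(&clock)); // fresh entry
    clock.advance(Duration::from_secs(4));
    assert!(!entry.is_expired(&clock)); // 4s elapsed, TTL is 5s
    clock.advance(Duration::from_secs(2));
    assert!(entry.is_expired(&clock)); // 6s elapsed, past the TTL
    println!("cache expiry behaved deterministically");
}
```

The test can assert the exact moment an entry expires, which is impossible to do reliably against the real system clock.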
Test cases have been expanded to cover more scenarios as we think of additional permutations. After putting in these new test cases and fixing the new bugs discovered with the help of the fake clock, we will get down to reviewing this mammoth PR again, and if nothing else stands out we'll merge it.
The Crust library has been updated so that the upper libraries can now templatise it for a Uid type of their choice, which, with the help of traits, can be expanded in the future to integrate secure serialisation into Crust. Given the templated nature of the current Crust module, this should be relatively easy to integrate.
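A rough sketch of what "templatising over a Uid" means in practice, assuming nothing about the real Crust API: the library is made generic over any identifier type that satisfies a trait bound, so the upper libraries choose the concrete type. The `Uid` trait bounds, `PublicKeyId` and `Service` names below are all illustrative inventions.

```rust
use std::fmt::Debug;
use std::hash::Hash;

// Hypothetical marker trait: any type meeting these bounds can serve
// as the peer identifier. Extra bounds (e.g. for secure serialisation)
// could be added here later without touching the call sites.
trait Uid: Clone + Eq + Hash + Debug {}

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct PublicKeyId([u8; 4]); // shortened "key" just for the example

impl Uid for PublicKeyId {}

// A service templated over the chosen Uid, as an upper library would use it.
struct Service<U: Uid> {
    our_id: U,
    peers: Vec<U>,
}

impl<U: Uid> Service<U> {
    fn new(our_id: U) -> Self {
        Service { our_id, peers: Vec::new() }
    }
    // Track a peer, ignoring ourselves and duplicates.
    fn add_peer(&mut self, peer: U) {
        if peer != self.our_id && !self.peers.contains(&peer) {
            self.peers.push(peer);
        }
    }
}

fn main() {
    let mut service = Service::new(PublicKeyId([0, 0, 0, 1]));
    service.add_peer(PublicKeyId([0, 0, 0, 2]));
    service.add_peer(PublicKeyId([0, 0, 0, 2])); // duplicate, ignored
    service.add_peer(PublicKeyId([0, 0, 0, 1])); // ourselves, ignored
    assert_eq!(service.peers.len(), 1);
    println!("service tracks {} peer(s)", service.peers.len());
}
```

The design choice is that the transport layer never needs to know what an ID *is*, only what it can *do*, which is what makes later extensions like secure serialisation a matter of widening the trait bound.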
Routing & Vault
The main part of unifying node/peer names/IDs is done. This concludes a significant portion of the cleanup of Routing's PeerManager and node modules. Quite a few containers in PeerManager have been made redundant as a result of this change, while functionality remains the same. Building on this, we will be able to make a few further simplifications, which will come later as smaller, individual changes; overall it helps structure upcoming feature work in a more concise manner.
The data chain design discussions have also been progressing well, with multiple hangouts discussing the various portions of the concepts currently being considered to integrate these features. We are writing more simulations to evaluate the different proposed variants in terms of security and stability.
To give you some perspective, the most contentious and central questions are about how to get from what each node observes about the network to a cryptographically signed and independently verifiable chain recording the network's history. For example, one node A might see node C disconnect and then node D, while another node B might see node D disconnect first, and then C.
- Is it relevant whether C or D went offline first, or can the chain leave that open and just state that both left?
- Do the nodes arrive at a strict, linear history of the network via a consensus algorithm like PBFT involving several rounds of message exchanges before committing that history to their chain, or do they just inform each other about what they observe, and record those observations as the history?
- The latter case is easier at first, but produces a more complex kind of chain which requires an elaborate set of rules to define how to read this chain as the network’s history.
- Are the blocks in the chain “change events” like “remove node D” or “add node E”, or are they “states”, like “nodes A, B, C, E are in our section”?
- Can we use a simple majority as a quorum, or do we need something like ⅔, as required by most consensus algorithms? Keep in mind that not all consensus algorithms are designed for fully decentralised, locally sourced groups whose membership fluctuates often and whose reliability cannot be taken for granted.
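As a rough illustration of the last two questions, here is a toy sketch (not the actual data chain design; the `Block` variants and quorum functions are hypothetical) showing the two block representations side by side, and how the choice of quorum threshold changes what counts as agreement.

```rust
// Two candidate block representations from the questions above:
// "change events" record what happened; a "state" records the result.
#[derive(Debug)]
enum Block {
    NodeAdded(&'static str),
    NodeRemoved(&'static str),
    SectionState(Vec<&'static str>),
}

// Simple majority: strictly more than half the section must sign.
fn majority_quorum(votes: usize, section_size: usize) -> bool {
    votes * 2 > section_size
}

// Two-thirds, as required by BFT-style consensus algorithms.
fn two_thirds_quorum(votes: usize, section_size: usize) -> bool {
    votes * 3 > section_size * 2
}

fn main() {
    // The same history, written both ways: C and D leave, then the
    // state block records the surviving membership.
    let chain = vec![
        Block::NodeRemoved("C"),
        Block::NodeRemoved("D"),
        Block::SectionState(vec!["A", "B", "E"]),
    ];
    for block in &chain {
        println!("{:?}", block);
    }

    // With 5 of 8 section members signing, a simple majority is reached,
    // but the two-thirds threshold (which needs 6 of 8) is not.
    assert!(majority_quorum(5, 8));
    assert!(!two_thirds_quorum(5, 8));
    assert!(two_thirds_quorum(6, 8));
}
```

Even this toy version shows the trade-off: event blocks must be replayed (and ordered) to recover the membership, while state blocks carry it directly but say nothing about the order in which C and D actually left.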
Some of these questions don't necessarily have a right or wrong answer, and through thorough discussions and extensive simulations we are trying to figure out which approach will best fit our use case. We're hoping to have these, and many more such design discussions, summarised and published to the dev forum soon.