MaidSafe Dev Update - June 29, 2017

As you might have seen from our GitHub activity, it’s been a pretty hectic week with changes merging to master (at last) in multiple modules. The Rate Limiter in Routing is nearly complete (one pain point remains, which we’re hoping to address this week too), and the related integration in the front-end libs is also done and waiting to be approved and merged. Routing’s test suite will also be expanded to exercise this new feature, with rate limits in both the mock and network layers. The front-end team has been busy too, with a new member joining to help (@joshuef) and more bug fixes addressed on the browser side. While we have not yet completed testing these new features in an internal test network, we’re hoping to bring a few members of the community into the internal test network itself to help confirm that new features like the Rate Limiter and the new APIs are functioning OK. We’re hoping this will speed up the internal test cycle, letting us quickly address any issues observed in the droplet network and redeploy/reissue client binaries.

To achieve this, we’re hoping to bring in a few forum users in batches over the next few days to help test these features. We already have some people in mind (longtime community members) whom we’ll contact via the forum with a link to a Google Drive folder containing client binaries. We’ll also create a public topic for updates and feedback. This should help other forum users follow how the test is progressing in a controlled fashion before the flood gates open and everyone is brought into the network. It’s also worth pointing out that this will likely happen in stages, with quick iterations to test specific parts, and might involve some repeated testing of features to check changes and prevent regressions.


Two new Rust developers will be starting with us in early July. One of them is Andrew Cann (@canndrew), who previously worked with us on Crust. More information to follow as we get closer to their starting dates :slightly_smiling_face: We’ve also got a couple of candidates at the final interview stage whom we hope can also transition into the team soon.

SAFE Authenticator & API

Integration testing with the actual network has been progressing really well. The issues mentioned last week are all resolved. The build process for the browser has been further standardised and tested. Devs can now also integrate the Node.js API from a tarball instead of pointing at the Git repository; eventually the Node.js API will be released in the npm registry. The idea behind the tarball is to let devs keep using a stable API while new development and fixes continue in the safe_app_nodejs repository. This approach is being tested at the moment and will be made available to devs when the next testnet is released.
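As an illustration of the tarball approach (the package name and URL below are hypothetical, not the real distribution location), an app could pin the API in its package.json like this:

```json
{
  "dependencies": {
    "safe-app": "https://example.com/safe_app_nodejs-0.1.0.tar.gz"
  }
}
```

npm resolves the URL once and installs the packed API, so the app keeps building against that snapshot even as the repository’s master branch moves on.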

A few issues with network reconnection were identified early this week. All the minor issues that were spotted have been addressed, and we look forward to continuing internal testing from tomorrow.

We have been continuing to use the same version of the Beaker Browser released as part of the CEP project and focusing only on the features. With @joshuef joining the team, we are now looking to improve and update the browser.

The apps and the browser are up to date with the API changes. @hunterlester has been keeping up with the changes and fixes in the API really well while making progress on the tool he is working on.

SAFE Client Libs & Vault

This week we’ve been focused on polishing SAFE Client Libs and fixing the remaining minor bugs and issues. First, we added back and improved the logging API that was previously removed from safe_core during refactoring. Now both safe_app and safe_authenticator provide functions to initialise logging that can write to the console or to log files, depending on the configuration. This will help us find and identify issues during internal testing.
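The exact signatures in safe_app and safe_authenticator aren’t shown here; as a rough sketch of the idea only (all names below are our own, hypothetical, not the real API), the configuration selects between console and file output:

```rust
use std::fs::OpenOptions;
use std::io::Write;

/// Hypothetical log destination, chosen by configuration.
enum LogOutput {
    Console,
    File(String),
}

/// Write a single log line to the configured destination.
fn log_line(output: &LogOutput, msg: &str) -> std::io::Result<()> {
    match output {
        LogOutput::Console => {
            println!("{}", msg);
            Ok(())
        }
        LogOutput::File(path) => {
            // Append to the log file, creating it on first use.
            let mut f = OpenOptions::new().create(true).append(true).open(path)?;
            writeln!(f, "{}", msg)
        }
    }
}
```

The same call sites then work unchanged whether logs go to the terminal during development or to a file during a test run.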

@adam has been working on integrating the new rate limiting feature from Routing into SAFE Client Libs. While this feature is not directly related to the client layer, we need to gracefully handle the new error that occurs when a user exceeds a rate limit. Instead of failing a whole sequence of operations, we continuously retry the last failed step after a slight delay - once the limit is replenished, the operations just continue to go through. The pull request is currently under review and we’re waiting for all changes to land in the routing repository, but it is feature-complete and ready to be merged soon.
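The retry-on-rate-limit idea can be sketched as follows (a minimal illustration; the error type, function names and delay are our own assumptions, not the actual SAFE Client Libs code):

```rust
use std::thread;
use std::time::Duration;

/// Hypothetical error type standing in for the rate-limit error from Routing.
#[derive(Debug, PartialEq)]
enum OpError {
    RateLimitExceeded,
    Other(String),
}

/// Retry only the failed step after a short delay instead of failing the
/// whole sequence; any other error is propagated immediately.
fn with_rate_limit_retry<T, F>(mut op: F, delay: Duration, max_tries: u32) -> Result<T, OpError>
where
    F: FnMut() -> Result<T, OpError>,
{
    let mut tries = 0;
    loop {
        match op() {
            Err(OpError::RateLimitExceeded) if tries < max_tries => {
                // Wait for the limit to replenish, then retry this step.
                tries += 1;
                thread::sleep(delay);
            }
            other => return other,
        }
    }
}
```

The key design point is that only the last failed step is retried, so a long sequence of operations does not have to restart from scratch when the limiter briefly pushes back.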

We’ve also addressed the issue mentioned in one of the previous updates:

Now it’s possible to request the network configuration from the authenticator without requiring a user to log in or create an account: this is covered by a new API available in safe_authenticator. Besides that, we’ve removed automatic reconnection to the network on failures and made reconnection explicit - it is now the responsibility of the front-end libs to restore a failed connection when they get a notification about it. This allows fine-grained control of network connection status from the front-end side.

Routing & Crust

With these two PRs (#1480 and #1481) and tests (#1482), we will have covered the scenarios discussed so far in our journey of spam prevention for a client-only network. The Rate Limiter throttles clients dynamically, aiming never to exceed the total throughput allowed per proxy.

Beyond throttling, other potential attack vectors were identified (as requiring implementation now) which a client could perform. Invalid RPCs are now checked for; if one is detected, the connection is immediately dropped and the client’s IP is banned from the proxy. Without this, a client could send RPCs that make a node do resource-proofing work on its behalf, etc. Malformed messages, invalid priorities and everything else relating to malicious or malformed messages that can harm the network are now thoroughly checked for, and if identified the client is disconnected and added to the banned list immediately.

We also realised that the channel between Routing and Crust can still be processing messages even after Routing has disconnected from the client, because those messages were queued before Routing could make that decision. In this case, since Routing had forgotten about the client, it wouldn’t find it in the peer manager and would execute the RPC. (There are cases where the proxy will disconnect but not ban - e.g. if the maximum number of clients per proxy has been reached, new clients are sent a bootstrap deny and disconnected, but not banned of course, as they are not identified as malicious.) So we now keep recently dropped clients around for a period so that we can ignore such queued messages; otherwise this could also be an attack vector.
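The “remember recently dropped clients for a while” idea can be sketched like this (a minimal illustration; the type names and grace period are our own assumptions, not Routing’s actual code):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Remembers clients the proxy recently disconnected, for a grace period,
/// so that messages queued before the disconnect can be recognised and
/// ignored rather than executed for a forgotten peer.
struct RecentlyDropped {
    grace: Duration,
    dropped: HashMap<String, Instant>,
}

impl RecentlyDropped {
    fn new(grace: Duration) -> Self {
        RecentlyDropped { grace, dropped: HashMap::new() }
    }

    /// Record a client at the moment the proxy disconnects it.
    fn note_drop(&mut self, client_id: &str) {
        self.dropped.insert(client_id.to_string(), Instant::now());
    }

    /// True if an incoming message from this client should be ignored
    /// because the client was dropped within the grace period.
    fn should_ignore(&mut self, client_id: &str) -> bool {
        let grace = self.grace;
        // Purge entries whose grace period has elapsed.
        self.dropped.retain(|_, dropped_at| dropped_at.elapsed() < grace);
        self.dropped.contains_key(client_id)
    }
}
```

Expiring the entries matters: without a grace window the map would grow forever, and with too short a window, queued messages could still slip through after the disconnect.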

Though not used immediately, Crust now also passes the peer kind to Routing in every new message it receives. We have plans to utilise this in future if certain other things we are discussing get through.


First, lehen, у першую чаргу, prvo, първо, primer, prvi, První, første, eerste, esimene, ensimmäinen, premier, primeiro, erste, πρώτος, első, Fyrsta, an chéad, primo, pirmais, pirmas, прва, ewwel, første, pierwszy, primeiro, primul, первый, први, najprv, Prva, primero, första, перший, yn gyntaf, ערשטער, Firsty McFirstface…
… Come on guys, what’s keeping you? :grinning:

(Yes I know I’m sad, no need to point it out)


:man_dancing: And the banker strikes again! :man_dancing:

ahhh too late


Not this time my banking friend!! :grinning:


looks like “not this time” also applies to testnet this week :roll_eyes:


Great news! Welcome folks.

Nice work. We’re probably in for quite a few users with the coming Alphas; better safe (no pun intended) than sorry.

Thanks for the very detailed update. Lots of great stuff happening. We’ll experience it soon :thumbsup:.


RPC = remote procedure call?


Sounds like we are finally approaching feature completeness for this test! :slight_smile: Would be more than happy to help you. Just send me a PM.


Good to see the thoroughness of MaidSafe in squashing attack vectors. It’s much easier to deal with now than when some low-level API is found to be insecure in beta and everything on top of it has to be changed.


I am excited to see the ball rolling on this project. This is the one project I not only plan on investing in, but am also very interested in personally using the product.


Good work… coming very close. This week I had my first small successful try with mock… so I’d love to help you out :slight_smile:


Thank you for the update!
It seems we’re going to be able to continue testnet testing soon :slightly_smiling_face:.
Any progress concerning data chains, or too busy with preparing the next testnet past week?


Bit of both, part 1 agreed and simulated, just to be implemented now in master of routing. Then part 2 and we are good there :wink:


This is a very good idea. There are loads of talented people here who I’m sure would love to be part of moving this thing forward. If the testing can be structured in such a way that those with the right skills can be brought into play in a way that’s genuinely useful to the project then that’s positive all round.


Thanks Maidsafe devs for all the hard work

Yippy soon there will be another testnet :kissing_heart:


Gotta say. Loving MAIDsafe. This feels like a proper development team and loyal community. Only good things can happen. Good luck to all.


You guys are the masters of suspense but I’m learning to read yah, I think :-/

Thanks for keeping going on this all you maidsafe devs. Great to see the team expanding and welcome (back) @canndrew

I miss your posts @Viv @Krishna_Kumar @anon86652309 @qi_ma so hope you’re still alive and kicking code :wink:


Mayyyyyyybe… :wink:

All good tho lol hopefully a bit more calm after this iteration. We can hope eh :innocent:


so hope you’re still alive

Watch out with such statements, because before you know it, you have situations like this :wink::


MAID: better Safe than Sorry
Polpoirene: I think you accidentally came up with their future marketing tagline!