Safe Network Dev Update - September 10, 2020

:100: It was terrific to see that happening. It gives us all faith in open source, and it’s also less MaidSafe’s network this way. The more the merrier.


Is it possible to get a really basic explanation of why this new async paradigm is important? There’s a lot of talk about switching over to it and it seems to be taking a lot of time and work and focus over the last several updates, but I don’t recall having seen a simple normal-person explanation about why this is worth doing. What benefit does it bring to the core codebase and developers? How will it affect app developers? What benefit does it bring to end users?

Really great to see this.

Some related info about how versions, protocol upgrades and signalling have evolved over time in bitcoin. We have 10 years of history and experience to guide us with this feature.

BIP-0034 - Block v2, Height in Coinbase

Clarify and exercise the mechanism whereby the bitcoin network collectively consents to upgrade transaction or block binary structures, rules and behaviors.

BIP-0009 - Version bits with timeout and delay

a proposed change to the semantics of the ‘version’ field in Bitcoin blocks, allowing multiple backward-compatible changes (further called “soft forks”) to be deployed in parallel.

BIP-0008 - Version bits with lock-in by height

an alternative to BIP9 that corrects for a number of perceived mistakes. Block heights are used for start and timeout rather than POSIX timestamps. It additionally introduces an additional activation parameter to guarantee activation of backward-compatible changes

BIP-0068 - Relative lock-time using consensus-enforced sequence numbers

The change described by this BIP repurposes the sequence number [within bitcoin transactions] for new use cases without breaking existing functionality. It also leaves room for future expansion and other use cases.

Interesting to see an unused field finding an alternative use in the future. Sometimes a bit of slack is handy to build in, and sometimes it’s hard to know which features will be useful and which won’t be. In this case the unused feature turned out to be a handy substitute for a new feature.

BIP-0135 - Generalized version bits voting

a generalized signaling scheme which allows each signaling bit to have its own configurable threshold, window size (number of blocks over which it is tallied) and a configurable lock-in period.

BIP-0320 - nVersion bits for general purpose use

reserves 16 bits of the block header nVersion field for general purpose use and removes their meaning for the purpose of version bits soft-fork signalling.


Simplicity of code is a driver. Where we have recursive code or callback-type things, async cleans that up. Making libs async makes code more readable, a bit like how JavaScript callback hell is made much cleaner with promises.

Also, with fs/io read/write you get re-entrant functions/methods, so you can have stuff like

async fn something() {
    let data = read("some/file").await;
    // ... more awaits ...
}

So you end up with synchronous-looking code blocks, but actually there are a load of awaits. Re-entering the method when each await returns, like above, means the code looks much nicer than, say, a load of loops polling for returns, or even worse, callbacks.

While the await “waits”, the processor moves on and executes whatever other code it can. So think of it this way: threads are to share work, and async/futures are to share tasks in code.
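To make that concrete, here is a minimal, self-contained sketch of how an async fn is suspended at an await point and re-entered later. The `YieldOnce` future and the busy-polling `block_on` executor are toys of my own invention for illustration only; they are nothing like what qp2p or routing actually use (real code would use a proper runtime such as Tokio):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy future that returns Pending once before completing,
// forcing the enclosing async fn to be suspended and re-entered.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            Poll::Pending
        }
    }
}

// Synchronous-looking code, but the .await is a point where the
// executor can run other tasks before re-entering this function.
async fn something() -> u32 {
    YieldOnce(false).await;
    40 + 2
}

// Minimal busy-polling executor, just enough to drive the example.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    unsafe fn clone(_: *const ()) -> RawWaker { raw() }
    unsafe fn noop(_: *const ()) {}
    fn raw() -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is shadowed and never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // `something()` is polled twice: once to the await, once to completion.
    assert_eq!(block_on(something()), 42);
}
```

The point is that `something()` reads as straight-line code even though it is suspended partway through and resumed later, which is exactly what replaces the old loops and callbacks.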


Ok cool, so with the async feature

  • core devs can understand, maintain and extend the core network codebase and features more quickly and reliably
  • app devs will have a simpler time reading the API docs and writing code for their Safe Network apps, but will need at least some understanding of the async way things happen on the network
  • end users will see features arrive sooner and more reliably, with fewer bugs and less breakage of parts unrelated to that feature, and the network will be faster overall

Is that about the right degree of impact for this change? Maybe I’m trying to over-analyse or simplify the impact here; just I felt like when my friends or non-tech people read the update I want them to understand the reason for async happening.


Yes, also the code should be more efficient. By that I mean rather than us coding loops etc., async takes care of that work in the language, so it’s implemented very efficiently.

An example that’s good to show as an addition is this. In routing we wanted to try to send a message up to 3 times over 30 seconds (say). So we had to change qp2p to take a token (u64; it should have been u8, but anyway). qp2p took that token, and routing set it at zero; if the message failed then we got it back, waited 10 secs, then sent it again with a token of 1, and so on. So we had to break an API, put something in a network lib that should not have been there, and so on. Now all we need to do is

async fn routing_send_msg(msg: &[u8]) {
    for _ in 0..3 {
        if qp2p_send(msg).await.is_ok() { return; }
        delay_for(Duration::from_secs(10)).await;
    }
}

Pseudo code obviously, but you see the point. So here it allows us to do more without passing stuff around APIs as well.

But your points are all correct. A client could await 100000 times for chunks and so on and it will all be Ok.

[EDIT: Also, for the above problem that’s now simple with async, we had to create a token mechanism and a fake clock, none of which are now required, so we get better performance, much less code, and a more efficient and understandable codebase]


Not sure how relevant it is to this work, particularly given that it is in Rust, but I’ll add that async is generally lower overhead and less error prone than multi-threading.


For me it’s this, about 1000x. Having recently started diving into the core libs (vaults/scl), I was having a hard time there. The shift to async drastically simplifies things. I could finally see what was going on. And when you can see what’s going on, it’s much easier to reason about things and so make the jump to cutting big complex chunks of code (see the recent scl refactor chopping out 18k lines, or the qp2p refactor itself).

It also simplifies a lot of multi-threading (maybe it could have been done otherwise, but I could not see how; but then I am still relatively new to rust). We had a lot of specific structures and indirection to manage things like qp2p and an event loop driving the whole of SCL which caused a lot of complexity. With qp2p going async, and SCL too, we were able to remove that and simplify the core structs massively. It is so much cleaner now. It should be much easier for folk to come in and look at the code and see what’s going on, suggest improvements etc.

IMO it’s a very healthy thing for us to be doing and has already proved itself worthwhile in terms of enabling us to move forwards faster.


Just want to add that using async in Rust is very nice indeed, and makes doing multi threaded code easy compared to the rocket science it is without it. The result is much less code, far fewer bugs, easier debugging and maintenance, and greater efficiency.

I’ve not done much yet, but the concurrency needed for my logterm-dash app was a breeze because of this. Getting the concurrent threads coded and working was literally fifteen minutes, instead of probably an hour reading and who knows how long writing and fixing the code. I was literally shocked at how easy it was.

Using Rust is in general an incredible experience, because the compiler won’t let you write unsafe (buggy) code. So here I am, with a lifetime’s experience of C, C++ in particular, learning how to write bullet proof code because I’m forced to think it through at a whole new level. Having debugged compiled code at the assembly language level I thought I understood this, and I did in part - what gets put on the stack and the heap, and how each variable is stored and accessed in memory. But I’d never thought about ‘borrowing’ in this way, even though it is fundamental to writing solid multi threaded code. I suspect that I developed a way of coding that avoided these issues to some extent, but would then end up having to spend literally days, sometimes weeks tracking down and fixing tricky bugs.
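That borrowing discipline is exactly what makes Rust’s threading safe. As a small standard-library-only illustration (the counter example and all names here are my own, not from any Safe Network code): sharing a plain `let mut counter` across threads simply won’t compile, because the borrow checker rejects aliased mutable state; wrapping it in `Arc<Mutex<_>>` makes the sharing explicit and safe.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A bare `let mut counter = 0;` captured by several threads would be
    // rejected by the borrow checker. Arc gives shared ownership across
    // threads; Mutex gives exclusive access for each mutation.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // 4 threads x 1000 increments, with no data race possible by construction.
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```

In C or C++ the equivalent unsynchronised code compiles fine and fails intermittently at runtime; here the compiler forces the synchronisation up front, which is the “whole new level” of thinking through the code described above.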

And with a compiler that can suggest cut and paste fixes for my basic errors, life for a newcomer is made much easier. I still struggle with borrowing, but each time I am learning a bit more how to go directly to the solution, and I’m being taught why my first attempt was buggy.

Using Rust is a really nice experience because I love to learn, and it is teaching an old dog new tricks!


Just wow!!


I’d like to take a look at Rust one day. It does sound like it has moved the systems programming language space on to a new level.


It’s a big claim, but in many ways it’s accurate. Now it needs to do async automatically and let us write straight logic. If it could then handle interior/exterior mutability automatically, we would be in a good place (keep the mut keyword, but say “I will handle that for you now you’ve told me it can mutate”) :slight_smile:


Just an FYI, the Safe name brand poll is now closed. It was a close vote at the end. But “Safe Network” with a space has pulled through as the winner. Thanks to all who participated!



In Greek, only the summary, via Google Translate.


Is there a doc outlining why this was formulated in this particular way, eg why 2 reserved bytes and not 1 or 3 or 10? Not that I am personally interested in critique, I’m more interested in it for historical purposes down the track, documenting thoughts, intentions (or absence of), historical context, influence and precedence can be a handy resource in five or ten years when those spare bits might be subject of debate. Even if it’s just something informal it can be good to have something publicly available. Much easier to do it when the decision process is fresh than six months later.


That was an arbitrary number really.

  • Version - a simple version number, so any breaking change increments this.
  • Length - up to 4 GB files/chunks, or can be set to 0 for streaming (radio etc.).
  • Flag - currently uses only 1 bit, to indicate a qp2p message or a consumer message.
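As a rough illustration, here is a sketch of what a header with those fields (plus the two reserved bytes mentioned above) might look like. The struct name, field layout, and byte order are my own guesses for illustration, not the actual qp2p wire format:

```rust
// Hypothetical sketch of the described header; the real qp2p
// wire format may differ in names, layout, and byte order.
#[derive(Debug, PartialEq)]
struct MsgHeader {
    version: u8,       // incremented on any breaking change
    length: u32,       // payload size in bytes; 0 means streaming
    flags: u8,         // bit 0: qp2p message vs consumer message
    reserved: [u8; 2], // spare bytes for future use
}

impl MsgHeader {
    fn to_bytes(&self) -> [u8; 8] {
        let mut b = [0u8; 8];
        b[0] = self.version;
        b[1..5].copy_from_slice(&self.length.to_be_bytes());
        b[5] = self.flags;
        b[6..8].copy_from_slice(&self.reserved);
        b
    }

    fn from_bytes(b: &[u8; 8]) -> Self {
        MsgHeader {
            version: b[0],
            length: u32::from_be_bytes([b[1], b[2], b[3], b[4]]),
            flags: b[5],
            reserved: [b[6], b[7]],
        }
    }
}

fn main() {
    let h = MsgHeader { version: 1, length: 4096, flags: 0b0000_0001, reserved: [0, 0] };
    // Encoding then decoding gives back the same header.
    assert_eq!(MsgHeader::from_bytes(&h.to_bytes()), h);
}
```

A fixed-size header like this is what allows the receiver to read a known number of bytes first and then decide how to handle the body, which is the HTTP-like headers-before-body pattern discussed below.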


Sounds a lot like what the HTTP protocol concluded was a good pattern :stuck_out_tongue: : HTTP headers vs body, and the order of sending (headers hit first, then body buffers). Not necessarily the specific lengths, but the concept anyway.


Anyone who has ever worked with asynchronous code and callbacks knows that nested callback hell is the worst. Async/await, for me as a starting developer, made everything so much easier. For that same reason I think the core developers will have a much better time, with fewer errors and straightforward, readable promise logic.


Why is parsec being removed from routing? I thought the development of parsec was the jam…

Explanations given here:


Thank you for the heavy work team MaidSafe! Safe lives in our imagination, let’s put it together in the imagination of the whole world!

I’ve added the Bulgarian translation to the first post :dragon: