SAFE Network Dev Update - April 4, 2019

Summary

Here are some of the main things to highlight this week:

Marketing

With @dugcampbell away in Berlin talking all things SAFE at the Landing Festival, and two of the team away on their holidays, it’s been a slightly quieter week in Marketing. Holding the fort are @SarahPentland and @cgray, who are working on plans for content over the month of April. We’ve already got a few volunteers lined up to join us on the sofa for the SAFE Buzz video series, but if there is someone in particular you’re keen to hear from, or you want to understand more about one of the teams, just add them to the thread below and we’ll see what we can do. As we’re now (already?!) in April, we have the usual monthly update on YouTube with a round-up of things that happened in March. For those who prefer the written word, we’ve got an overview on Medium, so head over and check that out. As always, feedback welcome :smile:. We also pushed out another Tweetstorm, this time giving a simplified, non-technical view of the SAFE Network - what do you think?

User Experience

Work continues at a fine clip on a major update to the website. It’s designed as an evolutionary enhancement to the existing site, but it’ll also contain some new content which we think the community will be particularly delighted with.

The site has always been a key tool for the Marketing team to explain the Network and describe its vision and impact. That’ll become even more crucial in the coming months, as milestones converge, and we’re happy to be doing our bit to make it all happen. Exciting times!

SAFE API & Apps

We’ve been working on enhancing the documentation for the browser and have finally merged that into its master repo. The README now contains a lot more information, not only for users trying to build and/or use it, but also for developers interested in contributing to the project. It now includes details about the structure of the codebase, its internal design, and how development tasks are organised on the project board.

This week has also seen @manav_chhuby digging into the internal tab logic of the browser. This is one of the oldest parts of the codebase, and is responsible for no small number of bugs due to how it was initially set up. Manav has been working hard to implement some increased sanity here, as well as providing even more tests.

@hunter has also polished off a PR that achieves a couple of nice wee things: 1) it blocks all HTTP/S requests by default (no longer triggering your clearnet browser), still showing a notification for any navigation events that occur, while throwing an error for any fetch events triggered via the DOM; and 2) it deprecates the ugly notification prompt we all know and firmly have mixed feelings about, replacing it with Ant Design’s notification component, which is decidedly less blocking and somewhat easier on the eyes too.

On the auth CLI, we have a PR in progress and almost ready for review which will allow users to store their SAFE account credentials in a config file so they don’t need to be entered every time. We are also planning to work on adding more unit tests to the project.
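
To give a feel for the approach, here is a minimal sketch of that idea; the file format, field names and the CliConfig type are assumptions for illustration, not the actual auth CLI code:

```rust
// Minimal sketch only: a hypothetical credentials file read at startup,
// falling back to interactive prompts if it is missing. Field names and the
// TOML format are assumptions, not the real auth CLI implementation.
use serde::Deserialize;
use std::{fs, path::Path};

#[derive(Deserialize)]
struct CliConfig {
    // SAFE account locator and password (illustrative names only).
    secret: String,
    password: String,
}

fn read_credentials(path: &Path) -> Result<CliConfig, Box<dyn std::error::Error>> {
    // Read the config file and parse it as TOML; if it is missing, the CLI
    // would prompt the user for credentials instead.
    let contents = fs::read_to_string(path)?;
    let config: CliConfig = toml::from_str(&contents)?;
    Ok(config)
}
```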

SAFE Client Libs

The init() has been called on Client Libs! :fire: This week we have started to define tasks for our next milestones. We’re starting to think about the implementation of the XOR URLs RFC, which will make it possible to use SAFE URLs in RDF resources and link data stored on the network together. And speaking of RDF: that work will be picking up pace again as we continue with the proof-of-concepts and demos, fixing the remaining blockers and bugs. @lionel.faber and @yogesh will also be joining this effort to speed up development progress. The update of SAFE Bindgen to use a modern parser library is nearing completion, and we’re treating it as a priority because it’s part of a larger milestone: porting Client Libs to the latest stable version of the Rust compiler.
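
As a rough illustration of the XOR-URL idea (nothing more than a toy: the RFC may settle on a different base encoding and extra metadata such as a type tag), the basic notion is turning a network address into a self-describing safe:// URL:

```rust
// Toy illustration of an XOR-URL: hex-encode a 32-byte XOR address and give
// it the safe:// scheme. The real RFC may choose a different encoding and
// carry extra metadata (e.g. a content type tag).
fn to_xor_url(xor_name: &[u8; 32]) -> String {
    let encoded: String = xor_name.iter().map(|b| format!("{:02x}", b)).collect();
    format!("safe://{}", encoded)
}

fn main() {
    let name = [0xab_u8; 32];
    println!("{}", to_xor_url(&name)); // safe://abab… (64 hex chars)
}
```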

Reshaping our Rust APIs to be more developer-friendly is another thing we’re aiming for. We’re starting with simpler steps, exposing more internal helper functions that simplify interaction with the network. We’re also making sure all FFI APIs are available in Rust. In the longer term, when Rust has more language features, we’ll streamline our interfaces further. For example, one of the things we’re looking forward to is the inclusion of async/await syntax in Rust, which should make it much more straightforward to write asynchronous code without having to learn difficult concepts like futures. Given the asynchronous nature of all network operations, it will be an obvious improvement to our API.
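
For a flavour of why async/await matters here, the toy sketch below shows sequential “network” calls written in that style; fetch_chunk is a hypothetical stand-in, not the actual Client Libs API:

```rust
// Toy example of the readability async/await gives us for sequential
// asynchronous operations. `fetch_chunk` just pretends to be a network call.
// Running this requires an executor, e.g. the one in the `futures` crate.
async fn fetch_chunk(name: u8) -> Result<Vec<u8>, String> {
    // In the real client this would be an actual network round trip.
    Ok(vec![name; 4])
}

async fn fetch_and_concat(a: u8, b: u8) -> Result<Vec<u8>, String> {
    // Reads top-to-bottom with `?` for error propagation, instead of a chain
    // of future combinators.
    let mut first = fetch_chunk(a).await?;
    let second = fetch_chunk(b).await?;
    first.extend(second);
    Ok(first)
}

fn main() {
    let bytes = futures::executor::block_on(fetch_and_concat(1, 2)).unwrap();
    assert_eq!(bytes, vec![1, 1, 1, 1, 2, 2, 2, 2]);
}
```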

On the documentation front, we are nearing completion as a major part of the work has been dealt with by @marcin. @lionel.faber has been wrapping his head around the code this week to better understand how the gears move in the Client Libs implementation. This will be helpful for the upcoming refactoring milestone.

Our involvement with the community continues, with @mav very patiently responding to review comments on his code submission. We also made edits to the Guide to contributing page, adding new sections and more links to make it even more accessible for people without much development experience, and added a note about this guide which appears when opening a new issue or PR. In addition, we migrated a significant chunk of our old Jira issues to GitHub, opening them up to the community. Should anyone want to tackle them, a good number have been marked ‘good first issue’ or ‘help wanted’.

Routing

This week we made some interesting progress on the Fleming front, but let’s start with what is happening in PARSEC.

The PARSEC subteam worked on making the algorithm still more resilient to malice. We implemented a fix for a bug present in the original whitepaper which made it possible to create a split consensus by forking the gossip graph. There was a hole in the proofs of correctness, which we have fixed in the to-be-released 2.0 paper, backporting the fix to the current PARSEC implementation. There is also ongoing effort to improve the handling of accusations cast by nodes and to create tests for the malice-handling code.
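
To make the “forking the gossip graph” idea concrete, here is a simplified sketch of how such a fork can be detected; the types are toy stand-ins, not the actual PARSEC data structures:

```rust
// A fork is one peer creating two different events that claim the same
// self-parent, i.e. two competing versions of its own history. Detecting it
// amounts to spotting two distinct hashes for the same (creator, self-parent).
use std::collections::{HashMap, HashSet};

struct Event {
    creator: String,
    // Hash of this creator's previous event (None for the initial event).
    self_parent: Option<u64>,
    hash: u64,
}

/// Returns the set of peers caught creating two distinct events with the
/// same self-parent.
fn detect_forks(events: &[Event]) -> HashSet<String> {
    let mut seen: HashMap<(String, Option<u64>), u64> = HashMap::new();
    let mut malicious = HashSet::new();
    for e in events {
        let key = (e.creator.clone(), e.self_parent);
        match seen.get(&key) {
            Some(&hash) if hash != e.hash => {
                malicious.insert(e.creator.clone());
            }
            _ => {
                seen.insert(key, e.hash);
            }
        }
    }
    malicious
}
```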

On the Fleming side, we went full steam ahead in detailing the changes required for Node Ageing, and even started ticking off some initial implementation tasks.

In the implementation, we started moving the responsibilities of the Routing Table to the Chain. With the introduction of the Chain, which holds the history of membership changes in the section, some functionality was duplicated between it and the Routing Table. This move effectively deduplicates that code, making it easier to maintain and develop.
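
A toy model of that deduplication might look something like this (illustrative types only, not the actual Routing code): the current membership is simply derived from the Chain’s history rather than tracked in a separate Routing Table.

```rust
// Illustrative sketch: the Chain records membership changes, and the current
// section membership is replayed from that history on demand, which is the
// role the Routing Table used to duplicate.
use std::collections::BTreeSet;

enum MembershipEvent {
    Added(String),   // node joined the section
    Removed(String), // node left, was relocated, or dropped
}

struct Chain {
    events: Vec<MembershipEvent>,
}

impl Chain {
    /// Replay the membership history to obtain the current section members.
    fn current_members(&self) -> BTreeSet<String> {
        let mut members = BTreeSet::new();
        for event in &self.events {
            match event {
                MembershipEvent::Added(name) => {
                    members.insert(name.clone());
                }
                MembershipEvent::Removed(name) => {
                    members.remove(name);
                }
            }
        }
        members
    }
}
```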

Another implementation task is focused on changing the message relay mechanism. The current mechanism for passing a message between two distant sections relies on a chain of relay nodes, one per section, across the network. This has the drawback that a single node can break the chain, so the mechanism additionally involved waiting for an acknowledgement that the message had been received and, if no acknowledgement arrived, re-sending the message using a different route (a different chain of relay nodes), with up to 8 routes (the sender gives up if the 8th route fails).

The new mechanism gets rid of the acknowledgements and timeouts. To make them unnecessary, instead of a single relay node per section we use N/3 of them (N being the size of the section). Since we always expect every section to contain more than 2N/3 honest nodes, this means we expect at least one honest relaying node in every intermediate section (which we call a hop). All of the relaying nodes then pass the message to all of the relaying nodes in the next hop, making it certain that the message will reach the recipient as long as our security assumption (more than 2N/3 honest nodes per section) holds.
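
As a quick sanity check on that counting argument (with illustrative section sizes, and assuming a relay set of size ceil(N/3)):

```rust
// Back-of-the-envelope check: if strictly more than 2N/3 nodes are honest,
// then fewer than N/3 are dishonest, so any ceil(N/3) relay nodes must
// include at least one honest node. Section sizes below are illustrative.
fn relay_set_size(n: usize) -> usize {
    (n + 2) / 3 // ceil(N / 3)
}

fn max_dishonest(n: usize) -> usize {
    // "more than 2N/3 honest" leaves at most ceil(N/3) - 1 dishonest nodes
    relay_set_size(n) - 1
}

fn main() {
    for &n in &[8usize, 30, 60, 100] {
        let relays = relay_set_size(n);
        let dishonest = max_dishonest(n);
        // Even if every dishonest node lands in the relay set, at least one
        // relay is honest, so the message still crosses the hop.
        assert!(relays > dishonest);
        println!("N = {:3}: {} relays per hop, at most {} dishonest", n, relays, dishonest);
    }
}
```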

On the Node Ageing design front, we are analysing the flows needed to satisfy our requirements. We prepared some documentation in the form of flowcharts that will guide our implementation, which you can check out here (bear in mind that we are still working on making them more readable, explaining the context, etc.). Reviews, comments and contributions are very welcome! (If you wish to contribute, the source might be useful.) We also created an implementation of the flows in actual code, in order to be able to test their correctness before touching the Routing codebase.
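
For a flavour of what “an implementation of the flows in actual code” might look like, here is a toy model. Only the first rule (work units incremented for every node in the section when a timeout reaches consensus) comes from the flowcharts; the 2^age threshold below is purely an assumption for illustration.

```rust
// Toy model only. The work-unit increment on each consensused timeout is
// taken from the design flowcharts; the 2^age threshold is a made-up
// placeholder for whatever rule the final design specifies.
struct NodeState {
    age: u32,
    work_units: u64,
}

impl NodeState {
    fn on_consensused_timeout(&mut self) {
        // Every node in the section earns a work unit per consensused timeout.
        self.work_units += 1;
        // Hypothetical ageing rule: once enough work has accumulated, the
        // node ages (in the real design this would trigger a relocation).
        if self.work_units >= 1u64 << self.age {
            self.age += 1;
            self.work_units = 0;
        }
    }
}
```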

Crust

This week we’ve been investing a lot of time into quinn - the Rust QUIC protocol library. While quinn still doesn’t work quite as we’d like it to, we keep analysing the failures we observe under high load. We are in constant communication with the quinn team and have filed a number of issues and proposals.

All of our bug reports get quick responses and most of them have been fixed and closed already, so we’ll keep testing the changes and running our test scenarios. Overall, this work should also bring better QUIC support to Rust and benefit the community. :slightly_smiling_face:

79 Likes

First!

20 char…

13 Likes

Thanks for the update. It’s scary to think that node ageing and secure message relay might take even a few months to be completed and implemented, and there might be more stuff to implement before any testnet will start. Feels like a never-ending story, with new tasks popping up before Alpha 3. Just sayin’.

13 Likes

Third! Now to read …

5 Likes

Still so far ahead of other tech, though sometimes it seems others are making some good strides, and often those can be learned from and adapted to SAFE. Blockchain inspired Data Chains, to help make the network more resilient, support network restarts, and more; DAGs, gossip, and Algorand helped with PARSEC, which then led to POA Network having something that the brilliant Andreas Fackler was able to pass on for the full asynchrony of PARSEC. Huge steps that are leaps ahead.

To me what’s most important is that the environment encourages and fosters contribution and development in the form of stability, documentation, and resources. That way we have a chance to develop and know what we’re playing with and how it can change the game. That said, I really hope we’re up and going by sometime late next year. That may in actuality be optimistic, but I’m hoping so.

Anyways, great update! I’d love to hear more about RDF in Client Libs from @nbaksalyar and/or @marcin. Also really excited for the PNS PoC that I think @joshuef is working on, but of course I’m sure the credit is deserved all around by @hunterlester, @bochaco and others. The three amigos, I think of them as :smile:

Keep killing it @maidsafe!

33 Likes

Tweetstorms are a great thing and completely free, so I hope you keep them up! Some suggestions I have on that: use more hashtags and @ some crypto influencers. Try to make it more of a conversation than just a billboard. Crypto Twitter is quite a beast, and if you can get it working for you then you have a lot of free horsepower. These tweets feel like you are the professor at the head of the lecture hall and maybe we can ask you questions, but it’s like you are the center of the convo and other people might come under your wing. I suggest instead going to sit down beside someone and talking to them. You don’t have to be the first one to talk for a profitable discourse.

Unfortunately I can’t retweet for you or anything from my main crypto Twitter account because you blocked me. I think it was like 3 years ago, when I was contending that you should maybe not sell more MAID ahead of the schedule that was laid out to the first investors…

4 Likes

I understand why there are no time scales, but it would be good to see what is outstanding, allowing us to visualise progress. It does feel like things sounded close before Christmas and now I have no idea how close we are - even the Road to Fleming topics feel like a reality check on this front, showing how much is still to do.

I’d say this community is one of the most patient in the space, but it would be great to support that good faith with a clearer view of progress towards the Fleming goal.

Keep up the great work though! We are all routing (geddit?) for you!

31 Likes

@Traktion I think you’ll be pleased when the website update drops

20 Likes

@JimCollinson when can we expect the website update to be released?

2 Likes

Woah, hold on, that sounds like a timescale request!? :grinning:

24 Likes

I know MaidSafe likes having secrets, so I’m not sure you can give me an answer, but maybe a hint, like: probably within a week, or in April, or more like a few months…

2 Likes

Come on, it was an open goal!

Weeks rather than months.

18 Likes

Sounds interesting! I guess this is the reason why the roadmap items on the current website (which are all done except SOLID, according to @dirvine) aren’t being updated? :slight_smile:

That said, it would be really awesome to get an updated list of items left to be done before Fleming. I recall it was mentioned in one of the previous updates.

6 Likes

Just around the corner… Never guessed the corner is sooooo big. Anyway still following.

3 Likes

lol well at least you can look into the past and get a sense of time, and be like this was achieved on X day and it seems to have taken them about that long, and then extrapolate the current pace? Really these guys are like artists. They won’t hurry up and just get it done, because that might make the whole effort worthless.

I realize a lot (most?) of us are thinking about how much opportunity cost there is to investing in MAID when like 100 other things will churn out a final product before then. This increases risk for sure… but if they eliminate all the risk then there is no more fun trading and it becomes like a blue-chip stock. I say embrace this game and play it knowing that everyone is playing in the same arena and has the same info as you, so it’s not like you are handicapped in this game by not knowing like everyone else.

1 Like

I’m thinking the network’s full release date may be influenced more by external happenings than actual readiness.

2 Likes

Just remember there is a Hackathon in September. For there to be a Hackathon, there has to be something to hack. Stay calm

4 Likes

Thanks so much to the team for all of your hard work!

4 Likes

Some questions from the new ageing flowcharts.

I’m not clear on how work units are incremented.

Work units are incremented for all nodes in the section every time a timeout reaches consensus.

What is ‘a timeout’? Is it resource proof not being completed in time (as per the quote below)? So all nodes age +1 every time they all agree that a node has not passed resource proof in the allocated time?

At any time during this [resource proof] process, they may time out, in which case I will decide to reject them and vote for ParsecPurgeCandidate.

Would it be clearer to use the term ‘ParsecPurgeCandidate’ rather than ‘a timeout’?

Right now in my mind I equate ‘a work unit’ with ‘a timeout’ and that doesn’t quite fit conceptually.


I will send them a ResourceProof RPC. This gives them the “problem to solve”. As they solve it, they will send me ResourceProofResponses.

Any info on what problem(s) this is likely to be?


@mav has been very patiently responding to review comments

Should anyone want to tackle them, a good chunk have been marked good first issue or help wanted.

The team is great with the reviews, teaching me high-quality Rust practices, so I’m learning at the same time as contributing. Patience is easy when it brings big benefits to the repo, the maintainers and me. I’d encourage people to have a crack at these issues. It’s a very pleasant experience working with the MaidSafe team.


instead of a single relay node per section, we use N/3 of them (N being the size of the section).

At the risk of stating the obvious, does this mean a much larger increase in bandwidth for messaging in order to get much higher reliability and speed?

Does this mean each node after the first hop will receive N/3 copies of the same message? The first hop has one sender so each recipient gets the message once. But the second hop has N/3 senders so each recipient gets the message N/3 times. Do I understand this correctly?

Does a message contain the chunk data? If I PUT a chunk to the network, does every hop message contain the full 1MB data of that chunk? Seems like obviously yes… but could messaging be used to only pass metadata, which sets up a shorter proxy chain for the actual chunk data transfer? E.g. a metadata message `PUT <my_chunk_xor_name>` passes through the full message chain of maybe 10 hops, and the response from the destination section is `Upload via sectionX >> sectionY >> sectionZ` to actually store the data, which is now only 3 hops…?

hops * (N/3)^2 * 1MB (approx) is a lot of bandwidth for every chunk.
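
To put rough numbers on that (all values made up, just to show the order of magnitude):

```rust
// Back-of-the-envelope estimate with made-up numbers: N = 60, 10 hops, 1 MB
// chunks, and every relay forwarding the full message to every relay in the
// next hop, i.e. roughly (N/3)^2 copies per hop boundary.
fn main() {
    let section_size: u64 = 60;            // N (hypothetical)
    let relays_per_hop = section_size / 3; // N/3 = 20
    let hops: u64 = 10;                    // hypothetical route length
    let chunk_bytes: u64 = 1024 * 1024;    // ~1 MB chunk

    let total_bytes = hops * relays_per_hop * relays_per_hop * chunk_bytes;
    println!(
        "~{:.1} GB of traffic to deliver one 1 MB chunk",
        total_bytes as f64 / (1024.0 * 1024.0 * 1024.0)
    ); // prints ~3.9 GB with these numbers
}
```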

Overall I really like removing ACKs and the concept in general, I’m just not sure right now how to measure the cost vs benefit of this change.

22 Likes

Very helpful information format for those considering getting involved, I’d think.

8 Likes