Here are some of the main things to highlight this week:
- Tomorrow at 15:30 UTC, we are hosting a Zoom session about the SAFE Browser with @bochaco and @joshuef. See this post for more details on how to join.
- You can read up on a few thoughts from some of the new starts at the MaidSafe office in Chennai, India, in this post.
- We released SAFE Browser v0.11.1 earlier this afternoon.
- In SAFE Client Libs, we made some good progress with the RDF storage: the experimental proof of concept we have been working on allows us to store RDF triples natively in Mutable Data.
- In Crust, work is in progress on improvements to the bootstrap cache. Kudos to Luka, who’s implementing limits on the bootstrap cache.
This week, we’d like to remind folk about the video call that @bochaco and @joshuef are taking part in tomorrow. If you’ve got any questions about the SAFE Browser, or just want to chat about anything related to the front end of SAFE development, please do tune in. You can check out the details of how to join in this post - but to summarise, it starts at 15:30 UTC and will be on Zoom (Meeting ID: 735 305 884). If you have any questions on the Browser, Patter or anything else - now’s your chance!
Work has continued this week on planning the options for DevCon in order to nail down some of the current variables. General planning has also continued - it’s not particularly glamorous work, but these are the foundations for much of the work that will support our efforts in 2019. Since @cgray started at the end of last year, this has also involved developing a more comprehensive plan for content generation and distribution. That translates into many discussions and much analysis - and progress has definitely been made.
Finally, a couple of things to let you know about. You can read up on a few thoughts from some of the new starts at the MaidSafe office in Chennai, India, in this post. And we’ve just scheduled an interview for the next episode of @fergish’s SAFE Crossroads podcast with some special guests. It’ll be recorded at some point during the next fortnight (no pressure for the interviewees…).
Website enhancements continue apace, with v1.1 of safenetwork.tech deep in the design phase. There are plenty of pleasing refinements and improvements: in usability, in look and feel, and under the hood too.
As part of ongoing User Experience design work for the SAFE Network, we are also regularising our approach to usability testing, with tests now conducted on a weekly basis.
It’s incredibly insightful (and sometimes painful!) to watch people using your software, and not only that, it helps the whole team empathise with the people that will be using the network when it’s out in the wild.
Conducting user tests, and learning from them, is something that itself takes practice, but it’s already proving a real boon: paving the way for some features we’re working on. More on that in the coming weeks.
SAFE API & Apps
This week we’ve been tidying up some last issues, and we’re pleased to have released v0.11.1 earlier this afternoon. Alongside this, we’re working on some other more basic (but necessary) improvements to the browser code base, including (as ever) more tests, and fixing the code linter so we can automatically verify code quality in the near future.
We’ve also gotten further with some proof of concept work for the PNS (Public Name System) RFC, setting up the RDF toolchain to easily generate FileMaps and working out how resolution of these new data types might look. A healthy side effect of that is that the proof of concept CLI application is becoming more useful (though still nowhere near polished): it looks like a starting point for some easier methods of uploading to the network without needing to go through the WHM. Though we’re not focussing on that explicitly just now (and node-js isn’t necessarily ideal for a CLI), it’s not a bad thing to have waiting in the wings either.
The C# example applications and getting started guides are under final internal review. The tutorials will help developers to understand the authentication process and to develop desktop and mobile apps for the Network using the MaidSafe.SafeApp NuGet package.
We have started planning the next sprint for the safe_app_java project. This will target additional test cases, code coverage and tracking issues. For those who haven’t tried the native Android APIs or the Java APIs on desktop, give them a go. Hugs and bugs are welcome!
SAFE Client Libs
We have made some good progress with the RDF storage: the experimental proof of concept we have been working on allows us to store RDF triples natively in Mutable Data. This marks a milestone in having native RDF support in SAFE Client Libs (and, consequently, in all languages and platforms we support). When we add SPARQL to the stack, it will bring another standard way of working with the SAFE Network and consuming data. More importantly, it should also bridge the gap between the SAFE and Solid communities. It also allows us to connect with many other projects related to Linked Data and Semantic Web. In the coming weeks, we’ll be talking more about options and opportunities the RDF stack opens up for developers.
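To make the idea concrete, here is a minimal sketch of the storage model, assuming a simplified key-value view of Mutable Data. The names and key layout (`insert_triple`, a `subject|predicate` key) are purely illustrative and not the actual SAFE Client Libs API:

```rust
use std::collections::BTreeMap;

/// A single RDF triple, with all terms simplified to plain strings.
#[derive(Debug, Clone, PartialEq)]
pub struct Triple {
    pub subject: String,
    pub predicate: String,
    pub object: String,
}

/// Stand-in for a Mutable Data instance: an ordered key-value entry map.
pub type MutableDataEntries = BTreeMap<Vec<u8>, Vec<u8>>;

/// Store a triple natively as an entry keyed by "subject|predicate".
pub fn insert_triple(md: &mut MutableDataEntries, t: &Triple) {
    let key = format!("{}|{}", t.subject, t.predicate).into_bytes();
    md.insert(key, t.object.clone().into_bytes());
}

/// Look up the object stored for a given subject and predicate.
pub fn query(md: &MutableDataEntries, subject: &str, predicate: &str) -> Option<String> {
    let key = format!("{}|{}", subject, predicate).into_bytes();
    md.get(&key).map(|v| String::from_utf8_lossy(v).into_owned())
}
```

Because the triples live directly in Mutable Data entries, a query layer such as SPARQL could in principle be built by enumerating and matching entries - which is what makes the standard toolchain and the bridge to Solid plausible.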
In parallel, we have been continuing to work on the high-level RFC that outlines the benefits & challenges of having RDF implemented in Client Libs. As we have considered many options, including supporting RDF at the level of Vaults, we conducted extensive research on the topic. This RFC should conclude that research and move the idea to the phase of discussion and execution. When we feel that the RFC is in a ready state, we’ll put it out for community discussion. Expect more news in the following Dev Update.
Our Routing team continues with a double focus on PARSEC and design work for Fleming.
Our design sub-team focused on the key scalability difficulties that we need to address.
First, we clarified our options for handling limitations on the number of nodes any node can directly communicate with. One option would be to use an untrusted traversal layer which provides the upper layers with the abstraction of connecting to a large number of nodes. Such a layer could be implemented using the Disjoint Sections connection pattern or raw Kademlia. Another option considers relaxing the existing limitations in the communication layer (Crust). We would modify this layer to no longer necessarily expose active connections. This would enable concurrent communication with many more nodes than our permanent connections currently allow.
Also, we have better characterised how well our PARSEC implementation scales with section size. Section size is a significant factor in our designs, and we need further improvements in our implementation to open up the design space we need.
Additionally, we built models and simulations to characterise the Sybil resistance of the Network more precisely. This is still a work in progress, but the initial results re-confirmed some of our assumptions and also justify our continued work on PARSEC performance.
As mentioned last week, we have been thinking about how best to share more information about the progress we’re making in the Fleming design. We also hope this will allow us to engage in more discussions with you, the community, on some of these design elements. We will post an article on the forum by the next dev update which outlines our motivations for these Fleming discussions in the first place, so the context is clear before we start diving deeper in subsequent posts. We hope you’ll love this content and we are looking forward to discussing these ideas with you all.
Our PARSEC sub-team continues to focus on performance.
We developed and integrated the simplifying changes identified last week. With them, we again improved performance significantly and also reduced our memory footprint. The simpler code has also opened up promising new avenues for performance gains.
To drive our performance focus, we implemented new characterisation tests for performance with different section sizes. This will provide a good basis for our next performance drive focusing more on scalability.
In parallel, we are spending time improving our tests as needed. Recent changes revealed a weakness in the way our test framework deals with dynamic membership, which results in occasional failures during soak testing. Good tests that do not fail spuriously are key to maintaining our velocity while making the big changes needed for PARSEC performance.
This week we were planning to test Crust for any regressions via Routing end-to-end tests, but we had some trouble building Vault with the latest changes. We have fixed those issues. After merging this PR, Crust will expect the upper libs to also use safe-crypto, which is good as that’s what we want all our crypto-dependent crates to be using. We’ll make the necessary changes in Routing and Vault to allow integration of Crust with this changeset and carry out further testing, before passing things over to QA.
We finally removed the `Uid` trait in favour of public keys for peer IDs. That simplified Crust a lot: peer IDs are used all over the place, so a lot of code had to be generic over the `Uid` type. There’s also some work in progress on improvements to the bootstrap cache. Kudos to Luka, who’s implementing limits on the bootstrap cache. Up till now, the bootstrap cache had a fairly simple implementation - it would cache as many peers as it had connected to. This is not really practical, so we’ll keep a limited number of the most active cached contacts instead.
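A bounded, activity-ordered cache of that kind can be sketched in a few lines. This is an illustrative model only, not the actual Crust implementation:

```rust
use std::collections::VecDeque;
use std::net::SocketAddr;

/// Illustrative bounded bootstrap cache: keeps at most `capacity` of the
/// most recently active peer addresses, evicting the least active first.
pub struct BootstrapCache {
    capacity: usize,
    peers: VecDeque<SocketAddr>, // front = most recently active
}

impl BootstrapCache {
    pub fn new(capacity: usize) -> Self {
        BootstrapCache { capacity, peers: VecDeque::new() }
    }

    /// Record activity from `peer`: move it to the front (inserting it if
    /// unseen) and drop the least active peer once over capacity.
    pub fn touch(&mut self, peer: SocketAddr) {
        if let Some(pos) = self.peers.iter().position(|p| *p == peer) {
            self.peers.remove(pos);
        }
        self.peers.push_front(peer);
        self.peers.truncate(self.capacity);
    }

    /// Peers to try on bootstrap, most recently active first.
    pub fn peers(&self) -> Vec<SocketAddr> {
        self.peers.iter().copied().collect()
    }
}
```

The point of the ordering is that on a restart, the node tries the contacts most likely to still be online first, rather than an arbitrary (and unbounded) list of everyone it has ever connected to.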
Currently, Crust has a very strict connection manager: it keeps alive all the connections the upper layer tells it to. The problem with this approach is that in peer-to-peer networks we expect thousands of connections, and our operating systems simply won’t allow us to have that many open connections in parallel - at least by default. In addition, with PARSEC now in play, the upper layers will say when a node is offline. Crust was designed to let us know this at network speed, but that causes consensus imbalance. Using the upper layers to vote via PARSEC for consensus is steadier and removes the requirement for Crust to “police” nodes in this way.
So we are looking for a way to maintain some connections where it is more efficient to do so, disconnect when peers are not so active, and still be able to re-establish connections as fast as possible when required. In some literature this is called Link Management; we call it Dynamic Connections. We have started designing solutions to this problem as it is extremely important for future releases - enabling the Network to cope with a huge volume of connections as it scales.
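As a rough sketch of what Dynamic Connections could look like, the manager below demotes idle peers to a dormant record that keeps only the endpoint needed to reconnect. All names and types here are hypothetical, not Crust’s actual design:

```rust
use std::collections::HashMap;

/// A link to a peer: either a live connection or a dormant record that
/// retains just enough information (the endpoint) to reconnect quickly.
#[derive(Debug, PartialEq)]
pub enum Link {
    Active { endpoint: String },
    Dormant { endpoint: String },
}

pub struct ConnectionManager {
    pub links: HashMap<u64, Link>, // peer id -> link state
}

impl ConnectionManager {
    pub fn new() -> Self {
        ConnectionManager { links: HashMap::new() }
    }

    /// Open (or record) a live connection to a peer.
    pub fn connect(&mut self, peer: u64, endpoint: &str) {
        self.links.insert(peer, Link::Active { endpoint: endpoint.to_string() });
    }

    /// Demote an idle peer: give up the socket but keep the endpoint cached.
    pub fn demote(&mut self, peer: u64) {
        if let Some(link) = self.links.remove(&peer) {
            let endpoint = match link {
                Link::Active { endpoint } | Link::Dormant { endpoint } => endpoint,
            };
            self.links.insert(peer, Link::Dormant { endpoint });
        }
    }

    /// Re-establish a dormant link on demand; in a real implementation this
    /// is where a new socket to the cached endpoint would be opened.
    pub fn ensure_active(&mut self, peer: u64) -> Option<&str> {
        if let Some(link) = self.links.remove(&peer) {
            let endpoint = match link {
                Link::Active { endpoint } | Link::Dormant { endpoint } => endpoint,
            };
            self.links.insert(peer, Link::Active { endpoint });
        }
        match self.links.get(&peer) {
            Some(Link::Active { endpoint }) => Some(endpoint.as_str()),
            _ => None,
        }
    }
}
```

The key design choice is that demotion is cheap and reversible: nothing about the peer is forgotten, only the open socket is given up, so re-establishing the link is as fast as a fresh connect to a known endpoint.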