Here are some of the main things to highlight this week:
- The team are at Web3 Summit next week in Berlin
- We’ve now automated the release process for safe-cli, safe-authenticator-cli and safe_vault: when we merge a version change PR, it now triggers a release not only to our GitHub releases page but, for safe_vault, also publishes to crates.io
- The SAFE Network app is moving at pace, with the integration of install/download functionalities and onboarding process well under way
- The bug that was stopping the connection between SAFE Client Libs and Vaults has been dealt with
- The QA team have switched vaults over to build with musl libc rather than glibc
- We now have the first part of the end-to-end solution in SMD
- And a quick recap on where we are in the project plan
It’s been all hands on deck this week, with some really, really exciting news to follow very soon… The first release, which we’re calling Phase I (see below), is yards from the finish line - check out the GitHub board and you’ll see tick, tick, tick in the done column! And work on Phase II has already started. Yep, we’re smashing it. You can check out the teams’ updates in more detail below, but the work by everyone has been astounding, I’m sure you will agree. Each phase is a monumental step toward Fleming, and we’re working at breakneck speed to get this into your hands. In the background, the marketing team have been heads down, developing some exciting plans on how we best share this with you.
We’ve also been prepping for the Web3 Summit. While @dugcampbell will be on stage speaking about decentralised storage (Tue 20th / 5.30pm - don’t be late!) @joshuef and @cgray will be listening, learning and engaging in some good old debates about the decentralised future. We’re hoping it’s every bit as inspiring as it was last year, so it’s a good time to check out @dugcampbell’s round-up from last year’s event. If you’re there or in and around Berlin, get in touch and come say hi.
As you will have seen from the last project Gantt chart, we’ve split the work toward Fleming into phases: four high-level ‘vault’ phases plus node ageing. Each of these phases is broken down further internally to give us more clarity and scope, but the high-level overview keeps us heading in the right direction.
This Phase 1 release is the culmination of some of the ‘building blocks’ of the network. The finished network will be made up of three ‘building blocks’: clients, vaults and routing. Phase 1 includes the foundations for clients and vaults. The release will be a single vault which simulates the network on a single machine, and you’ll have the new data types and the SAFE CLI to play with.
With the safe-cli build and release process already set up in Jenkins, and our first use of the self_update crate implemented, tested and merged, this week the DevOps/QA team switched our attention to the closely related safe-authenticator-cli and migrated its build and release process to Jenkins. The safe-authenticator-cli and safe-cli go hand-in-hand, so it’s important that these two are configured consistently and can be released concurrently when required. We also investigated building both safe-cli and safe-authenticator-cli with musl libc, but concluded that there were dependencies which complicate this switch for now - one for the backlog. For more on musl libc, have a read of today’s Vaults section below.
DevOps/QA also investigated and implemented a nightly build of safe-authenticator-cli so that it can be used by the safe-cli CI, saving the need to build it from scratch every time CI runs. This has been set up through Jenkins and should trim a decent amount of time from the current CI runs. The quicker CI runs, the quicker we get feedback on the build, and the quicker we release.
Next, we will be looking to add self_update to safe-authenticator-cli. This uses the self_update crate to pull in new releases from our GitHub releases page, if any are available, when triggered by the user with the update command from the CLI. This saves the community from having to manually download and run the latest version of the authenticator CLI; instead we should just be able to ask you to enter a simple command into your terminal and it’ll update itself.
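At its core, this kind of self-update boils down to comparing the running binary's version against the latest release tag and only downloading when the tag is newer. Here's a minimal std-only sketch of that comparison step - purely illustrative, as the real logic lives inside the self_update crate:

```rust
/// Parse a "MAJOR.MINOR.PATCH" string (optionally "v"-prefixed, as GitHub
/// release tags often are) into a comparable tuple.
fn parse_version(v: &str) -> Option<(u64, u64, u64)> {
    let mut parts = v.trim_start_matches('v').splitn(3, '.');
    Some((
        parts.next()?.parse().ok()?,
        parts.next()?.parse().ok()?,
        parts.next()?.parse().ok()?,
    ))
}

/// True when the release tag is strictly newer than the running binary.
fn update_available(current: &str, latest_tag: &str) -> bool {
    match (parse_version(current), parse_version(latest_tag)) {
        (Some(cur), Some(latest)) => latest > cur,
        _ => false, // unparseable versions: play safe and don't update
    }
}

fn main() {
    assert!(update_available("0.2.0", "v0.2.1"));
    assert!(!update_available("0.2.1", "v0.2.1"));
    println!("update check ok");
}
```

When an update is available, the crate then downloads the matching release asset and swaps the binary in place; the version comparison above is just the gatekeeper for that step.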
We also added a minor feature to safe-cli which will be very helpful for inspecting the current state of an NRS Map container. Using the cat command, it is now possible to pass the --info flag to obtain a complete list of the subnames created for a public name, along with their corresponding metadata, such as the linked content. We’ve updated the “SAFE URLs” section of the User Guide with an explanation of how this can be used and an example of the type of information retrieved with this flag.
Late last week we started on the changes required in both safe-cli and safe-authenticator-cli to upgrade to the very latest version of SAFE Client Libs, which will be available very soon. This will allow us to use the CLIs not only with the mock network, but also to start connecting to vaults, which in turn lets us use the CLI to test the upcoming network releases.
Some good progress has also been made in our first attempts at a set of Node.js bindings for the new Rust APIs (the high-level APIs we are creating in safe-cli). This will allow Node.js apps to use them, as well as our SAFE Browser, and eventually we’ll expose them for webapps to consume. This time we are giving neon-bindings a try instead of node-ffi, as it seems simpler to maintain. Once this is achieved, we will be able to start expanding these Rust APIs to cover many more use cases and all the new data types.
SAFE Network app
The SAFE Network Application continues to coalesce. We’ve been adding in more of the actual application styling (as opposed to the rough, yet functional, developer version). The PR there has a few tweaks remaining but is looking very good. We have the start of the onboarding process integrated, and while it’s simple at first, this will eventually form the basis of folks’ first interaction with the network, including account creation and logging in to the network. We’ve also integrated the install/download functionalities into the UI, so you now get visual feedback on progress and can easily pause/cancel updates too. As these strands come together, we’ll be working towards an internal release candidate for testing next week, all being well.
SAFE Mobile Browser
This week we worked on some new features for the mobile browser. One of them enables the user to share the log files; this will go a long way in diagnosing bugs that users might encounter and, when shared with us, will result in quicker bug fixes. We have also almost completed the dark mode implementation for both platforms (Android and iOS). We are now focusing on some other minor bug fixes before we hand the app over to the QA team for final testing.
SAFE Client Libs
SAFE App has been ported to use the new data types. The existing API using the old MutableData has been changed to use the new sequenced MutableData (we have not yet started on support for unsequenced MD). This has brought us closer to being able to remove the old API and data types from the codebase entirely. The PR is being reviewed now.
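The distinguishing feature of sequenced MutableData is that every mutation must quote the entry's current version, so stale or conflicting writes are rejected rather than silently applied. A toy std-only sketch of that idea (illustrative only - the real types live in the safe-nd crate, and details such as version numbering differ):

```rust
use std::collections::HashMap;

/// Toy model of a sequenced mutable-data store: each entry carries a
/// version, and every mutation must quote the version it expects.
#[derive(Default)]
struct SeqStore {
    entries: HashMap<Vec<u8>, (u64, Vec<u8>)>, // key -> (version, value)
}

impl SeqStore {
    /// Insert (expected_version == 0 for a new key) or update an entry,
    /// rejecting the write if the quoted version is stale.
    fn mutate(&mut self, key: &[u8], value: Vec<u8>, expected_version: u64) -> Result<(), String> {
        if let Some((version, stored)) = self.entries.get_mut(key) {
            if *version != expected_version {
                return Err(format!("version mismatch: have {}, got {}", *version, expected_version));
            }
            *version += 1;
            *stored = value;
            Ok(())
        } else if expected_version == 0 {
            self.entries.insert(key.to_vec(), (1, value));
            Ok(())
        } else {
            Err("no such entry".into())
        }
    }
}

fn main() {
    let mut md = SeqStore::default();
    md.mutate(b"site", b"v1".to_vec(), 0).unwrap();
    md.mutate(b"site", b"v2".to_vec(), 1).unwrap();
    // A stale writer still quoting version 1 is rejected:
    assert!(md.mutate(b"site", b"v3".to_vec(), 1).is_err());
    println!("sequenced mutation ok");
}
```

An unsequenced variant would accept the same mutations without the version check, which is why supporting it is tracked as separate work.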
We released several versions of safe-nd, our data types crate. The documentation has been greatly improved and is now easily accessible. We have also stabilised and frozen the API, meaning there will be no more breaking changes before we move to Phase 2 of the Vaults project, though we may periodically add new functionality we deem useful enough. We released version 0.2.0 this week, which we plan to stay backwards-compatible with, and subsequently released 0.2.1, which added new public functions. This latest patch release, while making only minor additions, allowed us to shrink some AppendOnlyData test code by a factor of four, a worthwhile tradeoff. Otherwise, however, we plan to keep changes to this crate to a minimum.
We cracked a subtle bug related to connection mappings that was preventing the last step of connecting SAFE Client Libs and Vault. The Vault’s client handler maps socket addresses to client identities (public keys). Sometimes, when an older connection with the same public key lingered on, the Vault picked the wrong socket address to send the response to, and the ‘real’ client never received it.
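To illustrate the class of bug (this is a toy sketch with made-up names, not the actual safe_vault code): if responses are routed by looking up any connection matching a client's public key, a lingering old connection can shadow the live one. Keeping a single key-to-latest-address mapping, overwritten on every reconnect, removes the ambiguity:

```rust
use std::collections::HashMap;

/// Toy client handler: tracks only the most recent socket address seen
/// for each client public key, so responses always go to the live
/// connection rather than a lingering stale one.
#[derive(Default)]
struct ClientHandler {
    latest_addr: HashMap<String, String>, // client public key -> current socket addr
}

impl ClientHandler {
    /// On every (re)connection, overwrite the stored address.
    fn on_connect(&mut self, client_key: &str, addr: &str) {
        self.latest_addr.insert(client_key.to_string(), addr.to_string());
    }

    /// Address to send a response to, if the client is known.
    fn response_addr(&self, client_key: &str) -> Option<&String> {
        self.latest_addr.get(client_key)
    }
}

fn main() {
    let mut handler = ClientHandler::default();
    handler.on_connect("client-pk", "127.0.0.1:4000"); // old connection
    handler.on_connect("client-pk", "127.0.0.1:5000"); // client reconnects
    // Responses go to the newest address, not the lingering old one:
    assert_eq!(handler.response_addr("client-pk").unwrap(), "127.0.0.1:5000");
    println!("mapping ok");
}
```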
After quite some time invested (shoutout to @fraser, @ustulation and @nbaksalyar), it is now fixed and we are proceeding with the integration testing, which also helped uncover another subtle bug in quic-p2p: for efficiency reasons we don’t serialise binary messages over 1 kilobyte, and on the receiving end we assumed that larger messages never need deserialising. However, sometimes the binary serialisation format adds some extra bytes of its own, so quic-p2p wrongly assumed a message was non-serialised, resulting in a subsequent failure. We have fixed this and will be releasing a new quic-p2p version with these fixes shortly. With all that, we’re now able to run the full Client Libs test suite against the local Vault, and the early results look promising!
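The quic-p2p issue is a nice example of why inferring "was this serialised?" from message length alone is fragile: serialisation overhead can push a payload that started just under the threshold over it. A toy reproduction (constants and wire format are illustrative, not quic-p2p's actual ones):

```rust
/// Illustrative threshold: "payloads above this are sent raw".
const THRESHOLD: usize = 1024;

/// Pretend serialiser: prepends an 8-byte length header to the payload.
fn serialise(payload: &[u8]) -> Vec<u8> {
    let mut wire = (payload.len() as u64).to_le_bytes().to_vec();
    wire.extend_from_slice(payload);
    wire
}

/// Buggy receiver heuristic: "anything at or below the threshold was
/// serialised by the sender; anything above was sent raw".
fn buggy_is_serialised(wire: &[u8]) -> bool {
    wire.len() <= THRESHOLD
}

fn main() {
    // The sender serialises a payload that is just *under* the threshold...
    let payload = vec![0u8; THRESHOLD - 2];
    let wire = serialise(&payload);
    // ...but the header pushes the wire size *over* it, so the buggy
    // receiver skips deserialisation and fails later. Accounting for the
    // overhead (or carrying an explicit flag) removes the guesswork.
    assert!(wire.len() > THRESHOLD);
    assert!(!buggy_is_serialised(&wire));
    println!("threshold bug reproduced");
}
```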
Integration testing the local Vault with SAFE Client Libs has uncovered a slight discrepancy in the way we implemented the Safecoin balance creation logic. Mock Vault allowed the use of a test function to create any account with any key and balance, while the local Vault simply let all requests pass through, even if a client didn’t have any balance at all. While both approaches make sense in a testing environment, we decided to converge on logic that is closer to the real network environment. Now, we allow you to create arbitrary Safecoin balances out of thin air, but only if you’re doing so for yourself (i.e. for the client key you use to connect to the Vault). To make it even more magical, we don’t limit the number of Safecoins you can create for yourself, so be prepared to test this feature and have fun!
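The converged rule is simple to state: mint any amount, but only for your own key. A toy sketch with illustrative names (not the actual safe_vault code):

```rust
use std::collections::HashMap;

/// Toy test vault: a client may mint an arbitrary balance out of thin
/// air, but only for the key it is connected with.
#[derive(Default)]
struct TestVault {
    balances: HashMap<String, u64>, // client public key -> coin balance
}

impl TestVault {
    fn create_balance(&mut self, requester: &str, target: &str, amount: u64) -> Result<(), String> {
        if requester != target {
            return Err("a test balance may only be created for your own key".into());
        }
        // No upper limit on the amount you mint for yourself.
        *self.balances.entry(target.to_string()).or_insert(0) += amount;
        Ok(())
    }
}

fn main() {
    let mut vault = TestVault::default();
    assert!(vault.create_balance("alice-pk", "alice-pk", 1_000_000).is_ok());
    assert!(vault.create_balance("alice-pk", "bob-pk", 10).is_err());
    println!("balance rule ok");
}
```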
Vaults
In the last week our DevOps/QA department completed work to migrate the bulk of safe_vault CI over to Jenkins. While doing so, they also automated the release process, so now when we merge a version change PR it triggers a release not only to our GitHub releases page but also publishes to crates.io. Up until now these release steps had been done manually, so having them automated not only improves speed and frees us up to work on other things, it also helps remove the risk of human error.
Our DevOps/QA team also switched vaults over to build with musl libc rather than glibc. This means the vault binary should work on a much wider range of Linux distributions than it would have previously, so fewer support headaches.
Not content with this, we also added self_update to the latest vault code. This uses the self_update crate to pull in new releases from our GitHub releases page, if any are available, every time the vault is started. This is intentionally different behaviour from the specific update command required for safe-cli and safe-authenticator-cli. It saves us having to pester the community to download and run the latest version of vaults; instead we should just be able to ask you to restart your vault and it’ll update itself. Pretty neat for rapid progress through the test phases, I’m sure you’ll agree.
Secure Message Delivery
We now have the first part of the end-to-end solution: section messages are now signed and verified on receipt. With all the tests passing, we’re confident this part of the Secure Message Delivery solution is sound.
We are building on top of this initial solution and have solved another problem that caused us difficulty with previous schemes: we can now prune the proof that we send while still ensuring messages do not fail validation. Finally, we are cleaning up the code, removing the remaining parts left over from the previous scheme.
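The essence of verifying a section message is checking a chain of endorsements: the recipient starts from a section key it already trusts and confirms that each newer key in the proof is signed by the one before it, which is also why the proof can be pruned down to only the links the recipient actually needs. A toy std-only sketch of that chain check - the real code uses BLS signatures and routing's actual types, so the hash-based "signature" here is purely a stand-in:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in "signature": a hash of (signer key, payload). Illustrative
/// only; real section messages use proper cryptographic signatures.
fn toy_sign(key: u64, payload: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (key, payload).hash(&mut h);
    h.finish()
}

/// One proof link: a new section key plus its endorsement by the
/// previous key in the chain.
struct ProofLink {
    new_key: u64,
    endorsement: u64,
}

/// Walk the chain from a key the recipient already trusts, checking that
/// every key is endorsed by the key before it.
fn verify_chain(trusted_key: u64, chain: &[ProofLink]) -> bool {
    let mut current = trusted_key;
    for link in chain {
        if link.endorsement != toy_sign(current, link.new_key) {
            return false;
        }
        current = link.new_key;
    }
    true
}

fn main() {
    let chain = vec![
        ProofLink { new_key: 2, endorsement: toy_sign(1, 2) },
        ProofLink { new_key: 3, endorsement: toy_sign(2, 3) },
    ];
    assert!(verify_chain(1, &chain));
    // A tampered link breaks the chain:
    let bad = vec![ProofLink { new_key: 9, endorsement: 0 }];
    assert!(!verify_chain(1, &bad));
    println!("chain verification ok");
}
```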
With these two big items out of the way, we were able to flesh out the remaining work for Secure Message Delivery and add it to the board. We are making good progress on completing these remaining items. One of the bigger ones is an investigation into a recent test failure discovered when running with real Parsec instances. We have identified the problematic area and are looking in more detail to pinpoint the exact cause of the failure.