This week, the marketing team have been focusing on new content. So far, we’ve got a new Medium post exploring the new data types (although this was technically published last week), which aims to break down some of the data variations and how they may fit together to create something spectacular. We’ve also published a follow-up to the Phase 1 Vaults release, which goes into a little more detail about what’s included (primarily focusing on the new CLIs). We haven’t re-posted these on the Forum yet, so if anyone wants them on here, just holla and we’ll get that done ASAP!
This morning, we sent out the first of September’s newsletters, this one tackling deepfakes. If you’re not signed up to our newsletters, then why not?! Head over to safenetwork.tech to receive this in your inbox fortnightly. And we also sent out a tweetstorm focusing on a bunch of stories from the past week or so.
In between producing all this new stuff, we’ve been working out the marketing focus for the next few months, with the plan to steer our focus primarily towards DApp developers. With the new CLIs and the Vaults work well underway, it’s the perfect time to reinvigorate some of this work and bring new developers into the fold. We’ll share our plans in due course.
Vaults - Phase 2
Vault Phase 2 planning has moved on to implementation
Phase 1 (single, real vault) has settled and we are full steam ahead with Phase 2.
As you’ve seen, our implementation approach is to create the minimum viable product, deliver it, and then iterate on top of that. As such, Vault Phase 2 has been split into two iterative phases: Phase 2a will extend Phase 1’s single-vault architecture to have multiple vaults but only one section, while Phase 2b will build on this to have multiple vaults and multiple sections.
This is a super exciting phase because this is where the rubber meets the road: this is where we continue towards decentralising the network and integrating the consensus mechanism.
We have a number of Epics outlined in the project plan, covering the high level requirements, with the biggest piece being integration of vault to routing’s PARSEC.
Because there is quite a level of complexity in Phase 2, we are only moving to detailed planning as we finish one Epic and approach the next. So although the Epics will largely stay static, as more detail is worked through and tasks are scoped, these will be added to the board, and you’ll therefore notice more cards appearing as we move through this work. But of course, that also means we’ll have completed a whole bunch along the way!
The Vaults team have started on the initial items of adding mock routing to Vaults before we can integrate with real routing.
The other line of development is in routing, to handle Parsec pruning. This is required to ensure we do not continually increase our memory consumption, or the amount of data we need to transfer to newly joining nodes. To build this, we will leverage the existing mechanism that creates a new Parsec instance in the case of a section splitting.
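The mechanics can be sketched roughly like this. This is an illustrative sketch only, with hypothetical types and names, not the actual routing code: the idea is that, instead of letting one Parsec instance accumulate gossip history forever, we periodically replace it with a fresh instance seeded from the latest agreed state, just as already happens on a section split.

```rust
// Illustrative sketch only: hypothetical types, not the real routing code.
#[derive(Clone, Debug, PartialEq)]
struct SectionState {
    members: Vec<String>, // hypothetical: the latest consensused membership
}

struct Parsec {
    state: SectionState,
    history: Vec<String>, // gossip events accumulated since this instance was created
}

impl Parsec {
    fn new(state: SectionState) -> Self {
        Parsec { state, history: Vec::new() }
    }

    fn record_event(&mut self, event: &str) {
        self.history.push(event.to_string());
    }

    /// Pruning: spin up a brand-new instance seeded from the current agreed
    /// state, dropping the old gossip history, so that memory use (and the
    /// data a newly joining node must fetch) stays bounded.
    fn pruned(&self) -> Parsec {
        Parsec::new(self.state.clone())
    }
}

fn main() {
    let mut parsec = Parsec::new(SectionState { members: vec!["node_a".to_string()] });
    parsec.record_event("vote_1");
    parsec.record_event("vote_2");
    let fresh = parsec.pruned();
    assert_eq!(fresh.state, parsec.state); // agreed state carries over
    assert!(fresh.history.is_empty());     // accumulated history is dropped
}
```

A newly joining node would then only need the fresh instance’s seed state rather than the full event history.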
One other vault-related update for this week. During testing @karamu discovered an intermittent bug where a file which had appeared to be uploaded to the shared vault could not be retrieved. After some investigation @nbaksalyar was able to diagnose the root cause: the shared vault is hosted on Mac hardware, which occasionally purges the tmp data and thus can result in the loss of immutable data chunks. The issue is one of configuration, and today we have made the necessary changes in the shared vault config to address the problem. A proper fix in PR form will be rolled out in due course. To connect to the shared vault you will need to update your vault_connection_info.config file to match the latest in our GitHub release. See the original Vault Phase 1 (real vault) release post for full instructions on how to connect to a shared vault, or how to run your own vault.
After last week’s release of SAFE CLI v0.3.0, we resumed our development tasks towards having the High Level API exposed in other languages, like NodeJS. Although this is still at a very early stage, we were able to make some good progress: we created the NodeJS bindings for the `fetch` APIs and are already using them from the SAFE Browser. We’ve also created a first project board for the NodeJS binding effort, with the first few tasks we aim to achieve.
On the SAFE CLI side, this week we worked on encoding the media type (a.k.a. MIME type) in the XOR-URLs generated when uploading files to the network. We already had bytes reserved for it in the XOR-URL encoding, but we weren’t making use of them until now. This will allow applications, like the SAFE Browser, to treat files accordingly, since they obtain the media-type information when fetching content using an XOR-URL. This is now done by the `fetch` HL API, as well as by the `safe cat` command when `--info` is provided.
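To illustrate the idea of reserved media-type bytes, here is a hedged sketch. The lookup table, byte layout, and function names are all invented for illustration; the real safe-cli encoding has its own registry and format. The point is simply that a small numeric code travels inside the XOR-URL payload, so a fetching application learns the content type without any extra lookup.

```rust
// Hypothetical sketch of the concept, not the real safe-cli XOR-URL encoding.
fn media_type_code(mime: &str) -> u16 {
    // Invented lookup table for illustration only.
    match mime {
        "text/html" => 1,
        "image/png" => 2,
        "application/json" => 3,
        _ => 0, // unknown: no media type encoded
    }
}

fn encode_payload(xor_name: [u8; 32], mime: &str) -> Vec<u8> {
    let mut payload = Vec::with_capacity(2 + xor_name.len());
    // The reserved bytes carry the media-type code alongside the XOR name.
    payload.extend_from_slice(&media_type_code(mime).to_be_bytes());
    payload.extend_from_slice(&xor_name);
    payload
}

fn decode_media_type(payload: &[u8]) -> u16 {
    u16::from_be_bytes([payload[0], payload[1]])
}

fn main() {
    let payload = encode_payload([0u8; 32], "text/html");
    assert_eq!(decode_media_type(&payload), 1); // round-trips the code
    assert_eq!(payload.len(), 34);              // 2 reserved bytes + 32-byte XOR name
}
```

A browser decoding the payload can then pick a renderer (HTML page, image, raw bytes) based on that code alone.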
We also started implementing a new high level API (and its corresponding `files add` command) which allows users to add a file to an existing `FilesContainer`. This will add a single file (optionally overwriting it if it already exists) to a `FilesContainer`, either from a local path/location or from a safe:// location. The latter covers scenarios where some files have already been uploaded to the network and we want to link them from other `FilesContainer`s as well.
As you can probably tell, this is our first step towards allowing these types of operations with safe:// content. For example, we imagine in the future being able to sync a `FilesContainer` with a safe:// URL as the source location, in addition to just being able to do it with a local folder/path.
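The two source kinds the new API must distinguish can be sketched as follows. This is a toy model with invented types (a `HashMap` standing in for a `FilesContainer`); the real safe-cli API differs, but the branching between a local path and a safe:// location is the core idea.

```rust
// Illustrative toy model of `files add`; types and signatures are invented.
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum FileSource {
    Local(String), // a path on disk to upload
    Safe(String),  // existing safe:// content to link to, avoiding re-upload
}

fn classify_source(location: &str) -> FileSource {
    if location.starts_with("safe://") {
        FileSource::Safe(location.to_string())
    } else {
        FileSource::Local(location.to_string())
    }
}

fn add_file(
    container: &mut HashMap<String, FileSource>, // stand-in for a FilesContainer
    dest_path: &str,
    location: &str,
    force: bool, // overwrite an existing entry only when explicitly asked
) -> Result<(), String> {
    if container.contains_key(dest_path) && !force {
        return Err(format!("'{}' already exists in the FilesContainer", dest_path));
    }
    container.insert(dest_path.to_string(), classify_source(location));
    Ok(())
}

fn main() {
    let mut container = HashMap::new();
    // From a local path, then linking content already on the network:
    add_file(&mut container, "/img/logo.png", "./logo.png", false).unwrap();
    add_file(&mut container, "/img/shared.png", "safe://somesite/shared.png", false).unwrap();
    // Overwriting requires the force flag:
    assert!(add_file(&mut container, "/img/logo.png", "./logo2.png", false).is_err());
    assert!(add_file(&mut container, "/img/logo.png", "./logo2.png", true).is_ok());
}
```

The safe:// branch is what makes cross-container linking possible without duplicating data on the network.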
SAFE Network App
SNAPP has been on the receiving end of a PR which fixes a wealth of styling and component-level issues. We’ve also been finalising the auto-update procedure, with some final tests on the CI build systems underway at the moment.
We’ll soon be applying these learnings to the SAFE Browser desktop app, which will then enable application updates to be managed from within SNAPP itself.
SAFE Desktop Browser
The POC browser has been tidied up and now becomes our `dev` branch. This involved fixing a lot of post-electron-upgrade issues, as well as test fixes and tweaks now that we’ve removed the olde baked-in authenticator. The `dev` branch is now looking sleeker than ever, with the new NEON API fixed and working on Windows (the previous release now has a Windows package!), and our E2E tests now passing on all platforms (although not on Windows CI just yet). This was never possible on Windows before due to several fun Windows ‘quirks’, so this is all looking very positive.
With Windows fixed, we’ve turned our attention to the new `safe_nodejs` library, which we’re building atop the CLI-exposed dev APIs. We now have the main APIs exposed, with NRS create/update and files create/sync, as well as the `fetch` which was already in the POC. So naturally, we’ve started exposing these APIs to the DOM, which is working nicely now that we’ve nailed down our packaging problems, and we’re also working on some nice UI enhancements to enable easier Public Name registration too.
All of which you’ll find in this PR.
On top of this browser integration, we’re also looking at packaging the `safe_nodejs` library to help speed up application build times (currently, building from source adds something like 45 minutes to our CI test times, so we’re pretty keen to bring that back down). Initial tests with `node-pre-gyp` are looking promising though!
SAFE Mobile Browser
The Pull to Refresh feature PR #137 has been merged. This feature enables the user to reload the web page using a pull-down gesture, without using the reload button in the menu popup, on both platforms (iOS & Android).
We are now working on setting up automated UI tests in the mobile browser repo using the Xamarin.UITest tools. UI testing is a critical component in identifying whether adding new features or fixing an issue has broken any existing feature or introduced new bugs.
Automating the UI testing ensures testing is done faster and more often, and helps us avoid regression issues, especially as the project’s complexity, and the number of projects, steadily rises. Next week, we’ll be configuring the CI to run the tests on multiple devices with varying API levels/OS versions and hardware specifications across multiple platforms, to ensure a consistent user experience for all users.
SAFE App C#
This week we spent some time looking into the `safe-cli` code and exploring how to expose the FFI bindings from `safe-cli` to provide the API for C# using the same code base.
Our aim is to have a setup which can be used to provide the .NET API for functions like uploading and fetching data from the new vaults. Once we have the basic setup in place, we will expand the API to cover the other available functionality, including working with the new data types, keys, XOR-URLs, NRS, and wallets.
SAFE Client Libs
As well as preparing for the upcoming Vault Phase 2 milestone work, we’ve resolved a number of issues. One issue, resolved in PR #1034, concerns permissions for applications sending mutations to the network on behalf of the user. Previously, applications had `transfer_coins` as the only permission gate for mutating data, reading coin balances, and transferring coins on behalf of the user. We felt this should be granularised into specific permission gates for each specific operation, so we have added separate gates for each of these.
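The shape of that granularisation can be sketched like this. The flag names other than `transfer_coins` are invented for illustration; the actual gate names are defined in PR #1034.

```rust
// Hypothetical sketch of granular permission gates; only `transfer_coins`
// is a name from the original, the other flags are invented for illustration.
#[derive(Default, Clone, Copy)]
struct AppPermissions {
    transfer_coins: bool, // may transfer coins on the user's behalf
    read_balance: bool,   // hypothetical gate: may read coin balances
    mutate_data: bool,    // hypothetical gate: may send data mutations
}

fn main() {
    // Before the change, the single `transfer_coins` flag gated all three
    // operations; now each operation is checked against its own gate.
    let perms = AppPermissions { transfer_coins: true, ..Default::default() };
    assert!(perms.transfer_coins);
    assert!(!perms.mutate_data);  // coin transfer no longer implies mutation rights
    assert!(!perms.read_balance); // nor balance-reading rights
}
```

The practical upshot is that a user can grant an app the ability to mutate data without also letting it touch their coins.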
@marcin raised a PR re-enabling non-mock builds in Travis, which we use for Continuous Integration; these are working again after all the churn in SCL. In addition, this PR fixes and turns back on the binary compatibility tests, which had also been disabled, and includes a small refactor while we’re at it. Finally, this PR removes the line of code which ran `cargo check` before `cargo clippy` in our `clippy-all` script, which had been necessary due to a bug we ran into and reported. A very nice description of the bug, its investigation and its fix can be found here.
We have completed the BLS planning stage along with the Phase 2 items. This project has now resumed from where we paused it, and we can now leverage the previous work done on Parsec to offer BLS Distributed Key Generation (DKG), as well as the clean-up done through the Secure Message Delivery (SMD) project, to start generating real BLS section keys and using them throughout routing.
We have started an initial change to the BLS threshold, moving from quorum to one third, which makes the emulated BLS more compatible with the real BLS.
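For a feel of the difference, here is some illustrative arithmetic only; the exact formulas used in routing may differ. A supermajority quorum needs strictly more than two thirds of members, whereas a one-third threshold is much lower, matching how real BLS threshold schemes are typically parameterised.

```rust
// Illustrative arithmetic; the exact formulas in routing may differ.

/// Supermajority quorum: strictly more than two thirds of members.
fn quorum_count(members: usize) -> usize {
    members * 2 / 3 + 1
}

/// One-third threshold, as used for compatibility with real BLS.
fn one_third_threshold(members: usize) -> usize {
    members / 3
}

fn main() {
    let elders = 9;
    assert_eq!(quorum_count(elders), 7);        // 9 * 2 / 3 + 1
    assert_eq!(one_third_threshold(elders), 3); // 9 / 3
}
```

With 9 members, the emulated scheme now needs far fewer signature shares (3 rather than 7), bringing its behaviour in line with a real BLS threshold setup.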
Secure Message Delivery
We completed the last mandatory task from the SMD board: we added more tests to ensure messages are being properly secured and invalid messages are being detected and rejected. There are some optional SMD tasks left over, which are paused for now and will be addressed as time allows.
To complete the SMD implementation so it matches the RFC, we will need to wait until the BLS project is completed. At that point, we will be able to do the final QA and sign it off. As such, we will keep this board open with a single remaining item.
Feel free to send us translations of this Dev Update and we’ll list them here.