SAFE Network Dev Update - July 30, 2020

Summary

Here are some of the main things to highlight since the last dev update:

  • We are working on an automated process that creates nodes, joins an internal testnet, stores data and churns the nodes. This will allow us to scale up our testing of sections.
  • We have finalised a first implementation (PR #186) of support for Policy mutations concurrent with Sequence item mutations, and can now start making the changes in safe-vault and the client libraries to adapt to some minor changes we made to the Sequence request types.
  • Community member @happybeing has been investigating FUSE filesystem options in Rust, and has started a draft document proposing a safe-fs API. Discussion is ongoing here.
  • We have implemented a full continuous delivery pipeline for our safe-nd Rust crate.

Vaults Phase 2

Project plan

We’ve been continuing our study of the memory usage of vaults and we have some interesting results. The issue we uncovered wasn’t something happening throughout the execution of the process; rather, the OOM killer was terminating the vault process before it could deallocate the memory it held. So it boiled down to looking at the different components of Vaults and their memory usage.

We identified two components with high memory usage. One of them is PARSEC. We are already aware of this and, as you will know, its removal and replacement is already in progress. The other component with reasonably high memory usage was quic-p2p. We dived into the crate and identified that we were sending a copy of every message back to the user of quic-p2p. This is useful when the sending of a message fails but, for successful sends, we do not need to send back a copy of the message. Removing this alone reduced the memory usage significantly.
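The shape of that change can be sketched as follows. This is a hedged illustration, not the actual quic-p2p API: the `SendEvent` and `send` names are invented for the example. The point is that a copy of the payload travels back to the caller only when a send fails.

```rust
// Illustrative sketch (not the real quic-p2p types): return the payload
// to the user only on a failed send, instead of copying every message back.
#[derive(Debug, PartialEq)]
enum SendEvent {
    // Success: report the fact only; no payload copy is retained.
    SentUserMessage,
    // Failure: hand the payload back so the caller can retry it.
    UnsentUserMessage(Vec<u8>),
}

fn send(payload: Vec<u8>, peer_reachable: bool) -> SendEvent {
    if peer_reachable {
        // Simulated successful write: the payload is dropped here
        // rather than being cloned back to the user.
        SendEvent::SentUserMessage
    } else {
        SendEvent::UnsentUserMessage(payload)
    }
}

fn main() {
    // A successful send keeps no copy; a failed send returns the bytes.
    assert_eq!(send(vec![1, 2, 3], true), SendEvent::SentUserMessage);
    assert_eq!(
        send(vec![1, 2, 3], false),
        SendEvent::UnsentUserMessage(vec![1, 2, 3])
    );
}
```

Dropping the success-path copy means each in-flight message exists once instead of twice, which is where the memory saving comes from.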

We will continue working on such fixes and improvements and carry on with testnets to help identify issues in the network and its performance. However, we will be keeping these testnets internal for now. We are working on a testing setup that creates nodes, joins the network, stores data and churns too. This will allow us to scale up our testing on our testnets and find weaknesses ASAP. This will be fully automated so we don’t anticipate needing to run manual community testnets in the short term. We anticipate that the next community testnet we do set up will be feature-packed with the new kids on the block, i.e. CRDT, AT2 and farming, or at least a subset of these.

SAFE Browser / SAFE Authenticator (mobile)

Browser Project Plan
Authenticator Project plan

We refactored the authenticator app to reuse the authenticator APIs from the NuGet package and drop its own implementation of the FFI wrapper. This step removed lots of redundant code and tests, and trimmed down the CI/CD setup.

In parallel, we updated the mobile browser to support the latest CLI/vaults. We tested both apps against locally running testnets and fixed a few issues we found during tests. There are a few changes pending in the authenticator app. Once these are completed, both apps will be ready to use with the new vaults.

SAFE API

Project plan

There hasn’t been much activity in this repo this past week, only some maintenance. Nevertheless, we will soon be implementing some bug fixes, as well as making sure it’s up to date with the changes being made to the client libraries.

CRDT

We’ve finalised a first implementation (PR #186) of support for Policy mutations concurrent with Sequence item mutations. This first approach allows a different branch of the Sequence to be created for each new Policy that is set. For example, if a client working offline makes data mutations without being aware of a new Policy set on the content, then when that client goes back online and broadcasts those operations to the network, they will still be applied, but in a new branch of the Sequence. When retrieving the items from a Sequence, the items corresponding to the “main” branch are read by default, but eventually we can allow clients (through a different type of API/request) to also retrieve items from any of the other branches formed under previous Policies.

This first implementation gives us a good start, covering all the potential scenarios where clients are appending items to a Sequence while one or more other clients may be setting a new Policy concurrently. From a user perspective, this can be seen simply as a Policy mutation always winning over a concurrent data mutation: the data mutation is still applied, but as belonging to a branch formed under a previous Policy, i.e. the Policy that the client was aware of when sending the data mutation request.
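The branching behaviour described above can be sketched in a few lines. This is a hedged model, not the real Sequence CRDT: the names (`Sequence`, `append`, `read_main`, `read_branch`) and the representation of a Policy as a plain index are invented for illustration.

```rust
use std::collections::BTreeMap;

// Illustrative model: each Policy that is set opens a new branch, and an
// append lands on the branch of the Policy the client was aware of.
struct Sequence {
    policy_count: usize,                    // how many Policies have been set
    branches: BTreeMap<usize, Vec<String>>, // Policy index -> branch items
}

impl Sequence {
    fn new() -> Self {
        Sequence { policy_count: 1, branches: BTreeMap::new() }
    }

    fn set_policy(&mut self) {
        self.policy_count += 1; // a new Policy starts a new branch
    }

    fn append(&mut self, item: &str, known_policy: usize) {
        self.branches
            .entry(known_policy)
            .or_insert_with(Vec::new)
            .push(item.to_string());
    }

    // Default read: the "main" branch, i.e. the current Policy's branch.
    fn read_main(&self) -> &[String] {
        self.branches
            .get(&(self.policy_count - 1))
            .map(|v| v.as_slice())
            .unwrap_or(&[])
    }

    // A different request type could expose the older branches too.
    fn read_branch(&self, policy: usize) -> &[String] {
        self.branches.get(&policy).map(|v| v.as_slice()).unwrap_or(&[])
    }
}

fn main() {
    let mut seq = Sequence::new();
    seq.append("online-item", 0);
    seq.set_policy(); // another client sets a new Policy concurrently
    seq.append("offline-item", 0); // offline client, unaware of the new Policy
    // The offline append is not lost: it lives on the old Policy's branch.
    assert_eq!(seq.read_main(), &[] as &[String]);
    assert_eq!(
        seq.read_branch(0),
        &["online-item".to_string(), "offline-item".to_string()]
    );
}
```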

We will now start making the changes in safe-vault and the client libraries to adapt to some minor changes we made to the Sequence request types. For example, one such change merged ownership and permissions into the Policy, rather than keeping them as separate attributes of a Sequence. Changing ownership or setting permissions for other keys is now done simply by setting a new Policy for the Sequence. We will still work on exposing APIs which give users more flexibility and granularity when changing ownership and setting permissions on a Sequence.
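A rough sketch of what that merge looks like, with invented field names (the real Policy type will differ): ownership and permissions become attributes of a single Policy value, and a convenience API like `change_owner` just produces a new Policy under the hood.

```rust
use std::collections::BTreeMap;

// Illustrative only: ownership and permissions merged into one Policy,
// instead of being separate attributes of the Sequence.
#[derive(Clone, Debug, PartialEq)]
struct Policy {
    owner: String,                       // previously a separate attribute
    permissions: BTreeMap<String, bool>, // key -> allowed to append?
}

// A granular API could still be exposed; setting a new owner is simply
// setting a whole new Policy on the Sequence.
fn change_owner(current: &Policy, new_owner: &str) -> Policy {
    Policy {
        owner: new_owner.to_string(),
        permissions: current.permissions.clone(),
    }
}

fn main() {
    let p1 = Policy { owner: "alice".to_string(), permissions: BTreeMap::new() };
    let p2 = change_owner(&p1, "bob");
    assert_eq!(p2.owner, "bob");
    assert_eq!(p2.permissions, p1.permissions);
}
```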

tree-crdt

Work has continued on the experimental tree-crdt code described last week. We implemented the suggested execution optimisation from the crdt-tree paper, as well as an index to quickly look up a node’s children. We added lamport+actor timestamps, which means this code can now work correctly when running from separate processes (without sharing a clock). We also fixed an issue where duplicate operations entered the log if they had the same timestamp, and added several test cases.

Most recently, we implemented log truncation (discarding old operations) as described in the paper. This prevents the operation log from growing forever. For correctness, this method requires knowing the full set of replicas, which is problematic for SAFE Network usage, considering that each user agent is a replica and there is also vault churn. Further investigation will be needed to find a good enough truncation strategy, but this does not seem to be a showstopper and can be revisited later in the development process.
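The truncation rule itself can be sketched in a few lines (a simplified model, not the tree-crdt implementation): an operation is causally stable, and safe to discard, once every known replica has acknowledged a timestamp at or beyond it. This is exactly why the full replica set must be known; one missing or lagging replica pins the whole log.

```rust
use std::collections::BTreeMap;

// Simplified sketch: keep only ops newer than the oldest acknowledged
// timestamp across all known replicas. Timestamps here are plain u64s.
fn truncate(log: &mut Vec<u64>, last_ack: &BTreeMap<&str, u64>) {
    // With an empty (unknown) replica set, nothing is provably stable.
    if let Some(&stable) = last_ack.values().min() {
        log.retain(|&ts| ts > stable);
    }
}

fn main() {
    let mut log = vec![1, 2, 3, 4];
    let mut acks = BTreeMap::new();
    acks.insert("replica-a", 3);
    acks.insert("replica-b", 2); // lags behind, so ops 3 and 4 must stay
    truncate(&mut log, &acks);
    assert_eq!(log, vec![3, 4]);
}
```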

In collaboration, community member @happybeing has been investigating FUSE filesystem options in Rust, and has started a draft document proposing a safe-fs API. Discussion is ongoing here.

Transfers

SAFE Transfers Project plan
SAFE Farming Project plan
SAFE Client Libs Project plan
SAFE Vault Project plan

As has been noticed on the forum, a lot of things have been ironed out with regard to paying for data, the farming algorithm, and rewarding nodes for storing data.

We are working intensely on confirming there is no regression after the integration of transfers, farming and the resulting messaging/modules refactor of safe-vault, with the aim of quickly moving over to this as the working branch. Work has started on updating SCL to be compatible with these new farming vaults, as a load of structural improvements have been made to the vault modules and network messaging.

Routing

Project Plan

Since last week, we have focused on some feature refactoring tasks to prepare for PARSEC removal. These include resolving an issue to improve message signature accumulation so it does not implicitly require PARSEC (mainly addressed by the Routing PR to notify lagging elders and the safe-network-signature-aggregator PR to avoid mixing signatures from different public keys), and ongoing work on an issue to change NeighbourInfo sending and receiving to not require accumulation (covered by the Routing PR to remove SendNeighbourInfo votes). There are not many feature refactoring tasks left on our TODO list, so the formal work of PARSEC removal is expected to start soon.

Some other good news this week is that, thanks to the newly released threshold_crypto 0.4.0, there is no longer a need to support different versions of the rand crate. The merged PR “update deps and refactor code to simplify rand crate usage” took away one more unnecessary complexity.

Continuous Delivery

We’ve been on the cusp of enabling continuous delivery (CD) for a while now. The SAFE Browser has automated releases, but we’ve been battling GitHub Actions in order to automate version bumps, changelog updates, tags and releases.

In the last week, we’ve taken some of the learnings from the SAFE Browser and applied them to one of our Rust libraries, with the aim of getting CD going there. This was a bit of a slog, with a fun array of issues to contend with (you cannot automate pushes to protected branches on GitHub, can’t review your own pull request, and can’t easily get the commit message of a PR). But we’ve overcome all of this and have, at last, a new action for our Rust repos to automate version bumps and generate changelogs (from conventional commits). This automatically generates a PR for the version bump, and we then have another action that merges it, which (once back in master) tags the new version and kicks off releases for us. All automagically :mage:

It is very satisfying to finally have this process in place and working, and it’s a decided improvement over manually managing version numbering, changelogs and releases across repos. We’ve implemented this CD process in the safe-nd crate initially, with the intention of giving it a little time to bed in and ensuring it all works smoothly, before rolling it out to other libraries.

Useful Links


Feel free to reply below with links to translations of this dev update and moderators will add them here:

:bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the SAFE Network together!

:love:

71 Likes

:davidpbrown:

Second is not first :stuck_out_tongue:

That makes great reading… that’s a lot more progress than I’d expected was possible for that kind of problem.

:+1:

28 Likes

Second!!! blah blah blah 20 chars

20 Likes

High memory usage and memory leaks are different things.

14 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :racehorse:

21 Likes

Thanks for another comprehensive update. Good progress is being made, although I personally am unhappy that I won’t have a new testnet to try and break any time soon.

It will be well worth the wait…

Excellent work folks, thank you.

18 Likes

Would depend on the person making the statement, the terminology they use, and whether they are looking at identified problems, or effects of said problem without sound diagnosis.
Or so I believe.

10 Likes

Sweet :kissing_heart:

Thx for the update Maidsafe devs…

Always like reading this :clap:
:vulcan_salute:@happybeing :stuck_out_tongue:

23 Likes

Oooo!!! Exciting!!!

Great update. Thanks, MaidSafe Crew and @happybeing too!! :smile:

26 Likes

You know, whenever there is a problem - like the CRDT policies, or that memory stuff - and the Maidsafe team says “we already have good ideas how to progress from here”, I always think, “Yeah, so they say, but in reality they are lost and just trying to make it sound better.”

…But they freakin’ do have the ideas! It’s never just lip service.

Ha, I actually read the discussion linked above. Hard to explain why, because I understood like 1% of it. But somehow I get entertained by that kind of stuff - like these Dev updates too. I guess it is a bit like watching a skillful craftsman at work.

So thanks for the team and @happybeing for my weekly dose of entertainment once again!

20 Likes

Still a long way to go!!! It will take at least 4 years

1 Like

Since you seem to know this for sure, would you care to explain a bit more in detail what lies ahead, and why it will take at least 4 years?

7 Likes

Do you work for the team?
Do you have some insider knowledge?

Or are you stating opinion as fact?

Never mind.
11 hrs reading and you think you know what’s going on.

8 Likes

Yup, there will be updates for decades I hope

6 Likes

Great job team!! This project is the mother of all projects, but I am unable to understand how, being such a huge, solid and reliable project, it is not listed on more than 2 exchanges!!!

18 Likes

Most people think we aim too big and can’t do it.
But we get closer every day.

As for exchanges, once we release, we will be OK.

15 Likes

SAFENetwork will be an end run around many existing projects. If you’re going to go for it why not go big :slight_smile:

14 Likes

Lots of delightful morsels of progress as usual! Thanks Maidsafe team and also @happybeing with the continual work you do to extend the project :pray:.

Having the CD really bed in is going to progressively free up more time to focus on higher level work. Stoked :grin:.

Amazing update overall. Keep going team, your efforts and dedication are much appreciated from many.

{m3}

17 Likes

SAFE project preceded them all, so not “end running” around anybody, just aiming at a much more ambitious goal.

10 Likes

Great point and clarification!

3 Likes