Vaults Phase 2
We did promise you a new toy, and today it has been delivered.
We have been testing and iterating furiously all week to polish the months of work that went into producing a testnet that supports vaults from home. The full details of this release, along with instructions on how you can participate, can be found in this post.
This new testnet includes some exciting features that we have been working on, so please head over and give it a shot, either as a member with a vault of your own, or as a client creating data and performing transactions that we can see on our vaults.
We will now be fully focussing on the next steps which primarily include section splits, data organising and client request processing across multiple sections.
Another milestone reached
Work continues on symlinks. The final step is to enable resolution of symlinks in a `SafeUrl` path. Most programming languages have a `realpath()` function (or equivalent) that strips out `./` and resolves `../` and each symlink in the path as appropriate to generate a final real, aka canonical, path. This week we have written a `realpath()` function that understands the metadata in a `FileContainer` and is called within the `Safe::fetch()` API, so it will work for all `SafeUrl` resolution lookups. Some improvements and testing remain, but hopefully a PR can be ready early next week with full symlink support.
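To illustrate the first half of what `realpath()` does, here is a minimal sketch of path canonicalisation, collapsing `./` and `../` components. It deliberately omits the symlink-expansion step, which in the real implementation consults `FileContainer` metadata; the function name and behaviour here are illustrative only.

```rust
// Simplified canonicalisation: collapse `.` and `..` components of an
// absolute path. The real resolver would additionally expand each symlink
// it meets (via FileContainer metadata); that lookup is elided here.
fn canonicalise(path: &str) -> String {
    let mut parts: Vec<&str> = Vec::new();
    for comp in path.split('/') {
        match comp {
            "" | "." => {}            // skip empty and current-dir components
            ".." => { parts.pop(); }  // step back up one directory
            other => parts.push(other),
        }
    }
    format!("/{}", parts.join("/"))
}

fn main() {
    assert_eq!(canonicalise("/a/b/../c/./d"), "/a/c/d");
    assert_eq!(canonicalise("/docs/../img/logo.png"), "/img/logo.png");
    println!("ok");
}
```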
This past week we focused on the client side of the Sequence CRDT, starting by creating some initial E2E tests in the safe-api crate and running them against a Baby Fleming section. We got very basic scenarios working for creating Sequence content, appending to it, and retrieving data from it. We've also migrated the FilesContainers and NRS Containers to make use of the Sequence CRDT (instead of the AppendOnlyData type), and all E2E tests pass against a Baby Fleming section.
We also worked on adding an LRU (Least Recently Used) cache on the client side, which is where the local CRDT replica is held. We still need to research cache-refresh strategies: for example, an application using content shared with other clients may want to refresh every so often, even though its own mutations will always be merged on the network. We'll explore these possibilities once all basic scenarios for a single active client are fully functional end-to-end.
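For readers unfamiliar with the eviction policy, here is a toy LRU cache built from std collections. The `ReplicaCache` name and shape are hypothetical, purely to show the policy, and are not the actual safe-api types.

```rust
use std::collections::{HashMap, VecDeque};

// Toy LRU cache: when a new key pushes us past capacity, the entry that
// was used least recently is evicted. Illustration only; not safe-api code.
struct ReplicaCache<V> {
    capacity: usize,
    map: HashMap<String, V>,
    order: VecDeque<String>, // front = least recently used
}

impl<V> ReplicaCache<V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: &str) -> Option<&V> {
        if self.map.contains_key(key) {
            self.touch(key);
            self.map.get(key)
        } else {
            None
        }
    }

    fn put(&mut self, key: String, value: V) {
        // Evict the least recently used entry if a brand-new key
        // takes us over capacity.
        if self.map.insert(key.clone(), value).is_none()
            && self.map.len() > self.capacity
        {
            if let Some(evicted) = self.order.pop_front() {
                self.map.remove(&evicted);
            }
        }
        self.touch(&key);
    }

    // Mark `key` as the most recently used entry.
    fn touch(&mut self, key: &str) {
        self.order.retain(|k| k != key);
        self.order.push_back(key.to_string());
    }
}

fn main() {
    let mut cache = ReplicaCache::new(2);
    cache.put("a".into(), 1);
    cache.put("b".into(), 2);
    cache.get("a");           // "a" is now the most recently used
    cache.put("c".into(), 3); // evicts "b", the least recently used
    assert!(cache.get("b").is_none());
    assert!(cache.get("a").is_some());
}
```

The open question mentioned above, when to refresh a cached replica that other clients may have appended to, is a policy layered on top of this mechanism, not part of it.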
Lastly, we started converting the other parts of the Sequence data type, i.e. the lists of permissions and owners kept within any Sequence instance, to also be CRDTs. This will allow us to test more complex scenarios, such as two different authorised clients making mutations to the same content, with all the CRDT magic happening as concurrent appends/mutations are merged on the network with no conflicts.
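The "no conflicts" property can be sketched with a toy convergent sequence: each append carries a globally unique (counter, actor) tag, so merging two replicas is just a set union with a deterministic order. The real Sequence CRDT's ordering is more sophisticated; this only shows why concurrent appends from two clients cannot clash.

```rust
use std::collections::BTreeSet;

// Each appended entry is tagged with a per-replica counter plus an actor id
// that breaks ties deterministically, so every replica sorts entries the
// same way. Toy model only, not the actual Sequence CRDT.
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Entry {
    counter: u64,
    actor: u8,
    value: String,
}

#[derive(Clone, Default)]
struct Replica {
    entries: BTreeSet<Entry>,
}

impl Replica {
    fn append(&mut self, actor: u8, value: &str) {
        let counter = self.entries.iter().map(|e| e.counter).max().unwrap_or(0) + 1;
        self.entries.insert(Entry { counter, actor, value: value.into() });
    }

    // Merging is a plain set union: commutative, associative, idempotent,
    // which is exactly what makes concurrent appends conflict-free.
    fn merge(&mut self, other: &Replica) {
        for e in &other.entries {
            self.entries.insert(e.clone());
        }
    }

    fn values(&self) -> Vec<String> {
        self.entries.iter().map(|e| e.value.clone()).collect()
    }
}

fn main() {
    let mut a = Replica::default();
    let mut b = Replica::default();
    a.append(1, "from-a");
    b.append(2, "from-b"); // concurrent append on another replica
    let mut ab = a.clone();
    ab.merge(&b);
    b.merge(&a);
    // Merging in either order converges to the same state.
    assert_eq!(ab.values(), b.values());
}
```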
SAFE Client Libs integration has been progressing well during the last week. We've updated the core library for the new AT2-style transactions and have started on the basic integration of a `TransferActor`. This will be a wrapper struct abstracting away much of the request logic and management that were previously handled in the client-layer APIs, giving us a bit more modularity there. We've started unit testing this, and are progressing with updating tests across the core lib in its entirety.
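As a rough sketch of the wrapper pattern described above: the actor owns the bookkeeping (actor id, transfer sequence number) so callers no longer assemble requests by hand. Every name and field below is hypothetical, not the actual safe_core API.

```rust
// Hypothetical TransferActor-style wrapper: it stamps each outgoing
// transfer with the correct monotonically increasing sequence number,
// logic the client layer previously had to manage itself.
struct TransferActor {
    actor_id: u64,
    next_seq: u64,
}

struct TransferRequest {
    actor_id: u64,
    seq: u64,
    to: u64,
    amount: u64,
}

impl TransferActor {
    fn new(actor_id: u64) -> Self {
        Self { actor_id, next_seq: 0 }
    }

    // Build a request; sequencing is handled internally.
    fn transfer(&mut self, to: u64, amount: u64) -> TransferRequest {
        let seq = self.next_seq;
        self.next_seq += 1;
        TransferRequest { actor_id: self.actor_id, seq, to, amount }
    }
}

fn main() {
    let mut actor = TransferActor::new(7);
    let r1 = actor.transfer(42, 10);
    let r2 = actor.transfer(42, 5);
    assert_eq!((r1.seq, r2.seq), (0, 1)); // sequencing handled by the actor
    assert_eq!(r2.amount, 5);
}
```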
It has been a pen-and-paper week for AT2 `safe-transfers`. We are now designing and setting up a mechanism (`SectionActor`) which takes responsibility for the `Money` that is paid by clients for the data they upload. A section in the SAFE Network would have its own account, maintained by this distributed `SectionActor` situated at the Elders of that section. When a client pays for uploading data, the money gets credited to the data-storing section's account, and the instances of the `SectionActor` are then responsible for distributing the rewards to the nodes that have done the work (of storing). This ties in with farming, where we are currently testing granular rewards with batched payouts. We are improving and refining this flow as we proceed with the implementation across the board.
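The money flow above can be sketched as simple account arithmetic: payments credit the section account, and a batched payout splits the balance among the nodes that stored the data. The equal-shares split and remainder handling here are assumptions for illustration, not the network's actual reward algorithm.

```rust
// Toy model of the section account flow: client upload payments go in,
// batched payouts to storage nodes come out. Split policy is illustrative.
struct SectionAccount {
    balance: u64, // section funds, in the smallest Money unit
}

impl SectionAccount {
    fn credit_upload_payment(&mut self, amount: u64) {
        self.balance += amount;
    }

    // Batched payout: split the balance equally among the nodes that did
    // the storage work; any indivisible remainder stays in the account.
    fn payout_rewards(&mut self, node_count: u64) -> u64 {
        let share = self.balance / node_count;
        self.balance -= share * node_count;
        share
    }
}

fn main() {
    let mut account = SectionAccount { balance: 0 };
    account.credit_upload_payment(1_000);
    let per_node = account.payout_rewards(7);
    assert_eq!(per_node, 142);
    assert_eq!(account.balance, 6); // remainder retained by the section
}
```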
Parsec removal work is going ahead at full speed. Last week we mentioned that we are signing all the data in the shared state with BLS signatures to make it self-validatable. This work hit some unexpected complications earlier this week, so it is taking longer than we had hoped, but the hardest issues now seem to be solved and what remains is relatively straightforward. Expect a PR soon.
We are also almost ready to replace the DKG (distributed key generation) module with the BLS-DKG crate. DKG wasn’t originally meant to be part of Parsec (there is no mention of it in the whitepaper) but it was tacked onto it because it was the simplest way to implement it at that time. It has some serious limitations though, so this replacement is definitely a big improvement, regardless of the Parsec removal itself. The PR is currently undergoing review and there are still some minor bugs to squash, but we expect it will be merged soon.
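To show the shape of "self-validatable" shared state, here is a toy where each entry carries a proof that can be checked in isolation, with no consensus round. A keyed hash stands in for the BLS signature purely for illustration; note that real BLS verification needs only the public section key, whereas this symmetric stand-in needs the secret, so it blurs that important distinction.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A shared-state entry plus a proof that the section endorsed it.
// Any node holding the entry and its proof can validate it alone.
#[derive(Hash)]
struct StateEntry {
    key: String,
    value: String,
}

// Stand-in "signature": a hash keyed by a section secret. Real code
// would use a BLS signature from the section key instead.
fn sign(section_secret: u64, entry: &StateEntry) -> u64 {
    let mut h = DefaultHasher::new();
    section_secret.hash(&mut h);
    entry.hash(&mut h);
    h.finish()
}

// Validation needs only the entry and its proof; with BLS this check
// would use the public section key rather than the secret.
fn verify(section_secret: u64, entry: &StateEntry, sig: u64) -> bool {
    sign(section_secret, entry) == sig
}

fn main() {
    let entry = StateEntry { key: "elders".into(), value: "v3".into() };
    let sig = sign(1234, &entry);
    assert!(verify(1234, &entry, sig));
    let tampered = StateEntry { key: "elders".into(), value: "v4".into() };
    assert!(!verify(1234, &tampered, sig)); // altered state fails the check
}
```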
Feel free to send us translations of this Dev Update and we’ll list them here:
As an open source project, we're always looking for feedback, comments and community contributions - so don't be shy, join in and let's create the SAFE Network together!