Here are some of the major areas of focus this week:
- safe_client_libs version updates and Alpha 2 compatibility
- Java and JNI bindings generator
- C# bindings generator
- Merge and split scenarios in the context of Data Chains
- Migration of the p2p crate to a Tokio + futures based API
SAFE Authenticator & API
The tasks to update safe_app_nodejs to make use of the master branch of safe_client_libs have been completed and it’s ready for internal testing. All API changes and the new functions for sign keys have been integrated. We will now update the example applications to work with the latest version of safe_app_nodejs and also use them for testing purposes.
The changes which affect the API are the following:
Permission-set objects no longer exist on the front-end; permissions are simply passed as a plain array:
const permissionSet = ['Insert', 'Update', 'Delete', 'ManagePermissions'];
You’ll no longer have to work with MutableData Values objects. For example, you can now simply run app.mutableData.getKeys() to return a plain array of the keys associated with the respective mutable data entries.
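As a rough sketch of the new shape of the API (the mock object below merely stands in for a real MutableData handle obtained from the network; it is not part of safe_app_nodejs):

```javascript
// Hypothetical stand-in for a MutableData object fetched from the
// network; only the shape of the return value matters here.
const mockMutableData = {
  // New API: getKeys() resolves to a plain array of keys rather than
  // a wrapper object that must be iterated via handles.
  getKeys: () => Promise.resolve(['key1', 'key2', 'key3']),
};

// Permissions are now a plain array of action strings:
const permissionSet = ['Insert', 'Update', 'Delete', 'ManagePermissions'];

async function listKeys(md) {
  const keys = await md.getKeys();
  // A plain array supports the usual Array methods directly.
  return keys.map((k) => k.toString());
}

listKeys(mockMutableData).then((keys) => console.log(keys));
```

Because the result is an ordinary array, it composes with `map`, `filter`, and friends without any intermediate handle bookkeeping.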
See the merged PR for full details (https://github.com/maidsafe/safe_app_nodejs/pull/166/files) while we update the documentation.
The DOM API is being updated to reflect the changes made in safe_app_nodejs. The documentation of the new APIs will be updated and published as soon as this is complete. As you can imagine, some of these changes will reduce the number of handles the DOM API needs to expose, which should simplify web app code to a certain degree.
The test suites for both the DOM API and safe_app_nodejs are being updated to reflect recent safe_client_libs changes, to improve readability, and perhaps to act as another source of practical documentation. We are also working on adding tests which verify that invalid input parameters are handled correctly.
We have just started some refactoring of the safe_app_nodejs code with the intention of decoupling the Node-FFI binding functions, which take care of the interactions with safe_client_libs, from the functions exposed as the package’s API. This should allow us to use different binding mechanisms as we find necessary moving forward, e.g. we could create different versions of the library which interact with safe_client_libs using the Rust FFI, WebAssembly, or any other possible mechanism.
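The decoupling pattern can be sketched roughly as follows; all names here are illustrative, not the actual safe_app_nodejs internals:

```javascript
// Illustrative sketch: the API layer receives the bindings layer as a
// constructor argument instead of calling Node-FFI functions directly,
// so the same public API could sit on top of FFI, WebAssembly, or any
// other transport.

// One possible bindings implementation (a trivial in-memory fake).
const fakeFfiBindings = {
  appInit: (name) => ({ name, backend: 'ffi' }),
};

// The API layer knows only the bindings interface, not its implementation.
class SafeApp {
  constructor(bindings) {
    this.bindings = bindings;
  }
  init(name) {
    return this.bindings.appInit(name);
  }
}

const app = new SafeApp(fakeFfiBindings);
console.log(app.init('example-app').backend); // prints "ffi"
```

Swapping in a WebAssembly-backed bindings object would then require no change to the API layer itself.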
The safe_app_java repository has been updated to use a multi-project setup, and CI scripts have been integrated. We’re waiting for the JNI bindings to be ready for integration. Meanwhile, @joy is looking into methods of distribution. @shankar is iterating on the dev website prototype based on initial feedback from the internal team. We will share the prototype once the final version is ready.
A test suite for the existing APIs has been integrated into the safe_app_csharp repository. Only the few functions needed for the messaging app have been exposed so far. @rachit and @krishna_kumar will start implementing all the APIs and corresponding test cases to make it feature complete.
Development of the custom browser is continuing well. The prototype protocol handling was functioning, but it required running the old SAFE Browser for authentication. We’re now moving on to integrating the authenticator to remove this dependency.
SAFE Client Libs
The highlight of this week is our work towards an Alpha 2-compatible release of the new version of the SAFE Client Libs API. Currently, we’re checking that all recent changes in the master branch of SAFE Client Libs work nicely with the existing network data. Besides manual testing, we intend to add some automated integration tests that will connect to the real network and use an existing account to do basic tasks: this way, we can be sure that developers can use the new API to build new apps and upgrade existing ones without resorting to mock-routing or waiting for a full network reset. Once we’re confident about it, we’re going to remove the alpha-2 branch and make master the only upstream version.
As we move forward, we’ll be dedicating more time to the problem of binary data compatibility. We’re starting with small steps: @marcin has been developing an update policy for our dependencies, some of which can change their binary encoding from version to version. We’ve already stumbled upon this with our compression library, brotli2, and the same might happen with e.g. our serialisation library (serde). As updates to binary encoding are breaking changes, we decided to freeze the versions of such critical libraries. In addition, we’ll be more careful with future changes and general dependency updates. The outline of this policy can be found in this pull request, which will be revised and adopted soon.
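In Cargo terms, freezing a critical dependency means pinning an exact version with the `=` requirement; this fragment is only an illustration of the mechanism, and the version numbers shown are hypothetical, not our actual pins:

```toml
[dependencies]
# Pinned exactly: a newer release could change the binary encoding of
# data already stored on the network (versions shown are illustrative).
brotli2 = "=0.3.2"
serde = "=1.0.25"
```

With a caret or bare version requirement, `cargo update` could silently pull in a release with a different binary encoding; the `=` form rules that out.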
In the meantime, we’re continuing with the bindings generator, which is progressing very well but still requires more time to figure out the small details and catch bugs. While @nbaksalyar is wrapping up the JNI and Java bindings, @adam has started on the C# bindings generator. It’s shaping up nicely and some preliminary results are available in this repository. The major remaining tasks for both languages are testing and proper callback support. Once we’re done with that, with the help of the front-end team we’re going to start integrating the automatically generated bindings with the Android and Xamarin projects.
Routing & Crust
The Routing team has been focused on working out the details of how merges and splits can be effectively handled in the current design of Data Chains. We are considering various scenarios and different flows, trying to find a suitable approach that plays well with eventual consistency patterns. This is ongoing and we’re hoping to get things firmed up for merge/split handling soon.
Figuring this out requires a thorough understanding of the network fundamentals and Data Chains features. We’re also looking to factor in quite a few drastic scenarios, such as high node loss in short periods, or restarting the network with the ability to republish the data that has been stored on it… It is important to make sure that these features can be implemented on top of Data Chains; many details are quite tricky to get right, and we need to ensure they function together without affecting other parts of the system as a side-effect.
We’ve been planning to replace Crust’s NAT traversal code with a dedicated crate, p2p. Before that could happen, the crate had to be ported to a Tokio + futures based API, and that has finally landed and been merged into the master branch. This concludes two weeks of work on a complete rewrite of the mio-based code. We’re now able to integrate the p2p crate into Crust, and that work has already started. The end result will be more reliable NAT traversal with a reduced Crust codebase. Since p2p implements both UDP and TCP NAT traversal and µTP is ready, UDP support for Crust is not that far away.