One of the changes we’re working through this week concerns the updated version of
libp2p, which brings some welcome improvements to AutoNAT, which detects nodes that are behind a firewall or router and enables them to join. We will soon be able to distinguish between global and private IP addresses, which was an issue before.
Inevitably, however, it also brings some fresh bugs to keep us on our toes. So most of our work this week has been under the bonnet, tinkering with tappets rather than polishing up the front end.
That said, @ChrisO has now updated the logging interface, giving storage and formatting options to folks running a node. Users can specify the log output directory, or standard output if they wish, but by default the locations are now:
Linux: $HOME/.local/share/safe/node/<peer-id>/logs
macOS: $HOME/Library/Application Support/safe/node/<peer-id>/logs
Windows: C:\Users\<username>\AppData\Roaming\safe\node\<peer-id>\logs
Client logs are stored by default in the equivalent client directories.
There’s a choice of formats too: JSON output can be specified via the CLI if desired.
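To make the defaults above concrete, here is a minimal sketch of how a per-platform log directory and a CLI-selectable format might be resolved. Note that `LogFormat`, `default_log_dir`, and the example peer ID are illustrative names invented for this sketch, not the actual safe_network API.

```rust
use std::path::{Path, PathBuf};
use std::str::FromStr;

// Hypothetical sketch only: the real safe_network logging code differs.
#[derive(Debug, PartialEq)]
enum LogFormat {
    Default, // human-readable lines
    Json,    // structured JSON, selectable via the CLI
}

impl FromStr for LogFormat {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "default" => Ok(LogFormat::Default),
            "json" => Ok(LogFormat::Json),
            other => Err(format!("unknown log format: {other}")),
        }
    }
}

/// Build the per-node log directory under a platform data dir,
/// e.g. `$HOME/.local/share` on Linux or
/// `$HOME/Library/Application Support` on macOS.
fn default_log_dir(data_dir: &Path, peer_id: &str) -> PathBuf {
    data_dir.join("safe").join("node").join(peer_id).join("logs")
}

fn main() {
    // "12D3KooWExample" is a made-up placeholder peer ID.
    let dir = default_log_dir(Path::new("/home/alice/.local/share"), "12D3KooWExample");
    println!("{}", dir.display());
    assert_eq!("json".parse::<LogFormat>(), Ok(LogFormat::Json));
}
```

In a real node the data-dir root would come from the platform rather than being hard-coded, which is what gives the three different defaults listed above.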
As you will know, there are three maxims we follow as we integrate our work with
libp2p. The first is to leave it to Kad where possible, because there are many more eyes on the
libp2p code and we want to standardise on it wherever we can. The second is to leave it to the client: the less the nodes have to do, the better. The third is to maximise security: everything should be signed and data should be encrypted. We find that these maxims are currently somewhat contradictory. @Anselme has been looking at registers, which are not secured by default when their
Kademlia records are created, making them vulnerable to bad nodes. New records are also not CRDTs on Kad, so race conditions are possible on creation; subsequent writes are CRDT merges, so no problem there. Anselme thinks he has come up with a solution to both these problems.
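To illustrate why signed creation and CRDT merges matter here, below is a deliberately simplified sketch. The real register type in safe_network is a Merkle-DAG CRDT with proper cryptographic signatures; this stand-in uses a grow-only set and a plain owner check, purely to show that forged writes get rejected and that merges commute regardless of order.

```rust
use std::collections::BTreeSet;

// Illustrative stand-in, not the safe_network register type.
#[derive(Debug, Clone, PartialEq)]
struct Register {
    owner: String,             // stand-in for a signing public key
    entries: BTreeSet<String>, // grow-only set: writes only ever add
}

impl Register {
    fn new(owner: &str) -> Self {
        Register { owner: owner.into(), entries: BTreeSet::new() }
    }

    /// A write is only accepted if it is "signed" by the owner
    /// (modelled here as a plain string match).
    fn write(&mut self, signer: &str, entry: &str) -> bool {
        if signer != self.owner {
            return false; // reject forged writes from bad nodes
        }
        self.entries.insert(entry.to_string());
        true
    }

    /// CRDT merge: set union. The order of merging never matters,
    /// so concurrent replicas converge without race conditions.
    fn merge(&mut self, other: &Register) {
        self.entries.extend(other.entries.iter().cloned());
    }
}

fn main() {
    let mut a = Register::new("alice");
    let mut b = a.clone();
    a.write("alice", "v1");
    b.write("alice", "v2");
    assert!(!b.write("mallory", "evil")); // forged write rejected

    // Merging in either order yields the same state: the CRDT property.
    let mut ab = a.clone();
    ab.merge(&b);
    let mut ba = b.clone();
    ba.merge(&a);
    assert_eq!(ab, ba);
    println!("converged: {:?}", ab.entries);
}
```

The gap described above is that record *creation* sits outside this picture: if the initial record is neither signed nor a CRDT, a bad node can race or replace it before the merge semantics ever apply.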
@Qi_ma and @joshuef are working through another issue around how a client, when storing chunks, finds the closest nodes and routes its data. We feel the client should be doing more of the work. We want it to repeatedly query the network for the closest nodes and, once the responses match, put the chunk, rather than running a full discovery process.
Our current preferred approach is PR #482 on maidsafe/safe_network (“feat: Using put record for upload” by maqi), which takes us back to using the Kademlia
put implementation, though now with verification at put time. This cuts out avenues for bad nodes to do sneaky things and simplifies the code around PUT a fair bit too.
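The client-side flow described above can be sketched roughly as follows. The `Network` trait and `MockNet` here are invented for illustration; the real client drives libp2p’s Kademlia behaviour rather than this interface, and “verified at put time” is modelled simply as a read-back check.

```rust
use std::collections::HashMap;

// Invented-for-illustration trait; not the libp2p or safe_network API.
trait Network {
    fn closest_nodes(&self, key: &str) -> Vec<String>;
    fn put_record(&mut self, key: &str, value: &str);
    fn get_record(&self, key: &str) -> Option<String>;
}

// Tiny in-memory stand-in so the sketch runs end to end.
struct MockNet {
    records: HashMap<String, String>,
    nodes: Vec<String>,
}

impl Network for MockNet {
    fn closest_nodes(&self, _key: &str) -> Vec<String> {
        self.nodes.clone()
    }
    fn put_record(&mut self, key: &str, value: &str) {
        self.records.insert(key.to_string(), value.to_string());
    }
    fn get_record(&self, key: &str) -> Option<String> {
        self.records.get(key).cloned()
    }
}

/// Client-side upload: re-query the closest nodes until two successive
/// answers match (instead of a full discovery walk), put the chunk,
/// then fetch it back to verify at put time.
fn upload_verified<N: Network>(net: &mut N, key: &str, chunk: &str) -> bool {
    let mut last = net.closest_nodes(key);
    loop {
        let next = net.closest_nodes(key);
        if next == last {
            break; // answers have stabilised
        }
        last = next;
    }
    net.put_record(key, chunk);
    net.get_record(key).as_deref() == Some(chunk) // verification step
}

fn main() {
    let mut net = MockNet {
        records: HashMap::new(),
        nodes: vec!["node-a".into(), "node-b".into()],
    };
    assert!(upload_verified(&mut net, "chunk-xor-addr", "chunk bytes"));
    println!("upload verified");
}
```

The read-back is what closes the loop on bad nodes: a chunk only counts as stored once the client has seen it come back from the network.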
Uploads so far are a little slower, but we are sure they can be optimised down the line if needs be.
Payment processing is another issue we’re looking at, including through network-owned DBCs. These would mean we don’t have to verify that payments went to valid nodes at the time (since we no longer have the section tree and its history there), which simplifies things a lot. And down the line we’ll be able to disburse rewards from the network itself.
And we’ve been looking at the question of restarts and updates. A failed node should not keep the same key, but a valid node that crashes during an upgrade, for example, should be able to recreate a valid ID and keys from a previous key.
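One way to read the restart requirement above is as deterministic derivation: the same previous key and restart counter always reproduce the same identity, while a different counter yields a fresh one. The sketch below is an assumption of mine, not the safe_network design, and uses `DefaultHasher` only so it runs with the standard library; a real implementation would use a proper KDF over cryptographic key material.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative only: derive a node ID from a previous key and a
/// restart counter, so a node that crashes mid-upgrade can recreate
/// a valid identity deterministically. A real implementation would
/// use a cryptographic KDF, not `DefaultHasher`.
fn derive_node_id(previous_key: &[u8], restart_counter: u64) -> u64 {
    let mut h = DefaultHasher::new();
    previous_key.hash(&mut h);
    restart_counter.hash(&mut h);
    h.finish()
}

fn main() {
    let old_key = b"previous-node-key";
    let id_a = derive_node_id(old_key, 1);
    let id_b = derive_node_id(old_key, 1);
    let id_c = derive_node_id(old_key, 2);
    assert_eq!(id_a, id_b); // same inputs: reproducible after a crash
    assert_ne!(id_a, id_c); // bumping the counter yields a new identity
    println!("derived id: {id_a:x}");
}
```

The point of the counter is the distinction drawn above: a legitimately restarting node can re-derive from its previous key, while a node that failed outright should not simply keep the identity it had.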
@Bochaco is working through the logic of DBCs and payments to the network.
With the libp2p upgrade, @Benno is bug hunting and testing AutoNAT capabilities.
As well as fixing register security, @Anselme has upgraded the faucet functionality.
Feel free to reply below with links to translations of this dev update and moderators will add them here:
As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!