It’s getting a bit repetitive, but once again this week we can report that the
NodeDiscoveryNet testnet is still up and running. A bit rough round the edges and in need of refinement, sure, but the foundations are feeling very solid. This stability is no longer a surprise, but after many years of excitement as we’ve attempted to make this thing fly, frankly this is the type of boredom we can live with.
Among the tweaks resulting from the testnet findings, we are improving the error messages shown to users when a node fails to connect properly. Currently, when this happens there are no obvious signs for the user, who has to dig into the logs - although the lack of chunks is a giveaway.
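To give a flavour of surfacing such failures to the user, here's a minimal sketch in Rust. The error type and messages are purely illustrative assumptions, not the actual node API:

```rust
use std::fmt;

// Hypothetical error type - illustrative only, not the real node's API.
#[derive(Debug)]
pub enum ConnectError {
    /// The peer's advertised address could not be reached.
    PeerUnreachable { addr: String },
    /// The dial succeeded but the handshake did not complete in time.
    HandshakeTimeout { addr: String, secs: u64 },
}

impl fmt::Display for ConnectError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConnectError::PeerUnreachable { addr } => {
                write!(f, "could not reach peer at {addr}: address is inaccessible")
            }
            ConnectError::HandshakeTimeout { addr, secs } => {
                write!(f, "handshake with {addr} timed out after {secs}s")
            }
        }
    }
}

impl std::error::Error for ConnectError {}

/// Turn a low-level failure into a user-facing line,
/// rather than a log-only entry the user has to dig for.
pub fn user_message(err: &ConnectError) -> String {
    format!("Node failed to connect: {err}")
}
```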
Most connection failures are a result of trying to connect to inaccessible peer addresses. We’ve also seen far more connections to valid addresses than you might expect (given that libp2p offers multiplexing). More than a handful per peer should not exist at any one time, but we’ve seen hundreds! After some digging, this turned out to be a feature (not a bug…) of libp2p, just not one optimised for our use case. @bzee reached out, and Max Inden of Protocol Labs kindly came up with a patch which has seen the number of connections fall from dozens to just six or seven. Thanks Max!
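The policy at work is simple per-peer bookkeeping. libp2p exposes real connection limits through its swarm configuration; the sketch below only illustrates the idea with standard-library code, and all names are assumptions:

```rust
use std::collections::HashMap;

/// Sketch of per-peer connection bookkeeping: admit at most `max_per_peer`
/// established connections to any single peer, refusing extras.
/// (This shows the policy only - it is not libp2p's API.)
pub struct ConnectionTracker {
    max_per_peer: usize,
    established: HashMap<String, usize>, // peer id -> live connection count
}

impl ConnectionTracker {
    pub fn new(max_per_peer: usize) -> Self {
        Self { max_per_peer, established: HashMap::new() }
    }

    /// Returns true if the new connection is admitted, false if over the cap.
    pub fn try_admit(&mut self, peer: &str) -> bool {
        let count = self.established.entry(peer.to_string()).or_insert(0);
        if *count >= self.max_per_peer {
            return false;
        }
        *count += 1;
        true
    }

    /// Record a closed connection so the peer's slot frees up.
    pub fn close(&mut self, peer: &str) {
        if let Some(count) = self.established.get_mut(peer) {
            *count = count.saturating_sub(1);
        }
    }
}
```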
We found that nodes were doing a get_closest check every time a new node was added, whereas they should only do this when they first join, so that’s some more overhead we’ve shaved off. There will be more.
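The fix boils down to gating the lookup behind a "have we bootstrapped yet?" flag. A minimal sketch, with all names illustrative (the real query is a Kademlia closest-peers lookup):

```rust
/// Sketch: run the expensive get_closest lookup only on first join,
/// not on every routing-table addition. Names are illustrative.
pub struct Discovery {
    bootstrapped: bool,
    lookups_run: usize,
}

impl Discovery {
    pub fn new() -> Self {
        Self { bootstrapped: false, lookups_run: 0 }
    }

    /// Called whenever a new node lands in the routing table.
    pub fn on_node_added(&mut self) {
        if !self.bootstrapped {
            self.get_closest();
            self.bootstrapped = true;
        }
        // Subsequent additions update the table without a network-wide lookup.
    }

    fn get_closest(&mut self) {
        // Placeholder for the real closest-peers network query.
        self.lookups_run += 1;
    }
}
```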
In addition, we’ve been looking deeper into register security, considering what would happen if an attacker, instead of trying to change the data in a register (virtually impossible without the correct authorisation), simply replaced the entire register - not impossible with our current setup. We are working through the best ways to fix this.
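One *possible* shape of a mitigation, sketched purely as an illustration (this is our speculation, not the team's chosen fix): derive the register's network address from its owner key and name, so a register swapped in under a different owner no longer matches the address it is stored at. `DefaultHasher` stands in for a real cryptographic hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical: bind a register's address to its owner key and name.
/// (DefaultHasher is a stand-in for a cryptographic hash.)
fn derive_address(owner_key: &str, name: &str) -> u64 {
    let mut h = DefaultHasher::new();
    owner_key.hash(&mut h);
    name.hash(&mut h);
    h.finish()
}

/// A node checks that a stored register actually belongs at its address,
/// so a wholesale replacement by a different owner is rejected.
fn is_valid_at(stored_addr: u64, owner_key: &str, name: &str) -> bool {
    derive_address(owner_key, name) == stored_addr
}
```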
@joshuef has made some tweaks to the replication flow, including one that shuffles data waiting to be replicated/fetched, to prevent one end of the close group being hammered due to XOR-space ordering. Along with the excessive connections and over-messaging, this is another probable cause of nodes chucking in the towel.
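The idea can be sketched as follows: addresses sorted in XOR space cluster consecutive fetches on the same nodes, so we disperse the queue before fetching. In this standard-library-only sketch a per-node seed hashed with each address stands in for a real RNG shuffle; everything here is illustrative, not the actual replication code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Reorder the replication/fetch queue so requests are spread across the
/// close group rather than concentrated by XOR-space ordering.
/// Hashing (seed, addr) gives a deterministic but dispersed order per node.
pub fn disperse_queue(addrs: &mut Vec<String>, node_seed: u64) {
    addrs.sort_by_key(|addr| {
        let mut h = DefaultHasher::new();
        node_seed.hash(&mut h);
        addr.hash(&mut h);
        h.finish()
    });
}
```

Because each node uses its own seed, different nodes fetch the same backlog in different orders, which is what spreads the load.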
@Roland has been working on a test for verifying where any particular piece of data is on the network, and @Qi_ma is getting registers into the churn test, so we can see how these cope when things get wild. After that, we’ll be looking at refining our data retention tests, and turning our attention to DBCs.
With that in mind, @bochaco has refactored how the client chunks files during self-encryption and pays for their storage. Previously we were chunking files twice (first to create the payment Merkle tree, and then again when uploading them). We now generate chunks and store them in a local temp folder when paying, then read from that temp folder in batches when uploading the paid chunks. This should reduce the client’s memory footprint, especially for large files, as the chunks no longer need to be held in memory.
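The staging flow looks roughly like this. A minimal sketch assuming nothing about the real client: chunk naming, batch size, and function names are all made up for illustration:

```rust
use std::fs;
use std::io::Write;
use std::path::PathBuf;

/// Write each chunk to a temp folder while the payment is computed,
/// so the whole file never sits in memory. Naming is illustrative.
pub fn stage_chunks(
    data: &[u8],
    chunk_size: usize,
    dir: &PathBuf,
) -> std::io::Result<Vec<PathBuf>> {
    fs::create_dir_all(dir)?;
    let mut paths = Vec::new();
    for (i, chunk) in data.chunks(chunk_size).enumerate() {
        let path = dir.join(format!("chunk_{i}"));
        fs::File::create(&path)?.write_all(chunk)?;
        paths.push(path);
    }
    Ok(paths)
}

/// Read staged chunks back a batch at a time; the caller uploads each batch.
pub fn read_batches(
    paths: &[PathBuf],
    batch: usize,
) -> std::io::Result<Vec<Vec<Vec<u8>>>> {
    let mut out = Vec::new();
    for group in paths.chunks(batch) {
        let mut loaded = Vec::new();
        for p in group {
            loaded.push(fs::read(p)?);
        }
        out.push(loaded);
    }
    Ok(out)
}
```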
@Anselme has upgraded the faucet. What was a simple standalone file sitting on the local machine is now an HTTP server that sends tokens to the address supplied in the request. So it’s self-service, and we no longer need one person to claim the Genesis key and then dole out the tokens manually when people send their keys. That puts us in a good place for when we’re ready to start dishing out tokens to test the DBCs in future testnets.
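To make the self-service idea concrete, here's a toy HTTP faucet using only the standard library. The route shape, the response text, and the absence of any real token transfer are all assumptions for illustration; the actual faucet's API will differ:

```rust
use std::io::{Read, Write};
use std::net::TcpListener;

/// Pull a wallet address out of a request line like
/// "GET /faucet/<address> HTTP/1.1". Route shape is hypothetical.
fn parse_address(request_line: &str) -> Option<String> {
    let path = request_line.split_whitespace().nth(1)?;
    path.strip_prefix("/faucet/").map(|a| a.to_string())
}

/// Build the HTTP response; in a real faucet this is where tokens
/// would actually be sent to the parsed address.
fn handle(request_line: &str) -> String {
    match parse_address(request_line) {
        Some(addr) => format!("HTTP/1.1 200 OK\r\n\r\nsent tokens to {addr}\n"),
        None => "HTTP/1.1 400 Bad Request\r\n\r\nexpected /faucet/<address>\n"
            .to_string(),
    }
}

/// Accept connections and answer each request; call from your main.
pub fn serve(listener: TcpListener) -> std::io::Result<()> {
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 1024];
        let n = stream.read(&mut buf)?;
        let req = String::from_utf8_lossy(&buf[..n]);
        let first_line = req.lines().next().unwrap_or("");
        stream.write_all(handle(first_line).as_bytes())?;
    }
    Ok(())
}
```

A user would then just hit `http://<faucet-host>/faucet/<their-address>` rather than messaging a person holding the Genesis key.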
Feel free to reply below with links to translations of this dev update and moderators will add them here:
As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!