Short wee update this week as we work towards a new testnet.
On the AutoNAT front, we’ve had a great candidate on the go, but a last-minute bug has prevented us from getting a testnet up for this today. @bzee and @angus are digging into the regression, and as soon as the source of the issue is found and a fix is in, we’ll be unleashing a NAT detection testnet, which should prevent unreachable nodes from joining the network (and so give us some more realistic “churn”).
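For the curious, the gating idea is simple: AutoNAT dial-back probes tell a node whether it is publicly reachable, and only reachable nodes get to join as full nodes. Here’s a wee std-only sketch of that logic (the `NatStatus` enum mirrors the shape of libp2p-autonat’s status type, and `should_join` is a hypothetical helper, not our actual node code):

```rust
// Illustrative sketch of NAT-status gating. `NatStatus` mirrors the
// shape of libp2p-autonat's reachability status; `should_join` is a
// hypothetical helper for illustration only.

#[derive(Debug, Clone, Copy, PartialEq)]
enum NatStatus {
    Public,  // dial-back probes succeeded: we are reachable
    Private, // probes failed: we are behind NAT and unreachable
    Unknown, // not enough probe results yet to decide
}

/// Only publicly reachable nodes are allowed to join as full nodes.
fn should_join(status: NatStatus) -> bool {
    matches!(status, NatStatus::Public)
}

fn main() {
    assert!(should_join(NatStatus::Public));
    assert!(!should_join(NatStatus::Private));
    // An undetermined status keeps the node waiting rather than joining.
    assert!(!should_join(NatStatus::Unknown));
    println!("gating ok");
}
```

Unreachable nodes that never answer requests look just like churned nodes to the rest of the network, which is why keeping them out makes churn measurements more realistic.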
Our churn tests for continuous integration have been much improved by @bochacho, and @qi.ma has been hard at work improving a custom data-replication algorithm and testing it against them. This new setup means we can republish only the relevant data on churn, which should be faster and leaner than libp2p’s shotgun approach of republishing #AllTheThings at a fixed interval. If this works well, we’ll likely move to a hybrid approach, with fast, targeted, event-driven data republishing backed up by longer intervals of periodic replication (at least until we’re deeper into specific Archival nodes, which may come down the line).
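To give a flavour of what “targeted” means here: when a peer drops out, only the records that peer was among the closest nodes to need a new home. A std-only toy sketch of that idea (u64 ids standing in for 256-bit Kademlia keys, and a much-simplified version of the real algorithm):

```rust
// Toy sketch of event-driven, targeted republication. Illustrative
// only: u64 ids stand in for 256-bit Kademlia keys/peer ids.

fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

/// The k closest peers to a key, by XOR distance.
fn closest_peers(key: u64, peers: &[u64], k: usize) -> Vec<u64> {
    let mut sorted = peers.to_vec();
    sorted.sort_by_key(|p| xor_distance(key, *p));
    sorted.truncate(k);
    sorted
}

/// On churn, republish only the records the departed peer was among
/// the k closest to, rather than #AllTheThings.
fn records_to_republish(
    records: &[u64],
    peers_before: &[u64],
    departed: u64,
    k: usize,
) -> Vec<u64> {
    records
        .iter()
        .copied()
        .filter(|key| closest_peers(*key, peers_before, k).contains(&departed))
        .collect()
}

fn main() {
    let peers = vec![0b0001, 0b0010, 0b0100, 0b1000, 0b1111];
    let records = vec![0b0011, 0b1110, 0b0101];
    // Peer 0b0010 departs; only keys it helped hold need replicating.
    let affected = records_to_republish(&records, &peers, 0b0010, 2);
    assert_eq!(affected, vec![0b0011]);
    println!("republishing {} of {} records", affected.len(), records.len());
}
```

With periodic full republication every record is pushed every interval regardless; here the work scales with how much data the departed peer actually held, which is what should make it faster and leaner.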
@anselme has almost finalised the DBC refactor, adding DBCs to the libp2p RecordStore and further simplifying data replication. He’ll be doing the same for Registers right after that.
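The win here is uniformity: once a DBC is just bytes under a content-addressed key in the same store as everything else, replication needs no special cases. A hypothetical std-only sketch of that shape (the real code uses libp2p’s RecordStore; the names here are made up for illustration):

```rust
// Illustrative sketch: a DBC stored like any other record, keyed by a
// hash of its serialised bytes. Hypothetical names; the real node uses
// libp2p's RecordStore rather than this toy map.

use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

struct RecordStore {
    records: HashMap<u64, Vec<u8>>,
}

impl RecordStore {
    /// Store a value under a key derived from its content; return the key.
    fn put(&mut self, value: Vec<u8>) -> u64 {
        let mut h = DefaultHasher::new();
        value.hash(&mut h);
        let key = h.finish();
        self.records.insert(key, value);
        key
    }

    fn get(&self, key: u64) -> Option<&Vec<u8>> {
        self.records.get(&key)
    }
}

fn main() {
    let mut store = RecordStore { records: HashMap::new() };
    // A serialised DBC goes through the same put/get path as a chunk,
    // so the replication logic above applies to it unchanged.
    let dbc_bytes = b"serialised-dbc".to_vec();
    let key = store.put(dbc_bytes.clone());
    assert_eq!(store.get(key), Some(&dbc_bytes));
    println!("stored DBC under content-addressed key {key:#x}");
}
```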
We’ve also been improving the testnet tool, allowing either AWS or Digital Ocean to be used to host nodes. And @chris is now starting on some refactoring work to simplify the release process (and disentangle our node and client codebases somewhat)!