We declare NatNet a definite success! As far as we can tell, all home nodes operating behind NAT were successfully detected and shut down, mitigating the problems we were experiencing earlier where the network would try to communicate with unreachable nodes. Now that we are confident in detecting nodes behind a NAT, the next step on that front is to allow them to join via hole punching and UDP/QUIC (NatNet was TCP only).
Alongside that (NAT traversal still needs a bit of work; as we've mentioned before, it's quite basic in libp2p, so this may take some time), there's plenty afoot.
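For the curious, the core idea behind detecting NAT'd nodes (as AutoNAT-style protocols do) is a dial-back probe: ask peers to dial you on your advertised addresses and count how many succeed. Here's a minimal std-only sketch of that classification logic; the names and thresholds are ours for illustration, not the actual libp2p or Safe Network API:

```rust
/// Hypothetical reachability status, mirroring the idea behind
/// libp2p's AutoNAT: peers we ask to dial us back report success
/// or failure, and enough failures mean we're likely behind a NAT.
#[derive(Debug, PartialEq)]
enum Reachability {
    Public,
    Private, // behind NAT: should not be treated as a routable node
    Unknown, // not enough probes yet to decide
}

/// Classify a node from dial-back probe results (illustrative logic;
/// the MIN_PROBES threshold is an assumption, not a protocol value).
fn classify(successes: usize, failures: usize) -> Reachability {
    const MIN_PROBES: usize = 3;
    if successes + failures < MIN_PROBES {
        Reachability::Unknown
    } else if successes > 0 {
        Reachability::Public
    } else {
        Reachability::Private
    }
}
```

A node classified as `Private` is exactly the kind NatNet was built to detect and shut down before the rest of the network wastes effort dialing it.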
We’re taking an initial gander at provider nodes that can perform tasks such as archiving. If you remember, libp2p lets us treat certain nodes as service providers that perform special functions like this.
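Conceptually, this works like Kademlia provider records: a node announces itself under a well-known service key, and other peers look that key up to find providers. A rough std-only sketch of the idea, assuming made-up types (in the real network this lives in the DHT via libp2p's `start_providing`/`get_providers`, not one in-memory map):

```rust
use std::collections::{HashMap, HashSet};

type PeerId = String;     // stand-in for a real libp2p PeerId
type ServiceKey = String; // e.g. "archive"

/// Toy provider registry: one map standing in for what the
/// Kademlia DHT distributes across the network.
#[derive(Default)]
struct ProviderRegistry {
    providers: HashMap<ServiceKey, HashSet<PeerId>>,
}

impl ProviderRegistry {
    /// A node advertises itself as providing a service.
    fn start_providing(&mut self, key: &str, peer: &str) {
        self.providers
            .entry(key.to_string())
            .or_default()
            .insert(peer.to_string());
    }

    /// Clients ask who provides a service (e.g. archiving).
    fn get_providers(&self, key: &str) -> Vec<PeerId> {
        self.providers
            .get(key)
            .map(|s| s.iter().cloned().collect())
            .unwrap_or_default()
    }
}
```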
The other area we’ve returned to is node sizing (and benchmarking replication flows). How small is small? Are 1,000 small nodes better than one large node of the same total capacity? What’s the difference under massive churn? What are the tradeoffs? We’re running some preliminary tests now.
@anselme has adapted the spendbook to hold both entries of a double spend instead of just one, so they can be dealt with more easily. This is on top of the recently merged work getting DBCs into the RecordStore, which means they’ll be automatically replicated alongside chunks (only registers are left to sort there).
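The gist of that spendbook change: rather than keeping only the first spend seen for a DBC, keep every distinct spend, so a double spend is self-evident from the record itself. A hedged sketch with invented stand-in types (the real spendbook types differ):

```rust
use std::collections::HashMap;

type DbcId = u64;          // stand-in for the real DBC identifier
type SignedSpend = String; // stand-in for a real signed spend

/// Toy spendbook: keeps *all* distinct spends seen per DBC, so both
/// halves of a double spend are retained and can be handled together.
#[derive(Default)]
struct SpendBook {
    spends: HashMap<DbcId, Vec<SignedSpend>>,
}

impl SpendBook {
    /// Record a spend; duplicates of the same spend are ignored.
    fn record(&mut self, id: DbcId, spend: SignedSpend) {
        let entry = self.spends.entry(id).or_default();
        if !entry.contains(&spend) {
            entry.push(spend);
        }
    }

    /// More than one distinct spend for the same DBC = double spend.
    fn is_double_spend(&self, id: DbcId) -> bool {
        self.spends.get(&id).map_or(false, |v| v.len() > 1)
    }
}
```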
@bochaco is working on serialising and sending payment proofs to nodes, trying various methods to keep things light.
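To illustrate why "keeping things light" matters here: the choice of encoding changes the bytes on the wire considerably. A std-only sketch comparing a naive text encoding against compact fixed-width fields; the payment-proof fields below are invented for illustration and are not the real struct:

```rust
/// Invented stand-in for a payment proof; the real struct differs.
struct PaymentProof {
    dbc_id: u64,
    amount: u64,
    sig: [u8; 4], // real signatures are far larger
}

/// Naive: human-readable field names, heavy on the wire.
fn encode_text(p: &PaymentProof) -> Vec<u8> {
    format!("dbc_id={},amount={},sig={:?}", p.dbc_id, p.amount, p.sig).into_bytes()
}

/// Compact: fixed-width little-endian fields, no field names.
fn encode_compact(p: &PaymentProof) -> Vec<u8> {
    let mut out = Vec::with_capacity(20);
    out.extend_from_slice(&p.dbc_id.to_le_bytes());
    out.extend_from_slice(&p.amount.to_le_bytes());
    out.extend_from_slice(&p.sig);
    out
}
```

The compact form here is always 20 bytes, while the text form grows with field names and formatting; that gap is the kind of overhead being weighed up.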
@joshuef has been looking at the advantages and limitations of running multiple nodes per machine, and the options there. So far, with no optimisations, 10 nodes per Digital Ocean droplet run reasonably well (albeit with no churn), though doubling that number slows everything right down. This should allow us to have many, many more nodes in upcoming testnets!
Thanks to input from the DiskNet and later internal testing, @roland is implementing a RecordHeader and validating the records before we store them. This also neatly allows us to separate the address space between our base data types (chunk/DBC/register) and have some custom processing there (merging register CRDT ops, for example).
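A minimal sketch of the RecordHeader idea, assuming a simple one-byte kind tag as the header (an assumption for illustration; the actual header layout in the codebase may differ): tagging each record with its type lets a node validate it before storing and route it to type-specific handling, such as merging register CRDT ops.

```rust
/// The three base data types a record can hold.
#[derive(Debug, Clone, Copy, PartialEq)]
enum RecordKind {
    Chunk = 0,
    Dbc = 1,
    Register = 2,
}

/// Prefix the serialised record with a one-byte kind tag
/// (illustrative; the real header layout may differ).
fn with_header(kind: RecordKind, payload: &[u8]) -> Vec<u8> {
    let mut rec = vec![kind as u8];
    rec.extend_from_slice(payload);
    rec
}

/// Validate the header before storing; unknown kinds are rejected
/// rather than written to disk.
fn parse_header(record: &[u8]) -> Result<(RecordKind, &[u8]), String> {
    let (&tag, payload) = record.split_first().ok_or("empty record".to_string())?;
    let kind = match tag {
        0 => RecordKind::Chunk,
        1 => RecordKind::Dbc,
        2 => RecordKind::Register,
        k => return Err(format!("unknown record kind: {k}")),
    };
    Ok((kind, payload))
}
```

This also shows how the address space separation falls out naturally: once the kind is known up front, chunks, DBCs and registers can each get their own processing path.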
@qi_ma is investigating an issue where connections are closed during data transmission. This may be caused by an RPC address being used for data transmission when it shouldn’t be. If so, it may well be the root cause of some of the connection errors we’re seeing, as well as related issues where connections also get closed because dialing a peer dials more than one of its addresses. @bzee has been digging in there.
Away from the code @jimcollinson is once again heavily involved in market research and launch planning. He and @andrew.james are keenly examining methods to ensure smooth economic transitions during the initial stages of the Network, with a particular focus on liquidity. Now that the Foundation is successfully operating in Switzerland, this process is much simpler. Andrew is also liaising with Swiss auditors to discuss suitable accounting structures.
So no new testnet yet. But a busy time nonetheless!
Feel free to reply below with links to translations of this dev update and moderators will add them here:
As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!