Anyone in the mood for a deeper dive?
With the Network’s tech stack now moving on at a pace we couldn’t have foreseen even a fortnight ago, we’ve given ourselves the latitude to pursue a variety of avenues for launch, be that a payments-only network or striking out for the full data network from the get-go.
So @oetyng is revisiting the world of payments to explain how it is being integrated with storage, wallets and client APIs in the new simplified design.
First, though, a word on how the network is shaping up as we move from the section-based design to Kademlia. Kademlia requires that nodes speak to each other all across the network, rather than just within their section. If they didn’t, the routing table would soon become stale and full of dead nodes, and the network would die from a bad case of miscommunication. Checking DBC parents is a way of constantly confirming that nodes are live while also fulfilling a vital function, so we can double up there.
Transactions will happen very fast across the network, but Safe is not a database and is not suited for rapid data mutations. That type of work is done client-side, with the results then stored and replicated on Safe. CRDTs mean that clients can collaborate effectively, in real time and in a conflict-free fashion, and the results of those collaborations can be stored publicly or privately on Safe for posterity. The move to Kademlia clarifies this division of labour.
Another upshot of the change is that small testnets are unlikely to be stable. The new design will likely need 2,000 or more nodes to function properly. This is not a problem at all, but will obviously change the way we run testnets.
As we edge towards being able to release something for public consumption, Jim is working on positioning, roadmaps, and messaging. Take a look at his thread What is launch if you haven’t already.
With that day in sight, @bzee has been digging into NAT traversal and how we can implement it, and @bochaco is readying our data types for integration into the new network infrastructure, including working on APIs to let developers get going on apps as soon as we have something stable. Bochaco is also looking at error messages, seeing where we can make them more specific and helpful.
The other big bit of work is sorting out close groups. Given the name of a chunk of data (the same as its XOR address), the client knows which k nodes to ask to retrieve it, but should we start with 20, or something more like 8? More nodes means more of them update their routing tables, but it also means more messages. And what about when we’re storing DBCs? This is something @roland and @qi_ma are occupying themselves with. @roland is also revisiting data replication.
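To make the closeness idea concrete, here’s a minimal sketch of picking the k XOR-closest peers to a target address. The names (`Addr`, `k_closest`) and the 32-byte addresses are our own illustration, not the actual codebase:

```rust
// Minimal sketch of Kademlia-style closeness: addresses are fixed-size
// byte arrays, and the distance between two addresses is their XOR,
// compared as a big-endian integer. The k closest peers to a target
// address form its close group.

type Addr = [u8; 32];

/// XOR distance between two addresses.
fn xor_distance(a: &Addr, b: &Addr) -> Addr {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// Return the `k` peers closest to `target`, nearest first.
fn k_closest(peers: &[Addr], target: &Addr, k: usize) -> Vec<Addr> {
    let mut sorted: Vec<Addr> = peers.to_vec();
    // Byte-wise comparison of big-endian arrays orders the distances correctly.
    sorted.sort_by_key(|p| xor_distance(p, target));
    sorted.truncate(k);
    sorted
}

fn main() {
    let target = [0u8; 32];
    let mut peers = Vec::new();
    for i in 1..=10u8 {
        let mut p = [0u8; 32];
        p[31] = i; // peers differ only in the last byte for this demo
        peers.push(p);
    }
    let close_group = k_closest(&peers, &target, 4);
    // Closest to the all-zero target are the smallest last bytes: 1, 2, 3, 4.
    let last_bytes: Vec<u8> = close_group.iter().map(|p| p[31]).collect();
    assert_eq!(last_bytes, vec![1, 2, 3, 4]);
    println!("close group (last bytes): {:?}", last_bytes);
}
```

Whether k is 4, 8 or 20, the selection logic stays the same; only the truncation point changes.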
Of course, we need to be able to see what’s going on, and @Chriso has refactored the OpenSearch implementation in the testnet repo to make way for the monitoring setup.
And @oetyng is starting to join all this stuff together. More from him below!
Over the past couple of weeks, the DBC code underwent a long-needed simplification. We added more documentation and gave the terminology a small overhaul, aligning naming and concepts to reduce cognitive load.
Another thing we could do now that we no longer have sections and Elders was to remove the network signing of DBCs. That by itself is a massive simplification. A transfer can now be built completely offline, and all you need to do is send in the parts of the DBC, called signed spends (meaning the key corresponding to a DBC id has signed the spend of that DBC; in other words, the sender of the tokens signed the spend). When enough peers in the network have accepted these signed spends, the transaction is complete and valid.
So, to step back a bit.
Instead of asking the Elders of a section to validate and sign a spend (i.e. a transfer or data payment), as was previously done, we now send client-signed spends to the close group of the id of the DBC being spent. (We call it the address: chunks have an address, registers have an address, and DBC spends have an address.) This close group has been mentioned before. The peers of the close group don’t sign anything. They just validate the spend, each of them acting independently, and store it if they find it to be valid. Part of finding it valid is looking up the parents of the spend, that is, the spends that led to the creation of the very DBC now being spent.
If those parent spends can be retrieved from their respective close group, that means they are valid, as they themselves underwent this process back when they were spent.
There are of course more validations, such as checking that the blinded amounts balance across inputs and outputs, that the keys of the input DBCs have signed everything properly, and so on.
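The shape of that validation flow can be sketched as below. This is purely illustrative: real spends use BLS keys and blinded amounts, and all the names here (`Spend`, `SignedSpend`, `validate`) are ours, not the codebase’s. Signature and amount checks are elided so the parent-lookup step stands out:

```rust
// Illustrative sketch of close-group spend validation. A peer never signs
// anything; it independently checks a client-signed spend and, if valid,
// simply stores it. Real validation also verifies the BLS signature and
// the blinded amounts, which are omitted here.

use std::collections::HashSet;

#[derive(Clone, Debug)]
struct Spend {
    dbc_id: u64,       // stands in for the DBC's id / network address
    parents: Vec<u64>, // ids of the spends that created this DBC
    amount: u64,
}

#[derive(Clone, Debug)]
struct SignedSpend {
    spend: Spend,
    signature: u64, // placeholder for the client's signature over the spend
}

/// A close-group peer's check: every parent spend must be retrievable,
/// i.e. it was itself accepted and stored by its own close group.
fn validate(s: &SignedSpend, accepted_spends: &HashSet<u64>) -> bool {
    s.spend.parents.iter().all(|p| accepted_spends.contains(p))
}

fn main() {
    // Spends this peer can retrieve from the network (parents 1 and 2).
    let mut accepted: HashSet<u64> = [1u64, 2].into_iter().collect();

    let signed = SignedSpend {
        spend: Spend { dbc_id: 42, parents: vec![1, 2], amount: 10 },
        signature: 0xdead,
    };
    assert!(validate(&signed, &accepted));
    accepted.insert(signed.spend.dbc_id); // valid, so the peer stores it

    // A spend whose parent was never accepted is rejected.
    let orphan = SignedSpend {
        spend: Spend { dbc_id: 7, parents: vec![99], amount: 1 },
        signature: 0,
    };
    assert!(!validate(&orphan, &accepted));
}
```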
And when that has all been confirmed as valid, a peer simply stores that signed spend. And when a client is to check if the DBC they got is valid, they just ask the network to return the signed spends that led to the creation of that DBC. If the required number of peers returns it, then it is a fact.
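The client-side check could then be as simple as counting matching answers from the close group, something like this hypothetical sketch (the quorum size is an assumption, not a number from the design):

```rust
use std::collections::HashMap;

/// A DBC's spend is accepted as fact once at least `quorum` peers in the
/// close group return the *same* signed-spend record. `None` models a peer
/// that didn't respond or holds nothing; u64 stands in for a spend record.
fn confirmed(responses: &[Option<u64>], quorum: usize) -> bool {
    let mut counts: HashMap<u64, usize> = HashMap::new();
    for r in responses.iter().flatten() {
        *counts.entry(*r).or_insert(0) += 1;
    }
    // Some record must reach the quorum; disagreeing peers don't count together.
    counts.values().any(|&c| c >= quorum)
}

fn main() {
    // 8 close-group peers queried; 6 return the same spend record (id 42).
    let responses = [
        Some(42), Some(42), None, Some(42),
        Some(42), None, Some(42), Some(42),
    ];
    assert!(confirmed(&responses, 5));

    // Without quorum agreement, the spend is not accepted.
    assert!(!confirmed(&[Some(1), Some(2), None], 2));
}
```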
So, a close group. Remember, those are found by closeness to the id of the DBC (or the hash of data). They could be 4, 8, 16 peers, or more; we will start with a number and tweak as we progress. But there is more to it. We can add layers by hashing the id/name to get a new address that is deterministically derived from the first one. Now we can store the same item to this group as well. And so it can go on, hashing that address again and again, each time yielding yet another group holding the item.
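That layering trick is easy to picture in code. Here’s a sketch under our own simplifications: `DefaultHasher` and u64 addresses stand in for a cryptographic hash over real 256-bit XOR names:

```rust
// Sketch of layered storage addresses: hash the base address to get a
// second address, hash that to get a third, and so on. Each derived
// address selects another close group that can hold the same item.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive `layers` extra addresses from a base address by repeated hashing.
/// The chain is deterministic, so anyone can recompute it from the base.
fn derived_addresses(base: u64, layers: usize) -> Vec<u64> {
    let mut addrs = vec![base];
    let mut current = base;
    for _ in 0..layers {
        let mut h = DefaultHasher::new();
        current.hash(&mut h);
        current = h.finish(); // same input always yields the same next address
        addrs.push(current);
    }
    addrs
}

fn main() {
    let addrs = derived_addresses(0xABCD, 3);
    assert_eq!(addrs.len(), 4); // the base group plus three derived groups
    // Deterministic: recomputing from the base gives the identical chain.
    assert_eq!(addrs, derived_addresses(0xABCD, 3));
    println!("{:x?}", addrs);
}
```

Each extra layer multiplies the number of groups holding an item, which is why it works as a redundancy lever.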
These are some of the levers that have been mentioned before. We will start out simple with a single group, because first we just want to get this up and running in a testnet.
We have just now finished the first simple implementation of a wallet that holds DBCs and a history of transactions. As with data, the first iteration uses in-memory storage. Next it will store them locally on disk, and after that we’ll do network storage. The good thing is that we are implementing the wallet with a simple interface that can be used with any (or all) of the three storage options mentioned above.
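One common way to get that “one interface, three backends” property is a small storage trait that the wallet is generic over. The sketch below is our own illustration of the pattern, not the actual wallet code:

```rust
// Sketch: the wallet logic talks only to a small storage trait, so the
// in-memory, on-disk and network-backed stores can each implement it
// without the wallet code changing.

use std::collections::HashMap;

trait WalletStore {
    fn put_dbc(&mut self, id: u64, amount: u64);
    fn balance(&self) -> u64;
}

/// First iteration: in-memory storage, as described in the update.
#[derive(Default)]
struct MemoryStore {
    dbcs: HashMap<u64, u64>, // DBC id -> amount
}

impl WalletStore for MemoryStore {
    fn put_dbc(&mut self, id: u64, amount: u64) {
        self.dbcs.insert(id, amount);
    }
    fn balance(&self) -> u64 {
        self.dbcs.values().sum()
    }
}

/// The wallet is generic over the store; a DiskStore or NetworkStore
/// implementing WalletStore would slot in with no other changes.
struct Wallet<S: WalletStore> {
    store: S,
}

impl<S: WalletStore> Wallet<S> {
    fn deposit(&mut self, id: u64, amount: u64) {
        self.store.put_dbc(id, amount);
    }
}

fn main() {
    let mut wallet = Wallet { store: MemoryStore::default() };
    wallet.deposit(1, 50);
    wallet.deposit(2, 25);
    assert_eq!(wallet.store.balance(), 75);
}
```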
What has also been taken into consideration this time around is that all peers will need a wallet for their rewards. But we’re not quite there yet; a week or two more for that. (And yes, it’s now much simpler to make time estimates.)
These were described very recently in an update, and not much has changed there. The payments go to nodes, and the priority queue (previously called mempool) is the same.
What we will probably be looking at a bit sooner, is how the same pattern can be applied for data payments.
And that’s you up to speed on the progress of transfers and payment for now.
Feel free to reply below with links to translations of this dev update and moderators will add them here:
As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!