Update 13 April, 2023

Anyone in the mood for a deeper dive?

A couple of weeks back, @oetyng gave us an update on designs for a payments network and how that might function standing alone.

With the Network’s tech stack now moving on at a pace that we couldn’t foresee even a fortnight ago, we’ve given ourselves the latitude and flexibility to pursue a variety of avenues for launch, be that a payments only network, or striking out for the full data network from the get-go.

So @oetyng is revisiting the world of payments to explain how it is being integrated with storage, wallets and client APIs in the new simplified design.

First though, a word on how the network is shaping up as we move from the sections design to Kademlia. Kademlia requires that nodes speak to each other all across the network, rather than just those in their section. If they didn't, the routing table would soon become stale and full of dead nodes, and the network would die from a bad case of miscommunication. Checking DBC parents is a way of constantly ensuring nodes are live while also fulfilling a vital function, so we can double up there.

Transactions will happen very fast across the network, but Safe is not a database and not suited for rapid data mutations. That type of work is done client side, with the results then stored/replicated on Safe. CRDTs mean that clients can collaborate effectively, in real time and in a conflict free fashion, and the results of those collaborations can be stored publicly or privately on Safe for posterity. The move to Kademlia clarifies this division of labour.

Another upshot of the change is that small testnets are unlikely to be stable. The new design will likely need 2,000 or more nodes to function properly. This is not a problem at all, but will obviously change the way we run testnets.

General progress

As we edge towards being able to release something for public consumption, Jim is working on positioning, roadmaps, and messaging. Take a look at his thread What is launch if you haven’t already.

With that day in sight, @bzee has been digging into NAT traversal and how we can implement it, and @bochaco is readying our data types for integration into the new network infrastructure, including working on APIs to let developers get going on apps as soon as we have something stable. Bochaco is also looking at error messages, seeing where we can make them more specific and helpful.

The other big bit of work is sorting out close groups. Given the name of a chunk of data (same as its XOR address) the client knows which k nodes to ask to retrieve it, but should we start with 20, or something more like 8? More means that more nodes update their routing tables, but it’s also more messages. What about when we’re storing DBCs? This is something @roland and @qi_ma are occupying themselves with. @roland is also revisiting data replication.
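To make the close-group question concrete, here is a minimal sketch of how a client could pick the k XOR-closest peers to a chunk's address, in the standard Kademlia fashion. All names here are illustrative, not the actual codebase API, and k=8 is just one of the candidate sizes mentioned above.

```python
import hashlib

def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia XOR distance between two 256-bit addresses."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def close_group(target: bytes, peers: list, k: int = 8) -> list:
    """The k peers whose ids are XOR-closest to the target address."""
    return sorted(peers, key=lambda p: xor_distance(p, target))[:k]

# A chunk's address is the hash of its content, so any client can
# independently recompute which peers to ask for it.
chunk_address = hashlib.sha256(b"some chunk of data").digest()
peer_ids = [hashlib.sha256(bytes([i])).digest() for i in range(100)]
group = close_group(chunk_address, peer_ids, k=8)
assert len(group) == 8
```

The trade-off in the paragraph above falls out of the `k` parameter: a larger group means more routing tables get refreshed per request, at the cost of more messages.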

Of course, we need to be able to see what’s going on, and @Chriso has refactored the OpenSearch implementation in the testnet repo to make way for the monitoring setup.

And @oetyng is starting to join all this stuff together. More from him below!

DBCs, transfers and wallets

DBC code simplification

Over the past couple of weeks, the DBC code underwent a long-needed simplification. This included more documentation and a small overhaul of the terminology, aligning naming and concepts to reduce cognitive load.

Another thing we could do now that we no longer have sections and Elders was to remove the network signing of DBCs. That by itself is a massive simplification. What we have now is that a transfer can be built completely offline, and all you need to do is send in the parts of the DBC, called signed spends (meaning the key corresponding to a DBC id has signed the spend of that DBC - so, the sender of the tokens signed the spend). When enough peers in the network have accepted these signed spends, the transaction is completed and valid.

Spending DBCs in the network (transfers or data payments)

So, to step back a bit.

Instead of asking the Elders of a section to validate and sign a spend (i.e. a transfer or data payment), as was previously done, we now send client-signed spends to the close group of the id of the DBC being spent. (We call it the address. So chunks have an address, registers have an address, and DBC spends have an address.) This close group has been mentioned before. The peers of the close group don't sign anything. They just validate the spend, each of them acting independently, and store it if they find it to be valid. Part of finding it valid is to look up the parents of such a spend - that is, the spends that led to the creation of this very DBC now being spent.

If those parent spends can be retrieved from their respective close groups, that means they are valid, as they themselves underwent this process back when they were spent.
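The parent lookup described above can be sketched as a simple quorum check. This is a hypothetical illustration (the function and peer names are invented, and the real validation does far more): a parent spend counts as valid only if enough peers in its own close group return the same stored signed spend when queried.

```python
def parent_is_valid(parent_id, close_group, query_peer, required):
    """Query each peer in the parent's close group for its copy of the
    signed spend; require at least `required` matching copies."""
    copies = [query_peer(peer, parent_id) for peer in close_group]
    found = [c for c in copies if c is not None]
    if len(found) < required:
        return False  # not enough peers hold the parent spend
    # Conflicting copies would indicate a double-spend attempt.
    return all(c == found[0] for c in found)

# Toy usage: two of three peers hold the same parent spend.
stores = {"p1": {"dbc-a": "spend-a"}, "p2": {"dbc-a": "spend-a"}, "p3": {}}
lookup = lambda peer, dbc_id: stores[peer].get(dbc_id)
assert parent_is_valid("dbc-a", ["p1", "p2", "p3"], lookup, required=2)
assert not parent_is_valid("dbc-b", ["p1", "p2", "p3"], lookup, required=2)
```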

There are of course more validations, such as checking that the blinded amounts balance for inputs and outputs, that the keys of the input DBCs have signed it all properly, and so on.

And when that has all been confirmed as valid, a peer simply stores that signed spend. And when a client is to check if the DBC they got is valid, they just ask the network to return the signed spends that led to the creation of that DBC. If the required number of peers returns it, then it is a fact.

Close groups

So, a close group. Remember, these are found by closeness to the id of the DBC (or the hash of the data). They can be 4, 8, 16 peers, or more; we will start with a number and tweak as we progress. But there is more to it. We can add layers by hashing the id/name to get a new address which is deterministically derived from the first one. Now we can store the same item to this group as well. And so it can go on: hashing that address again, and again, and each time have yet another group holding the item.
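The layering idea above is easy to sketch. This is an illustrative toy, not the real derivation scheme (the hash function and naming are assumptions): each replication layer's address is simply the hash of the previous one, so every client and node can recompute the same chain independently.

```python
import hashlib

def derived_addresses(name: bytes, layers: int) -> list:
    """Deterministically derive one address per replication layer by
    repeatedly hashing, starting from the item's own address."""
    addrs = []
    addr = hashlib.sha256(name).digest()  # layer 0: the item's own address
    for _ in range(layers):
        addrs.append(addr)
        addr = hashlib.sha256(addr).digest()  # next layer's address
    return addrs

# Each layer lands at a different point in XOR space, so a different
# close group ends up holding the same item.
layers = derived_addresses(b"dbc-id-bytes", 3)
assert len(set(layers)) == 3
```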

These are some of the levers that have been mentioned before. We will start out simple with a single group, because first we just want to get this up and running in a testnet.

Wallets

We have just now finished the first simple implementation of a wallet that holds DBCs and a history of transactions. As with data, the first iteration uses in-memory storage. Next it will store to the local drive, and after that we'll do network storage. The good thing is that we are implementing the wallet with a simple interface that can be used with any (or all) of the three above-mentioned storage media.
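The "one wallet, swappable storage" shape could look something like this. A hypothetical sketch only - the class and method names are invented - showing wallet logic written against a storage interface, so in-memory, local-disk, and network backends become interchangeable.

```python
from abc import ABC, abstractmethod
from typing import Optional

class WalletStore(ABC):
    """Storage interface the wallet is written against."""
    @abstractmethod
    def save(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def load(self, key: str) -> Optional[bytes]: ...

class InMemoryStore(WalletStore):
    """First-iteration backend: a plain dict in memory."""
    def __init__(self):
        self._data = {}
    def save(self, key: str, value: bytes) -> None:
        self._data[key] = value
    def load(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

class Wallet:
    """Holds DBCs and a transaction history via whatever store it is given."""
    def __init__(self, store: WalletStore):
        self.store = store
    def deposit(self, dbc_id: str, dbc_bytes: bytes) -> None:
        self.store.save(f"dbc/{dbc_id}", dbc_bytes)
    def get(self, dbc_id: str) -> Optional[bytes]:
        return self.store.load(f"dbc/{dbc_id}")
```

A disk- or network-backed store would then only need to implement `save` and `load`; the wallet code itself stays unchanged.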

What has also been taken into consideration this time around is that all peers will need to have a wallet for their rewards. But we're not quite there yet - a week or two more for that. (And yes, it's now much simpler to make time estimates.)

Transfer fees

These were described very recently in an update, and not much has changed there. The payments go to nodes, and the priority queue (previously called mempool) is the same.
What we will probably be looking at a bit sooner, is how the same pattern can be applied for data payments.

And that’s you up to speed on the progress of transfers and payment for now.


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!

48 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

23 Likes

Podium !!!

18 Likes

Third, I’ll take bronze. Now, can’t wait to read.

17 Likes

Nice. I didn’t think I’d get to read it tonight. What a difference a couple of weeks makes - the project is on a whole new level.

Thanks to all the worker ants and those supporting them.

The steps ahead have never been more clear. March onward to victory ants!

Cheers

21 Likes

So whats new? :rofl: :rofl: :rofl: :rofl: :rofl: :rofl:

No?

We struggled previously to get >10 humans to test on comnets. Does this imply an overwhelming number of Maidsafe Digital Ocean nodes for JoshNet?

Should I continue working on a Joshtoshi faucet to distribute test tokens for JoshNet?

9 Likes

We can run many nodes on a single machine now from what I understand.

13 Likes

And David has promised I will get a node/nodes online :crazy_face:

11 Likes

I have a Pi3 with your name on it, ready to get a suitable image slapped on it and stuck in the post to you :slight_smile:

It used to have @Neik’s name on it - thank him :slight_smile:

10 Likes

Left out the double spends part I see…

(It’s my fault the update is late today, I had an errand and forgot that I was supposed to do this write-up, so it went in right when I got back home.)

Anyway, part of the validation of spends is to check whether there are previous spends with the same id but a different transaction.

If such a case is detected, the DBC is marked as a double spend, and it’s dead. The tokens are lost.
This find (the two signed spends with different content) is sent to all the peers in the close group. They will be able to validate the double-spend attempt and kill the DBC. It is forever lost.
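The rule above can be captured in a few lines. This is a hypothetical sketch (function and value names invented): when a second signed spend arrives for the same DBC id with different content, the peer does not pick a winner - it marks the DBC dead.

```python
def receive_spend(store: dict, dbc_id: str, signed_spend: str) -> str:
    """Store a signed spend, or kill the DBC on a conflicting re-spend."""
    existing = store.get(dbc_id)
    if existing is None:
        store[dbc_id] = signed_spend
        return "stored"
    if existing == signed_spend:
        return "duplicate"      # same spend re-sent: harmless
    store[dbc_id] = "DEAD"      # conflicting spends: the DBC is forever lost
    return "double-spend"
```

Note that re-sending the identical spend is fine; only a *different* transaction for the same id triggers the kill.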

The full scope of how that works, and why it works I think we’ll have to cover in another update :slight_smile:

22 Likes

There is absolutely NO requirement to wait a week between updates, none at all. Just sayin’ :slight_smile:

10 Likes

This little snippet here has me salivating, are we allowed to ask when now :laughing:

10 Likes

I have a Pi4 now at your request, because you insisted I stop using Windows :joy: thanks for the offer though :+1:t2:

3 Likes

Great stuff team. I’m seeing issues turn to PRs and then to merges at an incredible rate. :clap:t3:

Is there a difference between peers and nodes? These different terms are confusing me.

I see no mention of spentbook or re-issuing a DBC to prevent double spend. Is there a fundamental difference here or are those aspects essentially the same?

It sounds similar, as in there is a trail back to a genesis DBC using the algorithm you describe (checking the validity of each input to a DBC) but sounds like it might be fundamentally different.

Thanks again everyone.

EDIT: Ah, thanks @oetyng no need to repeat, I see you added some more detail which suggests there are fundamental differences and I’m happy to wait until this can be explained in full.

15 Likes

And clients? Seeing the three of these clarified in relation to each other would be nice.

Anyway - enjoyed reading through the parts about Kademlia and largely following along :slight_smile: my plan to (very thoroughly) test the claim that the network is “understandable” is going well so far. Thanks for the great writeups (this update and the last few)

11 Likes

Guess:

  • node is as it always was
  • peers: members of the same close group
  • client: any app (including the CLI) using the SN API

12 Likes

I forgot what today was! Lucky me, time to read.

Edit: by golly they’re really doing it. It feels real again! YEEEEEEESSSSSS!!! :metal:t2:

12 Likes

No, we are fixing that. A peer, though, really can also be a client, but we are looking to make this clearer. Even some of our PRs have slipped up on it, but we will get there. It’s likely all Kademlia nodes we will refer to as peers; that keeps it easy, as libp2p uses the peer nomenclature.

The client actually does the re-issue when it signs a DBC. So when it’s stored on enough nodes (there we go again :smiley: ) it can tell those it is giving the DBC to that it’s done. They can check by retrieving it.

Each node that stores a DBC will

  • Check it’s validly signed
  • Check the bulletproof (i.e. inputs == outputs and no output is < 0)
  • Check the input keys’ parents to make sure it came from a valid DBC…

[EDIT] I should add that the whole network is the spentbook. So to check a DBC parent we query the close group of that parent. We are checking that most nodes have it and also that it is unique.

A big difference here is that when we find a non-unique DBC we kill it by making it unspendable. This means anyone who signs a DBC to 2 different output sets gets caught. So the double spend is not creating and storing 2 DBCs, it’s being able to spend BOTH of the conflicting outputs. So we catch the attempt and make it unspendable.

That means at most it was spent exactly once, but if a bad actor really wanted to try a double spend it would have to be quite concurrent, and that way it’s likely he does not get to spend any output.

So rather than consensus, which says “oh dearie me, you tried to double spend, we will just ignore one of them”, we do the opposite and kill the money. I like this approach a lot, as a double-spend attempt is certainly bad behaviour and should be punished.

Yes there still is a route back to genesis. So we can audit the supply.

16 Likes

That answered the question I was typing up in reply to @oetyng’s post, thank you.

One thought though: I can see an attack vector here and I’m not sure how it can be avoided. An attacker puts out a “ubeaut wallet app” that works great at the beginning, providing features, but after a certain date (using the PC clock) causes a massive rate of double spends. This attack would take time before enough people were aware of it. And of course if it did it randomly, it would be harder to track down to the wallet app.

The consequence of this is of course people blaming the network at first, reducing confidence, but more importantly it would cause a large amount of tokens to be destroyed.

And rinse and repeat every so often.

Would be nice if there was a way to issue retractions so that the tokens can be recovered cleanly. Even if it took time to retract the double spends.

11 Likes