Update 06 April, 2023

It’s time to pull together all those mysterious acronyms, reversals and baffling announcements that have peppered the forum over the last couple of extraordinary weeks, to give you - and us - a chance to reflect on what the hell’s going on.

General progress

The entire team has been heads down in libp2p, Kademlia and related topics this week, so there’s less to itemise than usual.

@bzee is focused on NAT traversal capabilities in libp2p for hole punching, while @oetyng is integrating DBCs into the new system. He has just finished a first round of improved documentation for DBCs, clarifying naming and increasing type safety to reduce cognitive load. That work also removed the bearer DBC concept and a ton of API + code with it. Next is to remove the Elder signatures from DBCs, as they won’t be used any more.

@roland is tidying up the code and digging into data republishing along with @chriso. And @bochaco has just gotten the chunk/register code ported across. Once republishing is in place we’ll be firing into some heavy internal testing.

@qi_ma is experimenting with the libp2p crate and playing with testnets from the stableset_net code base, investigating the chunk upload/fetch flow on top of that.

More about libp2p et al

Thus far, our plans for libp2p have emerged in bursts of excited chatter, so it’s time to put it all in one place to give a better idea about how it fits with DBCs, Sybil protection and all the rest.

First the why. In short, as we’ve modified DBCs and data to be backed by DBCs, that’s opened up the door to a simpler approach to the underlying network. And it’s this that allows us to start stripping out some of the more complex parts of the network, meaning a lot less for the team to worry about, which is a huge win. Our team is dedicated but small, and now they can focus on the value-add.

Libp2p itself is used by Filecoin, Eth and Avalanche which not only means there are lots of eyes on the code, but also gives us the opportunity for collaborations and interoperability down the road.

Libp2p now handles QUIC - the protocol that manages connections between nodes, and one of the main struggles we were having with messaging. Perhaps equally importantly, it can handle hole punching, allowing nodes that are behind a firewall or a home router to connect in a p2p fashion (how it does that is quite complicated, but it’s covered here). Most people should be able to connect without having to mess around with port forwarding. Hopefully over time, the crate will improve to support even more routers than it does now.

Then there’s denial of service (DoS) protection: libp2p has controls to limit the number of active connections between nodes and, optionally, to rate-limit inbound connections.
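To illustrate the kind of inbound rate-limiting involved, here is a minimal token-bucket sketch (the class and numbers are hypothetical, purely for illustration; libp2p’s actual connection manager is far more involved):

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: allow a burst, then refill
    tokens at a steady rate. Hypothetical, not libp2p's API."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at the burst capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, burst=10)
accepted = sum(1 for _ in range(50) if bucket.allow())
# A flood of 50 instant connection attempts gets cut down to roughly the burst size
```

The same shape applies to capping active connections: replace the time-based refill with a decrement on connection close.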

As many of you will know, Safe is based on the Kademlia distributed hash table (DHT) which allows XOR routing and content addressable storage. This is supported by libp2p too, if not quite out of the box, at least at a level we can work with and enhance. Importantly, it implements the Kademlia feature of refreshing nodes, meaning that dead nodes don’t remain in the routing table for very long.
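For those unfamiliar with Kademlia, the XOR metric behind its routing and content addressing can be sketched in a few lines (illustrative only; `node_id` and the toy node set below are assumptions, not Safe’s actual types):

```python
import hashlib

def node_id(name: bytes) -> int:
    # 256-bit ID from a hash, as in Kademlia-style DHTs
    return int.from_bytes(hashlib.sha256(name).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia's distance metric: bitwise XOR of the two IDs
    return a ^ b

# The k nodes "closest" to a piece of content are those whose IDs
# have the smallest XOR distance to the content's address.
content = node_id(b"some chunk")
nodes = [node_id(bytes([i])) for i in range(20)]
closest = sorted(nodes, key=lambda n: xor_distance(n, content))[:4]
```

Because XOR is symmetric and unidirectional (exactly one point at each distance from any ID), every node independently agrees on which group is responsible for a given address.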

This refreshing is important in view of another attack vector: people generating billions of keys offline, then trying to brute-force their way in as new Sybil nodes. As a defence against this, we are looking at implementing a gatekeeper based on verifiable random functions (VRFs).

A VRF takes an existing key + input data and outputs a new public key + proof it was derived from the old key plus input data. So, the node that wants to join creates a key and finds the closest group of nodes to that key. The VRF takes the key plus the IDs of this group and outputs a new key pair plus a proof. This new public key plus the proof is then sent to that close group to join, whereupon the nodes in that group can check it was generated from a valid old key and the IDs of the nodes close to that old key.
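The flow above can be sketched as follows. Note this uses a plain hash as a toy stand-in: a real VRF (e.g. ECVRF) produces the proof with the holder’s secret key while anyone can verify it with the public key, which a bare hash cannot do. The function names are hypothetical.

```python
import hashlib

def toy_vrf(old_key: bytes, input_data: bytes):
    """Toy stand-in for a VRF: derive a new key + proof from an
    existing key plus input data. NOT a real VRF (no secret key)."""
    digest = hashlib.sha256(old_key + input_data).digest()
    new_key, proof = digest[:16], digest
    return new_key, proof

def verify(old_key: bytes, input_data: bytes, new_key: bytes, proof: bytes) -> bool:
    # The close group re-derives and checks the new key came from
    # the old key plus the agreed input data
    expected = hashlib.sha256(old_key + input_data).digest()
    return proof == expected and new_key == expected[:16]

# Joining node: input data is the IDs of the nodes closest to its old key
close_group_ids = b"".join(sorted([b"node-a", b"node-b", b"node-c"]))
new_key, proof = toy_vrf(b"old-key", close_group_ids)
assert verify(b"old-key", close_group_ids, new_key, proof)
```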

Since nodes are churning rapidly (much faster now), keys must be generated and used quickly. They have a very limited lifespan, thus mitigating the ‘offline key generation’ attack. VRFs were popularised by Algorand.

Next on the list of goodies is decentralised IP-based public key infrastructure (PKI), which provides protection against man-in-the-middle attacks. As mentioned a couple of weeks back, this effectively lets us use our group consensus mechanism as a way of becoming our own CA, certifying messages as secure.

Our path towards specialised nodes such as for archiving and audit suddenly becomes a lot simpler, as does the tricky problem of upgrading the protocol, with various libp2p functionalities helping us here.

So libp2p and other projects like rustls and VRFs take care of a lot of the hard low-level networking problems, meaning we can focus on the real innovation on top, notably our use of DBCs.

This is where Safe stands out, with massively parallel and scalable transaction throughput. Because Safe is a data network first and foremost, we use DBCs to secure the data as well as to pay for storage - in this case with the recent addition of the “reason” field, which stores the name of the data the DBC paid for. The data also stores a link back to the DBC to complete the circle.
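A minimal sketch of that two-way link (the “reason” field comes from the update itself; the surrounding types and field names are hypothetical):

```python
import hashlib
from dataclasses import dataclass

def addr(content: bytes) -> str:
    # Content-addressed name of a piece of data
    return hashlib.sha256(content).hexdigest()

@dataclass
class Dbc:
    id: str
    amount: int
    reason: str   # name of the data this DBC paid for (from the update)

@dataclass
class Chunk:
    content: bytes
    paid_by: str  # hypothetical: link back to the DBC, completing the circle

chunk_content = b"hello safe"
dbc = Dbc(id="dbc-123", amount=1, reason=addr(chunk_content))
chunk = Chunk(content=chunk_content, paid_by=dbc.id)

# Either side can be checked against the other
assert dbc.reason == addr(chunk.content)
assert chunk.paid_by == dbc.id
```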

We also pass a lot of responsibility for security to the client, with client signatures over DBCs, and their verification securing all the data on the network. In doing so we have removed network sigs entirely, simplifying things and securing against crypto-cracking quantum computers when they come along.

For a UX unmatched by anything else currently out there, we have multisig functionality through BLS keys to make this a truly useful and usable system.
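Under the hood, BLS threshold signing rests on Shamir secret sharing of the group key: any k of n shares reconstruct the secret, while fewer reveal nothing. Here is the share-and-recover core as a sketch over a toy prime field - not the pairing-based BLS signing itself, just the threshold mechanism beneath it:

```python
import random

P = 2**127 - 1  # a Mersenne prime; fine for a sketch, not production-sized

def make_shares(secret: int, k: int, n: int):
    # Random degree-(k-1) polynomial with the secret as constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=424242, k=3, n=5)
assert recover(shares[:3]) == 424242   # any 3 of the 5 shares suffice
assert recover(shares[2:5]) == 424242
```

In a BLS threshold scheme, each holder signs with their share and the signature shares are combined with the same Lagrange coefficients, so the full secret key never needs to exist in one place.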

Add that to earlier innovations like self encryption of chunks and the picture is pretty much complete. This missing link of libp2p simplifies the offering into an easily explainable network and does so in a massively scalable fashion, whilst providing an incredibly high level of security with almost no historic state (apart from the DBC transactions).

This feels and looks like the Safe Network actually waking up now, shedding all the research parts and tests we have done, to provide a small, robust node that will allow users to simply join the network and start earning and storing data almost immediately, regardless of how long the node will be online. Everyone is going to get Secure Access to a new approach to humanity’s most precious element, knowledge.

Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!



I’d like to dedicate this first to @Josh


Thanks man, that gave me a notification to grab second. :rofl: :partying_face:


Third third third!!!


Failed to grab podium

I have been labore cum dignitatie

Now to rest my aching back and read.

There is no way that is not a truly major win. Fantastic.

verifiable random functions (VFR). VRF surely?

VFR is Visual Flight Rules - ie fly in daylight and avoid clouds - whilst up to now many of us have been flying IMC (Instrument Meteorological Conditions) - it’s dark and/or cloudy, but doable with skill and training. Now it looks like everything is clearer and simplified for us all.



Beautiful update, as always - thank you for the amazing work! I gave up a while ago on setting up a node with my weird router, so I can’t wait for a future testnet with this hole punching stuff! :facepunch:


Very good. I thought I’d post something for once to show that I still exist.


Ah Mr Runswick. We’ve been expecting you.


Thx 4 the update Maidsafe devs

Love the piecing together of innovation, but with every atom, love this part the most.

Keep hacking super ants


Fantastic progress!!! Great news!!! Very happy with the direction we are heading. Let’s finally do this.


Yes! And so too the world will awake with it… sublime update. Onward we go, but quickly now more than ever. Heroic work all!


A great idea @Runswick, and happy to see you there. I hereby warmly encourage all silent-readers to pop their heads up for a hello to signal their presence to add to the great excitement and momentum of the moment, if they so desire.

Well done to all at @maidsafe - fingers crossed for the coming weeks. Do remember to eat, sleep, go for a walk, take breaks, mountain bike, whatever you’re into. Such breakthrough moments must be terribly exciting to be a part of.


Not only that, but DBCs backed by data.

It is both, and what an interesting parallel with Gold (or Bitcoin) backed currency. This has been mentioned here much in the past but it is great to see it literally in the implementation. It couldn’t be more true.


…literally next sentence is:

:joy: :joy:


Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:


So hold on…

Node age, sections, 7 elders… Etc… - all the jazz worked on for last couple of years is now gone due to libp2p?

Sorry I’m trying to get it straight in my mind.

Massive simplification??



Not libp2p, but just using Kademlia as we used to. Membership selection in a decentralised network is hard, very hard, and when you get there you have created a target zone for attackers. They can attack the few nodes that make decisions.

So this swing is back to our true nature, back to the ant analogy if you like. Transactions now happen at several deterministic, but random, points in the network. We use the hash(hash(x)) trick to ensure a group and its linked groups together only have responsibility for a single piece of data in common, rather than a range of data between some nodes. So this is a big point.
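A sketch of that hash(hash(x)) address chain for a single item (illustrative only; the number of linked groups shown here is arbitrary):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# For one data item x, its record lives at h(x); re-hashing gives further
# deterministic-but-random points responsible for that SAME single item.
x = b"one piece of data"
points = [h(x)]
for _ in range(2):
    points.append(h(points[-1]))

# Each point addresses a distinct close group in XOR space; the groups are
# linked only through this one item, not through a shared range of keyspace.
assert len(set(points)) == 3
```

Because the chain is derived purely by hashing, every node can compute the same linked points with no coordination, yet an attacker cannot choose which groups end up linked.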

Then we have Sybil defences. These are twofold.

  1. Use a VRF mechanism. Basically this works like this:

A VRF takes an existing key + input data and outputs a new pub key plus proof it was derived from the old key plus input data. This means you cannot just generate keys.

For us, it’s like this:

  • Node creates keys.

  • Gets closest nodes to those keys (this is the input data)

  • Generates new keys plus proof

  • Sends the new key to close group to join.

  • They check the VRF proof and new key validating it came from the old key plus input data

  • Our input data is the close nodes to the old key (so the joiner has to do this fast)
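Those steps can be sketched end to end, again with a plain hash standing in for a real VRF (function names are hypothetical). The last check shows why the joiner has to be fast: once the close group churns, the proof no longer validates.

```python
import hashlib

def derive(old_key: bytes, close_ids: list) -> tuple:
    # Toy stand-in for the VRF: new key + proof from old key + close-group IDs.
    # Sorting makes the result independent of the order IDs were gathered in.
    proof = hashlib.sha256(old_key + b"".join(sorted(close_ids))).digest()
    return proof[:16], proof

def join_request(old_key: bytes, close_ids: list) -> dict:
    # Node creates keys, gathers closest nodes, derives new key + proof
    new_key, proof = derive(old_key, close_ids)
    return {"old_key": old_key, "new_key": new_key, "proof": proof}

def validate_join(req: dict, close_ids_seen_by_group: list) -> bool:
    # The close group re-derives from its OWN view of the close-group IDs;
    # a stale request (the group has churned) fails the check
    expected_key, expected_proof = derive(req["old_key"], close_ids_seen_by_group)
    return req["new_key"] == expected_key and req["proof"] == expected_proof

ids = [b"n1", b"n2", b"n3"]
req = join_request(b"old-key", ids)
assert validate_join(req, ids)
assert not validate_join(req, [b"n1", b"n2", b"n9"])  # group has churned
```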

  2. The second Sybil defence is like proof of stake, but in reverse ;-) Our view is PoS does buy some protection but removes the ability for everyone to run a node. What about those with no money? We always wanted to empower them.

So we do.

In our system, nodes earn very quickly, tiny amounts, but fair amounts (the network decides). So the nodes build stake.

We have not implemented that part of the algorithm, but it is there for us to use if required.

So the massive simplification is do what we know, what we are good at, don’t try and control things, but use the randomness of the ant colony to create a secure sophisticated network packed full of very simple nodes.

Fully parallelise the transfer of money, with no global order of transactions, and that means this thing will handle huge amounts of transactions. Now we can have true micropayments for tiny amounts. That means the Safe Network can spawn enormous amounts of innovation in true decentralised approaches to communications and data storage.

Hope that helps.


It helps a lot. Thank you.


Just to be clear on the libp2p thing here, let me give you my take.

Back in 2006 when Safe started I thought, OK, the design’s solid; folk may not understand it, but it’s solid. Now I need a network lib to use and hopefully a Kademlia lib. The latter is not so important, as we could write it easily - it’s just an algorithm. The network part though, man, that’s messy and complex.

To my horror there was no such thing. Nobody had anything close to good there. NAT devices and hole punching were not even on the radar. That was a shock, and it meant spending as much or more time creating that as building the Safe Network itself.

Juan Benet (IPFS) had a very similar find. Same with the Ethereum crowd (they ended up creating devp2p; remember, Linux-only stuff etc.).

However Filecoin and Ethereum had something we did not:

  • A ton of cash, and I mean hundreds of millions
  • Access to network engineers (or the cash to buy that access)

So Ethereum created devp2p, and IPFS and co worked on libp2p. A ton of work, and I mean a ton.

I met Juan in SF a few years back and said, hey, we need hole punching, we need stream multiplexing, nodes need to work behind NATs, and we cannot have many connections out (socket use issues). At the time it was not a priority for them, so I dropped looking at libp2p.

However they have done amazing work there and I hope they still do. There are conferences for libp2p, eth adopted it now afaik and many other projects.

Part of the work there was a Kad layer, but it was pretty inefficient and suffered issues we know about (like routing-table poisoning due to stale nodes, with older nodes being more poisonous because of that). So libp2p was seeing 90%-plus of queries fail. They have sorted that now AFAIK (we need to confirm, but we know how to sort it if needed, and we can give back).

Then QUIC. We jumped on that 3 years ago, much to the chagrin of some of our devs who wanted TCP and all its problems. QUIC has stream multiplexing, it can hole punch more easily than TCP, and much more. Libp2p very recently included QUIC, and they can now see that they could bin a ton of their TCP code if they wished, as QUIC is the future here.

It was only in the last few weeks that QUIC and Kad seemed to work together in libp2p (although their Go impl is more advanced than the Rust one right now).

So I digress a wee bit.

Back when we got rid of PARSEC and its limitations etc., as well as most of the team, we were heading back to our roots: the ant approach. But we kinda had a working network stack, and Quinn (QUIC) was promising to simplify that. The thought of rewriting Kad was just too much, so we stuck with sections etc. (this was a mistake).

Recently, classical consensus was slipping back into the network, and that has had me on red alert, as it does not work. So in the last few months a ton of work has gone into trying to make the best of a bad direction - well, not bad, just trying to make what we had work. What we had was very complex, and no engineer knew all the code or could understand all of it.

During research into some IP-based key exchange mechanism, I noticed the libp2p guys asking really important and deep questions in the QUIC crates on GitHub. I was curious and went down a rabbit hole.

They had it: they had QUIC, they had Kad. We could go back to post-PARSEC and finish Safe with the real design. We had made DBCs work in the meantime and learned how to secure them, and hence secure data. We had our Kad tricks like the recursive hash, to ensure groups are not range-based but single-item-based, and we know how to chain those items. We have VRFs that can prevent offline key attacks, and a PoS-type approach if needed. We know how to reward nodes. We know how to detect faults and ignore nodes (we ignore instead of kill), and we know how to use BLS to have multisig with a manager, or multisig with no master key holder.

So finding libp2p had come so far and given what we have, all the pieces literally fell into place quickly and when they did it was amazing. Right now as we code this up, the stuff we don’t need is incredible, the simplicity of the code/design is elegant and the speed of launch is now controllable for us.

I am sure libp2p will have bugs, I am certain of that, but I am also sure the cash behind it and the projects depending on it will all work to keep it working well.

So there you go, libp2p was a feature, but one of a few that allowed Safe to come alive.