Update 26 January, 2023

This week we revisit node age and look at some tweaks to leverage it for more network operations. To head off any wailing and gnashing of teeth, don’t worry, it’s not the sort of architectural revamp that’s going to take months. It builds on what’s already there in terms of new nodes having to prove themselves and move around the network, but it takes critical data handling duties away from young nodes and reserves them for nodes that have already proved themselves.

General progress

After lengthy and occasionally heated community discussions, @JimCollinson and @andrew have got the spreadsheets out again and have worked through the various options for token distribution. We sincerely hope this provides the basis to move forward on this now.

@joshuef has been experimenting with testnets with even tinier virtual machines and small nodes. It’s been going pretty well, but there have been a few bugs that look to be around the DKG (elders voting) process, where sometimes votes aren’t received. Related to that, @anselme, @maqi and @davidrusu are taking a close look at DKG and what exactly triggers it, including looking into SAP generation (a new record of elders that is created every time there’s churn) and exactly where that triggers a DKG round.

@oetyng has simplified the join process by moving it into the regular message flows. After that, the relocation flow was simplified by making it also a join, but to another section and including a relocation proof. @davidrusu found a potential need to assert that a valid churn event was used; that work is coming up.

@bochaco has been debugging and finalising sn_comms, the communications module, which he is continuing to refactor.

And Mostafa has finished testing the consensus algorithm and added it to the main repo.

Thanks to @southside for suggesting the ChatGPT code commentary initiative. Anyone who wants to help out there (no tech skills required) should check out this post.

Node age and data

Responsibilities in the network are based on the notion of node age.

Node age does not increase linearly but exponentially: each increase in age requires twice as many events as the previous increment did.
Time in the network is measured in number of events, and the measurement is approximate as we are doing a probabilistic evaluation.
So, age A happens after ~n events, and age A+1 happens after ~2n events.

The reason node age is measured in this way comes from the empirical observation that a node which has stayed online for time x is likely to stay online for at least another x. So, if you have spent time t in the network, your total time in the network is likely to end up being at least 2t.

What this means is simply that the younger the node the more likely that it will go offline, and the older the more likely that it will stay online.
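To make the doubling concrete, here is a tiny illustrative sketch in Rust. The function name and the idea of counting from a single base event are assumptions for illustration only, not the actual node-ageing code: if each age increment takes twice as many events as the last, then total events grow as ~2^age, which means age grows roughly as log2 of the total events a node has witnessed.

```rust
/// Illustrative only (not the real implementation): if reaching age A
/// takes ~n events and each further age takes ~2x the events of the
/// previous increment, age grows as ~log2(total events).
fn age_for_events(events: u64) -> u32 {
    assert!(events > 0, "a node must have seen at least one event");
    // floor(log2(events)): position of the highest set bit.
    63 - events.leading_zeros()
}

fn main() {
    // Each doubling of the event count adds one to the age.
    for events in [1u64, 2, 4, 8, 16, 1024] {
        println!("{events:>5} events -> age {}", age_for_events(events));
    }
}
```

The real network counts events probabilistically per section; this sketch only shows the exponential/logarithmic relationship described above.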

Having very stable and very unstable nodes both storing live data, as we do now, is hard to manage when there is lots of churn. If a node goes offline, its data must be transferred to the next XOR-nearest candidate, which takes time. New nodes are not reliable and can go offline rapidly, meaning lots of data movement and a headache for the elders who have to manage it.

Primary and secondary storage

We’re looking at concepts around a stable set of nodes (more on that to come down the line). But one idea this gives us is separating nodes into two storage tiers based on age and (therefore) likelihood of churning, and thus giving them different duties.

For example, we’d want the most stable nodes (say, age 10+) to be responsible for primary data storage. These nodes look after the data and give it up to a client on request. They are not likely to churn any time soon.

Nodes outside of such a stable set, those that are still working on increasing their node age, hold extra copies of data (secondary storage). In doing so, they provide redundancy to support the stable set.

Their behaviour in handling this data is also used to evaluate their quality in the usual way. But since they only hold extra copies they do not need to be tracked so closely by the elders, and can fail without causing serious problems to the network or requiring mass data migration.

This would allow us to increase the replication count for data while testing incoming nodes more thoroughly, without sacrificing data stability to do so. All by leveraging our existing node age system. :tada:
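The two-tier idea above can be sketched in a few lines of Rust. Note that `StorageTier`, `tier_for` and the age-10 threshold are taken from this update’s example (“say, age 10+”) purely for illustration; they are not actual Safe Network types or a settled design.

```rust
/// Hypothetical sketch of the proposed split by node age; the names
/// and threshold are assumptions for illustration, not real API.
#[derive(Debug, PartialEq)]
enum StorageTier {
    /// Stable nodes: hold live data and serve it to clients on request.
    Primary,
    /// Younger nodes still proving themselves: hold extra copies for
    /// redundancy, and can fail without forcing mass data migration.
    Secondary,
}

const STABLE_AGE: u32 = 10; // example threshold from the update

fn tier_for(age: u32) -> StorageTier {
    if age >= STABLE_AGE {
        StorageTier::Primary
    } else {
        StorageTier::Secondary
    }
}

fn main() {
    assert_eq!(tier_for(12), StorageTier::Primary);
    assert_eq!(tier_for(4), StorageTier::Secondary);
}
```

The point of the split is that a `Secondary` node churning costs the network little, while `Primary` nodes, which are unlikely to churn, carry the data the elders must track closely.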

Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!


First! :wink: Lucky

Great update team - much happening!

Thanks :bowing_man:


Second now to read this dimomd


Can I ask where datamaps are visible?.. I’m rather surprised that xorurl and dog don’t obviously include a simple sense of what is happening and why… I wonder if that might help understanding, alongside being useful for app devs. Searching gives a sense of those, but without a test network it’s unclear exactly what multimaps and nrsmaps are. A useful later addition would be some end-to-end description, with the cli and/or api options, to help evidence and track the process of what is expected.

Thanks again to all for input and perseverance on hammering hard topics… looks like good progress again this week :smiley:


Third and congrats to this amazing team! (Oops, David, that wasn’t very kind of you…)


Thanks so much to the entire Maidsafe team for all of your hard work! :racehorse:


We’re looking at concepts around a stable set of nodes

…isn’t this a bit “late”? I mean, this is a “basic” (theoretical?) but important concept for a functional network. My worry is that you may be programming something that maybe wouldn’t work, and then finding “basic” problems which could perhaps have been solved first.

I just ask; it isn’t meant as an offensive post, rather maybe my perspective for consideration.


As always, thanks to you, team. Great work.


Thanks to all for the hard work to get us this far. I hope we can now get the token distribution put to bed. There have been a lot of changes so it’s up to us as a community to “trust but verify” the devs and do as much testing as we can to validate their work. We need further discussion on how the test net regulars can contribute most effectively here and how we can entice others to join in this testing. We just got a new release so, for a while, it will be relatively simple to join in as there will be no need to install Rust and build from source.

However - even if you are not comfortable with joining the test nets you can play your part by checking out the ChatGPT documentation project where non-coders can make a real contribution to the project by using ChatGPT^ to generate comments on all of the source code in the various repos at MaidSafe · GitHub - eventually…

This needs a project leader - for many reasons that should not be me - whose role will be to co-ordinate a few users to quickly learn enough of ChatGPT and its API - links in the original post - to find the correct phraseology to get ChatGPT to produce comments at an agreed level of detail. Once that is done, the work needs to be split into manageable chunks for a team of helpers to process and then collate into a suitable form - and in various languages.

If this sounds vague, that’s because it is; I don’t want to pre-empt whoever takes this on as leader - I just think using these newly available tools to produce badly needed docs is a good idea. It needs a bit of planning and a lot of slog - which can be minimised by intelligent use of the ChatGPT API - followed by collating all the output and ensuring eventually we have coverage of all relevant repos - and a process to keep these docs current.

I know just enough to know what I don’t know and where others have better skills than me, so instead of me kicking this off and then calling for help when I get out of my depth, it’s best to find someone else to lead it from the start.
I will contribute when I can, but I have lots on my plate right now.

Having adequate documentation is a big plus, not only for our own use but in promoting the project and bringing in other dev talent. If we are going to make SAFE a success then, as well as a working network, we need visibility. Good documentation is key to that AND helping to provide these docs is no longer the preserve of pure Rust geeks - ChatGPT seems to have liberated us from that, so let’s make the fullest use of it.

You want to help move SAFE forward but feel you don’t know enough/any Rust to be useful?

Here is your chance - either as the project leader or as one who will run the scripts against the source files and help collate output.

YOU can make a real difference here. Please check out the links and consider helping.
Thanks to all who have already indicated a willingness to assist here.

^ other AI tools are available and their use should be considered also.


Yeah. When looking back, I always think like this. But the only way is forward.

And ‘the way forward’ includes trying to see things as clearly as possible, to discern what the fundamental problems to solve are.

It seems, though, that when looking back, one will always see that too much time was spent on irrelevant things until what really needed focus emerged.

Sounds like the story of my life btw…


Never heard of such a thing around these parts :rofl:


For those who have no teeth, dentures will be provided so they don’t miss out on the gnashing.


It’s never late to make improvements, and this could be one. It’s not a blocker but an interesting view of the stability and simplicity of handling membership.

All good


It also makes sense to start simple and then refine. Hindsight, especially from outside is a poor kind of critique.


There was a similar line by the late great Dave Allen if I remember correctly


All my work which is not derivative was stolen from others.

Shamelessly :slight_smile:


Is this ‘~n’ events counted from event 0, and the ‘~2n’ also from event 0? I.e. ‘age A’ after ‘~n’ events and ‘age A+1’ after another ‘~n’ events,

OR, as I expect, after another ‘~2n’ events?

But that sentence seems to say it’s ageing up after every ‘~n’ events, and the next paragraph can also be read with both meanings being true.


Yes, I would expect the same: age A happens after n events since age A-1 and age A+1 happens after 2n events since age A.

To put it differently: age = log2(number of events since origin)


Right before it, it says this.

Which should clarify what is meant below.
For additional clarity, this could have been written:
So, age A happens after ~n events, and age A+1 happens after [an additional] ~2n events.

All below that is as is meant.


I would say that node age increases logarithmically (as a function of the number of events).