Dang I was just about to say all that but you ninja’d me
Jk but thanks very much for these, you’re a great asset to the community knowledge pool!
This is a great and digestible insight into the network's routing logic. Thanks for this @mav. It actually made me smile a couple of times, because once you absorb it you think to yourself: so simple, so clever, yet I would never have thought of it (especially me). It makes you appreciate the devs all the more. I'm glad you appreciate SAFE so much, and I'm hoping you do another write-up like this on Data Chains.
Just reading, this happens
Too bad that I can't get your crypto address, but these write-ups are really worth crypto IMHO.
I do hope that you share these on Medium, Steemit, Reddit and other places so that you can get more readers
Superb work yet again @mav thank you very much.
Edit: can anybody elaborate on what groups do and how? For example, I think, but am not saying this is correct… groups are used to reach consensus on operations related to data validation, farming events, etc. (I'd like a definitive list). So are there different kinds of group, orthogonal network structures for different vault personas for example? And how does the prefix relate to group operations, if at all? For example, is the prefix used to determine which group looks after which chunk, based on the prefix of the hash of the chunk itself?
As you can see, a little knowledge is dangerous! So any clarification would be welcome!
Great read! I agree it should be shared on Medium and made into some sort of knowledge base for future reference. It is great to have this sort of detail documented!
Would I be correct in asserting that the number of chunk copies directly correlates to group size, or is there another dynamic at play? It sounds like data should be rather secure under regular operation, even without data chains (which are needed to cope with systemic failures).
Yes, in fact there are different groups of Peers even in routing now (from our design talks).
- Infants :- Peers of age 4 or less; these do not affect churn (and, by extension, the age of other nodes, after network start).
- Adults :- Peers of age > 4, but not among the oldest `group_size` peers in the section.
- Elders :- The oldest `group_size` peers in a section; these are the decision makers.
If you go back to my Google talk in 2008, this was described. Then we have the peer groups in vaults, as per the language of the network; this is possibly changing slightly with disjoint sections.
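For anyone who finds code easier to follow, here's a rough sketch of that peer classification. The constants and names are my own guesses for illustration, not taken from the actual routing code:

```python
# Hypothetical sketch of the Infant/Adult/Elder classification described
# above. GROUP_SIZE and all names are my assumptions, not the real crate.

GROUP_SIZE = 8  # number of Elders (decision makers) per section

def classify_peers(peers_by_age):
    """peers_by_age: dict of peer name -> age. Returns name -> role."""
    # Elders are the oldest GROUP_SIZE peers; of the rest, Infants are
    # age <= 4 and everyone else is an Adult.
    by_age = sorted(peers_by_age, key=peers_by_age.get, reverse=True)
    elders = set(by_age[:GROUP_SIZE])
    roles = {}
    for name, age in peers_by_age.items():
        if name in elders:
            roles[name] = 'Elder'
        elif age <= 4:
            roles[name] = 'Infant'
        else:
            roles[name] = 'Adult'
    return roles

# Example: a section of 14 peers with ages 1..14 ends up with 8 Elders
# (ages 7-14), 2 Adults (ages 5-6) and 4 Infants (ages 1-4).
roles = classify_peers({f'peer{i}': i for i in range(1, 15)})
print(sum(r == 'Elder' for r in roles.values()))  # 8
```

Of course the real implementation will decide Elder membership differently at the margins (churn, ties in age, tiny sections), so treat this only as a reading aid.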
Yes, well it gives us the section the chunks will be in. Likely now the elders will look after the chunk, i.e. be the `DataManagers` of the data, but the `ManagedNodes` will likely be the closest to the chunk as per that paper (it's a small change but not flushed completely through yet; alpha 4). There may be small changes, but the responsibilities are still the same as they are easily calculable.
The number of chunk copies will (in alpha 4) likely be 4, regardless of section size. Group size is a constant for security, section size is dynamic as @mav superbly highlighted, and replicant size (or `DataManager` size) is constant.
We will publish a load more info soon in a new RFC to tie data chains and disjoint sections together from a routing perspective for alpha 3, then for alpha 4 we will show how that works with data and communications. It sounds a bit complex but it is actually likely to be much less code with less time durations (local) and magic numbers, caches etc. So although difficult to reason about in many ways the end result should be very natural, simple and effective with a high degree of efficiency.
What about a catastrophic event that wipes out millions of nodes?
Good point about a subtle difference in terms between ‘group’ and ‘section’, and one that left me scratching my head a bit in the original post. The difference between ‘group’ and ‘section’ is a good one to understand since they’re present in two of the key technical phrases ‘close group consensus’ and ‘disjoint sections’.
Thanks for making this a bit clearer for me. Based on this quote, the original post should probably only be using the word ‘section’ and not the word ‘group’, right? I was never totally clear on the distinction. If anyone with an understanding of the difference between ‘group’ and ‘section’ and when to use one or the other could express it clearly I would greatly appreciate it.
Great article about the inner workings of the network. Just 2 remarks:
What you call group is now called section. See the change of terminology in MaidSafe Dev Update – December 6, 2016. A group is now a subset of a section of exactly 8 nodes, containing the nodes that are closest to a specific address. A node belongs to many groups, but it belongs to only one section.
The minimum number of nodes to make a new section is 22 because, as you explain later, a section splits in two only if both halves are greater than or equal to 11. Meaning that it splits when it reaches 22 if it is well balanced, but can grow above 22 if it is unbalanced. The margin (11 − 8 = 3) is a hysteresis factor added to avoid a merge soon after a split.
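That split rule is easy to sketch in a few lines of Python. The constant names and the toy binary node names below are my own, purely for illustration:

```python
# Hypothetical sketch of the split rule described above: a section may
# split only when BOTH resulting halves would meet the minimum size.
# Constants/names are my assumptions, not from the routing code.

GROUP_SIZE = 8          # nodes per group
SPLIT_BUFFER = 3        # hysteresis margin to avoid merge-after-split
MIN_SECTION_SIZE = GROUP_SIZE + SPLIT_BUFFER  # 11

def can_split(section_prefix, node_names):
    """True if the section may split into prefix+'0' and prefix+'1'."""
    bit = len(section_prefix)  # next bit decides which half a node joins
    half0 = [n for n in node_names if n[bit] == '0']
    half1 = [n for n in node_names if n[bit] == '1']
    return len(half0) >= MIN_SECTION_SIZE and len(half1) >= MIN_SECTION_SIZE

# A balanced section of 22 can split; an unbalanced section of 22 cannot.
balanced = ['0' + format(i, '07b') for i in range(11)] + \
           ['1' + format(i, '07b') for i in range(11)]
unbalanced = ['0' + format(i, '07b') for i in range(15)] + \
             ['1' + format(i, '07b') for i in range(7)]
print(can_split('', balanced))    # True
print(can_split('', unbalanced))  # False
```

The unbalanced example shows why a section can grow well past 22: with a 15/7 split of the next bit, one half is still below 11, so the section keeps growing until both halves qualify.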
So, in summary:
- A node belongs to one section
- A group is a subset of section
- A node belongs to one or more groups
- A section has one or more groups
- Groups have 8 nodes
- Sections have 8 or more nodes
- Routing is managed via sections
- Data is managed via groups
Is that about right?
Yes, I think that all these assertions are correct.
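Maybe a tiny toy model helps make that summary concrete. This is my own sketch, not the real routing code: section membership here is just a prefix match on the node name, and a group is just the 8 nodes closest to an address in XOR distance.

```python
# Illustrative toy model (my own names/assumptions) of the distinction
# summarised above: a node is in exactly one section (the prefix its
# name matches), but can be in many groups (the 8 nodes closest to some
# address in XOR distance).

GROUP_SIZE = 8

def section_of(node_name, section_prefixes):
    """A node belongs to the single section whose prefix its name matches."""
    return next(p for p in section_prefixes if node_name.startswith(p))

def group_for(address, all_node_names):
    """A group is the GROUP_SIZE nodes closest to an address (XOR metric)."""
    return sorted(all_node_names,
                  key=lambda n: int(n, 2) ^ int(address, 2))[:GROUP_SIZE]

# Toy example with 8-bit names and two sections, '0' and '1'.
names = [format((i * 13) % 256, '08b') for i in range(24)]
print(section_of('00000000', ['0', '1']))       # 0
print(len(group_for('10000000', names)))        # 8
```

A node appears in the group for any address it happens to be among the 8 closest to, which is why it belongs to one or more groups while belonging to exactly one section.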
Amazing work @mav!! And thanks for all the replies and stuff. Loads to learn here.
This is quite new to me, thanks for sharing.
This beast thing you are building really is autonomous! Can’t wait to see it up and running.
Taking @Traktion's summary of characteristics for sections and groups, can somebody explain how they come about / relate to each other, and how they work in relation to specific network functions?
From David’s clarification:
- 4 copies of each chunk (in alpha 4)
- there are different groups of Peers in routing, as follows…
- different groups of Peers in vaults (see David's reply again)
- the prefix of the hash of a chunk matches the prefix of the section looking after it, and the Elders of that section will act as `DataManagers` looking after the chunk
- the (four) `ManagedNodes` (vaults) holding a copy of the chunk will be the closest to the chunk (in xor-distance between node id and chunk hash)
So to begin trying to answer my own question:
- the `DataManager` group of the section to which a chunk belongs is responsible for voting / group consensus in relation to policing the activity of the group of nodes (vaults) holding a copy of the chunk
- `DataManager` nodes police each other, by rejecting any of their members who don't perform in unison sufficiently well (e.g. too slow to vote, or voting against the majority, etc.)
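To make the placement rule above concrete, here's a toy sketch of "hash the chunk, then pick the 4 closest vaults in XOR distance". Everything here (16-bit addresses, the truncated hash, the names) is my own simplification for illustration, not the vault implementation:

```python
# Toy sketch (my assumptions) of the placement bullets above: a chunk's
# address is the hash of its content, and the REPLICA_COUNT nodes
# closest to that address in XOR distance hold the copies.

import hashlib

REPLICA_COUNT = 4  # copies per chunk in alpha 4, per David's reply

def chunk_address(chunk_bytes, bits=16):
    """Address a chunk by a (truncated, toy-sized) hash of its content."""
    digest = hashlib.sha256(chunk_bytes).digest()
    return format(int.from_bytes(digest[:2], 'big'), '016b')[:bits]

def managed_nodes(chunk_bytes, node_names):
    """Pick the REPLICA_COUNT nodes closest to the chunk (XOR metric)."""
    addr = int(chunk_address(chunk_bytes), 2)
    return sorted(node_names, key=lambda n: int(n, 2) ^ addr)[:REPLICA_COUNT]

# Toy example: 30 vaults with 16-bit names.
vault_names = [format((i * 97) % 65536, '016b') for i in range(30)]
print(managed_nodes(b'some chunk', vault_names))  # the 4 closest vaults
```

The section responsible for the chunk is then simply whichever section's prefix matches the chunk address, which is the "prefix of the hash" rule from the bullets.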
@Mav, this is splendid
I have a question:
The first group (section) that is created: is its prefix an empty string?
When I re-implemented this I got an empty string added, so I put in a condition to always make the longestPrefix be at least the first position of the name (inserted at line 67 in your code).
I got exactly the same results, except that I had 20,242 splits instead of 20,243.
Not sure if this is intended, reflecting the original logic, or if I did some mistake.
Yes, it should be. From RFC-0037 Disjoint Sections:
When the network is bootstrapped, there is only one section, with the empty prefix S(), responsible for the whole name space.
So it looks like the difference in splits may be due to missing the first split from prefix “” to prefixes “0” and “1”.
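That off-by-one also falls out of a simple counting identity, which can be sketched as (function name is mine, for illustration only):

```python
# Minimal sketch (assumptions mine) of why missing the first split gives
# exactly one fewer: every split turns one section into two, starting
# from the single empty-prefix section S(), so splits = leaf sections - 1.

def count_splits(leaf_prefixes):
    """Number of splits needed to reach this set of leaf sections."""
    return len(leaf_prefixes) - 1

print(count_splits(['']))               # 0: just the initial section S()
print(count_splits(['0', '1']))         # 1: the first split, "" -> "0","1"
print(count_splits(['00', '01', '1']))  # 2
```

So a simulation that never creates the empty-prefix section will report exactly one fewer split than one that starts from S(), matching the 20,242 vs 20,243 difference.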
Thanks for this. I’ve updated the post (which has now moved to safe-network-explained.github.io). Having correct terms is important to me (eg ‘maidsafe’ and ‘safe network’ being used interchangeably a few years ago was a bother)
Indeed. I feel that 8 vs 11 is a better comparison since it illustrates the ‘buffer’ idea more clearly than 8 vs 22. But I’ve updated the text for this to indicate 11 is the new section size, not the size before splitting.
Great point. I’m going to add a bit more to the post explaining this explicitly since this clever mechanism isn’t actually stated (only implied). Thanks for pointing this out.
Great stuff! The URL should be the starting point for many people coming into this project and trying to understand it. Loving your work!
Question 1: Referencing https://safe-network-explained.github.io/architecture, specifically the “Churning” section, is there any idea how often this intermittent event would happen?
Question 2: Also, it seems as though, because of this intermittent “Churning”, smaller vaults would be favoured by vault owners over larger ones (e.g. my spare 100 GB drive over my 24 TB array), as it would be quite time-consuming to relocate 24 TB worth of data when churning occurs on a single vault. Am I correct in my assumption?