SAFE Network Dev Update - July 25, 2019

I really like the video @JimCollinson (anyone else involved I should tag?). It’s got a chill and natural style, while feeling absolutely professional. Just about the right length as well.

Looking forward to the coming ones (I heard you allude to them, did I not? :wink: ) showing wallet use / shopping cart, WebIDs (I would love to see some real weight in them; don’t be afraid of going into the dark zone of oppressive regimes or the like, if you feel like it, it could be heavy), etc. If you can make a series covering these other aspects as well, that would be amazing material to share with people.

10 Likes

Applications need timing because they are a business.

A software framework only needs overwhelming tech.

P.S. Google can’t make Linux, and even Samsung can’t make Android. The number of devs is not the core thing for a software framework.

3 Likes

I have big expectations for this project. At heart, this is the internet that I want for myself and for future generations. :grin:

24 Likes

Thanks! Yeah, I do plan to make more. There is a lot to be said.

No schedule for them though, as they have to fit in around the ‘day job’ of building the thing! But yeah, I’d love to do more, and perhaps go a little deeper, but still keep things non-technical.

17 Likes

Hey folks, I wasn’t sure where to put this, but I was wondering what’s up with merging? I’m assuming this refers to section merges, correct?

6 Likes

The network will not merge sections now. With the design of Elders and many Adults per section, we would need to lose nearly 90% or more of the data for a merge to occur. So we don’t need to merge sections now, unless we lost all that data, and that would be the end of the network and break fundamentals. It also allows a linear progression of prefix management, whereas merging would require a more complex mechanism for holding section history and so on. So the merge code (a nightmare) is not required, or rather cannot be used. It allows us to move faster as well, not having to do all that.

tl;dr the network can lose over 80-90% of nodes and not have to merge, but a loss of that proportion means significant data loss, and that would be the end of the network. Later, with archive nodes, this may change a bit, but only if it can be changed and still protect data.

14 Likes

atm we are working with 200, but that is likely to change a bit in tests (I suspect 120). So 7 Elders, and all the rest are Adults.

11 Likes

I’ll take the opportunity to ask a couple of things related to this, only if you have time of course.

  • Based on current information, is the live network goal 200, with 120 for tests, or is the idea (right now) that 120 might be the aim for live as well?

  • 7 Elders: is that with the 200 aim or the 120 aim?

And now a couple of more difficult questions, I believe; any sort of answer is appreciated.

  • Is there some rough idea of what the median section size would be (with either of those aims)?

  • How does the Elder ratio behave: is it constant throughout section size growth, or does it show some other type of behaviour?

  • Elder membership changes should be a function of the rate of change in the Elder population, which I assume is also a function of the rate of change of the general population. But there is also churn, which is (possibly?) based on events other than membership, but which in the end, regardless of event type, will be somehow related to the growth rate as well as general activity, I guess. The question: is there some way we can conceptualise the BLS key update rate? I’m trying to figure out how it can be modelled, and any insight into how we can frame the variations would be awesome.

Greatly appreciate it.

4 Likes

120 for live as well; over 60 per section puts us in Sybil-safe territory.

Yes 7 in total.

Not sure I get this; the sections will go from 60 -> 120 -> split -> 60 -> 120, etc.

Constant

Section keys will only be affected by Elder churn. Elders are the oldest, so they churn the least, if that helps. Every Elder churn requires a new BLS key signed by the last BLS key. There is a small chain of such keys. We only need to keep the longest chain that represents the oldest section we know of that we have not updated yet (see the Secure Message Delivery RFC for that).
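
For intuition, here is a minimal sketch of that key chain idea. This is purely illustrative: `fake_sign`, the class names, and the trimming rule are my own placeholders, not the real routing implementation (which uses actual threshold BLS and the rules from the Secure Message Delivery RFC).

```python
import hashlib
from dataclasses import dataclass
from typing import List


def fake_sign(prev_key: bytes, new_key: bytes) -> bytes:
    # Stand-in for "signed by the last BLS key"; illustration only, not real crypto.
    return hashlib.sha256(prev_key + new_key).digest()


@dataclass
class KeyLink:
    new_key: bytes         # section (Elder group) key after a churn event
    signed_by_prev: bytes  # signature over new_key by the previous section key


class SectionKeyChain:
    """Every Elder churn appends a new key signed by the previous one."""

    def __init__(self, genesis_key: bytes):
        self.keys: List[bytes] = [genesis_key]
        self.links: List[KeyLink] = []

    def elder_churn(self, new_key: bytes) -> None:
        prev = self.keys[-1]
        self.links.append(KeyLink(new_key, fake_sign(prev, new_key)))
        self.keys.append(new_key)

    def trim_to(self, oldest_still_needed: bytes) -> None:
        # Keep only the suffix of the chain back to the oldest key another
        # section might still hold, so it can follow the signatures forward.
        idx = self.keys.index(oldest_still_needed)
        self.keys = self.keys[idx:]
        self.links = self.links[idx:]
```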

Hope that helps, it’s dev update day :slight_smile:

9 Likes

How many of the (probably) 60 vaults a section starts with after a split can be lost, without replacement, before there would be problems? E.g. can it go down to 20?

1 Like

It depends on how much data that section has, really. As vaults fill up, new ones are attracted to the network. This is why we want to push to beta ASAP and tweak/analyse this kind of thing. Also, archive nodes make a huge difference. For launch we should be OK, though, if Safecoin works as intended.

5 Likes

Any progress on segmentation of the internet (e.g. a country goes “offline”), and what happens if there is a sudden loss of, say, 3 Elders?

2 Likes

Isn’t it more “beautiful” to have binary divisible section sizes? 256 instead of 200, 128 instead of 120, 64 instead of 60, etc…

3 Likes

60 and 120 have their own beauty because they are divisible by 1, 2, 3, 4, 5 and 6.

10 Likes

Can’t beat the Babylonians.

The Babylonian system of mathematics was a sexagesimal (base 60) numeral system. From this we derive the modern day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 degrees in a circle.[10] The Babylonians were able to make great advances in mathematics for two reasons. Firstly, the number 60 is a superior highly composite number, having factors of 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 (including those that are themselves composite), facilitating calculations with fractions. Additionally, unlike the Egyptians and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values (much as, in our base ten system, 734 = 7×100 + 3×10 + 4×1).[11]

14 Likes

Thanks David

I’d have to check this out: what the median section size would be for various growth rates. I guess that’s the main variable for it.

OK, I know these things; it was not quite what I was wondering. I suspect there is no easy answer, but I’m asking just in case you might have some idea I missed. So, I’m trying to figure out what the BLS key update rate curve looks like.
The reason is that I’m trying to grasp what the inter-section communication ramifications look like. What is the rate for various conditions in the network, what can we expect to see, etc.?
My initial feeling is that this is not so easy to determine, but whatever thoughts or clues you might have would be great.
I am on the beach now, but I will try to lay out my initial idea for what is necessary to determine the BLS key update rates later.

Anyway, thanks a lot!

4 Likes

Unless that was meant to be a snarky comment, I still don’t follow you. Section prefixes are naturally expressed in binary. For example, the IDs of nodes in a section of size 128 will sort in natural order by the 6 additional bits after the prefix. It’s easy to form section subgroups as well. Binary seems like the obvious choice here; I was hoping for some clarification…

@Zoki’s answer is more reasonable.

1 Like

OK, so for the specifics that I used, the median count didn’t change much, even when the growth rate changed, which was a bit surprising.
Relocations were simplified in the simulation, not attempting to resemble the actual implementation (which I have also worked with, when I translated mav’s Google attack sims, but I thought that overkill here). I think the principle remains even with 100% accurately modelled relocations.

Simulation specifics

  • Adding is done to a random section. Sections split when at 120 agents (the more general label “agent” is used instead of “vault”).
  • One relocation in the network is done for every new agent that joins. No leaving of agents was simulated.
  • Every relocation happens by choosing a random section, then a random group out of that section.
  • A random upper limit on age is chosen, then a random value between zero and that upper limit: the ageLimit.
  • The first agent in the group, ordered by age, that has an age less than the ageLimit is relocated.
  • It is relocated to a random section within the third of all sections that has the fewest agents. Adding this preference for relocating to sections with fewer agents increased the median count slightly (a couple of extra agents, so not much). A rough sketch of this procedure is shown below.
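
Here is a minimal sketch of that loop, under a few assumptions the description leaves open (the group size, the range the random age limit is drawn from, the starting age of 1, and ageing by one on relocation are placeholders I picked, not values from the simulation above):

```python
import random

SPLIT_SIZE = 120       # sections split when they reach 120 agents
GROUP_SIZE = 8         # assumed size of the "random group" within a section
AGE_LIMIT_RANGE = 256  # assumed range for the random upper limit on age


def step(sections):
    """One join plus one simplified relocation, per the rules above."""
    # Adding is done to a random section; agents are represented just by their age.
    section = random.choice(sections)
    section.append(1)
    if len(section) >= SPLIT_SIZE:
        random.shuffle(section)
        half = len(section) // 2
        sections.remove(section)
        sections.extend([section[:half], section[half:]])

    # One relocation per join: a random section, then a random group out of it.
    source = random.choice(sections)
    group = random.sample(source, min(GROUP_SIZE, len(source)))
    age_limit = random.randint(0, random.randint(1, AGE_LIMIT_RANGE))

    # First agent in the group, ordered by age, with age below the ageLimit.
    for age in sorted(group):
        if age < age_limit:
            source.remove(age)
            # Relocate to a random section within the third with the fewest agents.
            smallest_third = sorted(sections, key=len)[: max(1, len(sections) // 3)]
            random.choice(smallest_third).append(age + 1)
            break


sections = [[1] * 60 for _ in range(10)]  # arbitrary starting point
for _ in range(100_000):
    step(sections)
print(sorted(len(s) for s in sections)[len(sections) // 2])  # median section size
```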

Results

With this, I observed a median agent count per section ranging from 83 to 89, over a span of 5300 days, with an initial count of 101000 agents and an end count of 682251 (so, quite a low growth rate compared to what we aim at, but that’s beside the point).

A median agent count of 89 was first observed at an annual growth rate of 91% (which might sound high, but at the initial stages, with a very small network, it is actually quite low).
At the end of the simulation, the growth rate was down to merely 2% p.a. Surprisingly, the median count at this stage was not much smaller, at 86.
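
As a quick back-of-envelope check on those figures (assuming simple compounding and 365-day years), the overall average works out to roughly 14% per year:

```python
initial, final, days = 101_000, 682_251, 5_300
avg_annual_growth = (final / initial) ** (365 / days) - 1
print(f"{avg_annual_growth:.1%}")  # ~14.1% per year, averaged over the whole run
```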

7 Likes

Let me see if I get this right… Anyone please adjust and correct where necessary :slight_smile:

A section with 60-120 nodes, of which at most 7 are Elders, might not see that many Elder changes.
If relocations are done primarily to smaller sections, that means that, out of the general network growth rate, the majority is happening in the smaller sections. They grow faster when they are smaller and slower when they are bigger.
If we assume a 12% yearly growth rate when mature, a 60-node section might be growing much faster than that, while a 110-node section grows much slower.
I think the exact rates all depend heavily on how often relocations are done. Since relocation events are triggered by hashes of events resulting in certain values, it is tightly connected to the probabilistic outcome of those operations on those hashes, but also to the frequency of the events chosen as the basis for this.

The probabilistic outcome of that specific hashing function should be attainable (I guess?).
The frequency of events would depend on what types of events they are (which can be read from the source code); based on that, it is tied either to membership or to other activity (or both), which makes it dependent on growth rate and/or activity.

So, as with many of these things, we can only find the values by specifying certain parameters, such as growth rate and/or activity. If we make a reasonably good estimate of these parameters, we’d probably be able to derive an approximate BLS key update rate.
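
For what it’s worth, here is one way to frame that as a tiny skeleton. Every name and parameter here is my own assumption; in particular, the probability that a churn event touches an Elder is exactly the unknown to estimate (and it should be small, since Elders are the oldest and churn the least):

```python
def approx_bls_key_updates_per_day(joins_per_day: float,
                                   relocations_per_join: float,
                                   p_event_touches_elder: float) -> float:
    """Hypothetical back-of-envelope model: every churn event that affects an
    Elder (departure, relocation, or the promotion filling the gap) forces one
    new section key signed by the previous one."""
    churn_events_per_day = joins_per_day * (1 + relocations_per_join)
    return churn_events_per_day * p_event_touches_elder
```

Feed it network-wide or per-section numbers (a joins rate derived from an assumed growth rate, plus an Elder-hit probability estimated from a simulation like the one above) and you get a correspondingly network-wide or per-section key update rate.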


Does this sound about right, or have I confused things?

3 Likes

Yes, I agree. The hard-to-measure “thing” is that the network has a financial incentive, albeit via the farming rate. That is hard to model, as humans are a big part of this one. I suspect even a home computer running at a loss will still remain online and active, even just to get something back? I also think locally produced energy and more efficient nodes will all play a part. Ultimately, where a node is run at zero cost, any payment is great if it goes towards covering the capital expenditure (buying the machine, setting up solar, etc.). In any case, all of those parts make modelling difficult, as the parameters we set will influence these actions and be influenced by them. I feel the fewer parameters we have to code, the bigger the chance of success, but again, this is open to huge debate, as the algorithm the params are for could in itself be the issue. So getting the farming algorithm right is probably the biggest of the decisions we make as developers here.

15 Likes