SAFE Network Dev Update - July 25, 2019

120 for live as well; over 60 per section puts us in Sybil-safe territory.

Yes, 7 in total.

Not sure I get this; the sections will go from 60 → 120, split back to 60, grow to 120 again, etc.

Constant

Section keys will only be affected by Elder churn. Elders are the oldest, so they churn the least, if that helps. Every Elder churn requires a new BLS key signed by the last BLS key. There is a small chain of such keys. We only need to keep the longest chain that represents the oldest section we know of that we have not updated yet (see the Secure Message Delivery RFC for that).
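To picture that chain of keys, here is a minimal sketch (my own toy code, not the actual routing implementation; HMAC stands in for BLS signatures and all the names are made up) of a section key chain where each new key is signed by the previous one:

```python
import hashlib
import hmac
import secrets

class SectionKey:
    """Stand-in for a BLS section key: a random secret plus a derived public id."""
    def __init__(self):
        self.secret = secrets.token_bytes(32)
        self.public = hashlib.sha256(self.secret).hexdigest()

    def sign(self, message: bytes) -> str:
        # HMAC is only a placeholder for a real BLS signature.
        return hmac.new(self.secret, message, hashlib.sha256).hexdigest()

class SectionKeyChain:
    """Each Elder churn appends a new key, signed by the previous key."""
    def __init__(self):
        # (key, signature of its public id by the previous key); genesis is unsigned
        self.links = [(SectionKey(), None)]

    def elder_churn(self):
        prev_key, _ = self.links[-1]
        new_key = SectionKey()
        self.links.append((new_key, prev_key.sign(new_key.public.encode())))

    def verify(self) -> bool:
        # Walk the chain: each key must be signed by the key before it.
        # (With HMAC the verifier needs the signer's secret; real BLS verification
        # would use only the previous public key.)
        for (prev, _), (curr, sig) in zip(self.links, self.links[1:]):
            if not hmac.compare_digest(prev.sign(curr.public.encode()), sig):
                return False
        return True

chain = SectionKeyChain()
for _ in range(3):                       # three Elder churn events -> three new section keys
    chain.elder_churn()
print(len(chain.links), chain.verify())  # 4 True
```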

Hope that helps, it’s dev update day :slight_smile:

9 Likes

How many of the (probably) 60 starting vaults (after a split) can be lost without replacement before there would be problems? E.g. can it go down to 20?

1 Like

Depends on how much data that section has, really. As vaults fill up, new ones are attracted to the network. This is why we want to push to beta ASAP and tweak/analyse this kind of thing. Also, archive nodes make a huge difference. For launch we should be OK though, if safecoin works as intended.

5 Likes

Any progress on segmentation of the internet (e.g. a country goes “offline”), and what happens if there is a sudden loss of, say, 3 Elders?

2 Likes

Isn’t it more “beautiful” to have binary-divisible section sizes? 256 instead of 200, 128 instead of 120, 64 instead of 60, etc…

3 Likes

60 and 120 have their own beauty because they are divisible by 1, 2, 3, 4, 5 and 6.

10 Likes

Can’t beat the Babylonians.

The Babylonian system of mathematics was a sexagesimal (base 60) numeral system. From this we derive the modern day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 degrees in a circle.[10] The Babylonians were able to make great advances in mathematics for two reasons. Firstly, the number 60 is a superior highly composite number, having factors of 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 (including those that are themselves composite), facilitating calculations with fractions. Additionally, unlike the Egyptians and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values (much as, in our base ten system, 734 = 7×100 + 3×10 + 4×1).[11]

14 Likes

Thanks David

I’d have to check what the median section size would be for various growth rates. I guess that’s the main variable for it.

OK, I know these things; it was not quite what I was wondering. I suspect there is no easy answer, but I’m asking just in case you might have some idea I missed. So, I’m trying to figure out what the BLS key update rate curve looks like.
The reason is that I’m trying to grasp what the inter-section communication ramifications look like. What is the rate under various network conditions, what can we expect to see, etc.
My initial feeling is that this is not so easy to determine, but whatever thoughts or clues you might have would be great.
I am on the beach now, but will try to lay out my initial idea for what is necessary to determine the BLS key update rates later.

Anyway, thanks a lot!

4 Likes

Unless that was meant to be a snarky comment, I still don’t follow you. Section prefixes are naturally expressed in binary. For example, the IDs of nodes in a section of size 128 will sort in natural order into the 6 additional bits after the prefix. It’s easy to form section subgroups as well. Binary seems like the obvious choice here; I was hoping for some clarification…
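For illustration, here is a toy example (my own sketch, using a 16-bit address space instead of the real 256-bit XOR space) of how node IDs sort under a binary prefix and how a split just extends that prefix by one bit:

```python
import random

ID_BITS = 16          # toy address space; the real network uses 256-bit XOR addresses

def to_bits(node_id: int) -> str:
    return format(node_id, f"0{ID_BITS}b")

# A section is simply the set of node IDs whose binary address starts with its prefix.
prefix = "10"
nodes = sorted(
    nid for nid in random.sample(range(2 ** ID_BITS), 2000)
    if to_bits(nid).startswith(prefix)
)

# A split just extends the prefix by one bit: "10" -> "100" and "101".
left  = [n for n in nodes if to_bits(n).startswith(prefix + "0")]
right = [n for n in nodes if to_bits(n).startswith(prefix + "1")]

print(f"section {prefix}: {len(nodes)} nodes")
print(f"after split -> {prefix}0: {len(left)} nodes, {prefix}1: {len(right)} nodes")
```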

@Zoki’s answer is more reasonable.

1 Like

OK, so for the specifics that I used, the median count didn’t change much even when the growth rate changed, which was a bit surprising.
Relocations were simplified in the simulation, not attempting to resemble the actual implementation (which I have also worked with, when I translated mav’s google attack sims, but that seemed overkill here). I think the principle holds even with 100% accurately modelled relocations.

Simulation specifics

Adding is done to a random section. Sections split at 120 agents (the more general label “agent” is used instead of “vault”).
One relocation in the network is done for every new agent that joins. Agents leaving were not simulated.
Every relocation happens by choosing a random section, then a random group within that section.
A random upper limit of age is chosen, then a random value between zero and that upper limit: the ageLimit.
The first agent in the group, ordered by age, whose age is less than the ageLimit is relocated.
It is relocated to a random section within the third of all sections that have the fewest agents. Adding this preference for relocating to sections with fewer agents increased the median count slightly (a couple of extra agents, so not much). A rough code sketch of these rules follows below.
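Roughly, in code, the rules above look like this (a Python re-sketch under the same assumptions, not the original simulation; the “group” is simplified to a slice of eight neighbours, the split shuffles rather than splitting by address bit, and starting ages and the age increment on relocation are assumptions):

```python
import random

SPLIT_AT = 120        # a section splits when it reaches 120 agents
NEW_AGENT_AGE = 1     # assumption: joining agents start at age 1

def simulate(joins, initial_sections=4):
    # Each section is just a list of agent ages.
    sections = [[NEW_AGENT_AGE] * 60 for _ in range(initial_sections)]

    for _ in range(joins):
        # 1. A new agent joins a random section; the section splits at 120 agents.
        i = random.randrange(len(sections))
        sections[i].append(NEW_AGENT_AGE)
        if len(sections[i]) >= SPLIT_AT:
            full = sections.pop(i)
            random.shuffle(full)              # stand-in for splitting by address bit
            half = len(full) // 2
            sections += [full[:half], full[half:]]

        # 2. One relocation per join: pick a random section, then a random
        #    "group" within it (simplified here to a slice of eight neighbours).
        src = random.choice(sections)
        if not src:
            continue
        start = random.randrange(len(src))
        group = sorted(src[start:start + 8])

        # 3. A random upper limit, then a random ageLimit between zero and it.
        age_limit = random.randint(0, random.randint(1, 16))

        # 4. The first agent in the group (ordered by age) below the ageLimit is
        #    relocated to a random section within the emptiest third of sections,
        #    and its age goes up by one (assumption).
        for age in group:
            if age < age_limit:
                src.remove(age)
                by_size = sorted(sections, key=len)
                dest = random.choice(by_size[: max(1, len(by_size) // 3)])
                dest.append(age + 1)
                break

    return sorted(len(s) for s in sections)

sizes = simulate(joins=200_000)
print(f"{len(sizes)} sections, median size {sizes[len(sizes) // 2]}")
```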

Results

With this, I observed a median agent count per section ranging from 83 to 89, over a span of 5300 days, with an initial agent count of 101,000 and an end count of 682,251 (so, quite a low growth rate compared to what we aim at, but that’s beside the point).

A median agent count of 89 was first observed at an annual growth rate of 91% (which might sound high, but at the initial stages, with a very small network, it is quite low).
At the end of the simulation, the growth rate was down to merely 2% p.a. Surprisingly, the median count at this stage was not much smaller, at 86.
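As a quick sanity check, the implied average growth over the whole run works out like this (same numbers as above):

```python
initial_agents, final_agents, days = 101_000, 682_251, 5_300

years = days / 365
avg_growth = (final_agents / initial_agents) ** (1 / years) - 1
print(f"{years:.1f} years, average annual growth ≈ {avg_growth:.1%}")
# -> 14.5 years, roughly 14% p.a. on average (91% early on, tapering to ~2% at the end)
```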

7 Likes

Let me see if I get this right… Anyone please adjust and correct where necessary :slight_smile:

A section with 60-120 nodes, of which at most 7 are Elders, might not see that many Elder changes.
If relocations are done primarily to smaller sections, that means that, of the overall network growth, the majority happens in the smaller sections. They grow faster when they are small, and slower when they are big.
If we assume a 12% yearly growth rate when mature, a 60-node section might be growing much faster than that, while a 110-node section grows much slower.
I think the exact rates all depend heavily on how often relocations are done. Since relocation events are triggered when hashes of events result in certain values, the rate is tightly connected to the probabilistic outcome of those hash operations, but also to the frequency of the events chosen as the basis for this.

The probabilistic outcome of that specific hashing function should be attainable (I guess?).
The frequency of events depends on the type of events (which can be read from the source code); based on that, it is tied to membership or other activity (or both), and that makes it dependent on growth rate and/or activity.

So, as with many of these things, we can only find the values by specifying certain parameters, such as growth rate and/or activity. If we make a reasonably good estimate of these parameters, we’d probably be able to derive an approximate BLS key update rate.
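As a very rough back-of-envelope along those lines (every number below is an illustrative assumption, not a measured value): if a new BLS key is only needed on Elder churn, and churn events hit members roughly uniformly, then the key update rate is roughly the section’s churn rate scaled by the chance that an event touches one of the 7 Elder slots:

```python
# Back-of-envelope section key update rate; every input here is an assumption.
section_size  = 90       # median agents per section (roughly what the sim above gave)
elders        = 7
churn_per_day = 3.0      # assumed membership events (joins/relocations/departures) per section per day

# Chance that a single churn event touches an Elder slot, assuming events hit
# members roughly uniformly. (Elders, being the oldest, likely churn less in
# practice, so this overestimates.)
p_elder = elders / section_size

key_updates_per_day = churn_per_day * p_elder
print(f"≈ {key_updates_per_day:.2f} BLS key updates per section per day "
      f"(about one every {24 / key_updates_per_day:.0f} hours)")
```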


Does this sound about right, or have I confused things?

3 Likes

Yes, I agree. The hard-to-measure “thing” is that the network has financial incentive, albeit via the farming rate. That is hard to model, as humans are a big part of this one. I suspect even a home computer running at a loss will still remain on-line and active, even just to get something back? I also think locally produced energy and more efficient nodes will all play a part. Ultimately, where a node is run at near zero cost, any payment is great if it goes toward covering the capital expenditure (buying the machine, setting up solar, etc.). In any case, all of those parts make modelling difficult, as the parameters we set will influence these actions and be influenced by them. I feel the fewer parameters we have to code, the bigger the chance of success, but again, open to huge debate, as the algorithm the params are for could in itself be the issue. So getting the farming algorithm right is probably the biggest of the decisions we make as developers here.

15 Likes

I figured this was probably the case. Which raises the question of how much tinkering will be possible after the network has gone live. I’m guessing the algorithm will be self-adjusting to an extent, but what if the underlying economic assumptions change radically as a result of energy costs, for example? Could a new version be introduced without knocking the whole network economy sideways? I’m not expecting a definitive answer now btw, but I guess it’s one of the ‘known unknowns’ that will need to be tested out.

11 Likes

Yep, hard to model because of the human interaction, as well as the parameters influencing each other and so on. It’s exactly what I have found most challenging in the simulations.

I’ve worked with this assumption. So, someone who usually has the machine on half the day would then leave it on 24/7. The electricity bill goes up a bit, and possibly there’s a higher write-off of the computer, but most won’t account for that. So, simply the addition to the electricity bill. No (relevant) limit on bandwidth is assumed.

If only this extra electricity is covered, then I think a very large share of home users would stay.
Anything above that would ensure a majority staying, I think. (For commercial operators the incentives are different, of course.)

So, to just play a bit with the numbers:
At, say, 600 W of power consumption when working fairly hard, over 12 hours, that’s 7.2 kWh, and electricity is cheap in Sweden, about 0.1 EUR/kWh.
So, roughly 1 EUR per day in extra costs for the home operator, perhaps. If they provide 1 TB, of which 50% is filled, the general read:write ratio is 99:1, and data is accessed primarily during the first 3 months, then we have, say, 90 GETs per MB over 3 months, giving 1 GET per chunk per day = 500 000 GETs per day for this user.
To break even, each GET must reward safenetwork currency equivalent to the fiat value of 1 / 500 000 EUR. With MAID at 0.2 EUR, this gives a farming reward of 10k nanos per GET.
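Spelled out (same figures as above; the 1 MB chunk size and the 10^9 nanos per coin granularity are assumptions of this sketch, not confirmed network constants):

```python
# Break-even sketch for the home operator, reproducing the numbers above.
power_kw, hours_per_day, eur_per_kwh = 0.6, 12, 0.10
extra_cost = power_kw * hours_per_day * eur_per_kwh     # ≈ 0.72 EUR, call it ~1 EUR/day

stored_mb = 1_000_000 * 0.5          # 1 TB offered, 50 % filled, ~1 MB chunks (assumption)
gets_per_mb_per_day = 90 / 90        # 90 GETs per MB spread over ~3 months
gets_per_day = stored_mb * gets_per_mb_per_day          # 500 000 GETs/day

eur_per_get = 1.0 / gets_per_day                        # break even at ~1 EUR/day
maid_price_eur = 0.20
nanos_per_get = eur_per_get / maid_price_eur * 10**9    # 1 coin = 10^9 nanos (assumption)

print(f"extra cost ≈ {extra_cost:.2f} EUR/day, {gets_per_day:,.0f} GETs/day, "
      f"break-even ≈ {nanos_per_get:,.0f} nanos per GET")
```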

I agree with this as well.

Well, you could liken it to the Bitcoin “upgrades”, I would say. It’s not trivial IMO, because the community can be divided, and divisions and deadlocks can stall a decision on an upgrade of the network.

Once an upgrade has actually been agreed upon, implemented and accepted by enough users, I would say such changes would absolutely be possible without knocking it entirely sideways. Of course the opposite is also possible, but what I mean is that tweaks can be made in places where we know the boundaries of the outcome better, while other tweaks sit in places where those boundaries are less well known. This depends entirely on the actual system chosen for the economy.

6 Likes

I think in the beginning the payment should be 5-10 times bigger because the interest of the general public is very small…

Well, it’s hard to say at what fiat value users will deem it attractive. And we don’t know what fiat value the network currency will have. All we have is a predetermined algorithm to dole out the network currency.

This is not an equation that solves, IMO.

But there’s a solution for it, I believe :wink:

@JimCollinson really love the “The Perpetual Web: Editing” screencast.

Clueless consumer question: do you know about codepen.io? Because this walk-through almost reminds me of that. It would also be fun if I could fork a website :sweat_smile:

2 Likes

If I can get Solid IDE working you’ll like that too. Bringing that together with Jim’s PW UI would be interesting :slight_smile:

Screenshot here

3 Likes

Can you please share some working versions so we can try to elaborate on it?

1 Like

Yeah, it’d be nice to get the ease of use of something like codepen. We’ve got to tackle a little more in terms of site structure, publishing workflow etc, which will be a lot of our focus, with the editing being quite simple to begin with.

Perhaps fork isn’t quite the right term, but yeah, it could be super simple to create a new draft site from an existing one and build upon it. Hurray for immutable data and the flexibility of the Name Resolution System! I can’t promise it from the off, but it should be totally doable.

10 Likes