Update May 12, 2022

3 will never == 4, but what is enough? At the moment it's 3, but until we are stable and testing more we won't know. It will come down to probabilities: how quickly we can replicate versus the size of the replication set. The more replicas, the larger the set and therefore the longer the replication time, which balances against the speed of churn. It's another interesting area where we always wish we had instantaneous data copy :smiley:
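The probability trade-off above can be sketched very roughly. Assuming (purely for illustration, these are not network parameters) that each of the k replica holders fails independently with probability p before re-replication completes:

```python
# Back-of-the-envelope sketch of the replica-count trade-off.
# p and k are illustrative assumptions, not Safe Network parameters.

def loss_probability(p: float, k: int) -> float:
    """Chance that all k independent replicas fail in the window: p ** k."""
    return p ** k

for k in (3, 4):
    print(f"k={k}: loss probability = {loss_probability(0.01, k):.0e}")
```

Each extra replica multiplies the loss probability by p, but also enlarges the set that must be copied on every churn event, which is exactly the tension described above.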

12 Likes

To add, an interesting idea here is:

- Have a large replica set (maybe the full section)
- Replicate data based on closeness to the address of the churned node

This gives us some leeway, in that it may tolerate a bit of error during churn before replication completes, since replication may not need to be totally complete.
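"Closeness" here is the XOR distance used by Kademlia-style networks. A minimal sketch of picking the nodes closest to a churned node's address (the addresses and k are made up for illustration):

```python
# Illustrative sketch, not Safe Network code: rank node addresses by
# XOR distance to a target address, the Kademlia-style closeness metric.

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_nodes(target: int, nodes: list[int], k: int) -> list[int]:
    """Return the k node addresses XOR-closest to `target`."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

nodes = [0b0001, 0b0100, 0b0111, 0b1100, 0b1110]
print(closest_nodes(0b0101, nodes, k=3))  # → [0b0100, 0b0111, 0b0001]
```

Real addresses are 256-bit names rather than small integers, but the ordering logic is the same.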

Anyhow you can see the interesting angle here.

7 Likes

If the first iteration of this (3 vs 4) turns out to be inadequate at some point after launch, what are the chances of a successful pivot to another number via upgrade down the road?

3 Likes

Of course, we’ll share anything fun here in the updates.

Thanks for the offer! I’ll keep an eye out for anything that may fit the bill. The inevitable DBC audit may be a good candidate.

9 Likes

On the 3 vs 4 discussion, more comes up than just the ability to tolerate failures. Through random selection, there's a not-insignificant chance that all 3 nodes could be located in a similar geographic region, or in regions far away from the person storing and accessing the data. Each additional replica significantly reduces the chance of high-latency traversal to access the data.
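The co-location chance can be estimated with a simple sum: if replicas are chosen uniformly at random, the probability that all k copies land in the same region is the sum over regions of (region share)^k. The region shares below are invented for illustration:

```python
# Rough estimate of all replicas landing in one region under uniform
# random selection. Region shares are hypothetical, not measured data.

def same_region_probability(shares: list[float], k: int) -> float:
    return sum(s ** k for s in shares)

shares = [0.4, 0.3, 0.2, 0.1]  # made-up node distribution across 4 regions
for k in (3, 4):
    print(f"k={k}: P(all replicas co-located) = {same_region_probability(shares, k):.4f}")
```

With these made-up shares the chance drops from 10% at k=3 to about 3.5% at k=4, which illustrates the point about each extra replica helping.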

6 Likes

Widening the 3 vs 4 discussion a little, I remember much being made of frequently accessed data being cached locally, and the more popular a download, the faster you were likely to get it. I had imagined "cached locally" would mean nodes not only storing the chunks they are assigned on new uploads but also transient, popular cached chunks.

This would demand a lot more than 3 copies of the chunks for the popular downloads. What is the current thinking on this? Or have I misunderstood?

3 Likes

This has been discussed in the past, but I am interested to hear the team's thoughts now that node age has been fleshed out more. It seems that eventually all elders will end up in data centers, simply because those are the only systems that can achieve the kind of uptime required to reach this status. Some data centers will likely focus a lot of resources on getting the maximum number of machines to this status, since higher node age == higher rewards. So not only will these data centers have a lot of elders, but also a ton of adults trying to get there. Say this happens in a country (or countries, here's looking at you, Five Eyes) that becomes hostile to Safe, and these data centers get taken down simultaneously before data can be relocated by the network. It seems you could get into a situation where the system couldn't recover if enough nodes are taken out at once.

I think this node age system is great for performance and rooting out bad nodes, since you'll always get the fastest, most stable nodes competing for elder roles, but it seems it will slowly become more centralized over time. Is there a concept floating around where elders "die" after a period of time and are reborn back at age 5 or some other value? Or maybe adults that behave properly but don't always respond in a timely manner (which would likely be home user nodes) acting as more of a backup-of-last-resort mechanism?

Another thought: having elders die after a period of churns could also be beneficial for the network. If you think of nodes like cells in a living organism, they need to die and be replaced. If it takes a couple of years for a node to reach elder status and it remains an elder for a year or so, that hardware is starting to show its age. If the clock resets, then the owner got their money's worth from that machine; time to slide a new system into the rack and start over. Will there be a way to migrate an elder "image" to a new machine?

4 Likes

This would open up, or at least make easier, a new attack surface though: selling your elder to a bad actor.

5 Likes

I think the effort towards keeping the secret key (sk) hidden from any node was meant to prevent key-selling attacks like the one you're mentioning. It was disabled temporarily a while back, but that was one approach.

Also, not sure how fleshed out this is, but in the past there have been discussions of trying to make high-end data centers less profitable than an at-home node. Not sure how that would be achieved, but the aim is not to discourage data centers altogether, rather to keep at-home nodes competitive for the benefit of decentralization.

6 Likes

Another open question is how a node with 1 Gbps will be rewarded compared to one with 1 Mbps.
If there is no benefit to cover the cost of the higher speed, who will run their node at maximum speed?

3 Likes

As long as nodes are "good enough" compared with their neighbours, then we are all good. Good enough does not mean fastest, and this is what we need to confirm, i.e. if clients are happy and the responses are good enough, then the network should be happy. Right now we are simply measuring nodes against each other, which is phase I. Phase II is allowing clients to "tell" us how good is good enough.

This is where we make it attractive to all nodes, not just data centres, at least I hope that is the case.

14 Likes

What you say is speculation at this point but let’s assume you are correct about this concentration in data centres. By the time Safe Network ends up in this scenario it is likely it would be in widespread use so it would be like taking down the whole internet to stop child porn.

4 Likes

Ah yes, that is a very good point. A bad idea indeed.

2 Likes

I think you're probably right. I base this on how centralized Bitcoin has become. Over time, the bigger players with the most resources became the largest contributors to the hash rate. If the profit incentives aren't balanced properly, Safe could end up in a similar situation.

3 Likes

But I like the idea of nodes having a limited life span. It has been discussed before a couple of times. Tried to search for it, but didn’t find the discussions. There were some pros and cons, that I don’t remember right now.

Another idea: could it be possible to have a donation address where you could donate SNT to the Network itself? Eventually that would just mean all the farmers, of course. I was just thinking that if the reason for killing off the first nodes launched by Maidsafe is to avoid hoarding the rewards, maybe those rewards could just be donated to everyone else? Then it would make sense to keep these good, honest nodes running.

2 Likes

As the network grows the significance of any early nodes diminishes so I doubt this is the way to handle things.

Better to work on preventing the kind of centralisation which @zettawatt is concerned about, or preventing such centralisation from being a problem. I suspect MaidSafe have thought a lot about this as the design evolves.

6 Likes

Datacenter centralization is inevitable, I guess. Even with all the randomization of data and section locations, and with latency working against it, I can imagine it roughly following the population density of developed countries. There are many datacenters around the globe in many different jurisdictions, so the risk should be lower than in Bitcoin mining, which centralizes around electricity prices.

Some virtualization platforms allow moving a running virtual machine between different servers without stopping it. Programs inside have no way to detect that they are suddenly running somewhere else.

9 Likes

Perhaps the cost of renting several hundred nodes may be a factor for Maidsafe, a (necessary) expense that needs to be reduced as soon as other qualified nodes are available.

2 Likes

Thx 4 the update Maidsafe devs

It’s always nice to read about how the network works.

Jippy! Finally we got a Safe Labs :clap: :clap: :clap: @davidrusu 4 exploring the cutting edge.

Would have been nice if there was a vid of this :nerd_face:

Keep hacking super ants :stuck_out_tongue_closed_eyes:

9 Likes