Downsides to large network size

The benefits of a large network size are clear:

  • better security, because it’s harder to control a quorum of a close group
  • greater storage capacity, for obvious reasons

But what are the downsides?

Does network performance degrade as the network grows? This post says the average route length grows with network size, which seems to suggest performance will decrease. Is this true?

What are the downsides to having a large network (ie many vaults / nodes)?


The network will cache data as it passes through. So even as routes become longer, it will be cached at every point along the way, making it faster for everyone else after that.


From my understanding this is true for individual chunks (the packets of a chunk).

But for a large file, the benefit of parallel chunk retrieval is likely to compensate for this increased time.

For a small network, as opposed to a really large one:

  • individual chunk times will be quicker
  • the benefit of parallel chunk retrieval is minimal, since if the file is large enough some chunks could end up coming from the same vault
  • security etc. is lower than for a large network
  • caching has minimal benefit, since the caching node is likely to be close to, or even be, the node holding a copy of the chunk
  • small outages can “break” files while the outage is in effect. Data retrieval should mean this is not permanent

For a large network:

  • individual chunk times will be slower due to the increased number of hops (longer routes)
  • full benefit from parallel chunk downloading
  • higher security etc.
  • caching will compensate more than on small networks, since it saves more time on chunk retrieval
  • small outages are usually of no consequence; it requires a very much larger one to make files “broken” while the outage is occurring

For small files (maybe < 3 chunks) the larger network will definitely be slower to retrieve them (unless cached). BUT for larger files the parallel retrieval will mean that the delay is only for the first 1 or 2 chunks and the rest will very likely be waiting to be shuffled down your internet link.
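As a rough sketch of why parallel retrieval hides route latency for large files (the hop count, per-hop latency and per-chunk transfer time below are made-up illustrative values, not measured figures):

```python
def retrieval_time(n_chunks, hops, hop_latency_s, chunk_transfer_s):
    """Toy model: with all chunks requested in parallel, total time is
    roughly the routing latency of the first chunk plus the serialized
    transfer of every chunk down the client's own link (the bottleneck)."""
    return hops * hop_latency_s + n_chunks * chunk_transfer_s

# 30-hop route, 50 ms per hop, 200 ms to transfer each chunk locally
small_file = retrieval_time(1, 30, 0.05, 0.2)    # latency dominates
large_file = retrieval_time(100, 30, 0.05, 0.2)  # transfer dominates
print(small_file, large_file)
```

In this toy model a 100-chunk file takes nowhere near 100 times as long as a 1-chunk file, because the route latency is paid only once.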

Caching will be the wild card since it can make popular files seem like they are residing on a small network no matter the size of the network.


Perhaps as the network grows, more copies of a chunk could be stored. This would increase the density and would likely reduce the latency to access said chunk (as statistically, the data will be geographically closer).

Remember that it’s the hops that slow things down. Even if the chunk came from next door, it could still bounce around the globe a couple of times.


The general idea is to have fast GETs and secure PUTs. Splitting chunks into small messages of max. 20 KB is to secure PUTs, but GETs will be as fast as possible. You send messages with the hash of the data you want to retrieve and wait for the fastest response to choose the node.
Of course, in a larger network we need more hops on average (~one extra hop each time the network doubles in size), but the probability of finding a fast enough path remains very high even at very large network sizes.
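A minimal sketch of that scaling, assuming the rule of thumb above that the average route length grows by about one hop per doubling of the network (i.e. ~log2 of the node count):

```python
import math

def avg_hops(n_nodes):
    """Rule-of-thumb estimate: average XOR-route length grows
    ~log2(n_nodes), i.e. one extra hop each time the network doubles."""
    return math.log2(n_nodes)

# A 1000x larger network adds only ~10 hops
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} nodes -> ~{avg_hops(n):.1f} hops")
```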


Indeed - it is always important to remember
XOR space is NOT Euclidean space

Thanks for the responses.

To summarise and ensure my understanding is correct, the downsides to a large network size are:

  • each chunk is slowed due to increased number of hops between the data source and destination (mainly noticed for small files since large files can operate on chunks in parallel)

All other points indicate solutions to this rather than downsides :slight_smile: ie

  • security = improved for large network
  • caching = improved performance for popular large files on large network
  • parallel chunk operations = improved performance for large files regardless of network size
  • outages = reduced likelihood of “broken” files on a large network

To rephrase the original question: are there downsides to the network being composed of many tiny vaults? Specifically, downsides that are not already stated above.

(I’m leading into a discussion about the incentive structure of safecoin for those wondering why this seems so semantically pedantic)


The more significant downsides I see, technically, in a near perfect world (no greed, no overloaded connections, etc.) are:

  • more processing power is needed
  • more electricity to do the processing
  • increased hop count due to the larger network size. From memory it’s proportional to the log of the node count; if non-tiny vaults mean 10 million vaults/nodes, then a 100-fold increase for tiny vaults makes it roughly 9/7 times the (average) number of hops
  • vault performance requirements need to be reduced, which decreases individual chunk download speed across the network
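The 9/7 figure above can be checked against the log rule of thumb (using the post’s hypothetical 10 million nodes growing 100-fold):

```python
import math

def hop_ratio(n_before, n_after):
    """Ratio of average hop counts when the node count grows,
    assuming hops scale with log2 of the node count."""
    return math.log2(n_after) / math.log2(n_before)

# 10 million non-tiny vaults -> 1 billion tiny vaults (100x more)
print(round(hop_ratio(10_000_000, 1_000_000_000), 3))  # ~1.286, i.e. ~9/7
```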

Upsides, due to the increased node count, could be:

  • increased security
  • increased transaction rate
  • increased caching

Now move to a more real world, with greed, human nature, and crappy internet connections for a lot of people.

Additional downsides:

  • overloading of links (cheating to get more vaults on each link). People find ways to get a couple more vaults on a link with a bypass that fools node ageing into thinking each node is slightly better than it is, which allows one or two extra nodes. This equates to even less network performance
  • greed drives nodes to be as small as feasible, due to the perception that this will maximise coin earnings
  • increased wastage of electricity and associated resources. Bitcoin waste, here we come :slight_smile:
  • more nodes go offline at once, due to each person having multiple nodes instead of 1 or 2
  • people don’t own 10-core PCs with 128 GB, so they grab their old machines to be able to have more vaults running (there is some limit to the number of nodes a processor can run)

In my estimation, if we allow tiny vaults, with the performance loss needed to fit more nodes/vaults on a link, then greed will kick in and people will understandably push it to the limit. I expect they will do so with whatever minimum performance level the network allows. But if tiny vaults are encouraged, then this pushing of the limit will magnify their disadvantages.


I don’t think this is just perception. The sigmoid curve planned for farming will mean that people with a few extra TB of storage space (I’m making the assumption that that will be above the “norm”) would be silly not to run several smaller vaults if profit is a motivating factor.


I agree, but my thought was about tiny tiny vaults: the increased earnings will not keep growing as vault size keeps shrinking. There will be a point at which the increased coin earnings are at best theoretical and at worst self-defeating.

Such things as link speed mean you cannot compete against the person who does not overload their link. In this case, the more vaults/nodes you add, the less you earn. This can occur even when not pushing the limit, but simply by causing your response time for some chunks to double. For example, if you have 2 vaults and only one has a chunk request, then your link is at full speed; but if requests occur for both vaults at the same time and your link is not fast enough, then both chunks are slowed and the chance of a reward is reduced.

[quote=“wes, post:10, topic:13339”]
will mean that people with a few extra TB of storage space (I’m making the assumption that that will be above the “norm”) would be silly not to run several smaller vaults if profit is a motivating factor.
[/quote]

This cannot be taken in isolation from the other factors like link speed, RAM and processing power. There are diminishing returns and, as demonstrated above, it becomes self-defeating.

Thus my use of “perceived”. Perceived can be true in some circumstances and not in others. There will be a sweet spot, but if the person does not account for that then they could go too far with the perception that more and more vaults is better.


I do see what you’re getting at now. I was not taking the line of thought out to that extreme, but it makes perfect sense. People may push the limits to what “the paper says works better” without taking reality into account.

I’m not sure how deploying many tiny vaults increases earnings though. Looking at the sigmoid, tiny vaults would be a write-off since their reward ratio would be close to zero.
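To illustrate with a generic logistic curve (the actual farming-rate parameters in the safecoin RFC are not reproduced here, so the shape and steepness below are assumptions, not the real formula):

```python
import math

def reward_ratio(vault_size, network_avg, steepness=5.0):
    """Hypothetical sigmoid reward curve: vaults far below the network
    average earn almost nothing, vaults around the average earn ~half,
    and the curve saturates above it. Illustrative only."""
    x = vault_size / network_avg
    return 1.0 / (1.0 + math.exp(-steepness * (x - 1.0)))

print(round(reward_ratio(1, 100), 3))    # tiny vault: near zero
print(round(reward_ratio(100, 100), 3))  # at the average: 0.5
```

On any curve of this shape, a vault at 1% of the network average is effectively a write-off.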

But I do see tiny vaults being useful to farmers as a ‘dial’ to try to manipulate the network average and get whatever large vaults they have into the ‘sweet spot’ at 20% above average.

Am I understanding this correctly?


I thought he was talking about just the “smallest you could run on a budget” type tiny vaults.

Along the lines of “I have this raspberry pi, a USB hub and 15 left over old school thumb drives… Looks like I can run 15 vaults for nearly free and make some money. Better than no money”

Or a little less absurd is buying/finding/scavenging a boatload of 25-100GB drives for dirt cheap and making them all vaults. Basically “what is the cheapest I can get storage space and still make money from the network” Whatever that point is, people will try and go there.


Nor do I. Isn’t this what I said?

Are we even following/programming the sigmoid curve? I don’t see any suggested code in the RFC for safecoin to show that. Maybe, as a consequence of the analysis I gave above about diminishing returns, it will follow the curve in the whitepaper. I suspect it will follow a different sigmoid curve, with one parameter set for the small end and another for the large. In other words, the reason the lower end tapers off is different from the reason the upper end tapers off.

I only see that happening if they act together. If a few (<10%, say) decide to try this, then in my opinion it will have minimal effect. Partly because people simply will not have really large vaults, and it becomes a case of the tiny vault only being able to shrink until it hits the “resource/performance” proof that node ageing (or whatever) demands of nodes/vaults.

It’s more a case that people will have small vaults if they can have multiple vaults/nodes on their internet links. As far as I can see, for the majority the internet link will determine the number of nodes/vaults they can have, and the resources the person has will then determine the size of those vaults.

In effect, the possibility for people to manipulate with “tiny” vaults will be minimal and only really available to those with the resources and link speeds (ie those on Gbit/s links and plenty of h/w). And I don’t see those being able to manipulate the “curve” by a significant amount. So is it worth it to them to “waste” resources trying, rather than just maximising their vaults?

I would expect that once we can obtain stats from the nodes/vaults (as promised), we will have APPs that can tell how well the vaults are performing and suggest new sizing parameters to help maximise coin earnings.

Please note: when I see the term “tiny vault” I take that as grossly under the average, like 1/100th to 1/10,000th the size. Maybe a few chunks’ worth, up to a GB.


Node age actually helps a lot here. As a node grows it ages (after several churn events); eventually it will be allowed to store data and get rewards. However, if it cannot store enough data (an amount which increases with network age) then it is penalised (killed, and its age halved). So smaller vaults risk never ageing enough to get any reward.

I just throw this in to help the conversation (which I am enjoying). Several factors are at play before safecoin, to ensure vaults both behave and are capable enough; when they are capable enough they earn safecoin, but like infants they start with little and prove themselves over time.