I started to wonder how the mathematics works when a bunch of nodes go offline. What is the chance that a file loses a chunk? How do the size of a file, the size of the network and the replication count affect the probability of survival?
From that thread I found a calculator by @to7m that can be used to calculate the survival rate of a file in different scenarios. If I understood it correctly, I got the following results, for example:
There is a 99.9% chance of no chunk loss:
- when 5% of the network goes offline in one instant,
- when the replication count is 5,
- and the chunk size is 0.5 MB,
- for a 1.6 GB file.
There is a 90% chance of no chunk loss:
- when 10% of the network goes offline in one instant,
- when the replication count is 5,
- and the chunk size is 0.5 MB,
- for a 5.26 GB file.
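If I assume the calculator treats chunks as independent and counts a chunk as lost only when all of its replicas happen to be in the offline fraction, both scenarios above can be reproduced with a few lines of Python. This is a sketch of my understanding of the model, not the calculator's actual code:

```python
import math

def file_survival(file_size, chunk_size, replicas, outage):
    """Probability that no chunk of the file is lost, assuming a chunk
    dies only if all of its replicas fall in the offline fraction, and
    chunks fail independently of each other."""
    chunks = math.ceil(file_size / chunk_size)
    p_chunk_lost = outage ** replicas
    # log1p keeps the computation accurate when p_chunk_lost is tiny
    return math.exp(chunks * math.log1p(-p_chunk_lost))

# 1.6 GB file, 0.5 MB chunks, 5 replicas, 5% outage -> ~0.999
print(file_survival(1.6e9, 0.5e6, 5, 0.05))
# 5.26 GB file, 0.5 MB chunks, 5 replicas, 10% outage -> ~0.900
print(file_survival(5.26e9, 0.5e6, 5, 0.10))
```

Both numbers come out very close to the calculator's 99.9% and 90%, so this simple model seems to match it.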
Are the above numbers and the calculations behind them correct? @dirvine, was it so that missing even one chunk renders the file useless in all cases? Or could it just be a glitch in a video, for example?
How do these numbers compare to other storage methods? It seems to me that quite a small number of nodes going offline can cause problems for bigger files.
This leads to a situation where a bigger number of chunks makes the file more vulnerable. I don’t know if the chunk size could vary based on the file size, to lessen the odds of the file becoming unusable. Maybe some chunks could be 10 or even 100 MB?
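To put rough numbers on the chunk-size idea: under the same simple model (a chunk is lost only when all of its replicas are offline, chunks fail independently), larger chunks mean fewer chunks and a better survival chance. For instance, for a 5.26 GB file with 5 replicas and a 10% outage (my assumed parameters, not anything official):

```python
import math

def file_survival(file_size, chunk_size, replicas, outage):
    # Independence model: file survives iff every chunk keeps >= 1 replica
    chunks = math.ceil(file_size / chunk_size)
    return math.exp(chunks * math.log1p(-outage ** replicas))

# Same 5.26 GB file, 5 replicas, 10% outage, different chunk sizes:
for chunk_mb in (0.5, 10, 100):
    p = file_survival(5.26e9, chunk_mb * 1e6, 5, 0.10)
    print(f"{chunk_mb:>5} MB chunks -> survival {p:.4f}")
```

With these assumptions the survival probability climbs from about 0.90 at 0.5 MB chunks to well above 0.99 at 10 MB chunks, so chunk size really does matter in this model.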
EDIT: Copied from the calculator table, sorry for the formatting:
| parameter | value |
| --- | --- |
| prognosis of file | 0.999 |
| sudden network outage | 0.01 |
| copies per chunk | 8 |
| chunk size (bytes) | 10000000 (10 MB) |
| prognosis per chunk | 1 |
| file size for 0.999 prognosis | 90.11 EB |
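For what it's worth, inverting the simple independence model reproduces the order of magnitude of that table row, though not the exact 90.11 EB, so the calculator presumably models the outage in a somewhat more refined way. A sketch under my assumptions:

```python
import math

def max_file_size(target, chunk_size, replicas, outage):
    """Largest file size (bytes) whose no-chunk-lost probability stays
    >= target, under the simple chunk-independence model."""
    p_chunk_lost = outage ** replicas
    # Solve (1 - p)^n >= target for the chunk count n; log1p avoids
    # rounding 1 - p to 1.0 when p is tiny (here p = 0.01**8 = 1e-16).
    max_chunks = math.log(target) / math.log1p(-p_chunk_lost)
    return max_chunks * chunk_size

size = max_file_size(0.999, 10e6, 8, 0.01)
print(f"{size / 1e18:.2f} EB")  # roughly 100 EB with these assumptions
```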
But it varies already, doesn’t it? Now even very small files get divided into three chunks.
I don’t recall what that recent change was - it didn’t use padding? Don’t recall, even though I remember asking about it - will have to dig up the response. Even so, that would only mean small files would have smaller chunk and so end up paying more presumably. If larger files have larger chunks … how do you determine how big and how much to pay and then what about node size - can’t be fixed maybe … lots of problems.
Yes, and there was recently a discussion where 0.5 MB was deemed somehow ideal. I don’t remember exactly why, but I’m sure data persistence was not taken into account. In that sense, raising the biggest chunk size to 1 MB or 2 MB would make a difference.
I tried to find a good discussion about all this, based on mathematical facts, but was not able to find one. I’m not sure if it has really been discussed, but I think now is the time, and I would like to see some folks more capable than me share their views. Paging @dirvine, @to7m, @mav.
Also, I remember the discussion being more about how to cope with large outages. I’m also interested in how even quite small outages can cause problems for big files.
I don’t know what the industry standards are; what is deemed reliable?
For example, the following calculation shows that even a 5% outage causes the survival rate to drop below 0.99999 for files bigger than 160 MB. But is that good or not? It would be good to get some graphs from folks with more knowledge.
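One way to sanity-check figures like that is to invert the model and ask how big a file can get before its survival probability drops below a given target. Using the same chunk-independence assumption as above (the actual calculator may differ; notably, with these parameters a 160 MB file comes out right at the four-nines level, 0.9999, rather than five nines):

```python
import math

def max_file_size(target, chunk_size, replicas, outage):
    # Largest n with (1 - outage**replicas)**n >= target, times chunk size
    p_chunk_lost = outage ** replicas
    max_chunks = math.log(target) / math.log1p(-p_chunk_lost)
    return max_chunks * chunk_size

# 5% outage, 5 replicas, 0.5 MB chunks:
for target in (0.99999, 0.9999, 0.999):
    gb = max_file_size(target, 0.5e6, 5, 0.05) / 1e9
    print(f"target {target}: up to ~{gb:.3g} GB")
```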
Chunks are information-theoretically secure, i.e. even a quantum AI could not crack them.
Yep, but not what I meant. I was inferring that deduplication saves space, but with AI and LLMs we have a new way to compress information that perhaps beats out deduplication, so self-encryption (which, as I understand it, allows dedup) isn’t needed in particular, and maybe that opens the door to something more forgiving in terms of data loss. But self-encryption & dedup have long been a promise of SAFE, so I doubt they’re going away.
How is that done? I thought the location of the chunk is random.
The key is, and so is the nodes’ placement. They are equally random.
> Yep, but not what I meant. I was inferring that deduplication saves space but with AI and LLMs we have a new way to compress information that perhaps beats out deduplication,
Yes, this is where I am with it all. However, until there is a perfect encoder (encoding data → vector database or similar), it’s arguable the raw data needs to be available.
I believe we will get to that point at some stage, but I don’t see many working on it. Although we can merge weights/parameters of differing models now, and with some help we could merge the weights of all models regardless of architecture.
The issue then will be quantization levels and the underlying algorithms; e.g. recently a new RNN outperformed an attention-based NN.
@Toivo this is an important aspect that is left out of your probability assessment, I think. It means not 5 copies, but 6 (with archive) or 7+ with cache, depending on how heavily cached. What happens if you change the copy number to 6 in the calculation?
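Taking the archive/cache point at face value, here is what the simple chunk-independence model gives when the copy count goes from 5 to 6 or 7, for the 5.26 GB / 10% outage scenario from earlier in the thread (my sketch; caching behaviour is surely more complicated than a flat copy count):

```python
import math

def file_survival(file_size, chunk_size, replicas, outage):
    # Independence model: a chunk is lost only if all replicas go offline
    chunks = math.ceil(file_size / chunk_size)
    return math.exp(chunks * math.log1p(-outage ** replicas))

# 5.26 GB file, 0.5 MB chunks, 10% outage, varying copy count:
for replicas in (5, 6, 7):
    p = file_survival(5.26e9, 0.5e6, replicas, 0.10)
    print(f"{replicas} copies -> survival {p:.4f}")
```

In this model each extra copy divides the per-chunk loss probability by another factor of the outage fraction, so going from 5 to 6 copies lifts the 90% figure to roughly 99%, and 7 copies to roughly 99.9%.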
That doesn’t lead to geographical distribution of every chunk
It means the probability of all chunks existing in one geography reduces as the network grows. The point I was making is that a 5% outage does not mean 5% of a file is gone.
If we get the math model right here, set parameters such as loss == perpetual loss and so on, then it will help.
So the probability, as you point out, that X can happen can be very low, but trying the same thing many more times increases the overall probability (your large-file example). It may be easier to focus just on the loss of a single chunk, regardless of any file size.
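The single-chunk framing also makes the scaling transparent: for a small per-chunk loss probability p, a file of n independent chunks is lost with probability roughly n·p, so in this model the file-loss probability grows about linearly with file size (same independence assumption as earlier in the thread):

```python
# Per-chunk loss probability with 5 replicas and a 5% outage
p = 0.05 ** 5            # 3.125e-7

# For a 1.6 GB file in 0.5 MB chunks (3200 chunks), compare the exact
# loss probability with the linear approximation n * p:
n = 3200
exact = 1 - (1 - p) ** n
approx = n * p
print(exact, approx)     # both ~1e-3
```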