What about a catastrophic event that wipes out millions of nodes?

As I understand it, the real scheme will be more complex than a flat 8 copies per chunk, more of a “there are these main chunks, but also these backup chunks, and these other kinds of chunks” arrangement, which makes the calculations a lot harder. But assuming 8 copies for simplicity…

Under the simplified system, if 50% of the network is destroyed, a chunk is lost only when all 8 of its copies are gone: 0.5^8 = 0.390625% chunk destruction, or 99.609375% chunk survival. That means a file consisting of 6 chunks has about a 2.3% chance of being wiped (1 − 0.99609375^6). Pretty good deal, but bigger files get a worse deal.
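
For anyone who wants to check the arithmetic, here’s a quick sketch. The 8 copies and the 50% destruction figure are just the simplifying assumptions from above, not the real network parameters:

```python
# Probability sketch for the simplified "8 copies per chunk" model.
destroyed_fraction = 0.5   # assumed fraction of the network wiped out
copies_per_chunk = 8       # simplifying assumption, not the real scheme

# A chunk is lost only if every one of its copies lands on destroyed nodes.
p_chunk_lost = destroyed_fraction ** copies_per_chunk   # 0.390625 %
p_chunk_survives = 1 - p_chunk_lost                     # 99.609375 %

# A self-encrypted file is unreadable if any one of its chunks is lost.
for chunks_in_file in (3, 6, 12, 100, 1000):
    p_file_lost = 1 - p_chunk_survives ** chunks_in_file
    print(f"{chunks_in_file:>5} chunks -> {p_file_lost:.2%} chance of loss")
```

Running it shows the “bigger files get a worse deal” effect: 6 chunks is around 2.3%, but a file of 100 chunks is already past 30%.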

Storing multiple copies of a video file would cost twice as much as pre-chunking it, and the file would still have a much higher risk of becoming completely unreadable :confused:
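
To put rough numbers on that, here is how I read it: “pre-chunking” meaning the client splits the video into independently-usable segments before upload, so losing one chunk only damages one segment instead of making the whole file unreadable, and the two full copies being treated as independent (no deduplication). Those are my assumptions, not how the network necessarily works:

```python
# Hypothetical comparison for a 6-chunk video, same simplified model as above.
p_chunk_lost = 0.5 ** 8          # 0.390625 % per chunk
chunks = 6

# Option A: upload the whole file twice (2x storage cost).
# Each full copy is unreadable if any of its 6 chunks is lost;
# the file is gone only if both copies are unreadable (assumed independent).
p_copy_unreadable = 1 - (1 - p_chunk_lost) ** chunks
p_lose_both_copies = p_copy_unreadable ** 2        # roughly 0.05 %

# Option B: pre-chunk into 6 independently-usable segments (1x storage cost).
# The file is completely unreadable only if every segment is lost.
p_lose_all_segments = p_chunk_lost ** chunks       # vanishingly small

print(f"two full copies: {p_lose_both_copies:.4%} chance of total loss")
print(f"pre-chunked:     {p_lose_all_segments:.2e} chance of total loss")
```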

I don’t think people will split their data themselves, because that’s a hassle; it will be every program that talks to the network that splits the data automatically. I must be missing something vital, because I can’t imagine why any program wouldn’t pre-chunk the data by default, making the network’s ability to store large files obsolete.
