Sorry if this has already been covered. I saw a couple other topics like this but they seemed to be more about the network’s coping mechanisms.
If a solar flare instantly fries 10% of the nodes on the network, or some timebomb virus spreads throughout the world and scrambles 10% of the vaults at the same time, would this destroy a huge proportion of large files?
Please correct this maths if necessary…:
Typical futuristic video files: 1 GB
Number of chunks in a video file: 1000
Probability of all copies of a given chunk going down (assuming 4 instances of each chunk and 10% network destruction): 0.1^4=0.0001
Probability of all copies of a given chunk staying available: 0.9999
Probability of all copies of all chunks staying available: 0.9999^1000=0.9048
Proportion of futuristic video files on the network that become fully unreadable: 1 - 0.9048 = 9.52%
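The maths above can be checked with a short script. This is just a sketch of the same model (4 copies per chunk, 1000 chunks, 10% of nodes lost simultaneously, copies placed on independent nodes), not a description of how the network actually places chunks:

```python
# Model from the post above: a file is lost if any chunk loses all its copies.
# Assumes copies sit on independent, uniformly random nodes.

def file_survival(loss_fraction: float, copies: int, chunks: int) -> float:
    """Probability that every chunk of the file keeps at least one copy."""
    p_chunk_lost = loss_fraction ** copies   # all copies of one chunk are gone
    p_chunk_ok = 1.0 - p_chunk_lost          # at least one copy remains
    return p_chunk_ok ** chunks              # all chunks must survive

survive = file_survival(0.10, 4, 1000)
print(f"file survives:   {survive:.4f}")     # ~0.9048
print(f"file unreadable: {1 - survive:.2%}") # ~9.52%
```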
I’m guessing there’s no way to stop data being lost in catastrophic events. But if I understand this right, a user could reduce their risk of completely losing a video file by splitting it into multiple parts before uploading it to the network, which seems counter-intuitive.
It is OK, that is what we do. So imagine 4 copies, likely on different continents, existing at any one time.
Network partitions are a common theme of thought experiments. There is a lot to discuss there, such as: if you split off from the network into section A and I am in section B, you cannot double spend there as you are not in B; but if you were, or any vault was, the network will recombine.
It's a huge area and we will go much deeper into it over time. Right now we won't though, as partitions are secondary to launch, but don't get me wrong, it is a big issue. The key is that partition of consumers and creators happens in pairs. I will explain much more as we go along, but think of this pairing: if I have a signed safecoin, for instance, you cannot use it in a partition unless I sign the transfer, so if I can see you then you can see me, and vaults will be in the same boat.
You will hear experts talk about CAP theorems etc.; to me it is a small issue if the system is naturally designed. It is a huge drain though to go right into it right now, so perhaps "cop out" is my label for the moment, but it does not present an issue of worry to me.
The 4 copies are already taken into account in his maths, and 10% of the nodes are wiped out (the location doesn't matter). @to7m's demonstration seems correct and leads to the conclusion that 9.52% of 1 GB files are impacted.
Btw, has there not been a proposal for 8 copies?
That would bring us down to 0.001% of 1 GB files, a more acceptable level. But then again, the likelihood of 10% lost nodes needs more substantiation. That is an essential part of comparing the costs of the various alternatives.
Not sure how the data traffic scales with each doubling of the number of copies, but probably quite a lot.
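To put numbers on the 4-vs-8 comparison, here is the same toy model run at several replication factors. The 10% simultaneous loss and the 1000-chunk file are the assumptions from the earlier posts, not network constants:

```python
# Same independent-placement model as the OP, varying only the copy count.

def file_loss(loss_fraction: float, copies: int, chunks: int = 1000) -> float:
    """Probability that at least one chunk of the file loses all copies."""
    return 1.0 - (1.0 - loss_fraction ** copies) ** chunks

for copies in (4, 6, 8):
    print(f"{copies} copies: {file_loss(0.10, copies):.6%} of such files lost")
# 4 copies -> ~9.5%, 6 copies -> ~0.1%, 8 copies -> ~0.001%
```

Storage (and so the replication traffic needed after a failure) should grow roughly linearly with the copy count, so going from 4 to 8 copies costs about 2x the space for roughly a 10,000x drop in loss probability under this model.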
Perhaps this is something which should be driven by the client requirements. If twice the price could be paid for twice the number of copies, people could balance their costs vs redundancy requirements.
I suspect this would add additional complexity, but could be something for the future.
Well, I thought that only the test networks (alpha) were limited to 4 copies, and always assumed that the live network's minimum was increased from 6 to now 8. Also, archive nodes will make a big difference to this "wipeout" scenario.
While nodes can be wiped out, the archive nodes will be storing the chunks on permanent media, and these only need to be brought back online. Thus I do feel we need the "archive node" from very early on.
True, but I think it's a good thing to consider 10% disappearing.
Now, for this to be a problem, the nodes have to die within a very short length of time. This, to me, is the hardest thing to model and estimate the effects of.
For instance, if it was a solar event, then the 10% might be spread over a 24 hour period and almost nothing would be lost. If it was a zero day virus that hit at the one time globally, then you might see a window of 15 minutes due to PC clock variance. If a natural disaster, then it might be over a few days that these 10% or even more go dark.
Now, if it's over more than a few minutes, shouldn't the network be busy making new copies of the chunks as each vault dies?
Then, if the event does not destroy the disk on every one of the computers, it would be possible that some of the affected computers could have their vaults brought back online and thus restore those lost chunks. Even if 1/5 of the vaults could be brought back online, the loss is even less.
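As a rough illustration of how much partial recovery helps: assume 1/5 of the failed vaults come back, so the effective simultaneous loss drops from 10% to 8%. The numbers below use the same independent-placement model as the OP:

```python
# Effective loss fraction after partial recovery, same 4-copy / 1000-chunk model.

def file_loss(loss_fraction: float, copies: int = 4, chunks: int = 1000) -> float:
    """Probability that at least one chunk of the file loses all copies."""
    return 1.0 - (1.0 - loss_fraction ** copies) ** chunks

print(f"no vaults restored:   {file_loss(0.10):.2%}")  # ~9.52% of files lost
print(f"1/5 of them restored: {file_loss(0.08):.2%}")  # ~4.01% of files lost
```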
So, for the analysis in the OP to represent what will happen, we have to:
have 4 copies only
have the event cause all 10% to die within a few minutes (or even 2 minutes)
have the computers that die be impossible to restore
It is most likely that the event will occur over a period of time that allows new copies of the chunks in the dying vaults to be copied to other vaults. If it's an event that doesn't allow this, then there is a chance that some of the vaults can be brought back online. Then, in the very slim chance that none of this happens, yes, if only 4 copies of each chunk are kept, we will lose some large files. But this would be a rare occurrence. And knowing that the final system is to have 8 copies of chunks makes the situation even less disastrous. Even to get all of them to die within 2 minutes is so unlikely it's hard to imagine it happening globally; the event will more likely happen over a period of time >15 minutes.
Oh, another error in the OP's probability is that the equation assumes a particular chunk can have multiple of its 4 copies in the one vault; it does not make that distinction. While this may create only a minor difference, it would still need to be included for accurate results.
The fact that one vault can’t contain multiple copies of the same chunk doesn’t factor into it. It would only make a difference if the attack is directed at a safecoin farming company with a low number of vaults.
With more copies per chunk, the risk obviously does go down a huge amount (each extra copy roughly multiplies the per-chunk risk by the number of lost nodes divided by the total number of nodes), but I think my point isn't that data could be lost (which is kinda obvious).
The only real problem I see is that the risk to a file grows with the file's size, and that it will end up being safer to split large files before uploading them. This means that security fanatics will rightly use a separate protocol to split their files into parts before uploading them, instead of letting the network alone do the chunking.
Of course I don’t know how the network works well enough to say that with any certainty. So my question now is: Is there any incentive for users to not automatically pre-chunk their large video files?
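The pre-splitting argument can be made concrete with the same toy model. Splitting a 1000-chunk file into ten 100-chunk parts doesn't change the chance that some chunk somewhere is lost, but a lost chunk then destroys only one part rather than the whole file (assuming the parts are independently readable, e.g. a video cut into ten clips):

```python
# Toy comparison: one 1000-chunk file vs ten independent 100-chunk parts,
# under 10% simultaneous node loss and 4 copies per chunk (as in the OP).

p_chunk_ok = 1.0 - 0.10 ** 4  # a chunk keeps at least one copy: 0.9999

whole_file_lost = 1.0 - p_chunk_ok ** 1000  # any chunk lost -> file lost
one_part_lost = 1.0 - p_chunk_ok ** 100     # any chunk of a part lost
expected_parts_lost = 10 * one_part_lost    # out of the ten parts

print(f"whole file unreadable:     {whole_file_lost:.2%}")     # ~9.52%
print(f"a given part unreadable:   {one_part_lost:.2%}")       # ~1.00%
print(f"expected parts lost of 10: {expected_parts_lost:.2f}") # ~0.10
```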
Currently the replication factor is 8 but as indicated above by @dirvine and @to7m and also in some other topics, 4 is the new planned value.
I think this is related to the number of elders in the data chain proposal. They also call this number group_size. I argued there for giving it another name, so that these numbers could be different, but in vain.
All that is needed is a careful attack in the media and on social networks, similar to the failed segwit2 attempt to fork the bitcoin network. The attackers can launch a competing network, pretend that it corrects some weaknesses of the original safe network, add that the latter is controlled by big corporations, and finally ask farmers to stop their vaults at a precise UTC date and time to switch them to their allegedly better network.
A solar flare would hit approximately 50% of the planet (i.e. I figure the summer day side would be worst). An EMP could wipe out the network and electrical infrastructure of an entire country. I would hope that the protocol will offer redundant copies of data spread geographically over and within each continent. I think Amazon S3 manages this by automatically making 4 to 8 copies of data within a single network (continental subregion), and the user is required to manually transfer their data to other regions of their choice if they want additional data safety.

The degree of redundancy doesn't necessarily need to be the same for all data, and the user would pay more for additional redundancy. Ideally one would want the data spread automatically, and the user would need to specify how many more copies above and beyond the system default they are willing to pay for. One might be able to construct an algorithm to place copies on opposite sides of the globe first. For example, let's say St. Nick has excellent internet access at the north pole. His first 4 copies get stored in a region within a 500 km radius of the north pole, while a second set of copies migrates to a similar-sized region centered on the south pole. As his need for added redundancy for the important data (like the naughty/nice list) rises, he could pay to have more copies spread over the global network until the two geographic regions overlap at the equator.

A naive approach might be to split the globe into predefined zones, like the patches of a soccer ball. Each zone would attempt to contain exactly the same data. Building equal infrastructure within each zone could be incentivised based on the capacity of existing infrastructure. For example, someone in Africa might get 10x the SafeCoin revenue for the same 1 TB added to the network until the infrastructure capacity between those regions has equalized… granted, this requires some type of geolocation information that can be trusted.
And what about the case where the whole planet is affected? Say a coronal mass ejection, or something on that scale that causes planetary-scale EM pulses and the like. Granted these are rare, but they do happen from time to time, and I'd hate to think we'd end up uploading the majority of human knowledge to the internet only to find out, oops, it's all erased the next day because of a natural disaster that took less than a few seconds and knocked us all back to the 19th century.
I don't find it hard to imagine it happening globally, as I was listening to a podcast on solar storms just yesterday. I'd rather prepare for the edge case of a global event that PROBABLY won't happen, or will happen rarely, than deal with a global collapse of civilization and/or the erasure of known human knowledge. Call me paranoid if you like, but I'd feel safer knowing it was dealt with than having the whole system fail when we get smacked by the proverbial windshield.
You have to consider the anonymity and security in this model too. Vaults do not report their geographic positioning to prevent communication tampering. Vaults are assigned randomly to look after data.
Increasing copies is something that could be done though. I suspect the initial PUT operation could define how many copies should be retained, but I agree that the defaults should be sensible.