Proof Of Storage prototype

As usual, thank you mav for presenting your ideas so succinctly and helping us focus on the important issues… lots of very interesting ideas here. However, I don’t think the timing of proofs is as appealing as the good old-fashioned “sacrificial chunks” from yesteryear…


I think the notion of spare or “free space” in the network is something that should be shied away from (just like the notion of free time :wink:). In other words, a network with lots of free space is a network that is not maximizing its use of available resources. Instead, why not use this excess capacity by filling it with many sacrificial chunks for maximum redundancy, rather than with special “proofs” or other objects whose only purpose is to check free space?

When a vault joins the network, the vault managers can begin sending it sacrificial chunks until some stopping criterion is reached or an out-of-space condition occurs and the vault cannot store any more. This proves the vault’s storage capacity to the network. Sourcing those sacrificial chunks also requires older nodes to prove their own storage is valid, and the number of sacrificial chunks serves as a distributed indicator of available network resources.

Assuming that 33% of the nodes are corrupt, the network needs a minimum of 12 copies of a chunk to ensure that 8 copies are valid (12 × 2/3 = 8). Giving a little extra margin, elders could simply refuse to let new vaults join while there are 16 or 24 copies of every primary chunk in a given section, which is an easy way to control the growth of the network as well as GET rewards and PUT costs. When new PUTs are made to the section, some of the sacrificial chunks get overwritten, which would trigger the section to accept new vaults once the total number of copies for some chunk drops below the 16-copy threshold. Sacrificial chunks have a lot going for them; a rough sketch follows below.
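To make the arithmetic and the join rule concrete, here is a minimal Rust sketch. Everything in it is a hypothetical illustration I made up for this post (the constants, the function names, the flat list of per-chunk copy counts); none of it is actual SAFE Network code.

```rust
// Hypothetical sketch of the copy-count arithmetic and the join rule above.

/// Fraction of nodes assumed corrupt (the 33% figure above).
const CORRUPT_FRACTION: f64 = 1.0 / 3.0;

/// Minimum total copies so that `valid_needed` honest copies survive:
/// ceil(valid_needed / (1 - CORRUPT_FRACTION)). For 8 valid copies this
/// gives ceil(8 / (2/3)) = 12, matching the figure above.
fn min_total_copies(valid_needed: u32) -> u32 {
    ((valid_needed as f64) / (1.0 - CORRUPT_FRACTION)).ceil() as u32
}

/// Elders accept a new vault only while some primary chunk in the section
/// still has fewer than `join_threshold` copies (16 here, comfortably
/// above the 12-copy minimum).
fn section_accepts_new_vault(copy_counts: &[u32], join_threshold: u32) -> bool {
    copy_counts.iter().any(|&n| n < join_threshold)
}

fn main() {
    assert_eq!(min_total_copies(8), 12);
    // A PUT overwrote sacrificial copies of one chunk (15 < 16), so the
    // section opens up to new vaults again.
    assert!(section_accepts_new_vault(&[16, 24, 15], 16));
    // Every chunk at or above the threshold: refuse new joins.
    assert!(!section_accepts_new_vault(&[16, 24, 16], 16));
}
```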

In my view, the network should never be idle. Section managers would continuously request chunks from vaults to verify that each chunk is being properly stored and managed, even if no human users were asking for them. The sacrificial chunks would continuously swirl and flow between the nodes in a section, and I would presume that any vault/node unable to deliver would quickly be found out and removed via the standard error-checking mechanisms already planned for the chunk-retrieval process. I think this process should be transparent to the vaults and look no different from standard user PUTs or GETs. Perhaps the automated verification rate could scale inversely with the rate of human GET/PUT requests, so that the user experience stays snappy while the network stays fully loaded? Perhaps there was a reason that sacrificial chunks fell out of favor that I am unaware of?
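Either way, here is a rough Rust sketch of the audit loop as I imagine it. The types (`Section`, `VaultId`, `ChunkName`) are entirely made up, and a simple rotation stands in for the random chunk/vault selection a real implementation would need so vaults cannot predict which chunk gets checked.

```rust
// Hypothetical sketch of a background audit loop; not actual SAFE Network code.
use std::collections::HashMap;

type VaultId = u64;
type ChunkName = u64;

struct Section {
    /// Which vaults are believed to hold each chunk (primary or sacrificial).
    holders: HashMap<ChunkName, Vec<VaultId>>,
    cursor: usize,
}

impl Section {
    /// Audit GETs to issue this tick: fill whatever capacity human traffic
    /// leaves unused, so the network is never idle but users never queue
    /// behind audits.
    fn audit_budget(&self, capacity: u32, user_requests: u32) -> u32 {
        capacity.saturating_sub(user_requests)
    }

    /// One audit GET, indistinguishable from a user GET from the vault's
    /// point of view. `fetch` returns true iff the vault served valid data.
    /// Returns the vault to evict via normal error handling, if any.
    fn audit_one(&mut self, fetch: impl Fn(VaultId, ChunkName) -> bool) -> Option<VaultId> {
        if self.holders.is_empty() {
            return None;
        }
        // Rotate through (chunk, vault) pairs; a real implementation would
        // sample randomly instead.
        let idx = self.cursor % self.holders.len();
        let (&chunk, vaults) = self.holders.iter().nth(idx)?;
        let vault = *vaults.get(self.cursor % vaults.len().max(1))?;
        self.cursor += 1;
        if fetch(vault, chunk) { None } else { Some(vault) }
    }
}

fn main() {
    let mut section = Section {
        holders: HashMap::from([(1, vec![10, 11]), (2, vec![10, 12])]),
        cursor: 0,
    };
    // Pretend vault 12 has silently dropped its chunks.
    let fetch = |vault: VaultId, _chunk: ChunkName| vault != 12;
    for _ in 0..4 {
        if let Some(bad) = section.audit_one(&fetch) {
            println!("evict vault {bad}");
        }
    }
    assert_eq!(section.audit_budget(100, 30), 70);
}
```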
