Proof Of Storage prototype

**1 GB**

- Initial (honest) creation took 22 seconds (0m22s).
- After discarding the 50% of proofs with the lowest dependency counts:
  - Hardest proof to recreate: 236 missing dependencies, 4.1 seconds to recreate.
  - 90% of proofs had 66 or fewer missing dependencies, 1.15 seconds to recreate.
  - 66% of proofs had 4 or fewer missing dependencies, 0.12 seconds to recreate.

**10 GB**

- Initial (honest) creation took 183 seconds (3m3s).
- After discarding the 50% of proofs with the lowest dependency counts:
  - Hardest proof to recreate: 788 missing dependencies, 13.7 seconds to recreate.
  - 90% of proofs had 84 or fewer missing dependencies, 1.48 seconds to recreate.
  - 66% of proofs had 5 or fewer missing dependencies, 0.086 seconds to recreate.

**100 GB**

- Initial (honest) creation took 2452 seconds (40m52s).
- After discarding the 50% of proofs with the lowest dependency counts:
  - Hardest proof to recreate: 1802 missing dependencies, 35.5 seconds to recreate.
  - 90% of proofs had 78 or fewer missing dependencies, 1.5 seconds to recreate.
  - 66% of proofs had 5 or fewer missing dependencies, 0.093 seconds to recreate.
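For anyone who wants to poke at these numbers, here is a rough sketch of how this kind of measurement could be reproduced. It is not the prototype code: the dependency structure (each proof depending on a handful of random earlier proofs), the constants, and the function names are assumptions made purely for illustration. It just shows how "discard the 50% of proofs with the fewest dependencies, then count what has to be recomputed to answer a challenge" can be measured.

```rust
// Hypothetical sketch, not the actual prototype: simulate a cheater who
// discards the 50% of proofs with the fewest dependencies, then count how
// many discarded proofs must be recomputed to answer a challenge on each
// proof. The dependency structure is an assumption made for illustration.

const NUM_PROOFS: usize = 10_000;
const MAX_DEPS: usize = 8;

// Tiny deterministic PRNG so the sketch needs no external crates.
struct XorShift(u64);
impl XorShift {
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
    fn below(&mut self, n: usize) -> usize {
        (self.next() % n as u64) as usize
    }
}

fn main() {
    let mut rng = XorShift(0x9E37_79B9_7F4A_7C15);

    // Each proof depends on a few random earlier proofs (assumed structure).
    let deps: Vec<Vec<usize>> = (0..NUM_PROOFS)
        .map(|i| -> Vec<usize> {
            if i == 0 {
                return Vec::new();
            }
            let n = rng.below(MAX_DEPS.min(i) + 1);
            (0..n).map(|_| rng.below(i)).collect()
        })
        .collect();

    // Cheater discards the 50% of proofs with the fewest direct dependencies.
    let mut order: Vec<usize> = (0..NUM_PROOFS).collect();
    order.sort_by_key(|&i| deps[i].len());
    let mut kept = vec![true; NUM_PROOFS];
    for &i in order.iter().take(NUM_PROOFS / 2) {
        kept[i] = false;
    }

    // Cost of answering a challenge on each proof, measured as the number
    // of discarded proofs that must be recomputed first.
    let mut costs: Vec<usize> = (0..NUM_PROOFS)
        .map(|i| missing_dependencies(i, &deps, &kept))
        .collect();
    costs.sort_unstable();

    println!("hardest proof:   {} missing dependencies", costs[NUM_PROOFS - 1]);
    println!("90th percentile: {} missing dependencies", costs[NUM_PROOFS * 90 / 100]);
    println!("66th percentile: {} missing dependencies", costs[NUM_PROOFS * 66 / 100]);
}

// Walk the dependency graph and count discarded proofs that must be rebuilt
// before `start` can be reproduced.
fn missing_dependencies(start: usize, deps: &[Vec<usize>], kept: &[bool]) -> usize {
    if kept[start] {
        return 0; // still stored, nothing to recreate
    }
    let mut seen = vec![false; deps.len()];
    seen[start] = true;
    let mut stack = deps[start].clone();
    let mut missing = 0;
    while let Some(i) = stack.pop() {
        if seen[i] {
            continue;
        }
        seen[i] = true;
        if !kept[i] {
            // Discarded dependency: must be recomputed, which in turn needs
            // its own dependencies as inputs.
            missing += 1;
            stack.extend(deps[i].iter().copied());
        }
        // Kept dependencies are already on disk, so recursion stops there.
    }
    missing
}
```

The timings in the figures above come from actually recomputing the proofs, which this sketch skips; it only counts how many proofs would need recomputing.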

Sure, I agree it’s not worth cheating 1 GB at face value, but the point is to get a sense of the algorithm’s characteristics. I’ve sorta assumed the magnitudes scale linearly (which in theory they should, and the figures above indicate they do), so cheating 1 GB can be pretty accurately extrapolated to any size. I can see how that assumption is not at all obvious from the previous posts though.
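To make the linear-scaling point concrete, the creation times above work out to roughly the same cost per GB at every size. A trivial back-of-the-envelope check, using only the figures already posted:

```rust
// Quick check of the linear-scaling assumption using the creation times
// measured above (a back-of-the-envelope calculation, nothing more).
fn main() {
    let measurements = [(1.0, 22.0), (10.0, 183.0), (100.0, 2452.0)]; // (GB, seconds)
    for (gb, secs) in measurements {
        println!("{:>5} GB -> {:.1} s/GB", gb, secs / gb);
    }
    // Roughly 18-25 s/GB throughout, so extrapolating the 1 GB behaviour
    // to larger vaults looks reasonable.
}
```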

I’ll see if I can crack out my Raspberry Pi or Pine64 and do a test. Ideally the results shouldn’t be too far off a desktop computer’s, since it’s meant to test space rather than CPU, right?!

Good idea. It would need to be tested to know how much it improves things, but I imagine it serves a similar purpose to the ‘depth’ parameter, except depth works at the test level rather than the generation level, so the test idea is a good extra thing to have. I’ve mainly focused on the generation part of the algorithm rather than the testing part, but you make a good point that the test design can also be used to improve robustness against cheaters. Changing the test parameters is also a lot more flexible than changing the underlying algorithm (which would require a full regeneration for every vault), so that’s another advantage.


I also came across Burst today, which uses Proof-of-Capacity (prior thread). There’s some info about how they create proofs and how proofs are used for mining. There’s a fast POC miner written in Rust called Scavenger! Might have to try it out. They currently have about 250 PB on their network (source); the peak was almost 400 PB. Mining started in Sep 2014 at 5 TB and within two months grew to about 5 PB, where it stayed for about two years. Hard to imagine that capacity staying with Burst instead of SAFE.

I’d say POC in its current form is not a good algorithm for SAFE because it relies on the Shabal hash, which would be a waste; the algorithm may as well use the crypto primitives needed for regular network activity so that any hardware optimizations have multiple benefits. But I’m going to look at Burst and POC in more detail because it’s out in the wild and seems to be proven to work. It might be easy to change the hash algorithm and keep the rest of the process if it’s robust.
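To illustrate what “change the hash algo and keep the rest of the process” could look like, here is a minimal Rust sketch where the proof construction is generic over a digest trait, so the primitive is a plug-in rather than baked in. It assumes the RustCrypto `sha2` and `sha3` crates, and `make_proof` is a made-up function purely for illustration (not Burst’s actual plot format, and not the prototype above).

```rust
// Sketch of keeping the proof process hash-agnostic: the proof construction
// is generic over the RustCrypto `Digest` trait, so Shabal could be swapped
// for whatever primitive the network already uses without touching the rest
// of the process. Illustrative only; not how Burst actually builds plots.

use sha2::{Digest, Sha256};
use sha3::Sha3_256;

// Build one "proof" from a chunk of stored data and a nonce, using whichever
// hash function the caller chooses.
fn make_proof<D: Digest>(chunk: &[u8], nonce: u64) -> Vec<u8> {
    let mut hasher = D::new();
    hasher.update(chunk);
    hasher.update(nonce.to_le_bytes());
    hasher.finalize().to_vec()
}

fn main() {
    let chunk = b"some stored data";

    // Same process, different primitives: only the type parameter changes.
    let p_sha256 = make_proof::<Sha256>(chunk, 42);
    let p_sha3 = make_proof::<Sha3_256>(chunk, 42);

    println!("sha2 proof: {} bytes", p_sha256.len());
    println!("sha3 proof: {} bytes", p_sha3.len());
}
```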
