Increasing the depth from 10 to 100 would mean ten times more previous proofs are required to generate each successive proof.
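As a rough illustration of what depth controls, here is a toy sketch (my own, not the prototype's code; all names are assumptions): each new proof hashes `depth` randomly chosen prior proofs, so recreating it later requires those dependencies to be available.

```python
import hashlib
import random

def make_proof(prior_proofs, depth, seed):
    # Illustrative only: pick up to `depth` prior proofs as dependencies
    # and fold them into this proof's hash. Losing any dependency means
    # it must be regenerated before this proof can be reproduced.
    rng = random.Random(seed)
    k = min(depth, len(prior_proofs))
    deps = sorted(rng.sample(range(len(prior_proofs)), k)) if k else []
    h = hashlib.sha256(seed.to_bytes(8, "big"))
    for i in deps:
        h.update(prior_proofs[i])
    return h.digest(), deps

# Build a small chain of 50 proofs at depth 10.
proofs, dep_lists = [], []
for n in range(50):
    digest, deps = make_proof(proofs, depth=10, seed=n)
    proofs.append(digest)
    dep_lists.append(deps)
```

At depth 100 the same loop would wire each proof to ten times as many predecessors, which is what drives up the recreation cost measured below.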

This change is intended to make it much harder to cheat but not much harder to generate the initial proofs.

**1 GB**

Took 80 seconds (1m20s) for initial (honest) creation (compared to 22s for depth 10)

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 493 missing dependencies, 39.0 seconds to recreate (compared to 236, 4.1s for depth 10)

90% of proofs had 386 or fewer missing dependencies, 30.9 seconds to recreate (compared to 66, 1.15s for depth 10)

66% of proofs had 156 or fewer missing dependencies, 12.2 seconds to recreate (compared to 4, 0.12s for depth 10)
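The "missing dependencies" figures can be reproduced in spirit with a toy model (again a sketch under assumed names, not the real measurement code): keep every other proof and count, transitively, how many discarded proofs must be regenerated before a given one can be recreated.

```python
import random

def missing_to_recreate(idx, dep_lists, kept):
    # Count discarded proofs that must be regenerated, transitively,
    # before proof `idx` can be reproduced. A kept dependency costs
    # nothing; a discarded one must itself be rebuilt first.
    needed, stack = set(), [idx]
    while stack:
        for d in dep_lists[stack.pop()]:
            if d not in kept and d not in needed:
                needed.add(d)
                stack.append(d)
    return len(needed)

# Toy chain: 1000 proofs, each depending on up to 10 earlier ones.
rng = random.Random(0)
dep_lists = [rng.sample(range(i), min(10, i)) for i in range(1000)]
kept = set(range(0, 1000, 2))  # cheater keeps every other proof
costs = [missing_to_recreate(i, dep_lists, kept) for i in range(1000)]
```

The distribution of `costs` is the toy analogue of the percentile figures above: most proofs are cheap to recreate, a tail is very expensive.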

**10 GB**

Took 521 seconds (8m41s) for initial (honest) creation (compared to 183s for depth 10)

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 4740 missing dependencies, 223 seconds to recreate (compared to 788, 13.7s for depth 10)

90% of proofs had 3645 or fewer missing dependencies, 171 seconds to recreate (compared to 84, 1.48s for depth 10)

66% of proofs had 1450 or fewer missing dependencies, 68 seconds to recreate (compared to 5, 0.086s for depth 10)

**100 GB**

Took 6895 seconds (1h54m55s) for initial (honest) creation (compared to 2452s for depth 10)

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 45791 missing dependencies, 3910 seconds to recreate (compared to 1802, 35.5s for depth 10)

90% of proofs had 34215 or fewer missing dependencies, 2470 seconds to recreate (compared to 78, 1.5s for depth 10)

66% of proofs had 13052 or fewer missing dependencies, 585 seconds to recreate (compared to 5, 0.093s for depth 10)

**What does it mean?**

Cheating has become much harder, but initial proof creation only a little harder. That's a nice characteristic.

Initial generation time is a bit slower, taking about 3x longer when depth was increased by 10x. This is a reasonably good result, since depth is only meant to control how many prior proofs must be retained to avoid cheaters, not how long it takes to initially create proofs. Ideally increasing depth shouldn't add *any* time to the initial generation, but this is real life and increasing security comes at some cost.

Cheating becomes much harder, which is the intended purpose of increasing depth. Recreating the missing proofs becomes disproportionately more expensive as the total number of proofs increases. A 10x increase in depth created up to 1000x more work for a cheater (in this particular scenario).

In this case, the benefit of increasing depth from 10 to 100 is much greater than the cost, so overall it seems like a positive thing to do.

How should a value for depth be chosen? Increasing it further will eventually yield almost no additional protection against cheating. It's a tradeoff: we want to keep depth low so initial generation doesn't take too long, but high enough that cheating by discarding proofs is very hard.

**What's the point?**

Zooming right out, recall why proof of storage is useful to SAFE.

Knowing how much spare space exists allows the network to manage stress better.

The proof of storage mechanism is like a sensor, providing additional knowledge to the network, which can be used to optimize the control mechanisms (such as safecoin creation, storage pricing and new vault disallow rules) and reduce stress on the network.

This prototype algorithm would cost less (for both the user and the network) than the sacrificial chunks idea because it uses almost no bandwidth.

It's good to consider whether the stress can be managed without the extra work required for proof of storage.

It's hard to say whether the overall benefit of any proof of storage mechanism is greater than the overall cost.

The cost this particular algorithm imposes seems low enough that it could be considered usable for the problem (i.e. it's not an impossible problem to solve).

I'm still undecided about proof of storage vs 'let the farmers worry about it'.

Yeah, the unluckiest proof is 'extremely' unlucky compared to the merely 'very' unlucky worst 10%.

This is where increasing the difficulty (more iterations of the signing operation or the hashing operation) allows the time to recreate missing proofs to be raised enough to be meaningful, along with a suitable depth value as explored in this post.
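One way to picture that difficulty knob (a sketch with assumed names, not the prototype's actual primitive): chain the hash so every recreation pays a tunable number of iterations.

```python
import hashlib

def slow_hash(data, iterations):
    # Tunable work factor: repeat the hash `iterations` times so each
    # recreated proof costs proportionally more CPU time. This cost
    # multiplies with the missing-dependency counts measured above.
    digest = data
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest
```

Doubling `iterations` roughly doubles the cost of every recreated dependency, so it scales the cheater's per-proof penalty directly while the honest creator pays it only once per proof.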