More of a philosophical question, but what does 99% faulty look like in reality? I feel there are some epistemological questions to ponder here.
Additionally, why is it 99% rather than 99.1%? The 1/3 BFT threshold has a clear derivation; 99% does not.
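For context, the 1/3 figure falls out of the classical BFT bound: a group of n nodes can only tolerate f Byzantine members if n >= 3f + 1. A minimal sketch of the arithmetic (nothing network-specific here):

```python
# Classical BFT bound: a group of n nodes tolerates at most f Byzantine
# members where n >= 3f + 1, i.e. f = floor((n - 1) / 3).
def max_byzantine(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3

for n in (4, 7, 10, 100):
    f = max_byzantine(n)
    print(f"n={n}: tolerates f={f} faulty nodes ({f / n:.0%})")
# n=4: 1 (25%), n=7: 2 (29%), n=10: 3 (30%), n=100: 33 (33%)
```

The tolerable fraction approaches 1/3 from below as n grows, which is why 1/3 is 'clear'. I haven't seen an analogous derivation that lands on exactly 99%.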
There are some nice off-topic components to address in this thread:
Only up to the point that the cpu / ram / network can support more vaults. Eventually the vaults get in each other's way and they'll stop being competitive / viable on the network. This is really interesting to consider, since it might start being almost like bitcoin mining where 'computation speed matters'. Typically we've intuitively considered bandwidth the primary bottleneck, but the 'many small vaults' concept may introduce other ones. More thought on an extremely large network with 'one chunk per vault' is warranted, and it would have implications for the less extreme idea of 'many vms and many vaults per machine'.
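To make that concrete, here's a back-of-envelope sketch of how the binding resource can shift away from bandwidth as vault counts grow. Every per-vault and per-machine number below is a made-up placeholder, not a measurement:

```python
# Which resource caps the number of vaults one machine can run?
# All values are illustrative placeholders, not measured costs.
PER_VAULT = {
    "ram_mb": 50,           # resident memory per vault process
    "cpu_pct": 0.5,         # average CPU % per vault
    "bandwidth_kbps": 100,  # sustained up+down traffic per vault
}
MACHINE = {
    "ram_mb": 16_000,          # 16 GB RAM
    "cpu_pct": 800,            # 8 cores = 800% CPU
    "bandwidth_kbps": 50_000,  # ~50 Mbps link
}

limits = {k: MACHINE[k] / PER_VAULT[k] for k in PER_VAULT}
bottleneck = min(limits, key=limits.get)
print(limits)  # max vaults allowed by each resource
print("binding resource:", bottleneck, "->", int(limits[bottleneck]), "vaults")
```

With these (invented) numbers the machine runs out of RAM at 320 vaults long before it saturates its 50 Mbps link, which is the kind of surprise the 'many small vaults' scenario could produce.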
The size of the network should be balanced, not pushing too small, not too large, but where’s the balance? How is it decided? How can it evolve and change in a useful way? Such a difficult and interesting question…
This is a question I've spent a lot of thinking time on, and I think it's possible to build a probabilistic model of the geographic distribution. But so far the overall answer seems to be a resounding 'no', for a lot of really practical (rather than theoretical) reasons.
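As one example of the kind of model I mean. It's purely a toy, assuming XOR addressing places replicas approximately uniformly at random, with a hypothetical replica count k:

```python
# Toy model: if a fraction p of all vaults sits in one location (one
# datacenter, one country, ...) and each chunk is replicated to k vaults
# chosen roughly uniformly at random, then for any single chunk:
#   P(every replica is in that location) ~= p**k
for p in (0.05, 0.20, 0.50):
    for k in (4, 8):
        print(f"p={p:.2f} k={k}: P(all replicas colocated) ~= {p**k:.2e}")
```

The per-chunk numbers look comfortingly small, but multiply them by billions of chunks and some chunks will end up fully colocated, and that's before the practical problem of even knowing where a vault really is (see the latency discussion below).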
It’s a good one to address since it’s essentially the main (only?) complaint Peter Todd has put forth about this (and other) networks dealing with redundant decentralized storage.
Solving it without trust is a really interesting question.
Not if all 12 are in the same datacenter and all apply clever misdirection with latency adjustments. If you know one of those nodes is on the other side of the world, then sure, this would work, but how do you convince anyone else that the knowledge is true? It can only be done on a probabilistic basis.
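The underlying asymmetry is that latency can always be inflated but never deflated below what physics allows, which is why the evidence only points one way. A small sketch, assuming signal propagation at roughly 2/3 of c in fibre:

```python
# A reply from distance d km must take at least 2*d / v, where v is the
# signal speed in fibre (~2/3 of c). A low RTT is hard evidence of
# proximity; a high RTT proves nothing, since delay can be artificial.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in km per millisecond
FIBRE_FACTOR = 2 / 3               # rough propagation speed in fibre

def min_rtt_ms(distance_km: float) -> float:
    """Hard physical lower bound on round-trip time at that distance."""
    return 2 * distance_km / (C_KM_PER_MS * FIBRE_FACTOR)

print(min_rtt_ms(10_000))  # ~100 ms: a node 10,000 km away cannot reply faster
print(min_rtt_ms(50))      # ~0.5 ms: nearby nodes can add delay to mimic any larger RTT
```

So a node answering in 5 ms is very probably nearby, but a node answering in 150 ms could be anywhere, including the same rack with an artificial delay added.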
I am super excited to see these results! I've done a lot of testing myself and am keen to gnaw away at these future holes in the fabric of consistency. Please post them on the forum; people love reading about these things even if they seem 'trivial' or don't confirm the initial hypothesis.
It’s hard to imagine ipv6 not being the standard for this network…
Whoever does the segmenting sounds like an authority to me.
Only if you trust the ping. Which you can’t. It might be artificially delayed.
This is a good point. There will be an extremely high frequency of events on the network, so even a low probability means it will happen reasonably often. It's not enough to hand-wave it away; these things need to be engineered (see @oetyng above). Maybe the probability is low enough, but what's the cost when it inevitably does happen (maybe just by bad luck)? The cost shouldn't be outright ignored.
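A quick illustration of the arithmetic, with both numbers purely hypothetical:

```python
# "Low probability" x "high event rate" = regular occurrences.
events_per_day = 10_000_000  # hypothetical network-wide event rate
p_bad = 1e-7                 # hypothetical per-event failure probability

expected = events_per_day * p_bad
print(expected, "expected occurrences per day")  # 1.0, i.e. roughly daily
```

Even a one-in-ten-million chance becomes a daily event at that rate, so the cost of the bad case has to be budgeted for, not dismissed.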