For your 2 points: I tried to cover these in my previous (long) reply.
Yes, true. Reasonable limits could be imposed.
Perhaps new vaults could present a bandwidth capacity/rate statement to the section. The section would then throttle automated chunk interrogation of that vault based on the reported limits.
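To make the idea concrete, here's a minimal sketch of how a section could throttle interrogation to a vault's self-reported rate, using a plain token bucket. The class name, the `rate_bytes_per_s` parameter, and the capacity-statement wiring are all hypothetical, not anything from the actual design:

```python
import time

class InterrogationThrottle:
    """Token bucket limiting chunk-interrogation traffic to one vault.

    `rate_bytes_per_s` would come from the (hypothetical) bandwidth
    statement the vault presented when joining the section.
    """

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes          # start with a full burst allowance
        self.last = time.monotonic()

    def try_consume(self, nbytes):
        """Return True if a request of `nbytes` may be sent to the vault now."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

A vault reporting a low rate would simply see its interrogation requests spaced further apart rather than being hit with a burst it can't serve.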
I’m not sure we have a contradiction here. We can talk about these 3 different kinds of “content”:
- data that the section stores,
- data that the section is caching,
- proof of free space.
Caches are (or can take the place of) the sacrificial chunks, and they can count towards either the free space or the stored data, depending on what we’re looking at. But I may be missing something, so let me know.
No. Tests shouldn’t ask for entire proofs (that would be wasteful), only a few bytes from here and there. This also means that, after a vault has first initialized its proof store, it could store away a number of samples from the high blocks so it could use them to test other vaults even after its own blocks are overwritten. In effect, a vault can test the free space of other vaults up to the total space of its own.
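A rough sketch of that spot-checking idea, under stated assumptions: the proof store is modeled here as deterministically derived blocks (a stand-in for however the real proof store is generated), and the names `take_samples`/`spot_check`, the block size, and the "high half" sampling rule are all made up for illustration:

```python
import hashlib
import random

BLOCK_SIZE = 1024  # hypothetical proof-block size

def proof_block(seed, index):
    """Derive one proof block deterministically (stand-in for the real scheme)."""
    data = hashlib.sha256(seed + index.to_bytes(8, "big")).digest()
    while len(data) < BLOCK_SIZE:
        data += hashlib.sha256(data).digest()
    return data[:BLOCK_SIZE]

def take_samples(seed, n_blocks, n_samples, rng):
    """At init time, keep a few (block, offset, byte) samples from the high
    blocks so they survive even after our own blocks get overwritten."""
    samples = []
    for _ in range(n_samples):
        i = rng.randrange(n_blocks // 2, n_blocks)   # high blocks only
        off = rng.randrange(BLOCK_SIZE)
        samples.append((i, off, proof_block(seed, i)[off]))
    return samples

def spot_check(fetch_byte, samples):
    """Test another vault: ask for single bytes, never whole proofs."""
    return all(fetch_byte(i, off) == expected for i, off, expected in samples)
```

An honest vault answers `fetch_byte` straight from its proof store and passes; a vault that dropped the blocks has nothing consistent to answer with, and a handful of one-byte probes is enough to catch it with high probability.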
If the cache is stored on disk, that is.
Well, yes. It depends on how much we cache and for how long. I now realize sacrificial chunks were more like an on-disk cache, while “caching” referred to a really short-lived in-memory store.
It certainly makes sense to store away immutable data that is requested repeatedly; that is where caching helps performance the most.