What happens when a vault is full?

What happens to a vault if it’s full and has no storage space left?

And a bunch of related questions:

Does every vault in a section store every chunk in that section?

Does the amount of redundancy of a chunk equal the number of vaults in that section?

Is the amount of data stored in each section approximately equal to the total data on the network divided by the number of sections on the network?

What behaviour constitutes a misbehaving vault? Is running out of space one of these things?

Is there anywhere I can read specifically about the various punishments that may be handed out to vaults for misbehaving?


@mav, do you remember the testnet that was spammed and filled up. There were a few posts that might help with your questions around that time.

One thing I remember specifically was that a vault was considered full before it had actually run out of space. This left headroom so that when other vaults turned off, the extra chunks needing to be stored could still be accepted.
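That "full before actually full" margin can be sketched as a simple headroom check. The 10% figure and the names here are illustrative assumptions, not values from the actual code:

```python
def is_considered_full(used: int, capacity: int, headroom: float = 0.1) -> bool:
    """Treat a vault as full once usage crosses capacity minus a headroom
    margin, leaving room to absorb chunks relocated from departing vaults.
    The 10% default is an arbitrary illustrative choice."""
    return used >= capacity * (1 - headroom)
```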


It will be killed then and have to start again.

At the moment, yes, but this should not be the case for long.

Yes, well, for immutable data it should be. Mutable Data should as well, but it can be less widely distributed. This is a potential tweak, though.

Penalties are a critical thing. So far we have pushed vaults so that they are unable to avoid doing the right thing, but this is only for testnets. Penalties must be invoked when a vault fails to do some basic things, such as:

  • Pass on a message
  • Give data when it should
  • Vote for a Block in either the chain or in message passing.

There are others as well, but none in code yet. Data chains will probably be the start of penalising, where a peer that misses a vote on a block will be killed (probably, but it may instead be relocated with its age reduced). The issue is subtle: we could over-penalise, causing avalanche failure, or be too lenient and open up security holes, or at least the potential for error.

Bottom line, there will be penalties, but first we need solid requirements based on the minimal rules so far, in terms of voting and message passing.


Hm, on the subject of avalanches, couldn’t this, in the extreme case, cause the whole network to implode? If the network is nearly full and nodes get killed, their data fills up some other nodes… and so on.

Probably really hard to actually get there, but still.


About storing every chunk in that section

Another question is how the section handles widely varying sizes of each node’s vault. One node could have 1 TB and another 10 GB.


Another element, either in parallel or perhaps after penalties, is message passing/swarm. We currently use a swarm pattern to send messages to all peers in a section. Elders send to each other, and to each adult and infant (for many blocks). However, this is not as good as it could or should be, in my opinion.

So penalties will be superb to dive into: not too hard, but subtly very important. But a secured gossip-type protocol for intra- and inter-section messaging would be much more efficient (log base 2). When we complete the simulations properly (a lot of changes are required there to match the design), we will have a great tool to use and test these things.
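The log-base-2 point can be illustrated with a toy push-gossip simulation; this is purely a sketch, not the SAFE protocol. Each informed peer forwards to one random peer per round, so the informed set roughly doubles and a section of n peers is covered in about log2(n) rounds plus a short random tail:

```python
import random

def gossip_rounds(n_peers: int, seed: int = 0) -> int:
    """Count rounds until every peer is informed under naive push gossip."""
    rng = random.Random(seed)
    informed = {0}  # peer 0 starts with the message
    rounds = 0
    while len(informed) < n_peers:
        # Every informed peer forwards to one uniformly random peer.
        newly = {rng.randrange(n_peers) for _ in informed}
        informed |= newly
        rounds += 1
    return rounds
```

Since the informed set can at most double per round, at least ceil(log2 n) rounds are always needed; the random duplicate picks add a few more on top.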

This secured gossip is an area this community can get involved with, so @mav @oetyng @tfa @neo and the rest of that gang (so many) can and should be involved. I believe the recent work on the sim and this thread show the community is able and capable of getting much more deeply involved in design and testing. We cannot ignore this any longer; we can use this capable resource and widen the debate beyond us debating and presenting in-house alone. It lays us a bit bare, but that is cool. None of this reflects an inability to launch, as a ton of options exist that will work to varying degrees; it is about launching as best we can, as a community of grown-up engineers.


Yes :wink:

Atm the sections would force everyone to get to a minimum required size, but this need not be the case; it is not hard to use better algorithms to spread this load and segment the space requirements.
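One candidate for such a load-spreading algorithm is weighted rendezvous (HRW) hashing, where each chunk lands on a vault with probability proportional to its advertised capacity. This is an illustrative sketch only; it is not the algorithm SAFE uses:

```python
import hashlib
import math

def weighted_score(vault_id: bytes, capacity: float, chunk_hash: bytes) -> float:
    """Weighted rendezvous score: the vault with the highest score wins,
    and over many chunks wins in proportion to its capacity."""
    h = int.from_bytes(hashlib.sha3_256(vault_id + chunk_hash).digest()[:8], "big")
    u = (h + 0.5) / 2**64           # map the hash into (0, 1)
    return -capacity / math.log(u)  # classic HRW capacity weighting

def assign_chunk(chunk_hash: bytes, vaults: dict) -> bytes:
    """vaults maps vault_id -> capacity; returns the winning vault_id."""
    return max(vaults, key=lambda v: weighted_score(v, vaults[v], chunk_hash))
```

A 1 TB vault would then receive roughly 100x the chunks of a 10 GB one, instead of both being forced to a common minimum size.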


Very exciting to watch this. :slight_smile:


This has already been the case for a long time: only a subset of a section stores a chunk. This subset is called a group, and its members are the GROUP_SIZE vaults nearest to the hash of the chunk.
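The "nearest to the hash" rule can be sketched with the XOR metric familiar from Kademlia-style addressing. The GROUP_SIZE value and hash function here are illustrative assumptions, not the real constants:

```python
import hashlib

GROUP_SIZE = 8  # illustrative; the real constant lives in the routing layer

def xor_distance(a: bytes, b: bytes) -> int:
    """XOR metric between two addresses (Kademlia-style)."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def group_for_chunk(chunk: bytes, vault_ids: list) -> list:
    """Return the GROUP_SIZE vault IDs closest to the chunk's hash."""
    target = hashlib.sha3_256(chunk).digest()
    return sorted(vault_ids, key=lambda v: xor_distance(v, target))[:GROUP_SIZE]
```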


Yes (wow man well spotted) but it is not going to be the case any more when we have group size elders. It can be but its not fully fleshed out. Now we have elders all interconnected, but adults and infants only connected to the elders, but not to each other. Not far away, but I have not went to far there as I believe secure gossip will allow this current situation to remain in place, with e section controlled by elders (routing) and sub groups in the vault “layer”/


This is a good question since it (probably) has an effect on safecoin distribution. The safecoin distribution algorithm aims to balance supply with demand (of storage space) so it’s important to understand both the currently available free space and the likely available free space in the near future. This means if a vault is penalized only when it’s totally full there may be instability compared with penalizing on ‘near future availability’. Can the future [free storage] be predicted?!
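As one naive stab at "predicting the future free storage", a vault could extrapolate an exponentially weighted moving average of its recent fill rate. This is entirely a hypothetical illustration of the question, not anything from the design:

```python
def predict_free_space(samples, alpha: float = 0.3, horizon: int = 3) -> float:
    """samples: free-space readings per period, newest last.
    Smooth the per-period change with an EWMA, then extrapolate
    'horizon' periods ahead (floored at zero)."""
    rate = 0.0
    for prev, cur in zip(samples, samples[1:]):
        rate = alpha * (cur - prev) + (1 - alpha) * rate
    return max(0.0, samples[-1] + horizon * rate)
```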

Very interesting topic coming out of a simple ‘innocent beginner’ question (ie OP)… classic SAFE.


It could be. One way is for each vault to be told to store specific generated data which fills up the vault. Once the vault reports that it has completed that, it is then told to “solve” a “problem” based on the data it was told to store. Only if it was capable of storing that data can it generate the correct solution. This would give a means of proving it at least had the storage at the time of the test.
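The store-then-challenge idea might look like this: the network derives deterministic filler data from a seed so a verifier can recompute it later, and the challenge hashes chosen blocks together with a fresh nonce so the answer cannot be precomputed and the data discarded. All names and parameters here are hypothetical:

```python
import hashlib

def filler_block(seed: bytes, index: int, size: int = 1024) -> bytes:
    """Deterministically expand a seed into the index-th filler block,
    so the verifier can regenerate any block without storing it."""
    out = b""
    counter = 0
    while len(out) < size:
        out += hashlib.sha3_256(
            seed + index.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:size]

def challenge_answer(blocks, nonce: bytes, indices) -> bytes:
    """Hash the requested blocks with a fresh nonce; only a party that
    actually holds the blocks can produce this digest on demand."""
    h = hashlib.sha3_256(nonce)
    for i in indices:
        h.update(blocks[i])
    return h.digest()
```

The vault fills its space with `filler_block` output; later the section picks random indices and a nonce, and compares the vault's answer against one recomputed from the seed.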
