Really interesting idea.
I’m gonna go full concept here, so if it’s crazy, whatever…
‘Extra’ copies of a chunk could be stored at locations derived by hashing: replica n lives at
hash(name of replica n−1). This chain could keep going until the entire network was full. Because every replica location is derived from the chunk’s own name, no lookup tables are needed.
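A rough sketch of that naming scheme (SHA-256 and hex-string names are my assumptions here, purely for illustration):

```python
import hashlib

def chunk_name(data: bytes) -> str:
    # a chunk's name is the hash of its content (content addressing)
    return hashlib.sha256(data).hexdigest()

def replica_locations(name: str, count: int) -> list[str]:
    # replica 1 lives at hash(name), replica 2 at hash(replica 1's name), ...
    locations = []
    current = name
    for _ in range(count):
        current = hashlib.sha256(current.encode()).hexdigest()
        locations.append(current)
    return locations
```

Any node can derive every replica location from the chunk’s name alone, which is why no lookup table is needed.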
Then, to recover the original chunk, you could check successive ‘future’ locations until it seems unlikely that it would have been replicated that far. Even if replica location 1 no longer exists, maybe location 2 survived, or location 3, and so on, with ever-decreasing chance of survival.
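The recovery walk might look like this sketch, where `fetch` stands in for a hypothetical network lookup returning the chunk’s bytes or `None` (not a real API):

```python
import hashlib
from typing import Callable, Optional

def recover(name: str,
            fetch: Callable[[str], Optional[bytes]],
            max_probes: int = 8) -> Optional[bytes]:
    # try the primary location, then each successive replica location,
    # giving up once it seems unlikely the chunk was replicated further
    current = name
    for _ in range(max_probes):
        data = fetch(current)
        if data is not None:
            return data
        current = hashlib.sha256(current.encode()).hexdigest()
    return None
```

`max_probes` encodes the “until it seems unlikely” cut-off; picking it well is the open question.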
It also gives an interesting way to measure spare space: check how many extra copies are available.
It would be possible to re-assess the spare space periodically (e.g. every time the elders changed) by picking a random chunk (e.g. take a hash of the periodic event, then find the closest chunk in the section by xor distance) and seeing how many copies exist. Averaged over time, this would give a fair measure of spare space.
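A sketch of that periodic audit, with `stored` standing in for the set of chunk names a section currently holds (the SHA-256 and hex-name choices are assumptions, as before):

```python
import hashlib

def next_location(name: str) -> str:
    # the next replica location is the hash of the previous name
    return hashlib.sha256(name.encode()).hexdigest()

def pick_audit_chunk(event: bytes, stored: set[str]) -> str:
    # hash the periodic event, then take the stored chunk name
    # closest to that hash by xor distance
    target = int(hashlib.sha256(event).hexdigest(), 16)
    return min(stored, key=lambda name: int(name, 16) ^ target)

def count_extra_copies(name: str, stored: set[str]) -> int:
    # walk the chain of replica locations, counting surviving copies
    copies = 0
    current = next_location(name)
    while current in stored:
        copies += 1
        current = next_location(current)
    return copies
```

Because the event hash is unpredictable, repeating this over many elder changes samples chunks fairly, so the average copy count tracks spare space.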
The idea of sacrificial data is not new, but the idea that the network would always be full is, and I think it’s pretty interesting. There are probably lots of ways to make it work; the question is whether it would be efficient enough, or better than alternative ideas. Maybe it’s so secure that it becomes wasteful?