I am happy that you have a number in mind for the setpoint of your spare-capacity-pool. It’s something that’s been on my mind a lot, and I would suggest that 20% is much too low.
The following thoughts lead me to feel that an 80% spare-capacity-pool would be closer to the correct number.
System stability. You’re attempting to implement a control system: the spare-capacity-pool is the process variable, the 20% target is your setpoint, and safecoin price is the manipulated variable. So if you want 20% spare capacity and you only have 10%, you increase the safecoin price to encourage farming and discourage storage until you reach your setpoint. Control systems are amazing once they’re tuned properly, but tuning is difficult even for electro-mechanical systems, and the problem gets dramatically harder when you add humans to the mix. Ask any economist or legislator: the most elegant systems designed to steer human behaviour quickly go awry. We see this in insurance markets, where only the sickest people seek out high-premium insurance, undermining the very assumptions the premiums were based on. We see it in boom-bust cycles, which is my fear for safecoin/maidsafe storage.
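The feedback loop described above can be sketched as a simple proportional controller. This is purely illustrative: the function name, the gain, and the pricing formula are my own hypothetical stand-ins, not the actual network algorithm.

```python
# Hypothetical sketch of the price feedback loop: spare capacity is the
# process variable, the setpoint is the target spare fraction, and the
# safecoin price is the manipulated variable.

def adjust_price(current_price, spare_fraction, setpoint=0.20, gain=0.5):
    """Proportional controller: raise the price when spare capacity is
    below the setpoint (to attract farmers and discourage storage),
    lower it when spare capacity is above the setpoint."""
    error = setpoint - spare_fraction  # positive => too little spare capacity
    return current_price * (1 + gain * error)

# With 10% spare capacity against a 20% setpoint, the price is pushed up:
new_price = adjust_price(current_price=1.0, spare_fraction=0.10)
```

With humans in the loop, the gain is effectively unknown and time-varying, which is exactly why tuning such a system is so hard.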
The challenge for maidsafe with boom/bust cycles is the commitment to perpetual storage. If I have committed to storing a piece of data perpetually, then I am committing that at no time will a boom/bust cycle take so much of my available storage offline that I can’t store everything already stored. That means if I’m storing 5,000,000 terabytes of data (including the 4x redundancy), my available storage can never fall below 5,000,000 terabytes, even for an instant. If it falls below that number for even one minute, during a flash-crash event where lots of farmers leave or go offline, then I’ve lost data forever.
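The arithmetic behind this hard floor is worth making explicit. The numbers below are the post’s illustrative figures, not real network data, and “spare” is taken as a fraction of total capacity:

```python
# The hard floor: capacity after a crash must still cover everything
# already stored, even momentarily. 5,000,000 TB already includes the
# 4x redundancy.

STORED_TB = 5_000_000

def survives_flash_crash(total_capacity_tb, fraction_lost):
    """True only if capacity after the crash still covers the stored data."""
    remaining = total_capacity_tb * (1 - fraction_lost)
    return remaining >= STORED_TB

# With a 20% spare pool, total capacity is stored / 0.8 = 6,250,000 TB,
# so a crash removing just over 20% of capacity breaches the floor.
# With an 80% spare pool, total capacity is stored / 0.2 = 25,000,000 TB,
# and the network can lose up to 80% of capacity before data is at risk.
capacity_20 = STORED_TB / (1 - 0.20)
capacity_80 = STORED_TB / (1 - 0.80)
```

The asymmetry is the point: the spare fraction directly sets the size of the worst flash crash the network can absorb without losing data.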
So the answer appears to me to be a massively over-damped control system. We see this with interest rates, which are changed in very measured, very small increments over a long time; this minimizes booms and busts caused by an unstable control system. I note that even this does not eliminate them completely.
If you have a very over-damped control system that responds to changes slowly, and you have the imperative that you can never, ever go below a certain storage amount without losing data, it seems to me that you need a big buffer of spare capacity. That leads me to believe that 80% spare capacity is closer to the correct setpoint than 20%.
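The over-damped behaviour can be sketched by clamping how far the price may move per period, the way interest rates move in small steps. Again, the function and numbers are hypothetical illustrations, not the network’s actual mechanism:

```python
# Over-damped variant of the proportional controller: the per-period
# price change is clamped to a small maximum step, so the system reacts
# slowly no matter how large the error is. All values are hypothetical.

def damped_adjust(current_price, spare_fraction, setpoint=0.80,
                  gain=0.5, max_step=0.02):
    """Proportional response, clamped to at most +/-2% per period."""
    error = setpoint - spare_fraction
    step = max(-max_step, min(max_step, gain * error))
    return current_price * (1 + step)

# Even a huge shortfall (10% spare vs an 80% setpoint) only moves the
# price by the 2% cap in one period:
price = damped_adjust(current_price=1.0, spare_fraction=0.10)
```

Slow response is what buys stability, and it is also exactly why the buffer must be large: while the controller crawls toward the setpoint, the spare pool is the only thing standing between a flash crash and the hard floor.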
Addendum: After re-reading my post, it occurs to me that the chance of all 4 redundant copies of a piece of data being in the storage removed by a flash crash is low, so there is some room here to be a little more aggressive about reducing the spare-capacity-pool. I do not know what the right setpoint is, but I am still fearful that 20% is too low.
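A rough estimate of this addendum’s point: if a flash crash removes a random fraction f of nodes and each chunk’s 4 replicas sit on independently chosen nodes, a chunk is lost only when all 4 replicas land in that fraction, i.e. with probability f⁴. This assumes independent replica placement, which the real network may or may not guarantee:

```python
# Probability that a single chunk loses all of its replicas when a
# random fraction of nodes goes offline simultaneously. Assumes the
# replicas are placed on independently chosen nodes (an assumption,
# not a statement about the actual network's placement rules).

def chunk_loss_probability(fraction_offline, replicas=4):
    return fraction_offline ** replicas

# A crash taking 30% of nodes offline loses roughly 0.3**4 = 0.81% of
# chunks under this model:
p = chunk_loss_probability(0.30)
```

Even a small per-chunk probability multiplied across millions of chunks still means some data loss, so this softens the argument for a huge buffer without eliminating it.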