The Safe Network's economics

How does an investor resell SAFE storage space to other people? I thought transfer of storage space can only happen through the network mechanisms, which absorb the SafeCoins rather than transferring them to the space owner.

Anyway, if possible, I’d much prefer a pay-on-put system that prevents people from hoarding space when the price is cheap. Example:

When you have no storage credit and you PUT a 1 MB file, you automatically pay 1 SafeCoin (or whatever the smallest unit will be) for, let’s say, 1 GB of credit. 1 MB is subtracted for the current PUT, leaving you a credit of 999 MB of storage space. The next time you PUT a file, it will be subtracted from your storage credit and you don’t pay SafeCoin. Only when you run out of storage credit for a full PUT are you charged SafeCoin again. It won’t be possible to accumulate more storage credit than what you get for 1 SafeCoin.
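As a rough sketch, the credit scheme proposed above could look something like this. The `Account` class, its fields, and the 1-SafeCoin-per-GB rate are illustrative assumptions taken from the example, not anything from the actual design:

```python
MB = 1
GB = 1000 * MB            # the example above treats 1 GB as 1000 MB
CREDIT_PER_COIN = 1 * GB  # assumed rate: 1 SafeCoin buys 1 GB of credit

class Account:
    def __init__(self):
        self.credit_mb = 0    # remaining storage credit, in MB
        self.coins_spent = 0

    def put(self, size_mb):
        # Top up only when the current credit can't cover this PUT;
        # there is no way to stockpile credit beyond one coin's worth.
        if self.credit_mb < size_mb:
            self.coins_spent += 1
            self.credit_mb += CREDIT_PER_COIN
        self.credit_mb -= size_mb

acct = Account()
acct.put(1 * MB)              # pays 1 coin, leaves 999 MB of credit
for _ in range(999):
    acct.put(1 * MB)          # free PUTs until the credit runs out
acct.put(1 * MB)              # credit exhausted: charged a second coin
```

The key property is that a top-up happens only when the current credit can’t cover a PUT, so nobody can stockpile more than one coin’s worth of cheap space.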

If there is one thing I learned, people will find a way when there is money to be made.

My first thought would be to buy blocks of storage with generic accounts. For example: create 100 accounts and buy 1 TB of storage each, then just sell the login and password to the new user. People sell max-level characters in MMORPGs.

Or create your own middleman software/server to facilitate it. Users log into your server, buy storage, and your server passes it through to one of your generic accounts.

I think pay-on-put is ideal. But according to @dirvine it will be done in blocks, like a 10 GB block or something like that.

Ok, so it appears that nothing is set in stone as of yet. I’m glad that a network utilization model is being discussed because to me this is the most feasible option.

I agree with you there; a direct swap does not seem like it would work. Even though it sounds reasonable, there’s no way to guarantee how your node will act in the future. Make the swap, upload your data for life, stop farming. Too easy to game.

I completely agree with you there. We are going to see a massive farming push from day one, and data storage costs will be driven to essentially zero (given a network utilization model).

Well thought out. I believe the network will be quite capable of sustaining itself on PUT payments alone in the future.

Why can’t pay-on-put be implemented?

I’m sure the devs have a few actual algorithms laid out, but nothing is certain yet. It will be a matter of experimentation during testnet 3 and BETA.

One important consideration is that the network should aim to maintain a certain amount of free space in case major parts of the network go offline and cause big churn events, whether due to a natural disaster or an attack meant to destroy the reliability of the SAFE network by causing data loss.

However, for the sake of efficiency and cost-effectiveness you want to make as much use of the available space as possible, so a balance will need to be found. My wild guess would be something between 50% and 80%.

I’m pretty sure it can be implemented, and probably will be. I’m not 100% clear on what the current plan is. It may actually be exactly this.


Cool, cool. I guess the reason I couldn’t find the answers is that they haven’t been decided upon yet!


It is pretty much pay on put. You charge up your space account with safecoin and will be told how much space is left based on current prices (so if you leave it and store nothing, your space will likely decrease). So each store will incur a cost at that time on the network. The network knows the cost of all PUTs in real time. I assume space will decrease in cost very fast and there will be an opportunity for your client to pay frequently rather than in larger chunks, thereby keeping your cost per MB as low as possible.
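A minimal sketch of this space-account idea, assuming a hypothetical `SpaceAccount` class and made-up prices (the real network would quote prices itself):

```python
class SpaceAccount:
    def __init__(self, safecoin):
        self.balance = safecoin          # safecoin credit held by the account

    def space_left_mb(self, price_per_mb):
        # Remaining space is only a quote at the current network price.
        return self.balance / price_per_mb

    def put(self, size_mb, price_per_mb):
        cost = size_mb * price_per_mb    # each store is priced at PUT time
        if cost > self.balance:
            raise ValueError("insufficient safecoin credit")
        self.balance -= cost

acct = SpaceAccount(safecoin=10.0)
print(acct.space_left_mb(price_per_mb=0.01))  # about 1000 MB at this price
acct.put(100, price_per_mb=0.01)              # costs about 1 safecoin
print(acct.space_left_mb(price_per_mb=0.02))  # price doubled: quote shrinks
```

Note that the quoted space shrinks when the price rises even though the balance is untouched, which matches the “space will likely decrease if you store nothing” behaviour described above.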

The network aims for 20% oversupply and will flatten out farming rewards above that amount; as farmers drop out or the space required increases, the network raises the farming rate to get back to 20% above average.
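The feedback described here might be sketched as a simple proportional adjustment. The 20% target is from the post; the gain and the update rule are made-up illustration values:

```python
TARGET_SPARE = 0.20  # the network's stated spare-capacity target
GAIN = 0.5           # how aggressively the rate responds (assumed)

def adjust_farming_rate(rate, spare_fraction):
    """Return a new farming rate given the current spare-capacity fraction."""
    error = TARGET_SPARE - spare_fraction   # positive when space is scarce
    return max(0.0, rate * (1 + GAIN * error))

rate = 1.0
rate = adjust_farming_rate(rate, 0.10)  # scarce space: rate rises
print(rate)                             # close to 1.05
rate = adjust_farming_rate(rate, 0.30)  # surplus space: rate falls
```

When spare capacity sits below 20% the rate is nudged up to attract farmers, and above 20% it is nudged down, flattening rewards.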


I’m so glad to hear this part. “Use it or Lose it” approach. This will discourage hoarding.


So what about 100 million users? The network still knows the cost at all times? And that’s the cost in SafeCoins, I guess? That’s like another mechanism I have to learn then :wink:
(*dreaming about writing a post on “All the economic layers for Safecoin”)

Yes, the cost is calculated based on the used:free space ratio of the close group, with additional requests to other (random?) groups for their ratios. Standard (and proven) statistical methods are then used to make sure there is minimal deviation from the network average in the PUT price. So it’s not like the entire network will be polled, but there will be enough measurements to come extremely close to the actual network average (which is practically impossible to know exactly) using math. This way there are no scaling problems at all.
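A quick statistics demo of why a limited number of group measurements suffices: the standard error of a sample mean shrinks with the square root of the sample size, so a few dozen group ratios land close to the network-wide average. The group ratios, noise level, and sample sizes here are all invented for illustration:

```python
import random
import statistics

random.seed(42)
true_ratio = 0.65
# Simulate per-group used-space ratios scattered around the true value.
groups = [random.gauss(true_ratio, 0.10) for _ in range(10_000)]

for n in (1, 8, 32, 128):
    sample = random.sample(groups, n)
    est = statistics.mean(sample)
    print(f"n={n:4d}  estimate={est:.3f}")
```

Each extra batch of measurements narrows the estimate toward the true 0.65 without ever polling the whole population, which is the scaling point being made above.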


@dirvine Can you point us to information on this?


Is it true that there are two stages of probability, as @Kirkion suggests? I was imagining the probability was only as to whether the Safecoin address was un-minted (the second stage above), which provides a simple and efficient way of making farming harder as Safecoins are used up.

The other area of probability I’m aware of is related to rank: the performance of a node translates into rank, which in turn increases its chance of hosting a given chunk.

So I am aware of two areas. Are there more, such as @Kirkion suggests?

Another way to look at it is that the data on the network is stored randomly, but statistically very evenly, over the entire address space. Vaults are created in a similar fashion, i.e., randomly but evenly across the address space. Therefore, if all vaults had an abundance of disk space, all vaults would have about the same amount of data stored on them at any moment.

So, even without doing any polling of other groups, a group of 32 vaults would represent an acceptable reflection of the state of the whole network. I guess an anomaly could occur in a group from time to time, but the churn of the network will prevent even that from lasting too long or causing a problem.
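The “random but statistically even” intuition is easy to check: hashing many chunk names into 256 equal slices of an address space gives slice counts that cluster tightly around the mean. This uses SHA-256 purely as a stand-in for uniform hashing, not as a claim about the network’s actual addressing:

```python
import hashlib

BUCKETS = 256
counts = [0] * BUCKETS
for i in range(100_000):
    digest = hashlib.sha256(f"chunk-{i}".encode()).digest()
    counts[digest[0]] += 1   # first byte picks one of 256 address slices

# Every slice ends up near 100_000 / 256 (about 390) chunks.
print(min(counts), max(counts))
```

So any one slice of the address space already mirrors the whole fairly well, which is why a single group can serve as an acceptable sample of network state.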

Even if at the moment when my vault attempts to farm a coin, I might be a bit more lucky or less lucky than I ought to be, it still evens out in the overall scheme of things.


Except that vaults have different rank, which determines their chance of being offered chunks. However, given knowledge of the relationship between rank and storage usage, I guess there’s a way of using properties of the network to provide a statistically useful PUT price function. Waiting to hear… :slight_smile:

Yes, but even that will tend to reflect fairly evenly across a churning group of 32 nodes. So, again, the value a particular group comes up with at any moment may not be exact, but it will cycle around the average. Probably good enough to be equitable to everyone involved.

I hadn’t thought of that, but even so I wonder if 32 will be a big enough sample. Someone who can do maths is needed :smile:


In general, 32 is a rather small sample with a pretty significant margin of error, definitely above 10%, though of course this isn’t a simple textbook case. Anyway, this topic was discussed before and the devs confirmed multiple methods to minimize the margin of error, so it’s at least nothing to worry about; they’ve got it covered.
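For reference, the worst-case 95% margin of error for a proportion estimated from n samples is about 1.96·sqrt(p(1−p)/n), largest at p = 0.5, which backs up the “above 10%” remark for n = 32:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for an estimated proportion; worst case p = 0.5.
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=32:  +/-{margin_of_error(32):.1%}")   # about +/-17%
print(f"n=128: +/-{margin_of_error(128):.1%}")  # under +/-10%
```

Quadrupling the sample halves the margin, which is presumably why polling a few extra groups (as discussed above) is enough to tighten the estimate.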


I am happy that you have a number in mind for the setpoint of your spare-capacity-pool. It’s something that’s been on my mind a lot, and I would suggest that 20% is much too low.

The following thoughts lead me to feel that 80% spare-capacity-pool would be closer to the correct number.

System stability. You’re attempting to implement a control system, with the spare-capacity-pool being the process variable, 20% being your setpoint, and safecoin price being the control output. So, if you want 20% spare capacity and you only have 10%, you increase the safecoin price to encourage farming and discourage storage until you reach your setpoint. Control systems are amazing once they’re tuned properly, but tuning is difficult, even for electro-mechanical systems. The problem increases exponentially when you add humans to the mix. Ask any economist or legislator. The most elegant systems designed to cause humans to behave in a certain way quickly go awry. We see this in insurance pricing, where only the sickest people seek the high-premium insurance, which alters the assumptions the premiums were based on. We see it in boom-bust cycles, which is my fear for safecoin/MaidSafe storage.

The challenge with MaidSafe when it comes to boom/bust cycles is the commitment to perpetual storage. If I have committed to storing a piece of data perpetually, then I am committing that at no time will a boom/bust cycle that causes dramatic decreases in available storage take so much of my storage offline that I can’t hold everything that’s already been stored. That means if I’m storing 5,000,000 terabytes of data (including the 4x redundancy), my available storage can never fall below 5,000,000 terabytes, even for an instant. If it falls below that number for even 1 minute, during a flash-crash event where lots of farmers leave or go offline, then I’ve lost data forever.

So, the answer would appear to me to be a massively over-damped control system. We see this with interest rate changes, where rates are moved in very measured, very small increments over a long time. This minimizes booms and busts caused by an unstable control system. I note that even this does not solve them completely.

If you have a very over-damped control system that responds to changes very slowly, and you have the imperative that you can never, ever go below a certain storage amount or you’ll lose data, it seems to me that you need a big buffer of spare capacity. That leads me to believe that 80% spare capacity is closer to the correct setpoint than 20%.
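A toy simulation of this trade-off, with a one-step lag standing in for farmers’ delayed response to the price signal. Every number here (gains, lag, starting point) is a made-up illustration, not a model of the real network:

```python
def simulate(gain, steps=60, target=0.20):
    """Drive spare capacity toward `target` via a lagged price response."""
    spare, worst, prev_signal = 0.30, 0.30, 0.0  # start with a 30% surplus
    for _ in range(steps):
        signal = gain * (target - spare)  # controller sees current spare
        spare += 0.1 * prev_signal        # farmers respond one step late
        prev_signal = signal
        worst = min(worst, spare)
    return spare, worst

for gain in (8.0, 0.5):                   # aggressive vs heavily damped
    final, worst = simulate(gain)
    print(f"gain={gain}: final spare={final:.3f}, worst dip={worst:.3f}")
```

With the aggressive gain the spare fraction overshoots and dips well below the 20% target before settling; the damped gain converges slowly but never dips below it, which is the behaviour argued for above.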

Addendum: after re-reading my post, it occurs to me that the chance of all 4 redundant copies of a piece of data being in the set of storage removed by a flash crash is small, so there is some opportunity to be a little more aggressive in shrinking the spare-capacity-pool. I do not know what the right setpoint is, but am still fearful that 20% is too low.
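Under the simplifying assumption that the copies of a chunk land on independently chosen vaults, the per-chunk loss odds in a flash crash are just the lost fraction raised to the number of copies:

```python
def p_chunk_lost(fraction_lost, copies):
    # All copies must fall inside the lost fraction of vaults
    # (independent-placement simplification; real placement differs).
    return fraction_lost ** copies

for f in (0.2, 0.5):
    for copies in (4, 6):
        print(f"lose {f:.0%} of vaults, {copies} copies: "
              f"P(chunk lost) = {p_chunk_lost(f, copies):.4%}")
```

Even a 0.16% per-chunk rate (losing 20% of vaults with 4 copies) adds up across millions of chunks, so the caution in the addendum still stands; relocation of copies during churn would lower the true odds further.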

I think an over-damped system reduces the risk of going below a threshold, so damping isn’t easy to correlate with the required margin.

Certainly it’s a heavily damped system, at least for farmers who care about farming rates, because if they leave the network for long they lose rank. Such decisions will therefore be taken with a longer-term view (damping changes). Users who don’t care about farming rates will probably dominate in terms of numbers (average users who utilise spare space on existing machines, rather than dedicated equipment drawing power just for farming). This adds further damping/stability.

How you model this is beyond me! But I’m not getting why your logic leads to 80%, or how to judge 20% adequate for that matter.

I guess part of this will be testnet 3 analysis. Do we have any models?

EDIT: re your edit. There are generally 4-6 (I think) live copies and even more offline copies, so it may be even less critical than you think.


Nice. This is the kind of confounding that humans exert when they become part of a control system. In this case, it’s actually working in our favor. That’s a very nice stabilizing influence I hadn’t thought of.


Here’s what we do know: people move in mass events like flash crashes, network migrations, and viral trends. There’s no way to prevent this, short of mind control.

So the next best thing is to create and maintain strong incentives to “keep” farmers dedicated while “attracting” new farmers. The idea is to have continuous growth. If there is a systemic event, the fallout will be less catastrophic than in a system that has already diminished its incentives.

As @happybeing pointed out, maintaining farm rank is very important on several levels, including higher income. It would be like an employee who stops coming into work. They run the risk of getting fired. But if the cost of coming into work is more than their paycheck, they will quit or go bankrupt.

Psychologically speaking… if I farm a terabyte, am I likely to get coin equal to a terabyte back at any given time? Based on the quality of the storage and its parameters I might plausibly get more, but in terms of comparable usability not much more, as that would be a clear problem. If I got less, it would presumably be not much less, even though I know I am also getting the utility of the network as a fair increment to the value calculation.

Finding a way to explain this in marketing terms is important. If you push 3 rabbits into one side of the commissary, you’d better get at least two rabbits’ worth of goat out the other side; people don’t want to hear about half a rabbit’s worth of goat for three rabbits in. To me, SAFE is more than the sum of its parts in a magical way that will be hard to calculate. Push 3 rabbits in and get a goat and some diamond dust out the other side.