The Safe Network's economics

Except that vaults have different rank, which determines their chance of being offered chunks. However, given knowledge of the relationship between rank and storage usage, I guess there’s a way of using properties of the network to provide a statistically useful PUT price function. Waiting to hear… :slight_smile:

Yes, but even that will tend to reflect fairly evenly across a churning group of 32 nodes. So, again, the value a particular group comes up with at any moment may not be exact, but it will cycle around the average. Probably good enough to be equitable to everyone involved.

I hadn’t thought of that, but even so I wonder if 32 will be a big enough sample. Someone who can do maths is needed :smile:

In general, 32 is a rather small sample with a pretty significant margin of error, definitely above 10%. Though this isn’t a general sampling situation, of course. Anyway, this topic was discussed before and the devs confirmed multiple methods to minimize the margin of error, so at least it’s nothing to worry about; they’ve got it covered.
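For a rough sense of scale, here’s a back-of-the-envelope sketch. It assumes the group is just estimating a simple proportion from 32 independent samples, which is my simplification rather than how the vaults actually measure anything:

```python
import math

# Worst-case margin of error when estimating a proportion from n samples.
# The variance of the estimate is largest at p = 0.5.
n = 32
p = 0.5
standard_error = math.sqrt(p * (1 - p) / n)   # ~0.088
margin_95 = 1.96 * standard_error             # ~0.17, i.e. roughly +/-17%

print(f"standard error: {standard_error:.3f}")
print(f"95% margin of error: +/-{margin_95:.1%}")
```

So a single group’s snapshot can easily be off by well over 10%, which is presumably why averaging over churn and across neighbouring groups matters.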

I am happy that you have a number in mind for the setpoint of your spare-capacity-pool. It’s something that’s been on my mind a lot, and I would suggest that 20% is much too low.

The following thoughts lead me to feel that 80% spare-capacity-pool would be closer to the correct number.

System stability. You’re attempting to implement a control system, with spare capacity as the process variable, the spare-capacity-pool target as your setpoint, and safecoin price as the control output. So, if you want 20% spare capacity and you only have 10%, you increase the safecoin price to encourage farming and discourage storage until you reach your setpoint. Control systems are amazing once they’re tuned properly, but tuning is difficult, even for electro-mechanical systems. The problem increases exponentially when you add humans to the mix. Ask any economist or legislator. The most elegant systems designed to cause humans to behave in a certain way quickly go awry. We see this in insurance pricing, where only the sickest people seek out the high-premium policies, which alters the assumptions the premiums were based on. We see this in boom-bust cycles, which is my fear for safecoin/maidsafe storage.

The challenge with maidsafe when it comes to boom/bust cycles is the commitment to perpetual storage. If I have committed to storing a piece of data perpetually, then I am committing that at no time will a boom/bust cycle, with its dramatic decreases in available storage, take so much of my available storage offline that I can’t store everything that’s already been stored. That means if I’m storing 5,000,000 terabytes of data (including the 4x redundancy), my available storage can never fall below 5,000,000 terabytes, even for an instant. If it falls below that number for even one minute, during a flash-crash event where lots of farmers leave or go offline, then I’ve lost data forever.
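To put a number on that constraint (the figures are illustrative; stored data is counted with its redundant copies included, as above):

```python
# Illustrative only: the largest sudden capacity loss the network can absorb
# without dropping below the data it has committed to keep.
stored_tb = 5_000_000        # data already stored, redundancy included
spare_fraction = 0.20        # proposed spare-capacity setpoint

total_tb = stored_tb / (1 - spare_fraction)    # total capacity online
survivable_loss_tb = total_tb - stored_tb      # capacity that can vanish

print(f"total capacity:          {total_tb:,.0f} TB")
print(f"largest survivable loss: {survivable_loss_tb:,.0f} TB "
      f"({survivable_loss_tb / total_tb:.0%} of the network)")
```

In other words, the spare fraction is exactly the largest flash-crash the network can ride out before it physically runs out of room, which is why the choice of setpoint matters so much.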

So the answer would appear, to me, to be a massively over-damped control system. We see this with interest rate rises, where rates are changed in very measured, very small increments over a long period. This minimizes the booms and busts caused by an unstable control system. I note that even this does not solve them completely.

If you have a very over-damped control system that responds to changes very slowly, and you have the imperative that you can never, ever go below a certain storage amount or you’ll lose data, it seems to me that you need a big buffer of spare capacity. That leads me to believe that 80% spare capacity is closer to the correct setpoint than 20%.
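For what it’s worth, here is a toy sketch of what I mean by a heavily damped controller. The gain, rate limit and update cadence are numbers I made up for illustration, not anything MaidSafe has specified:

```python
def adjust_price(price, spare_fraction, setpoint=0.20,
                 gain=0.5, max_step=0.01):
    """One update of a deliberately sluggish proportional controller.

    price          -- current farming-reward / PUT-price multiplier
    spare_fraction -- measured spare capacity (the process variable)
    setpoint       -- target spare capacity
    gain           -- proportional gain, kept small for damping
    max_step       -- hard cap on the relative change per update
                      (the "measured, small increments" idea)
    """
    error = setpoint - spare_fraction            # positive when space is scarce
    step = max(-max_step, min(max_step, gain * error))
    return price * (1 + step)

# Spare capacity has fallen to 10% against a 20% setpoint:
price = 1.0
for _ in range(5):
    price = adjust_price(price, spare_fraction=0.10)
print(f"price after 5 slow updates: {price:.3f}")   # creeps up ~1% per update
```

The rate limit is exactly the trade-off I’m worried about: the slower the response, the bigger the spare-capacity buffer you need to ride out a sudden drop.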

Addendum: After re-reading my post, it occurs to me that the chances of all 4 redundant copies of a piece of data falling within the storage removed by a flash-crash are slim, so there is some room to be a little more aggressive in reducing the spare-capacity-pool. I don’t know what the right setpoint is, but I’m still fearful that 20% is too low.

I think an over-damped system reduces the risk of going below a threshold, so damping isn’t easy to correlate with the required margin.

Certainly it’s a heavily damped system, at least for farmers who care about farming rates, because if they leave the network for long they lose rank. Such decisions will therefore be taken with a longer-term view (damping changes). Users who don’t care about farming rates will probably dominate in terms of numbers (average users utilising spare space on existing machines, rather than dedicated equipment drawing power just for farming). This adds further damping/stability.

How you model this is beyond me! But I’m not getting why your logic leads to 80%, or how to judge 20% adequate for that matter.

I guess part of this will be testnet 3 analysis. Do we have any models?

EDIT: re your edit. There are generally 4-6 (I think) live copies and even more offline copies, so it may be even less critical than you think.

Nice. This is the kind of confounding that humans exert when they become part of a control system. In this case, it’s actually working in our favor. That’s a very nice stabilizing influence I hadn’t thought of.

Here’s what we do know. People move en masse: flash crashes, network migrations, viral trends. There’s no way to prevent this, short of mind control.

So the next best thing is to create and maintain strong incentives to “keep” farmers dedicated while “attracting” new ones. The idea is to have continuous growth. If there is a systemic event, the fallout will be less catastrophic than in a system whose incentives are already diminishing.

As @happybeing pointed out, maintaining farm rank is very important on several levels, including higher income. It would be like an employee who stops coming into work. They run the risk of getting fired. But if the cost of coming into work is more than their paycheck, they will quit or go bankrupt.

Psychologically speaking… if I farm a terabyte, am I likely to get coin equal to a terabyte back at any given time? Based on the quality of the storage and its parameters I might plausibly get more, but in terms of comparable usability not much more, as that would be a clear problem. If I got less, it would presumably be not much less, even though I know I’m also getting the utility of the network as a fair increment to the value calculation.

Finding a way to explain this in marketing terms is important. If you push three rabbits into one side of the commissary, you’d better get at least two rabbits’ worth of goat out the other side. People don’t want to hear about half a rabbit’s worth of goat for three rabbits in. To me, SAFE is more than the sum of its parts in a magical way that will be hard to calculate: push three rabbits in and get a goat and some diamond dust out the other side.

Every chunk will be online on something like 4 machines, spread randomly across the network. If one machine goes down, within about 10 seconds (I believe it was that number) another copy is created and stored. So even if the internet goes down in the United States, there’s a big chance there will be a copy somewhere in Europe or Asia, and within 10 seconds a lot can be done. Most files will have something like 16 copies (including those available offline), so even if there’s not enough space to store all the new copies, the price of Safecoin will go insane and farmers in the EU will connect everything with a hard drive to make money :slight_smile:
A great number of files will be “stored” in caching as well, so intermediate nodes will just hold them if there’s no more space available, or if they’re requested. This way popular files are not only stored but also cached. In the worst-case scenario we have to wait for the US to come back online. But I truly believe this network will be on all the continents, so there’s not a big chance that half of the network goes offline at any one point.
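As I understand it (this is my simplified mental model, not the actual vault code), the maintenance behaviour amounts to something like this:

```python
TARGET_COPIES = 4   # live copies per chunk, per the figure above

def maintain_replicas(holders, candidates):
    """Top up a chunk's live copies after churn (simplified illustration).

    holders    -- vault ids currently holding the chunk
    candidates -- other vaults in the group with spare space
    Returns the vaults newly asked to store a copy.
    """
    new_holders = []
    for vault in candidates:
        if len(holders) + len(new_holders) >= TARGET_COPIES:
            break
        if vault not in holders:
            new_holders.append(vault)   # re-replicate within seconds of a loss
    return new_holders

# One of four holders just dropped off the network:
print(maintain_replicas({"v1", "v2", "v3"}, ["v4", "v5"]))   # -> ['v4']
```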

If it’s a sudden massive disconnect there will be data loss regardless. Even with 3-16 copies of every chunk, if a hundred terabytes of chunks (100 million chunks) go offline suddenly, there will assuredly be some data for which all the copies were in those lost terabytes. The law of large numbers works against us in that regard!
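A rough expectation backs that up. Assume the network holds a billion chunks in total (about 1,000 TB at 1 MB per chunk, so the hundred lost terabytes are 10% of it) and that each chunk’s 4 live copies land independently at random; both figures are my assumptions for illustration:

```python
total_chunks = 1_000_000_000   # assumed network size: 1 billion chunks
copies = 4                     # live copies per chunk
f = 0.10                       # fraction of capacity gone in one event

p_chunk_lost = f ** copies                   # 0.0001 per chunk
expected_lost = total_chunks * p_chunk_lost  # ~100,000 chunks

print(f"per-chunk loss probability:           {p_chunk_lost:.6f}")
print(f"expected chunks with all copies gone: {expected_lost:,.0f}")
```

Any individual chunk is almost certainly fine, yet across the whole network some losses are nearly guaranteed, and adding copies shrinks that f**4 term far faster than adding spare space does.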

And the data-loss stories will dominate the media, not the 99.999% of chunks that survived. But creating a higher margin of free space doesn’t defend against that; only increasing the number of copies of each chunk would reduce that risk.

Yes, definitely, for two reasons: the supply of storage space is highly inflationary, and SafeCoin should on average become more valuable over time.

Prices can stabilize assuming high transaction volume, and be backed by the storage. Storage space and computational power will eventually be the base denominator of a safecoin; the transfer-of-wealth mechanism will become viable as a staple utility when transaction volume reaches the equivalent of this and others similar to it:

Your article is the #1 thing I’ve been showing to everybody who’s interested in MaidSafe, and I totally loved it! It was incredibly powerful.

It’s exciting to see you here! Never thought I’d get the chance to thank you for writing it!

Keep up the great work!

Great to get an answer on this. Looking forward to seeing how this magic number serves the network.

@whiteoutmashups Thank you! It’s great to know people find it helpful.
