Exploration of a live network economy

Good thinking. Maybe that could be valuable to include. I wonder what the extra percent would be relative to. So if we have a section with 10 nodes (all with the same storage capacity, filled x %), and 1 leaves, would we then see d = x + 10 % for that section? And when the next node leaves?

I think that sounds right.
It is indeed not entirely clear yet what the different valuations included in the model actually cover, and where they overlap.
What I suggest there is that the number of nodes in the network lets us estimate both a value of the coin and a value of the network.
In that quote, it’s about why it indicates the value of the coin. The thinking here is that when the margins of running a vault are high, it would mean that safecoin has a high fiat value, since operation costs should be fairly stable. Thus it was an increased safecoin fiat value that motivated the higher node count. That is the assumption.

Yes, total vaults in the network. “Few” is relative, meaning that as n decreases (vaults become fewer), margins are assumed to be lower, and vice versa.
I would rephrase that quoted part:
“When the number of vaults decreases, we assume the margins have decreased, and vice versa.” The slight problem might be that it’s actually the derivative of the n curve, I think, that would indicate the margin change, not the absolute number of nodes. But over time, if n stays high, margins did not decrease, and vice versa. That’s how it would be captured, I think.

Not possible with current code is what I mean. You join and do the resource proof at joining time. There is currently no re-evaluation of that, afaik. Which means you would need to leave and re-join, which would then halve the age.

This feature may well stay in order to keep vault count high relative to vault size.

It is possible to increase capacity by spinning up another vault if that is permitted on the same machine.


And if so, we might hit the wall in the form of the disallow rule.

These two factors together constitute considerable friction to increasing storage as a response to safecoin fiat valuation changes.

It would seem to me that this friction is a hindrance to R adapting rapidly. It might build up tension, which leads to unintended behaviour.
Will think more about this later.


An interesting observation:

We want to study what happens with R at the first couple of section splits.

Initial state:

d = 0.5
u = 0.9
s = 1

We assume no changes in d and u, which could be roughly true with these relatively small changes.

At 100 nodes, R = 0.01557 safecoin.
When we reach n = 200, we have R = 0.00779.
When the section splits, we get s = 2 and n = 100, giving R = 0.01331.

We assume these 2 sections will split at the same time.
With 2 sections, next time we reach 200 nodes, the reward is 0.00665, and right after the split, we have s = 3, and n = 100 again which gives R = 0.01173.

The reward shot up by almost 71 % right after the first split, and 76 % after the second.
This is quite remarkable. But we can also see that the jump is only this significant when the number of sections is low. As soon as the neighbour count starts to grow, the median node count of the neighbours will not change much when a single section splits, and presumably the neighbours will have a variance in node count, so the occurrence of splits will not be clustered.
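The percentage jumps can be checked directly from the reward values above (a quick Python sketch; the simulation code itself isn't shown here):

```python
def jump_percent(before, after):
    """Relative increase of the reward across a split, in percent."""
    return (after / before - 1) * 100

# R just before and just after the first and second splits (values from above)
first = jump_percent(0.00779, 0.01331)   # ~70.9 %
second = jump_percent(0.00665, 0.01173)  # ~76.4 %
```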

I think we have detected yet another incentive to create more sections early, which we have already stated is good for performance and security.

Are there any unwanted consequences from this behavior of R?



Last couple of days I’ve been building a network simulation with this model.

TL;DR: Here is an Excel sheet with results from the simulations.

When performing these, I found that a few things had to be tweaked in the current formulas, because the network state just ended up in a corner and hardly moved. So something was obviously needed.

50 % balance of coin and storage

Having studied several previous suggestions for the farming algo, a couple of things have been common to include:

  • Strive for network equilibrium at 50 % issued coins
  • Strive for network equilibrium at 50 % used storage

I decided to introduce some changes to the formulas as to include some of these properties.

First, a recap of some of the variables:

C = StoreCost
R = FarmingReward
u = unfarmed coins percent
d = data storage used

Implementing disallow rule

This does not give us the 50 %-used-storage-equilibrium property, but it does influence the stored percent.
Indiscriminate joining of vaults would keep d at a constantly low value: with realistic values for daily user traffic, uploads could not keep up with the storage capacity introduced per new user, when modelling a few fairly likely initial growth curves as hype and adoption kick in.
What I mean to say is that the simulations showed that, the way the formula is designed to incentivize user actions, it does not work well to allow all nodes to join without considering the network’s need for storage.
Basically, this is part of what the disallow rule is meant to do: only allow new vaults if there is a need for them.
So I introduced a dependency of the growth rate on the storage used d (ranging from 0 to 1).
This was done by setting AllowedGrowthRate as A = d * growthRate.
If storage used d is low, we allow fewer vaults in; if it is high, we allow more vaults in. A simple adjustment of the number of new nodes allowed, based on the need for storage.
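As a sketch (the names `growth_rate` and `allowed_growth_rate` are illustrative, not from the simulation code):

```python
def allowed_growth_rate(d, growth_rate):
    """Scale the nominal vault growth rate by storage used d (0..1):
    low d -> few new vaults allowed, high d -> more allowed."""
    assert 0 <= d <= 1
    return d * growth_rate

# e.g. at 50 % storage used, only half the nominal growth rate is allowed
allowed_growth_rate(0.5, 0.02)  # 0.01
```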

Implementing issued coins balance around 50 %

As I suspected, the temporary C = 2R was not working out well.
Although we want C to be related to R, we also want it to be able to fluctuate independently from R.
I have, somewhat sloppily, worked with read-write ratio assumptions from the 1 % internet rule, where the read-write ratio on social media is estimated at 99:1. As the early network will see a lot of data being uploaded, I used an 80:20 ratio.

The simulations showed that, to be able to drop u from an initial 85 % to the desired 50 % with a read-write ratio as above, we must allow C to go below R, so that there is net farming making u drop. On the other hand, we must also allow for the reverse, so that if u drops too far below 50 %, it starts to bounce back.
What would be the best parameter to base this factor on, as to properly steer the store cost C?
Again, my choice was to use the variable in question itself - in this case u.

I defined CostWeight as W = (1 / u) - 1, and then updated StoreCost to C = WR.
This would give C = R at u = 50 % (supposedly contributing to a stabilization around this value) and an inverse correlation between C and u.
This gives the following properties:

  • Lower safecoin cost of storage when the unfarmed supply is high.
  • Higher safecoin cost of storage when the unfarmed supply is low.

Which supposedly acts to move u towards 50 % - from any direction.
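A sketch of the two formulas (assuming u is expressed as a fraction, 0 < u < 1):

```python
def cost_weight(u):
    """W = 1/u - 1: below 1 when the unfarmed fraction u is above 50 %,
    above 1 when it is below."""
    return 1 / u - 1

def store_cost(u, r):
    """C = W * R: equals R exactly at u = 0.5."""
    return cost_weight(u) * r

store_cost(0.5, 0.01)        # == R: the intended 50 % equilibrium
round(cost_weight(0.93), 3)  # ~0.075: cheap PUTs when most coins are unfarmed
```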


  • CostWeight
    Indeed, we did see net farming of safecoin (i.e. a decrease of u). After the first rise from 85 % to 93.333 %, as the 5 k initial users uploaded their 100 GB worth of data, it went down to 93.302 % over the span of a year with 180 k vaults and a total of 1.8 M users. This amounts to 1.34 M safecoin during the first year. However, towards the very end of the year, the daily net farming rate was almost 860 k safecoin per day! So it certainly picked up speed (and, getting closer to u = 50 %, it slows down, as C gets higher). It’s hard to say right now if this rate of change is OK; it seems initially that it is, though.

  • AllowedGrowthRate
    This did improve the outcome with regards to data stored percent d. By the end of the year it had reached 28.7 %, with obviously also a much smaller network. We also saw a much higher farming reward R, since it grows with the square of d, while at the same time decreasing with the growth of total nodes. This in fact also contributed to the steeper decrease in u mentioned above, since R was taken from u at a higher rate.

Potential risks of this change

  • CostWeight
    Imagine a scenario where we are at 50 % unfarmed coins, and the network has stabilized in growth with few or no new vaults joining. Imagine that at this point less new data is uploaded for some reason; maybe the fiat price is too high and the network is not able to adapt properly…
    Still, there is a lot of reading of existing data. This would eventually make u drop. But as u drops, C increases, making it even less attractive to upload data, as it gets even more expensive in fiat terms. Is this a real risk? Can it be avoided?

  • AllowedGrowthRate
    The disallow rule as implemented certainly highlighted the dilemma: it is hard to let the network grow fast if we are also to stay above some very low ratios of data storage used. The risks and difficulties of a small network are probably well known, so I won’t repeat them. From the perspective of the farming reward only, though: if the farming reward is to depend on data storage used, there has to be a disallow rule to keep d at a high enough level. Otherwise the farming reward will diminish and the whole economy slows to a stop.

Additional comments

I would like to make clear again that the current model I’m working on is just an initial starting point. I think it is probably not as simple as it could be, and there is a significant risk of overfitting when using as many variables as we do now. Increasing the complexity of the model also potentially increases the number of ways the system can be thrown out of balance. We will, in time, try out different models as well.


Really nice work here. My only suggestion is that a farming reward rate based on a PID controller would likely handle some of the issues you are seeing much better. It would also allow the network to better handle sharp spikes in demand and offer an improved dynamic response.


Aaah… Good suggestion! Thanks!
Will look into how I can implement this.
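A minimal discrete PID sketch to start from, assuming for illustration that it steers u towards a 0.5 setpoint and that consensus events provide the tick (the class name and gains are made up, not tuned):

```python
class Pid:
    """Textbook discrete PID: output = kp*e + ki*sum(e*dt) + kd*de/dt."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt=1.0):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. a controller watching the unfarmed fraction u; the sign of the
# correction tells which way to nudge the farming reward R
pid = Pid(kp=0.5, ki=0.05, kd=0.1, setpoint=0.5)
correction = pid.update(measured=0.9)  # negative: u is above target
```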


Should not W be like this: W = 1 / (1 - u) - 1?
So for u = 0.85, W = 5.67,
and for u = 0.01, W = 0.01,
as u is unfarmed coins and not farmed coins.

Also there was R=2C and not C=2R.

The network is not supposed to rely on time or synchronization. Is stable PID control possible without?

Nope, and nope :)

We wanted lower C when u is high.
u = 0.93 gives W = 0.075.
C = WR then gives a low C.
With u = 0.21, W = 3.76. So when the supply of unfarmed coins is low, store cost is high. Thus unfarmed is replenished, since the C spent on PUTs is recycled into u.

C = 2R was the initial design of this model. You can see it in the first post in this topic.

The original separated store cost from farming rewards in respect to the amount of unfarmed coin

I would suggest that there was merit in doing this, because making store cost high when unfarmed coin is low only discourages the storing of data more and more. Most farmed data will be data stored recently; at 3 months, access is noticeably less. So discouraging storing when there is plenty of free space but little unfarmed coin is only going to make the situation worse.


Mm yes, I listed this as one possible issue in the section on risks with the introduced changes.

I cannot tell now, though, whether the system would even reach the state where that situation occurs. It was a quite extreme situation.

Anyway, with this model that I am currently exploring, u didn’t move down before this change. With the change the output was a lot better.

It’s basically what I said here:

I haven’t seen any simulations of the original models you mention (I don’t think anyone has done them), so I don’t know if they would differ in that aspect.

The read-write ratio might increase from 80:20 eventually (it may be much higher from the start) and move closer to 99:1. This would lower the recycling, which would help bring u down.

But the easiest way to have the cake and eat it too is to not increase C as u goes below 50 %, and only decrease it as u goes above.
That way we both have cheaper PUTs when there are plenty of unfarmed coins, moving us towards 50 % farmed, and avoid feeding a negative spiral by increasingly discouraging PUTs as u goes below 50 %.

We’ll no longer have that upwards effect bringing u back to 50 % from lows then, but that could perhaps be achieved some other way.
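That asymmetric variant could be as simple as capping the cost weight at 1 (a sketch; `capped_cost_weight` is my name for it, not from the model):

```python
def capped_cost_weight(u):
    """W = min(1, 1/u - 1): C never exceeds R, so PUTs are not made
    increasingly expensive as the unfarmed fraction u falls below 50 %."""
    return min(1.0, 1 / u - 1)

capped_cost_weight(0.9)  # ~0.111: cheap PUTs while u is high
capped_cost_weight(0.2)  # 1.0: capped, instead of W = 4 under the original rule
```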

The previous one was rather self-evident in that, as the issued coin increased, the farming rewards were reduced by the ratio of issued/total. The store cost was unchanged by the issued ratio, and the rewards would only be significantly reduced in a matured network.

The only aspect not modelled was the effect on fiat cost. But seeing as no one has a good model for that, I consider the original algorithm’s relation to issued coin rather self-evident.

Just one thing: which original do you refer to? RFC57, or another?

The original, #12, from memory.

Basically, the reward is tried against the coin address and only issued if the address was non-existent.

This was effectively
reward * (non-issued/total) for farming rewards


Yep, I know, just wasn’t sure which you were referring to.


Parsec consensus events can serve as the tick, tock of a clock. This “network time” would serve the purpose.


It would be interesting to figure out if it would be fine to consider all events equal, or if some types would be like a “quarter tick” etc…


Not an expert, but it would be good to consider the effect of irregular ticking (jitter) on stability.
Edit: Also, why is difficulty adjustment in blockchains so simple and basic? Are any using a PID?


Comparison with RFC0012

OK, so, since I devised a generic simulation, where a FarmingAlgo base class can be inherited by any implementation (it exposes FarmingReward() and StoreCost() methods), I went ahead and put RFC0012 into my simulation.
Everything in the documentation is outdated (and the RFC is rejected), but I have updated it so that it can be applied to the current network (thus using the exact same simulation of the network).


First of all, getting this part modeled:

1. Get request for Chunk X is received.
2. The DataManagers will request the chunk from the ManagedNodes holding this chunk.
3. The ManagedNodes will send the chunk with their wallet address included.
4. The DataManagers will then take the address of each DataManager in the QUORUM.
5. This is hashed with the chunk name and PmidHolder name.
6. If this  `result`  % farming divisor (modulo divides) yields zero then

All of these parameters are basically random (the addresses of the DataManagers, the chunk name, the PmidHolder name), and the hash of them will also be random.
So to simulate this I can just take a random value. But it has to have the same distribution as an XorName to follow the design, so ‘result’ in step 6 is just a random 256-bit array.

(Btw, does anyone know the statistical chances of yielding zero at step 6? I would say it is a flaw of this model if this was chosen without knowing the statistical outcome of zero. It is then more hoping that it will play out well than actually engineering it.)
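Assuming the hash output is uniform over 256-bit values, the chance of step 6 yielding zero is essentially 1/FD (exact when FD divides 2^256, and off by at most 2^-256 otherwise). A quick Monte Carlo sketch to illustrate:

```python
import random

def zero_hit_rate(fd, trials=100_000, seed=1):
    """Fraction of uniform 256-bit values for which value % fd == 0."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.getrandbits(256) % fd == 0)
    return hits / trials

# e.g. with a farming divisor of 10, roughly 10 % of GETs would pay out
zero_hit_rate(10)  # ~0.1
```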

An additional change: since I am using the newer coins as a balance implementation, they are divisible, so we can skip modelling a random access to an address in xor space to see whether or not there is a coin there.

(Which would be like this:)
if (network.UnfarmedCoins > (decimal)StaticRandom.NextDouble())
    return Coins.One;

We can just do

return network.UnfarmedCoins * Coins.One;

In a very small network, the latter gives a more even distribution than the former would. In a larger network, it shouldn’t make much difference.
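For reference, the two variants pay out the same on average; the probabilistic one is just the Bernoulli version of the deterministic one (a sketch, with u as the unfarmed fraction; names are illustrative):

```python
import random

def probabilistic_reward(u, rng):
    """Original style: a whole coin with probability u, else nothing."""
    return 1.0 if u > rng.random() else 0.0

def fractional_reward(u):
    """Divisible-coin style: always u of a coin."""
    return u

rng = random.Random(7)
trials = 200_000
avg = sum(probabilistic_reward(0.3, rng) for _ in range(trials)) / trials
# avg comes out close to fractional_reward(0.3), i.e. ~0.3,
# but with far less variance in the fractional case
```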

Farming rate

Farming rate FR is defined as:

if TP > TS {
    FR = 1 - (TS / TP)
} else {
    FR = approximately 0
}

  • Farming rate == FR (0 < FR <= 1)
  • Farming divisor == FD (FD >= 1)
  • Total primary chunks count == TP (TP >= 0)
  • Total sacrificial chunks count == TS (TS >= 0)

I have interpreted the implementation of sacrificial chunks like this:
If nothing is stored on the vault, it is filled with sacrificial chunks. For every primary chunk stored, a sacrificial chunk is removed.

This means the condition TP > TS is equivalent to storage percent used > 50 %.
As the simulation uses a simplified model, we assume every vault has the same amount stored and an equal chance of getting data, so each vault’s storage percent used is the same.
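Working from the RFC definition with that simplification: if TS = capacity − TP and p = TP / capacity, then 1 − TS/TP reduces to (2p − 1)/p for p > 0.5. A quick check of the two forms (a Python sketch mirroring the C# logic):

```python
def fr_from_chunks(tp, ts):
    """RFC0012 form: FR = 1 - TS/TP when TP > TS, else ~0."""
    return 1 - ts / tp if tp > ts else 0.0

def fr_from_fill(p):
    """Closed form in the fill level p = TP / capacity."""
    return (2 * p - 1) / p if p > 0.5 else 0.0

# both agree, e.g. at 75 % filled: TP = 75, TS = 25 out of 100 chunks
fr_from_chunks(75, 25)  # ~0.667
fr_from_fill(0.75)      # ~0.667
```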
Hence in my implementation:

var p = network.PercentFilled;
// FR = 1 - TS/TP when TP > TS (more than half filled), i.e. (2p - 1) / p
var fr = p > 0.5m ? (2 * p - 1) / p : 0m;

which should be equivalent to the RFC0012 definition.


In RFC0012 the store cost is defined as:

StoreCost = FR * NC / GROUP_SIZE


Vaults can query the total number of client (NC) accounts (active, i.e. have stored data, possibly paid).
Vaults are aware of GROUP_SIZE.

Active accounts were a bit hard to model here, so I went with node count. A group was at that time equivalent to today’s section, IIRC, so NC / GROUP_SIZE would be the section count in today’s network?

That’s what I went with anyway.
So, my implementation:

var sc = fr * network.Sections.Count;


I ran it with the exact same parameters as the other farming algos, same growth rate approximation, etc. There was only one line of code to change after I had implemented the farming algo class.


Already at seeding, when the initial 5 k users are about to upload their data, the system balance is overthrown: we recycle more than we farm, and there is max supply overflow. The network economy doesn’t even get on its feet.

Naturally, I might have misunderstood something in the implementation. Additionally, maybe some other precondition is required for it to survive. But then comparisons would be less accurate, so I didn’t try to find exactly under what conditions, besides these estimated ones, it would survive.

However… I think overall when working with this suggestion, that it is just a very rudimentary placeholder crafted a very long time ago, to have something to iteratively work with. (Something also mentioned in the RFCs).

I did not see that it was specifically apt to solve the challenges I have found in this domain.

So, to get back to the actual observation about the risk of a negative spiral, which was the reason RFC0012 was brought up and tested, I am going to try this part out: