Exploration of a live network economy

The other day, just as I was about to pause this exploration for a while, I got some inspiration from something that @19eddyjohn75 wrote here.

Two things, actually - one of them a big change to the farming reward.
Funnily enough, the other is also touched on by @mav there now:

When thinking about the bigger idea that the post spurred (about dramatically changing the way the farming reward works), one thing I realized is that the read-write ratio basically says how many GETs there are per chunk uploaded. At the next instant it might be something else, but in a large network it should not fluctuate heavily. So at that moment, 1 PUT should cost [read-write ratio] x the farming reward, so as to cover all the GETs forecast for it.

For example, if there are 98 % reads, the store cost should be 49x the farming reward (the ratio is 98:2, i.e. 49:1, which means 49x R infused into every C).
This I currently think is the most natural method for determining what the store cost should be.

It can also be weighted in inverse proportion to unfarmed coins, so as to enforce a gradual flattening of the decline in the unfarmed supply curve, approaching a zero derivative. (I now think, btw, that the balance of unfarmed coins should be closer to 10 % than 50 %.) This I have done in simulations just now.
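As a minimal sketch of the idea so far (the exact scarcity weighting here is an illustrative assumption, not a settled formula):

```python
def store_cost(read_write_ratio, farming_reward, unfarmed_fraction):
    """Store cost covers the expected GETs per uploaded chunk.

    read_write_ratio  - expected GETs per chunk (49 for a 98:2 ratio)
    farming_reward    - current reward R paid per GET, in nanosafes
    unfarmed_fraction - share of coins still unfarmed, 0..1
    """
    # Illustrative inverse-proportionality weight: a discount while many
    # coins are still unfarmed, rising as the unfarmed supply shrinks.
    scarcity_weight = 1.0 - unfarmed_fraction
    return read_write_ratio * farming_reward * scarcity_weight

# 98 % reads -> 49x R infused in every store cost, here discounted
# while 70 % of coins are still unfarmed:
print(store_cost(49, 1_000, 0.70))  # 49 * 1000 * 0.3 = 14700.0 nanosafes
```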

So, to the main thing that @19eddyjohn75 's post spurred in my thinking:

It seems to me that algorithmically determining the farming reward based on parameters available within the network (where storage scarcity would seemingly be the most important factor in all proposals) cannot follow the real fiat value of safecoin as nimbly as we are used to from electronically interconnected markets. We have an inertia in the form of nodes joining and leaving, which is additionally a dampening and fuzzying indirection in the price discovery, as the network tries to express all of safecoin's value through its value in terms of storage.

Just as I was about to pause the economy simulations, I got an idea for doing this radically differently. I’m still working on the details, but so far I’ve done this little write-up. It’s only just started and I had planned to write a lot more (and refine unfinished thinking) before posting, but I’m about to do some other stuff now, so it's best to just put it out there so others can start thinking about it as well :slight_smile:

I’ll start with a nice chart, from the latest simulation of 53 years. It took more than 24 hours to finish.
It employs a model where vault operators bid on the farming reward price, with some weight added for storage and coin scarcity. Store cost is calculated based on the read-write ratio, as per the above description, additionally weighted by coin scarcity.


End size of network: 7 million vaults and 50+ million clients.


A new take on farming rewards

Economy aims

When designing an economy, we need to define its desired properties.
In the work with the economy models, we have so far discerned these:

  • Supply of storage should allow for a sudden, large drop in vault count; thus a margin of about 50 % is desired.
  • Supply of unfarmed coins should allow the network to adjust costs and payouts; thus a margin of about 10-20 % is desired.
  • The balance of storage supply should be reached as soon as possible.
  • The balance of the unfarmed coin supply should be reached in a timely manner, but not too fast: no sooner than 5 years, and no later than 20 years.
  • Store cost should reflect the value of the storage.
  • Farming reward should reflect the value of serving access to data.
  • The economy should be able to incentivise users to provide more storage when needed.
  • The economy should be able to incentivise users to upload data when there’s plenty of storage available.
  • The economy should be able to incentivise rapid growth, so as to secure the network.
  • The economy should allow users to quickly act upon the incentives, thus swiftly reaching the desired outcome.
  • The economy should be as simple as possible, and not require any special knowledge from users for normal usage.
  • The economy should not be easily gameable.

Vault pricing

The most important part is not to push the large scale operators out of the game; the most important part is to keep the small scale operators in the game.

We still need the large scale operators - or at least it has not been shown that they are not needed, so we cannot assume that they are not.

Large scale operators might be able to provide the network with more bandwidth and speed, and they should be rewarded for stabilising the network with those resources.
However, we also want to emphasise the incentives for decentralisation, and that is done by allowing:

  • Vaults to set the price of the reward
  • An equal share of the payment to go to the lowest price offer as well as to the fastest responder

An additional benefit of this is that we have internalised and reclaimed the market valuation of storage. It is now done directly by each and every individual vault.
The problem of how to scale the safecoin reward in relation to its market valuation has by this been overcome. There is no need for the network to have a predefined price algorithm that takes into account both the potentially very large range of fiat valuations of safecoin and the inertia in allowing new vaults in.

Price adaptability

A coin scarcity component will influence store cost, so as to give a discount while there are still a lot of unfarmed coins. Gradually, as the unfarmed portion decreases, the store cost will increase, first approaching the farming reward and eventually surpassing it. Previously (when the idea emerged of the relation of reads to writes being significant to store cost), it was thought that, with the read-write ratio expected to be very high, it would not likely be the store cost itself that prevented depletion of network coins. Instead, market forces would do this as scarcity grows and the fiat valuation of safecoin grows, allowing vault operators to lower the safecoin price while still running at a profit. The effect would be that farming rewards become smaller and smaller in safecoin terms as scarcity and valuation increase, and supposedly the unfarmed supply would then be farmed in smaller and smaller chunks, thus never completely running out.
(However, it is possible to add the coin scarcity component to R as well, so as to reward more when much is available, and less when less is available.)

Reads to writes, the key to actual store cost

The proportion of reads to writes is essentially the number of times any given piece of data will be accessed. If the read-write ratio is n:1, it means that every uploaded chunk is accessed on average n times.
For that reason, if every access is paid with R from the network, every piece of stored data should have the price n * R, where n is the number of reads per write at the time of upload.

The read-write ratio basically says how many times any given piece of data is expected to be accessed during its lifetime, as of the current situation. It is then natural that, for store cost C to be properly set relative to farming reward R, it must be set to the expected number of accesses for a piece of data, times the reward for each access.
By doing this, we enable balancing the supply of unfarmed coins around some value. This is possible because when we weight the store cost according to the read-write ratio, we ensure that payment from, and recycling back to, the network happen at the same rate. All that is needed is to keep an approximately correct count of reads and writes done. Within a section, it is perfectly possible to total all GET and PUT requests, so as to have the read-write ratio of that specific section. When such metrics are shared in the BLS key update messages, we can even get an average from our neighbours, and with that we are very close to a network-wide value of the read-write ratio.

Where this balance ends up is a result of the specific implementation. It can be tweaked so as to sit roughly around some desired value, such as 10 % or 50 %.
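A sketch of how a section might keep this count (the class and the neighbour blending are illustrative assumptions; the post only specifies that totals are kept and shared in key update messages):

```python
from statistics import median

class SectionStats:
    """Totals GETs and PUTs so a section can estimate its read-write ratio."""

    def __init__(self):
        self.gets = 0
        self.puts = 0

    def record_get(self):
        self.gets += 1

    def record_put(self):
        self.puts += 1

    def read_write_ratio(self):
        # Expected number of GETs per uploaded chunk in this section.
        return self.gets / max(self.puts, 1)

def network_ratio_estimate(own_ratio, neighbour_ratios):
    """Blend with neighbour sections' ratios (as shared in BLS key update
    messages) to approach a network-wide read-write ratio."""
    return median([own_ratio] + neighbour_ratios)
```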

Store cost

[Coming up]

Calculating R

G = group size = 8

R

  • 20 % to fastest responder
  • 20 % to lowest price
  • 60 % divided among the rest (6 out of the 8), according to some algo

Setting the price of a GET:

1. p = lowest price among the vaults in the group
2. a = median of all neighbour sections' prices (received in neighbour key update messages)
3. f = percent filled
4. u = unfarmed coins
Like so:

R = 2 * u * f * Avg(p, a)

Tiebreaker among multiple vaults with the same lowest price:

  • Reward the fastest responder among them.
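A sketch of the above in code (the even split of the 60 % share mirrors the example below; the exact algo is left open in this post):

```python
from statistics import median

def reward_per_get(group_prices, neighbour_prices, filled, unfarmed):
    """R = 2 * u * f * Avg(p, a), with p the lowest bid in the group of 8
    and a the median of the neighbour sections' prices."""
    p = min(group_prices)
    a = median(neighbour_prices)
    return 2 * unfarmed * filled * (p + a) / 2

def split_reward(r, vaults):
    """vaults: list of (name, response_ms, price). 20 % to the fastest,
    20 % to the cheapest (ties broken by speed), 60 % evenly among the rest."""
    fastest = min(vaults, key=lambda v: v[1])
    cheapest = min(vaults, key=lambda v: (v[2], v[1]))  # speed as tiebreaker
    shares = {}
    for winner in (fastest, cheapest):
        shares[winner[0]] = shares.get(winner[0], 0) + 0.2 * r
    rest = [v for v in vaults if v is not fastest and v is not cheapest]
    for v in rest:
        shares[v[0]] = 0.6 * r / len(rest)
    return shares
```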

Example:

Section has 145 vaults.
The fastest vault has a price of 200k nanosafes per GET.
The 3 cheapest vaults have a price of 135k nanosafes per GET.

At the GET, the price is set to 135k nanosafes (for simplicity, the scarcity weights u and f are left out of this example).

  • 0.2 * 135k = 27k goes to the fastest vault.
  • 0.2 * 135k = 27k goes to the fastest of the 3 cheapest vaults.
  • 0.6 * 135k = 81k is divided among the rest of the group according to some algo.
    If split evenly, the remaining 6 (out of 8) get 81k / 6 = 13.5k nanosafes each.

Data is uploaded to the section.
The last GET was rewarded at R = 135k nanosafes.
Store cost C is then a proportion of R, determined by coin scarcity in the section.
If the unfarmed coin portion u is 70 %, the cost multiplier is:

m = 2 * (1 – u)^2

and store cost is:

C = m * R
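Plugging the example numbers into the two formulas (a minimal sketch):

```python
def store_cost(last_reward, unfarmed):
    """C = m * R, with m = 2 * (1 - u)^2 reflecting coin scarcity."""
    m = 2 * (1 - unfarmed) ** 2
    return m * last_reward

# Last GET rewarded at R = 135k nanosafes, 70 % of coins unfarmed:
# m = 2 * (1 - 0.7)^2 = 0.18, so C = 0.18 * 135k = 24.3k nanosafes.
print(store_cost(135_000, 0.70))  # 24300.0
```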

New vaults

A new vault joining a section will automatically set its R to the median R of the section.

Using the lowest bid is not good, because that would immediately remove the farming advantage of price-pressuring vaults. They would not profit from lowering their price, as they would immediately and constantly get competition from new vaults joining; the result is probably that they just get a lower reward than before, and so they have nothing to gain by pressuring the price downwards.

So, best would be to let new vaults default to the median, so as to get them in at an OK opportunity for rewards, without pulling the rug from under the price-pressuring vaults. This way the incentive to lower the price is kept, as the cheaper vaults remain more likely to receive the bigger part of the reward. Additionally, new vaults will have an OK chance of being the cheapest vault for some of the data they hold, without influencing the price in any way by merely joining. They simply adapt to the current pricing in the section. Any price movers among the vaults would exert influence by employing their price setting algorithms. This way, we don’t disincentivise members of the section from allowing new vaults in - which would be the case if that statistically lowered their rewards.

This means that no action is required from new vault operators. However, advanced users can employ various strategies: anything from manual adjustment, to setting rules (for example, naïvely, R = cheapest - 1), to feeding external sources into some analysis and outputting the result into the vault input, etc.
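The joining default itself is a one-liner (an illustrative sketch):

```python
from statistics import median

def default_price_for_new_vault(section_prices):
    """A joining vault adopts the section's median R: it neither undercuts
    the price-pressuring vaults nor moves the price by merely joining."""
    return median(section_prices)

print(default_price_for_new_vault([17, 34, 43, 65, 120, 135, 135, 200]))  # 92.5
```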

Every time a vault responds to a GET, it includes its asked price.
The price used for the reward of a GET is, however, always the one from the most recently established GET, so as not to allow a single vault to stall the GET request.

Example:

(GET 0 is the first GET of a new section)

GET 0:

Vault A ; response time: 20ms, price: 43k
Vault B ; response time: 25ms, price: 65k
Vault C ; response time: 12ms, price: 34k
Vault D ; response time: 155ms, price: 17k
Vault E-H: ….
Reward: most recent GET from the parent before the split (or [init reward] if this is the first section in the network). Say, for example, 22k.
Next reward: 17k
Fastest vault: C
Cheapest vault: D
R_c = 0.2 * 22 = 4.4k nanosafes
R_d = 0.2 * 22 = 4.4k nanosafes
R_ab_eh = 0.6 * 22 / 6 = 2.2k nanosafes

GET 1:

Vault A ; response time: 24ms, price: 45k
Vault B ; response time: 22ms, price: 63k
Vault C ; response time: 11ms, price: 37k
Vault D ; response time: 135ms, price: 15k
Vault E-H: ….
Reward: 17k
Next reward: 15k
Fastest vault: C
Cheapest vault: D
R_c = 0.2 * 17 = 3.4k nanosafes
R_d = 0.2 * 17 = 3.4k nanosafes
R_ab_eh = 0.6 * 17 / 6 = 1.7k nanosafes
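The deferred-price mechanism could be sketched like this, reproducing the two rounds above (vaults E-H are elided here just as in the example):

```python
def settle_get(responses, established_reward):
    """Pay out at the previously established price, then adopt the new lowest
    bid for the next GET, so no single vault can stall a request."""
    fastest = min(responses, key=lambda r: r[1])[0]
    cheapest = min(responses, key=lambda r: (r[2], r[1]))[0]
    share = 0.2 * established_reward
    rest_share = 0.6 * established_reward / 6   # the other 6 of the group of 8
    print(f"fastest {fastest}: {share:.1f}k, cheapest {cheapest}: {share:.1f}k, "
          f"rest: {rest_share:.1f}k each")
    return min(r[2] for r in responses)         # next established reward

# (name, response time in ms, asked price in k nanosafes)
get0 = [("A", 20, 43), ("B", 25, 65), ("C", 12, 34), ("D", 155, 17)]
get1 = [("A", 24, 45), ("B", 22, 63), ("C", 11, 37), ("D", 135, 15)]

r = 22                   # from the parent section before the split
r = settle_get(get0, r)  # C: 4.4k, D: 4.4k, rest: 2.2k each; next R = 17
r = settle_get(get1, r)  # C: 3.4k, D: 3.4k, rest: 1.7k each; next R = 15
```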

Vault operator manual

When setting the price of GETs, the operator doesn’t really have a clear correlation between the number set and the resulting reward.
Let’s say the operator has the lowest offer; then it will win every GET and be rewarded with 20 % of the R calculated for it (assuming it is not also the fastest responder). As R depends on coin and storage scarcity, this could give wildly different numbers at different times. An operator asking 1000 nanos per GET would receive 400 nanos if storage were 100 % filled and all coins still unfarmed (taking Avg(p, a) ≈ 1000, R = 2 * 1 * 1 * 1000 = 2000, of which 20 %). If on the other hand 50 % of storage were filled and 50 % of coins unfarmed, the operator would receive 100 nanos (R = 2 * 0.5 * 0.5 * 1000 = 500). In other words, the number entered in the settings is likely quite different from the resulting reward, which makes this configuration less intuitive.

The number to be entered is, to the operator, practically just some random number.
However, as the vault joins the section, it will have guidance on what number is reasonable. The operator then only has to worry about adjusting in relation to that, such as setting the price to x % of the median section price at time T. The x % could for example follow the price movements on a chosen exchange since time T.
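For instance, one such rule could be sketched like this (the exchange feed and the names are hypothetical):

```python
def asked_price(median_at_t, exchange_move_pct):
    """Track the section median from time T, adjusted by the price movement
    on a chosen exchange since T (a hypothetical external feed)."""
    return median_at_t * (1 + exchange_move_pct / 100)

# The section median was 135k nanosafes at time T, and safecoin has risen
# 8 % on the chosen exchange since then, so ask 8 % more per GET:
print(asked_price(135_000, 8))  # 145800.0
```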

Game theory

Winner’s curse

The risk of Winner’s curse is not certain, but it could be argued that vault operators will try to outbid each other by repeatedly lowering the price beyond reasonable valuation, to the detriment of all.

Is there a Nash equilibrium?

The low cost home operators might have an incentive to lower the price to virtually nothing, so as to quickly squeeze out the large scale operators, who would then run at a loss. After having squeezed them out, they can increase their bids again, so as to aim at winning both cheapest price and fastest response.

A possible prevention would be to set the reward R to Avg(cheapest price, fastest responder's price). However, any player knowing that they are the fastest responder can then set their price unreasonably high, so as to dramatically raise the reward.

A second price auction could also prevent the squeeze-out, since there is a higher chance that the second price is high enough for the large scale operators to still gain. This would make any further price dumping, beyond just below the second price, meaningless for a home operator. Additionally, there would be no way for the fastest responder to artificially bump the reward by bidding a lot higher than the others.

Another prevention strategy would be to set the reward to the median of the entire section. The lowest bidder still wins their larger share, as does the fastest responder, but the share comes from the median price of the section. This way, there is little room for individual operators to influence the reward by setting absurdly high prices. In the same way, the opposite - dumping the reward by setting absurdly low prices - is also mitigated, assuming that the majority is distributed around a fairly reasonable price.
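Comparing the candidate rules on the same set of bids (a sketch; the numbers are arbitrary):

```python
from statistics import median

bids = [17, 34, 43, 65, 120, 135, 135, 200]  # k nanosafes, one bid per vault

lowest = min(bids)               # current proposal: the lowest bid sets R
second_price = sorted(bids)[1]   # dumping below the second price gains nothing
section_median = median(bids)    # absurd bids at either end barely move R

print(lowest, second_price, section_median)  # 17 34 92.5
```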

A desired property of the vault pricing system is that it is as simple as possible, not requiring action from the average user, and not allowing itself to be gamed.

Cartels

Is it possible that large groups of operators would form, coordinating their price bids so as to manipulate the market? Can it be prevented somehow?
