Exploration of a live network economy

Well, I guess that’s true. One other event then, to get GETs relative to that event. They should preferably not be correlated. But small correlation would still work.

EDIT: No, I take that back :smile: just woke up and haven't had coffee yet. We’re not controlling the GETs here. So if GETs ramp up, it is still the change in whatever value we are monitoring that we look for. So the unit of t is 1 GET.
The value we monitor is X. So dX/dt is the derivative, and it will be displayed at maximum granularity of ‘time’. (Unless we only sample every n GETs, in which case we pass in n as the time passed.)

Do you have any idea specifically what would be monitored and what the PID output would be fed into?

1 Like

Yes, relative to the tick tock of parsec consensus events.

But we are. The rate of GETs and PUTs is a measure of network popularity/use by clients. The network needs to grow to survive. GET rewards and PUT costs are the incentives it provides to users to achieve maximum growth.
If we come up with some exponential function as a targeted network growth rate, then a classical PID controller would manipulate the pricing and reward rates in real time to minimize the error between the current network growth rate and the targeted growth rate.
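
As a reference point, here is a minimal textbook PID sketch in C#. It is illustration only: the gains and the names targetGrowthRate, measuredGrowthRate and priceAdjustment in the usage comment are placeholders, not anything defined by the network.

public class PidController
{
    readonly double _kp, _ki, _kd;
    double _integral, _previousError;

    public PidController(double kp, double ki, double kd)
    {
        _kp = kp; _ki = ki; _kd = kd;
    }

    // dt = time passed since the last sample (e.g. 1 GET, as discussed above)
    public double Update(double setPoint, double processVariable, double dt)
    {
        var error = setPoint - processVariable;          // e(t)
        _integral += error * dt;                         // integral of e
        var derivative = (error - _previousError) / dt;  // de/dt
        _previousError = error;
        return _kp * error + _ki * _integral + _kd * derivative;
    }
}

// Usage sketch:
// var pid = new PidController(kp: 0.5, ki: 0.05, kd: 0.1);
// var priceAdjustment = pid.Update(targetGrowthRate, measuredGrowthRate, dt: 1);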

I may have gone too far with the PID controller analogy. There are many ways the pricing/reward algos can be done to try and achieve optimal performance given multiple objectives. I just thought that a good place to start drawing inspiration from was the classic textbook PID controller.

2 Likes

Not having read too much here (sorry, very busy lately) - only saw something about PID and missing parts - just wanting to throw in that if the thing you want to control has integrating properties, you don’t necessarily need an I part to eliminate remaining deviations. The other question would be ‘what harm does a remaining deviation do?’ And just getting a non-oscillating P controller (or a pure I) should be way easier than including all 3 parts … We’re talking about a crazily complex system here…

3 Likes

Okay, well that would be one thing we could work with. But GETs cannot be the SetPoint or Process Variable, because we can’t control them, only weakly and indirectly through PUTs (which would supposedly lead to new content that attracts GETs).

What would encourage or discourage GETs? Hard to control that. What is even a target value for GETs?
PUTs are easier to control, by increasing and decreasing C.

But growth would be measured in client and vault count I think, as well as provided storage size and used storage size.

Node count we can control on one end (upper end). Storage size as well. Not on the lower end.
It seems to me that it is very hard to have a SetPoint in form of a growth rate or size of the network.
One thing I have discovered is that, to achieve balance of for example storage used, several things must balance:

  • PUT rate (in terms of 1Mb chunks for simplicity)
  • New nodes rate
  • Storage size per node
    The latter two simplify into:
  • Storage added / removed rate

If these do not eventually match over time, the storage used will not settle at 50 %, and C and R will be affected accordingly.

In my simulations the hard part is to reach that balance. I have to model a behaviour that adapts to price changes and supplies the right amount of new storage to match the PUT rate, and vice versa - in a way that seems realistic, like a market. In reality we can only trust the market to do this for us. But if the market is erratic, or dysfunctional, the system will spiral off. And it could give self-reinforcing effects that break it. That, I see as something that must be avoided. The system must be resilient to it. Whether that is even possible, I don’t know yet.

I think it was a great idea to get inspiration or variation in the thinking around this problem. I agree that it might not be optimal or useful here, because I don’t yet see how it would be used.

Am a bit distracted here so a bit short / incomplete.

3 Likes

PtP rewards and PtC rewards. Apologies, I started thinking off topic and out of scope. Lately I see farming rewards (PtF), producer rewards (PtP), and consumer/client rewards (PtC) as a package deal for optimal network growth…

True, we cannot control the GET rate directly, but rather indirectly and only weakly; still, it is a process input. True that things like vault count and free space are the primary variables, but if a trend emerges where GET rates decline, then there is a problem that the network will need to adapt to. Maybe this is more of an edge case, but the scenario should not be overlooked.

One simple way to handle reduced GETs is to implement the recently proposed ‘Pay on PUT’ reward method. Pay on PUT and pay on GET can work together to keep the farmers happy at both edge cases (a 99:1 PUT/GET ratio and a 1:99 PUT/GET ratio). It reduces the need for the network to dip into its savings account to handle imbalances.

2 Likes

(Edits made to update for an error in the data)

Next iteration

Each iteration of models will be given a sequential number, and we will simply call them Version 1 .. n. The first proposal was Version 1.

Version 1

Results from v1 simulations are available online in this excel.

After days of exploring the area around where the needle was dropped on the map, I have gathered some insights about that specific solution, which make me believe that there is some fundamental problem with it. So by that, it is time to move on to some other location on the map.

Insights

It was interesting to try out what effect these interrelations would have on the system. Digging down into this model, and working on implementing the simulation, gave a better understanding of the various parts. The untried combinations of variables and tunings of values are still vast; however, as initially suspected, one of the problems was that so many variables were included, which led to difficulties in tuning and - as it seems - difficulties in reaching stability of the system.
For that reason, moving on to another version, we want to try something more lightweight.
I do not rule out the usefulness of some iteration of Version 1; however, I feel I want to continue the exploration, and perhaps move back in this direction at some other point in time.

Version 2

(Results from v2 simulations can be found online in this excel.)

Browsing earlier suggestions, I decided to implement a suggestion for StoreCost from way back in 2015.
For FarmingReward, I did a modification of RFC0012. The reason I did not take RFC0012 directly is that I have not been able to reproduce a random success rate from the modulo of a random hash and the FarmingDivisor that both corresponds to the RFC and gives a result that does not blow up. I can discuss this topic further if someone is interested. But basically, since coins are now divisible, all I needed was the sought success rate, which I could simply multiply by, to get the equivalent of the probabilistic reward.

So, let’s look at the modified R.

First, a recap of variables:
s = Sections count
f = Filled storage percent (formerly d)
u = Unfarmed coins percent
R = Farming reward
C = Store cost
b = nanosafes per coin

FarmingReward R

The reward is composed as follows:

R = (u * f * b) / s

And the code:

public override Coins FarmingReward(Network network)
{
    return network.UnfarmedCoins * network.PercentFilled * (1 / (decimal)network.Sections.Count) * Coins.One;
}

Farming reward in RFC0012 includes f in calculation of FarmingRate, and then does a probabilistic execution based on a random hash and the inverted FarmingRate (i.e. FarmingDivisor), and then another probabilistic execution by checking if the coin exists or not, so basically based on u.

The difference now, is that we have replaced the first probabilistic part, with a success rate that is proportional to how much storage is used. They are not equivalent, but RFC0012 does base this part on the same variable. However, due to not being able to properly estimate the statistical outcome of that solution, this change was introduced instead.
The second probabilistic part is equivalent: it gives a success rate based on the percentage of unfarmed coins, which over time should yield about the same result as trying to create a coin at an address that does not already exist.
In addition to these, the element of network size has been included, so as to give a gradual decrease of R as the network grows. Simply, the reward is divided by the number of sections. The motivation for this is the same as in Version 1.
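
To make the substitution concrete, here is an illustration only (not actual vault code): instead of a payout that succeeds with probability 1/divisor, the expected value of that payout is paid on every GET. With divisible coins the long-run totals come out about the same, but without the variance of the coin lottery.

using System;

static class RewardSketch
{
    // Whole coin, on average once per 'divisor' GETs (the RFC0012-style lottery).
    public static decimal Probabilistic(Random rng, int divisor, decimal coin)
        => rng.Next(divisor) == 0 ? coin : 0m;

    // The same expected amount, paid deterministically on every GET.
    public static decimal ExpectedValue(int divisor, decimal coin)
        => coin / divisor;
}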

StoreCost

This is an exact implementation of a suggestion made by @Seneca in 2015: Early demand, and effect on storage costs, after launch

The following is an excerpt:

  1. Close groups must track the total amount of safecoin issued as rewards, and must track the total amount of safecoins absorbed from PUTs
  2. Based on the amount of issued safecoins and the total amount of safecoins in circulation, a target figure of the total amount of safecoins absorbed can be computed
  3. On every PUT, if the actual amount of safecoins absorbed is lower than the target figure, the PUT cost is increased, else if the actual amount of safecoins absorbed is higher than the target figure, PUT price is decreased.

Let’s put it together. Step 1 and 2:

I = total SafeCoins issued
A = total SafeCoins absorbed
TA = target total SafeCoins absorbed
S = supply of SafeCoin (0.0-1.0)

TA = I * S

S makes sure that the rate of increase in SafeCoin supply (i.e. inflation) tapers off as we approach the cap. At the cap, S == 1.0, so the target total SafeCoins absorbed is exactly equal to the total SafeCoins issued. Since there may be times when the farming rate algorithm suddenly has to increase rewards, we probably want to keep a buffer of reserve SafeCoins for such times. If we want to keep 10% of SafeCoins in reserve, the formula becomes:

TA = I * (S + 0.1)

Step 3:

MB/SC = Megabytes bought per SafeCoin

if (TA > A) {            //fewer SafeCoins have been absorbed than we want
    MB/SC--;             //So increase the PUT price to start absorbing more of them
} else if (TA < A) {     //More SafeCoins have been absorbed than we want
    MB/SC++;             //So decrease the PUT price to start absorbing less of them
}

And the code implementation for this is:

public override Coins StoreCost(Network network)
{
        var targetTotalSafecoinRecycled = (network.CoinsSupply + 0.1m) * network.TotalPaid;
        if (targetTotalSafecoinRecycled > network.TotalPaid)
                --_chunksPerSafecoin;
        else if (network.TotalPaid > targetTotalSafecoinRecycled)
                ++_chunksPerSafecoin;
        return new Coins(Coins.One.Value / _chunksPerSafecoin);
}

where _chunksPerSafecoin is initialised to 11 134 chunks (MB) per safecoin (based on preliminary voting results in Polls: How much will you spend? How much storage do you need? etc)
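
As a quick sanity check of what that initial value means in practice (illustration only, assuming 1 safecoin = 10^9 nanosafes, as the 0.006 safecoin = 6 million nanosafes figure further down implies):

var chunksPerSafecoin = 11_134m;
var nanosPerChunk = 1_000_000_000m / chunksPerSafecoin;  // ≈ 89,800 nanosafes per 1 MB chunk
var safecoinPerGB = 1_000m / chunksPerSafecoin;          // ≈ 0.09 safecoin per GB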

You might spot the constant 0.1 on the first line in the code block; this says that we are aiming for a 10 % buffer of unfarmed coins. It comes from the proposal, but we will also try this with the previously stated goal of keeping 50 % unfarmed as a buffer.

Here is rest of the post:

A great benefit of this approach is that we actually have control over inflation now. Unlike in BTC where the inflation rate is a function of time (block count), with this algorithm the inflation rate is a function of usage of network resources. More usage (growth of the network) increases the inflation rate, less usage decreases the inflation rate.

Since we start with 30%(?) of SafeCoins already in existence, I should be initialized at 0.3 * 2^32, and A should probably be initialized so that TA == A where S = 0.3 .

MB/SC can be initialized at a guesstimate number, the algorithm would quickly correct it to the right value.

Method

Initial values

InitialUsers: 5000
TotalSupply: 4294967296:0
InitialSupply: 644245094:367787776
InitialUserCoins: 128849:18873557
Unfarmed: 3650722201:632215000
InitialUserChunks: 100000
InitialUsersPerVault: 3
ReadWriteRatio: ReadWriteRatioNo3
UsersPerVaultRatio: UsersPerVaultRatioNo3
ActionsPerUserDay: 100
GrowthRate: DemandBasedNo6
FarmingAlgo: RFC12Seneca
CachedFarmingAlgo: True
VaultSize: 500000
DaysSimulated: 3650

Simulation code

    public void Start()
    {
        var days = Parameters.DaysSimulated;
        var actionsPerUserDay = Parameters.ActionsPerUserDay;
        var growthRate = Parameters.GrowthRate;

        var sw = new Stopwatch();

        Report(-1, 0);

        for (int i = 0; i < days; i++)
        {
            sw.Restart();

            var nodeCount = _network.TotalNodeCount;
            var newVaults = (int)(nodeCount * growthRate.GetRateFor(i, _network));
            for (int j = 0; j < newVaults; j++)
                _network.AddVault();

            var totalVaults = nodeCount + newVaults;
            var usersPerVault = Parameters.UsersPerVaultRatio.GetRatioFor(i, _network);
            var totalUsers = (int)(usersPerVault * totalVaults);

            Parallel.For(0, totalUsers, s => Action(i, actionsPerUserDay));

            sw.Stop();
            
            Report(i, sw.ElapsedMilliseconds);

            TryResetCache();
        }

        Output();
    }

and

    void Action(int day, long actionsPerUserDay)
    {
        var ratio = Parameters.ReadWriteRatio.GetRatioFor(day, _network);
        if (ratio > StaticRandom.NextDouble())
            _network.Get(actionsPerUserDay);
        else _network.Put(actionsPerUserDay);
    }

Market model

Growth rate

public class DemandBasedNo6 : GrowthRate
{
    readonly UsersPerVaultRatioNo3 _usersPerVaultRatio = new UsersPerVaultRatioNo3(3);
    const int year = 365;

    public override double GetRateFor(int day, Network network)
    {
        var d = (double)network.PercentFilled;
        var f = Math.Pow(d + 1, 2) - 1;
        var disallowMultiplier = d * f;
        return disallowMultiplier * GrowthRateMultiplier(day, network) * DailyRate(day);
    }

    double GrowthRateMultiplier(int day, Network network)
    {
        var c = network.StoreCost();
        var r = network.FarmingReward();
        var t = (double)(c / r);
        var m = 1 - t;
        var u = _usersPerVaultRatio.GetRatioFor(day, network);
        return day >= 365 ? m * u : m * (365 - day);
    }

    double DailyRate(int day) => YearlyRate(day) / year;
    double YearlyRate(int day)
    {
        if (365 > day) return 0.20;
        else if (730 > day) return 0.16;
        else return 0.12;
    }
}

Users per vault

class UsersPerVaultRatioNo3 : UsersPerVaultRatio
{
    readonly InitialRatioTimeChangeNo1 _ratioTimeChange;
    readonly ReadWriteRatioNo3 _demand = new ReadWriteRatioNo3();

    public UsersPerVaultRatioNo3(double initialRatio)
        => _ratioTimeChange = new InitialRatioTimeChangeNo1(initialRatio);

    public override double GetRatioFor(int day, Network network)
    {
        var c = network.StoreCost();
        var r = network.FarmingReward();
        var demandWeight = (double)(r / c);
        var g = GrowthRateMultiplier(day, network);
        return demandWeight * g * _ratioTimeChange.GetRatio(day);
    }

    double GrowthRateMultiplier(int day, Network network)
        => 1 - _demand.GetRatioFor(day, network);
}

class InitialRatioTimeChangeNo1
{
    readonly double _initialRatio;

    public InitialRatioTimeChangeNo1(double initialRatio)
        => _initialRatio = initialRatio;

    public double GetRatio(int day)
    {
        if (180 >= day) return _initialRatio;
        else if (day > 180 && 365 > day) return 2.2 * _initialRatio;
        else return 3.3 * _initialRatio;
    }
}

Read-write ratio

class ReadWriteRatioNo3 : ReadWriteRatio
{
    public override double GetRatioFor(int day, Network network)
    {
        var c = network.StoreCost();
        var r = network.FarmingReward();
        var w = (double)(c / r);
        var t = w * Math.Pow(1 + w, 2);
        return Sigmoid(t);
    }

    double Sigmoid(double x) => 1 / (1 + Math.Pow(Math.E, -x));
}

Results

Data points can be found online in this excel.

Discussion

Storage

Filled storage percent f rose quite sharply to near 45 % and then stayed roughly there through the 10 years simulated. A slight decline was seen from that point on, which is not in accordance with the target of balancing around 50 %, so this indicates that the models need further work.

Market

Even though the models used for GrowthRate, UsersPerVault and ReadWriteRatio had been fine-tuned in Version 1 (to model something resembling a realistic market, responding to changes in the price of storage as well as the reward), it was surprising to see that the system was very stable already in the very first simulation, with no tweaking done. It seems likely this can be attributed to the previous fine-tuning work.

Growth rate

A growth rate of 12 % per year, after the very early stages of the network have passed (2 years), seems like a reasonable rate. This would be roughly the growth in internet users seen between 2005 and 2007.
Later stages of the network (>10 years) would probably, much like internet adoption, see a further decline in the yearly growth rate.

Clients

The number of clients rising so sharply, and then quite steadily falling, is a result of an initially very cheap store cost - as determined by the ratio of C to R (the model’s measure of how cheap C is) - and a gradual evolution of the network with lots of uploads, in combination with a modeled decline in the users-per-vault ratio.
One could argue that the decline in users per vault is not realistic. The idea has been that it is due to increased adoption of the practice of running a vault. One would then expect a transition of that initial mass of clients into vaults, which is not what we see. Instead, these clients are, in the later stages of the simulation, no longer users of the network. A perhaps far-fetched after-the-fact explanation could be that these are users who took advantage of the very cheap storage costs to upload a lot of data, but later do not use the services of the network to any larger extent. Rather, they just keep their backups for some undefined point in the future.

Store cost

This number is quite steady, and at all times significantly below the farming reward. If we assume the farming reward to be a baseline indicator of safecoin fiat value (meaning that as it decreases, it indicates that the fiat value increases), we can conclude that Version 2 - as intended - gives an initially very cheap store cost, with a gradual increase towards the real market value.

Farming reward

An initial spike up to 6 million nanosafes per GET (0.006 safecoin) at day 7 after launch is followed by a sharp drop to 218,000 nanosafes after about 323 days - a 96 % drop in less than a year. We then see a surge, which is a result of the market model, after which a slower and steady decline continues until the end of the simulation.

Unfarmed coins

Also in this model, it seems it will take many, many years before we get close to 50 %. This simulation used 10 % as the buffer target, which would take even longer to reach.

Vaults

It would seem that 810,000 vaults in 10 years is a bit of a pessimistic estimate of growth.
Previous simulations have gone up to a maximum of 24 million vaults (and 1.2 billion clients) in only 2 years. Simulations take much longer at that size, and reaching 10 years with such a large population would probably take weeks, maybe even months or more with continued growth. It is hard to say what a realistic adoption rate is, but it seems fair to believe the number is somewhere between these two values.

Further work

Improvements to the models of user behavior and the market are definitely needed; these are very primitive models. Preferably models based on various observations, data sources and perhaps existing work in a similar domain. It would also be desirable to try various levels of irrationality and dysfunction of the market, so as to determine the resilience of the economy model.

9 Likes

So after ten years (the current lifetime of bitcoin) it looks like there’s still around 85% unfarmed coins… so only 5% are farmed in 10 years (10% ICO + 5% farmed)… Am I reading the chart correctly? Seems like a very low reward rate.

Would be good to see a chart with total rewards issued as well as unfarmed coins remaining. This would give some visual indication of the recycling concept.

Interesting investigations thanks for posting!

5 Likes

Might velocity of tokens be useful somehow?

From that you get this equation for the price of a coin

C=TH/M

C = price
T = transaction volume
H = average holding time per coin before making a transaction 
M = total number of coins

The velocity of the coin is inversely proportional to the value of the token, i.e. the longer people hold the token, the higher the price of each token. This is intuitive, because if the transactional activity of an economy is $100 billion (for the year) and coins circulate 10 times each over the course of the year, then the collective value of the coins is $10 billion. If they circulate 100 times, then the collective coins are worth $1 billion.
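
A quick calculation of that worked example (illustration only; the equation and numbers are taken from the paragraph above):

var T = 100_000_000_000m;       // $100 billion of transaction volume per year
var velocity = 10m;             // each coin circulates 10 times per year
var H = 1m / velocity;          // average holding time, in years
var collectiveValue = T * H;    // = $10 billion; with velocity 100, it is $1 billion
// price per coin C = collectiveValue / M, where M = total number of coins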

2 Likes

Actually, there is a simple mistake here :confused:, which is why we see 93.333 % at the start. It should say slightly more than 85 % at the start. I didn’t check that number as I thought it was due to initial uploads, but far from it :frowning_face:.
I fixed it and ran it again, and placed the results in a new sheet in the excel. The previous one is kept (called Faulty) for reference.
OK, so that’s good, because these results are better; I was a little bit confused and concerned about that ratio. Not much changed in the results, other than the unfarmed percentage now being down to 83.37 % after 10 years.

(I updated the post above with the correct data, so as to minimize confusion for readers, also including the requested data on rewards issued.)

Here are the updated graphs, including data about total paid and total farmed, as well as a chart of a single vault’s (500 GB) revenue per day for the first 30 days. (Complete data is found in the excel.)

[Chart: Vaults & Clients]

So, it starts out with 15 % from the ICO (I don’t have the details, but it seems an additional 5 % were issued). So we start at 85 % unfarmed. Then the 5k users upload their 100 GB worth of data, at a starting price of almost 0.09 safecoin per GB (which is due to _chunksPerSafeCoin being initialised to 11 134 chunks per safecoin, a value I extracted from the poll here on the forum on what users are prepared to pay per GB for their first data on the network).

A vault that joins on day one will earn about 2416 safecoins / TB over 10 years (of which 2/3 is earned in the first 30 days).

I also think that the reward rate seems a bit low, mostly because we only net farm 1.63 % of all safecoins during the first 10 years (with a very low population, though).
I don’t know what would be a desirable rate, but since the rate of net farming would supposedly decline as we get closer to 50 %, it seems we can allow for a somewhat speedier initial issuance. While that might affect inflation, it would also motivate farmers to join, I think?
If we consider the C / R ratio an indicator of how cheap storage is, it would perhaps also indicate that storage is even cheaper initially, which would presumably also attract more data uploads. (Not entirely sure about that assumption, i.e. the C / R ratio as an indicator of how cheap storage is, since we also increased inflation, but that is how the market is modeled here anyway.)

So, I’m just going to guesstimate a desired net issuance of 10-20 % in 10 years? The next 10-20 % might take 20 years.

Another thing is that I wonder if we really need 50 % unfarmed, considering the scarcity and the declining farming rewards. 10 % might be a plenty good buffer for the network (as per the proposal by Seneca) - or maybe 20 %. But 50 % seems too much, I think.

It might. I was actually going to try that idea out later, as it was also mentioned by @digipl back in 2015 in Safecoin VS SAFE Storage, and again in the same topic as Seneca’s store cost idea (Early demand, and effect on storage costs, after launch), making the point that the two ideas are actually similar.

I’m not sure though if we will be able to implement it that way. This part … :
H = average holding time per coin before making a transaction
… seems like it could be a bit tricky to track.

5 Likes

Version 2, iteration 2-5

s = Sections count
f = Filled storage percent ( formerly d )
u = Unfarmed coins percent
R = Farming reward
C = Store cost
b = nanosafes per coin

Increasing Farming reward

In the first iteration (disregarding the one with faulty data), we saw a low net farmed after 10 years, only 1.63 %.
It was noted that the farming rate seemed too low.

An approximation of the desired net farmed was set to 10-20 %.
The calculation of R was then changed slightly, so as to achieve this.

From iteration 1 (i1) we had:

R = (u * f * b) / s;

To increase R we make the divisor smaller, while keeping its dependence on s, so in iteration 5 (i5) we make it a function of s:

q = ln(s)^3.5
R = (u * f * b) / q;

in code:

var divisor = (decimal)Math.Pow(Math.Log(network.Sections.Count), 3.5);
return network.UnfarmedCoins * network.PercentFilled * Coins.One * (1 / divisor);
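
To get a feel for how q = ln(s)^3.5 compares with s, here is a small standalone snippet (illustration only; the printed values are approximate). For small section counts q can actually exceed s, but at the section counts reached in these simulations q grows far more slowly than s, so R ends up higher than with the /s divisor.

using System;

for (var s = 100; s <= 10_000; s *= 10)
    Console.WriteLine($"s = {s,6}: q = {Math.Pow(Math.Log(s), 3.5):F0}");
// s =    100: q ≈   210
// s =  1 000: q ≈   870
// s = 10 000: q ≈ 2 370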

Since the market model is designed such that a higher R to C ratio leads to an increase in vault population, these simulations take longer to complete.

Discarded iterations

Iterations 2-4 contained optimizations that did not work out well.
The optimizations consisted of batching all users’ reads and writes for a day, without recalculating the costs after each individual user. (It had previously been optimized so that the actions of a single user were batched per day, without calculating the cost for each single action.)
The result of this, however, turned out to be an initial store cost of 89k nanosafes instead of 61k. Some, but not all, curves looked similar afterwards. (These sheets are kept for reference in the doc, with names appended with (optm).)
No attempt to solve this has been made yet, but it is desired, since the time to simulate 10 years increases from about 30 minutes (less than 1M vaults) to a few hours when we see populations of several million vaults.

Results

(Results from v2 simulations can be found online in this excel.)

Iteration 5

[Chart: 500 GB vault revenue per day]

Comparisons of iteration 1 and 5

[Charts: Accrued safecoins per 500 GB vault; Clients]

Discussion

Accrued safecoins per vault is lower in i5 (at 61.6 % of i1), although total farmed is much higher (7 times). This is due to the much higher number of vaults.
Total vault count is 6.87M in i5, which is almost 8.5 times higher than in i1, while client count is 18.5 times higher. While i1 had a pessimistic usage outcome, i5 usage is perhaps best described as conservative.
Storage percent filled is slightly higher than in i1, at 49 % instead of near 48 %. The curve is otherwise more or less identical, still showing a steady but small decline after reaching about 50 %.
Interestingly, store cost is almost identical between i1 and i5.

The main goal of this iteration was to speed up net farming, and this was achieved: i5 reached 74 % unfarmed - a net farming of almost 11 % of coins, compared to 1.63 % for i1.
This is not mainly a result of the higher farming reward (+ 24 %), but of the market model, which assumes store cost to be cheaper in fiat terms when the ratio C / R is smaller, as it is in i5. (Edit: To clarify, the outcome is affected three-fold: the cheaper C is set to trigger higher client on-boarding, it is also set to decrease the read-write ratio of clients, and ultimately it is set to increase the growth rate of vaults, so as to match the increase in clients with an increased write ratio.)

In other words, what we have seen is that it is the increased user base that gives the higher net farming. That the user base is larger in i5 than in i1 is merely a result of this specific market model design. We would see a similar increase in net farmed if we, for example, increased users directly (instead of indirectly via a slightly increased farming reward).

8 Likes

Could you please compare your model with some existing deals for lifetime storage, such as:
today, 1 TB for $34
5 years later, 1 TB for $13 (expected)
10 years later, 1 TB for $5 (expected)
(the price of 1 TB has dropped roughly 2.6x per 5 years in the past)
You could set the price of SafeCoin on day one to match this deal, and then check what the price of SafeCoin would have to be to match the expected offer in 5 and 10 years.

Edit: the exact price of the offer does not matter, it is only a sample. Other, non-discounted prices are about 10 times more expensive.

1 Like

Hi @Mendrit, sorry about the late reply.

I’m not sure I understand exactly what you’d like to see. Do you think you could perhaps rephrase it a bit?

One possible optimization (simplification) of the model is to try exclude data sizes.
If we have an average of x amount of storage added per time unit, and an average of y amount of data traffic per user and time unit - then we can simplify the problem by assuming that, as data storage capacity grows (i.e. storage becomes cheaper in $), traffic also grows. I haven’t looked up the proportions (it’s probably not many google-searches away), but I think that’s what we’ve seen so far. Increased storage capacity gives increased traffic, and so we don’t need to deal in absolute numbers (if we’re simplifying a lot) - just use a base unit and assume that any growth is reflected equally in capacity as well as traffic.

Potential problems here are that we are pegged to an absolute data size by the PUT definition (the cost of storing data of size less than or equal to 1 MB), and also bandwidth, since it is growing in capacity / price at a rate distinct from storage.

I actually haven’t delved into these aspects yet, so a lot of unfinished thinking here.

Additional iterations

First, some nomenclature:

What is being explored here is more than just the farming algorithm, or safecoin; it is the entire SAFENetwork economy, as it would be with a single farmable resource (storage).

Then a big fat disclaimer:

These simulations basically say: given all the chosen parameters (and no others), assumptions and simplifications, this is the outcome. So far, we’re not really talking about real outcomes. This is definitely just feeling out the domain, trying out various things and seeing what gives.

The most difficult thing in all of this is to create something reminiscent of a realistic market model.
The SAFENetwork economy cannot be tested in isolation; it depends entirely on the reality of the real-life market around it.

So what we’ve done here so far is basically to imagine a very specific, and absurdly simplified, market behavior, and see how the farming algorithm would fare with that particular behavior.
That is not entirely pointless, since it does generate some data that will inform us to some extent on the viability of the tested algorithms. One would however have to be utterly aware of the tiny spectrum revealed by it, so as not to do oneself a disservice by reading too much into that data. The informational value is low, and it is tightly entangled with noise and misleading information.

Store cost

Before delving deeper into improved market modeling, a few additional iterations were done. Among them was a simulation of 100 years, just for the sake of observing some extremes.
An important observation was that the store cost of Version 2, an implementation of Seneca’s proposal from back in 2015, exhibited the same issue as that of Version 1 (before it had compensating measures implemented).

We assume that, in a state of balance between store cost and farming reward, we have a 98:2 ratio of reads to writes, as per the social media distribution of consumers vs contributors. (The actual number is probably different, but a distinct preponderance of reads is likely, which is the important part for the principle.) This means that increasing store cost to balance the unfarmed coin supply is a blunt instrument, as it would be necessary to have a store cost much higher than the farming reward to compensate for the issuance done by the reads. Assuming that the farming reward is in balance with market valuation, and that store cost is initially discounted to encourage uploads and enforce net farming, allowing store cost to grow much higher than the farming reward risks thwarting uploads altogether, which would be completely counterproductive to the goal of increasing the recycling rate to balance the supply of unfarmed coins. The net farming would instead increase, adding even more to the problem, and the system would spiral out of balance - perhaps into an unrecoverable state.
It was also observed that the adjustment seemed likely to overshoot and result in a strong oscillation of store cost around the desired value. This could possibly be a place to implement adjustment using the integral and derivative of the error, as per the PID controller method.

Improving market model

As mentioned previously, it would be desirable to look at previous work done in a similar domain. I did look up some papers on the simulation of stock markets, with interesting methods that can be put to use here. Trying to apply those to a cryptocurrency market as it is today would not be a good fit. However, if we are looking at the hoped-for global adoption, with tens or hundreds of millions of agents or more, then the market behavior will change. It would probably look more like a stock market than a cryptocurrency market at that point.

To increase the informational value of simulations done with the SAFENetwork economy models, it seems we must put a lot more effort into the market model.


I will be spending time on some other things related to SAFENetwork for a while now, so I will pause this exploration here. Any ideas for methods to research, papers to read and whatnot - please suggest them here and we can try to work with them when I’m done with the other stuff.

6 Likes

The real price check is just one point: checking whether the SafeCoin price will increase or not in a specific model. And with different inputs it should show how to set the formula to reflect SafeNetwork’s needs. (They are already specified in the original WhitePapers.)

We can predict that the price of HW will drop, or the bandwidth price, so the real price of 1 chunk should follow that price movement in the opposite direction if demand stays about the same. If not, SafeNetwork would become an expensive option to use.

So if StoreCost drops from 60000 nanosafes to 50000 nanosafes in ten years, it will only be 17% cheaper (in SafeCoin) than 10 years ago? And if the $ price of SafeCoin grows more than 15%, it will actually be more expensive?

1 Like

Something that might be interesting to model (or might simply be irrelevant) is the ‘bitcoin replacement’ mode, where safe network is seen mostly as a token economy and not as a data service.

Maybe this idea can be expressed as “client activity is 1% of PUTs and GETs, and farmers do 99% of all PUTs and all GETs”. So farmers have some usd cost to do a GET (mostly some tiny bandwidth cost, some tiny cpu cost for signing, but multiplied billions of times), they have some safecoin cost to do a PUT (storecost), and they have some usd cost to respond to these GETs (storage, bandwidth, consensus work etc). When there’s some imbalance, farmers can arbitrage by doing more/less GET or more/less PUT or more/less selling/buying for usd or safecoin. It’s a direct replacement of bitcoinmining=repeat(hash) with safecoinfarming=repeat(bandwidth, storage, cpu). And then somewhere in the swirling mess of farming activity there’s the occasional client upload or download, just like how in the swirling mess of bitcoin hashing there’s some client transactions.

It’s a really hard model to make, probably very sensitive to initial state and cost assumptions, but I find it interesting from a marketing / communications perspective since it really hammers home how proof of resource replaces proof of work (and presumably the lower costs per unit of client value).

Too crazy perhaps?! Has a lot of parallels with the topic ‘gaming the rewards’, but at some point the line between spam and not spam becomes very fuzzy. I think a lot of farmers will push hard on that line and things could get weird.

5 Likes

The other day, just as I was about to pause this exploration for a while, I got some inspiration from something that @19eddyjohn75 wrote here.

Two things actually - one of them a big change to farming reward.
It’s funny, the other of them is also touched on by @mav there now:

When thinking about the bigger idea that the post spurred (about dramatically changing the way farming reward works), one thing I realized is that the read-write ratio basically says how many GETs there are per chunk uploaded. At the next instant it might be something else, but in a large network it should not fluctuate heavily. So at that moment, 1 PUT should cost [read-ratio] x farming reward, so as to cover all GETs prognosticated for it.

For example, if there are 98 % reads, the store cost should be roughly 50x the farming reward (well, the ratio is 98:2, i.e. 49:1, and that means 49x R infused in every C).
This I currently think is the most natural method for determining what the store cost should be.

It can be weighted with inverse proportionality to unfarmed coins as well, so as to enforce a gradual flattening of the decline of the unfarmed supply curve, approaching a zero derivative. (I also think now, btw, that the balance of unfarmed should be closer to 10 % than 50 %.) This I have done in the simulations just now.

So, to the main thing that @19eddyjohn75 's post spurred in my thinking:

It seems to me that algorithmically determining the farming reward, based on parameters available within the network (where storage scarcity would seemingly be the most important factor in all proposals), cannot follow the real fiat value of safecoin as agilely as we are used to electronically interconnected markets doing. We have an inertia in the form of the joining and leaving of nodes, which additionally is a dampening and fuzzying indirection in the price discovery, as it tries to express all its value through its value in terms of storage.

Just as I was about to pause the economy simulations, I got an idea for doing this radically differently. I’m still working on the details, but so far I’ve done this little write-up. It’s only just started and I had planned to write a lot more (and refine unfinished thinking) before posting, but I’m about to do some other stuff now, so it’s best to just put it out there so others can start thinking about it as well :slight_smile:

I’ll start with a nice chart, from the latest simulation, covering 53 years. It took more than 24 hours to finish.
It employs a model of vault operators bidding on the farming reward price, with some weight added for storage and coin scarcity. Store cost is calculated based on the read-write ratio, as per the description above, additionally weighted by coin scarcity.


End size of the network: 7 million vaults and 50+ million clients.


A new take on farming rewards

Economy aims

When designing an economy, we need to define desired properties of it.
In the work with the economy models, we have so far discerned these desired properties:

  • Supply of storage should allow for a sudden large drop of vault count, and thus a margin of about 50 % is desired.
  • Supply of unfarmed coins should allow for the network to adjust costs and payouts, and thus a margin of about 10-20 % is desired.
  • The balance of storage supply should be reached as soon as possible.
  • The balance of the unfarmed coin supply should be reached in a timely manner, but not too fast: not sooner than 5 years, and not later than 20 years.
  • Store cost should reflect the value of the storage.
  • Farming reward should reflect the value of serving access to data.
  • The economy should be able to incentivise users to provide more storage when needed.
  • The economy should be able to incentivise users to upload data when there’s plenty of storage available.
  • The economy should be able to incentivise rapid growth, so as to secure the network.
  • The economy should be able to allow users to quickly act upon the incentives, thus swiftly reaching the desired outcome.
  • The economy should be as simple as possible, and not require any special knowledge by users, for normal usage.
  • The economy should not be easily gameable.

Vault pricing

The most important part is not to push the large scale operators out of the game; the most important part is to keep the small scale operators in the game.

Because we still need the large scale operators, or at least it has not been shown that they are not needed, and so we cannot assume they are not.

Large scale operators might be able to provide the network with more bandwidth and speed, and they should be rewarded for stabilising the network with those resources.
However, we also want to emphasize the incentives given to decentralisation, and that is done by allowing:

  • Vaults to set the price of the reward
  • An equal share of the payment to go to the lowest price offer as well as the fastest responder.

An additional benefit of this is that we have internalised and reclaimed the market valuation of storage. It is now done directly by each and every individual vault.
The problem of how to scale the safecoin reward in relation to its market valuation has by this been overcome. There is no need for the network to have a predefined price algorithm that takes into account both the potentially very large range of fiat valuations of safecoin, and the inertia in allowing new vaults in.

Price adaptability

A coin scarcity component will influence store cost, so as to give a discount while there are still a lot of unfarmed coins. Gradually, as the portion of unfarmed coins decreases, the store cost will increase, first approaching the farming reward and eventually surpassing it. Previously, it was thought that, due to the expected read-write ratio being very high, it would not likely be the store cost itself that prevented depletion of the network’s coins. Instead, when the idea emerged of the relation of reads to writes being significant to store cost, it was thought that market forces would do this as scarcity grows and the fiat valuation of safecoin grows. This would allow vault operators to lower the safecoin price while still running at a profit. The effect of this is that farming rewards would be smaller and smaller in safecoin terms as scarcity and valuation increase, and supposedly the unfarmed supply would then be farmed in smaller and smaller chunks, thus never completely running out.
(However, it is possible to add the coin scarcity component to R as well, so as to reward more when much is available, and less when less is available.)

Reads to writes, the key to actual store cost

The proportion of reads to writes is essentially the number of times any given piece of data will be accessed. If the read-write ratio is n:1, it means that every uploaded chunk is accessed on average n times.
For that reason, if every access is paid with R from the network, every piece of stored data should have the price n * R, where n is the number of reads per write at the time of upload.

The read-write ratio basically says how many times any given piece of data is expected to be accessed during its lifetime – as of the current situation. It is then natural that, for store cost C to be properly set relative to farming reward R, it must be set to the expected number of accesses of a piece of data, times the reward for an access.
By doing this, we enable balancing the supply of unfarmed coins around some value. This is possible because, when we weight the store cost according to the read-write ratio, we ensure that payment from, and recycling to, the network happens at the same rate. All that is needed is to keep an approximately correct count of reads and writes done. In a section, it is perfectly possible to total all GET and PUT requests, so as to have the read-write ratio of that specific section. When some metrics are shared in the BLS key update messages, we can even get an average from our neighbors, and by that we are very close to a network-wide value of the read-write ratio.

Where this balance ends up is a result of the specific implementation. It can be tweaked so as to roughly sit around some desired value, such as 10 % or 50 %.
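
A minimal sketch of what such per-section bookkeeping could look like, purely as illustration - the class and member names are made up, and the neighbour averaging is left out:

public class SectionEconomy
{
    long _gets;
    long _puts;

    public void OnGet() => _gets++;
    public void OnPut() => _puts++;

    // Expected number of accesses per stored chunk, as currently observed.
    public decimal ReadWriteRatio => _puts == 0 ? 0m : (decimal)_gets / _puts;

    // C = n * R, where n is the read-write ratio and R the current reward per GET.
    public decimal StoreCost(decimal farmingReward) => ReadWriteRatio * farmingReward;
}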

Store cost

[Coming up]

Calculating R

G = group size = 8

R

  • 20 % to fastest responder
  • 20 % to lowest price
  • 60 % divided among the rest (6 out of the 8), according to some algo

Setting price of GET:

  1. p = Lowest price among the vaults in the group
  2. a = Median of all neighbour sections’ prices (which are received in neighbour key update messages)
  3. f = Percent of storage filled
  4. u = Percent of unfarmed coins
Like so:

R = 2 * u * f * Avg(p, a)
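
A quick numeric illustration of the formula, with made-up values (note that the worked example below applies the 20/20/60 split directly to the lowest bid):

var p = 135_000m;                 // lowest bid in the group, nanosafes per GET
var a = 150_000m;                 // median of neighbour sections' prices
var f = 0.5m;                     // 50 % of storage filled
var u = 0.8m;                     // 80 % of coins still unfarmed
var avg = (p + a) / 2m;           // 142,500
var R = 2m * u * f * avg;         // = 0.8 * 142,500 = 114,000 nanosafes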

Tiebreaker among multiple vaults with same lowest price:

  • Reward the fastest responder among them.

Example:

The section has 145 vaults.
The fastest vault has a price of 200k nanosafes per GET.
The cheapest 3 vaults have a price of 135k nanosafes per GET.

At GET, the price is set to 135k nanosafes.

  • 0.2 * 135 = 27k goes to the fastest vault.
  • 0.2 * 135 = 27k goes to the fastest of the 3 cheapest vaults.
  • 0.6 * 135 = 81k is divided among the rest of the group according to some algo.
    If split evenly, that means the remaining 6 (out of 8) get 81k / 6 = 13.5k nanosafes each.

Data is uploaded to the section.
Last GET was rewarded at R=135k nanosafes.
Store cost C is then a proportion of R, determined by coin scarcity in the section.
If unfarmed coins u is 70 %, then cost multiplier is:

m = 2 * (1 – u)^2

and store cost is:

C = m * R
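
Continuing the example above as a quick calculation (illustration only - the numbers are the ones already used in this section):

var R = 135_000m;                    // nanosafes, from the last GET
var u = 0.70m;                       // 70 % of coins unfarmed
var m = 2m * (1m - u) * (1m - u);    // = 2 * 0.3^2 = 0.18
var C = m * R;                       // = 24,300 nanosafes per stored chunk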

New vaults

A new vault joining a section, will automatically set its R to the median R of the section.
Using the lowest bid is not good, because then you immediately remove the farming advantage of the price-pressuring vaults, which would mean that they don’t profit from lowering their price, as they immediately and constantly get competition from new vaults joining; the result is probably that they just get a lower reward than before, and for that reason they have nothing to gain by pressuring the price downwards. So, best would be to let new vaults default to the median, so as to get them in at an OK opportunity for rewards, without pulling the rug from under the price-pressuring vaults. This way, the incentive to lower the price is kept, as they are by that more likely to receive the bigger part of the reward. Additionally, new vaults will also have an OK chance of being the cheapest vault for some of the data they hold, without influencing the price in any way by merely joining. They simply adapt to the current pricing in the section. Any price movers among the vaults would exert influence by employing their price-setting algorithms. This way, we don’t disincentivise members of the section from allowing new vaults in – which would be the case if that statistically lowered their rewards.

This means that no action is required by new vault operators. However, advanced users can employ various strategies, anything from manual adjustment, to setting rules (for example - naïvely - R = cheapest – 1), to feeding external sources into some analysis and outputting the result into the vault’s input, etc.

Every time a vault responds to a GET, it includes its asked price.
The price used for the reward of a GET is however always the one from the most recently established GET, so as to not allow a single vault to stall the GET request.

Example:

(GET 0 is the first GET of a new section)

GET 0:

Vault A ; response time: 20ms, price: 43k
Vault B ; response time: 25ms, price: 65k
Vault C ; response time: 12ms, price: 34k
Vault D ; response time: 155ms, price: 17k
Vault E-H: ….
Reward: Most recent GET from parent before split (or [init reward] if this is first section in network). Say, for example 22k
Next reward: 17k
Fastest vault: C
Cheapest vault: D
R_c = 0.2 * 22 = 4.4k nanosafes
R_d = 0.2 * 22 = 4.4k nanosafes
R_ab_eh = 0.6 * 22 / 6 = 2.2k nanosafes

GET 1:

Vault A ; response time: 24ms, price: 45k
Vault B ; response time: 22ms, price: 63k
Vault C ; response time: 11ms, price: 37k
Vault D ; response time: 135ms, price: 15k
Vault E-H: ….
Reward: 17k
Next reward: 15k
Fastest vault: C
Cheapest vault: D
R_c = 0.2 * 17 = 3.4k nanosafes
R_d = 0.2 * 17 = 3.4k nanosafes
R_ab_eh = 0.6 * 17 / 6 = 1.7k nanosafes
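
A sketch (not vault code) of that rule - each GET is paid at the already established price, and the lowest bid from the responses becomes the price for the next GET:

using System.Collections.Generic;
using System.Linq;

public class GetPricing
{
    decimal _currentReward;                      // price used for the next settled GET

    public GetPricing(decimal initialReward)     // e.g. the parent section's last price
        => _currentReward = initialReward;

    // bids = the prices included in the vaults' responses to this GET
    public decimal SettleGet(IEnumerable<decimal> bids)
    {
        var rewardForThisGet = _currentReward;   // pay out at the already-known price
        _currentReward = bids.Min();             // lowest bid becomes the next reward
        return rewardForThisGet;
    }
}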

Vault operator manual

When setting the price of GETs, the operator doesn’t really have a clear correlation between the number set and the resulting reward.
Let’s say the operator has the lowest offer; then it will win every GET, and be rewarded with 20 % of the R calculated for it (assuming it is not also the fastest responder). As R is dependent on coin and storage scarcity, this could be wildly different numbers at different times. An operator offering storage for 1000 nanos per GET would receive 200 nanos if 100 % of storage was filled and 100 % of coins issued. If, on the other hand, 50 % of storage was filled and 50 % of coins issued, the operator would receive 50 nanos. In other words, it is likely that the number entered in the settings is quite different from the resulting reward, which makes this configuration less intuitive.

The number to be entered is – to the operator – practically just some random number.
However, as the vault joins the section, it will have guidance on what number is reasonable. The operator then only has to worry about adjusting in relation to that, such as setting the price to x % of the median section price at time T. The x % could for example be the price movement on a chosen exchange since time T.

Game theory

Winner’s curse

The risk of Winner’s curse is not certain, but it could be argued that vault operators will try to outbid each other by repeatedly lowering the price, beyond reasonable valuation, to the detriment of all.

Is there a Nash equilibrium?

The low cost home operators might have an incentive to lower the price to virtually nothing, so as to quickly squeeze out the large scale operators, who would then run at a loss. After having squeezed them out, they can increase their bid again, aiming to win both cheapest price and fastest response.

A possible prevention of this would be to set the reward R to Avg(cheapest price, fastest responder’s price). However, any player knowing that they are the fastest responder can then set their price unreasonably high, so as to dramatically raise the reward.

A second price auction could also prevent the squeeze-out, since there is a higher chance that the second price is high enough for the large scale operators to still gain. This would make any further price dumping, beyond just below the second price, meaningless for a home operator. Additionally, there would be no way for the fastest responder to artificially bump the reward by bidding a lot higher than the others.

Another prevention strategy would be to set the reward to the median of the entire section. The lowest bidder still wins their larger share, as does the fastest responder, but the share comes from the median price of the section. This way, there is little room for individual operators to influence the reward by setting absurdly high prices. In the same way, the opposite - dumping the reward by setting absurdly low prices - is also mitigated, assuming that a majority is distributed around a fairly reasonable price.

A desired property of the vault pricing system is that it is as simple as possible, not requiring action from the average user, and not allowing itself to be gamed.

Cartels

Is it possible that large groups of operators would form, that coordinate their price bids as to manipulate the market? Can it be prevented somehow?

14 Likes

A fascinating post, lots to take in since it’s quite different to prior ideas. My main takeaways / summary / highlights are:

The most important part is not to push the large scale operators out of the game; the most important part is to keep the small scale operators in the game.

we have internalised and reclaimed the market valuation of storage. It is now done directly by each and every individual vault.

There is no need for the network to have a predefined price algorithm that takes into account both the potentially very large range of fiat valuations of safecoin, and the inertia in allowing new vaults in.

Every time a vault responds to a GET, it includes its asked price [ie expected amount of reward].

A general comment: the use of ‘price’ vs ‘reward’ is a little confusing to me, maybe I’m just not used to it. Maybe ‘expected reward’ is a good substitute for ‘price’…? To my mind the use of the word ‘price’ mixes the ideas of storecost and reward too much.

One thing I don’t understand is, for “20 % to fastest responder” - how is ‘fastest’ measured? Fastest to respond to the neighbour (so the neighbour decides), or fastest within the section (is there a ‘leader’ elected that gets to decide), or is it a statistical measure of overall latency? Maybe the pricing could be done not for every GET but as a periodic poll hosted by a random elder, so ‘fastest’ becomes ‘fastest to respond to the poll’… I dunno, just feeling that fastest implies some common baseline of measurement, and I don’t see how that can exist in a decentralized way. Maybe it can…?

“it could be argued that vault operators will try to outbid each other by repeatedly lowering the price, beyond reasonable valuation, to the detriment of all.”
It could also be argued that vault operators will try to raise the price beyond reasonable valuation. I’m not sure what the dynamics are here. My gut says to look closer into the relation of the 20% reward to the cheapest vs the ‘default’ reward from divide(60%). I wonder if it’s possible to invert the expected behaviour, or what happens in the extremities. For some reason I think back to the monero dynamic block size algorithm, where being too far away from the average causes punishment. Maybe allow vaults to set an extremely high price relative to other vaults, but if they do, they’re punished for it. In the case of monero, the punishment for making bigger blocks becomes acceptable because of the possibility for future rewards to be higher due to those bigger blocks, which offsets the punishment.

Would you consider having some weight for node age or being an elder?

“Is it possible that large groups of operators would form, that coordinate their price bids as to manipulate the market? Can it be prevented somehow?”
I think yes, this will happen, and your idea of using median pricing is a pretty good one for helping reduce the effect, especially if the vanilla default vault uses a sane price. Punishments for bidding too far from expected ranges might help.

Still absorbing it all but nice to read these innovative ideas.

8 Likes

Good point. I struggled a bit with this, but left it for later consideration.

Let’s see what we can do about it. How about this nomenclature?

Store cost

  • the cost of uploading a data chunk, paid by a client to the network, a prognosis of the data chunk’s lifetime number of GETs.

Bid

  • the bid for the accepted price of a GET, which a vault operator uses to compete for the Reward.

Reward

  • the actual price of a GET, that the network then pays the vault operator.

The use of the word “price” is here from the perspective of the network (it has to pay the operator); the reward is from the perspective of the operator.

I thought it nice to split it up into three distinct names, so as to clearly separate these three things: the cost, the bid and the reward.

What do you think?


This is from what I have perceived to be the supposed algo so far: the majority goes to the vault that first responds to the GET. Maybe this was changed in RFC57? Didn’t look it up. Additionally, maybe there was never a concrete suggestion for how to implement it.
But I imagined that it would be part of verifying that a vault serves a GET, and done by the same means as that is done (Elders + parsec? I haven’t looked it up). The response time is compared between the members of the group. This needs to be cleared up; I’m sure someone who’s into the implementation of the Elder management of GETs can shed some light on this.

This could also be good from a performance perspective. With large volumes, we may well have (it seems highly likely) a good enough system when increasing the granularity and not sampling every GET. So, it would need to be figured out how large a tolerance to deviation can be accepted, and at what level of granularity that is expected to be achieved.

Yes, yes absolutely. Both directions for sure.

Yes, definitely, this was a simple first suggestion. It results in the two winners in a group (cheapest & fastest) receiving double the reward compared to all others (the number of winners could of course be made more rich/complex in tiebreaking situations if desired).

The others need not be active in any way; they are guaranteed their share of the reward simply by hosting data and following the rules. It seemed like a fair level of motivation for the cost of improving the network in these two ways:

  1. Providing [potentially much faster/more accurate] decentralized price discovery,
  2. Speeding up delivery

So, double or some other factor - that can be discussed. Maybe even dynamic by some measure? (In that case, it is important to figure out whether the added complexity of making it dynamic adds enough value to be worth it.)

I am not sure I fully understand what you mean here. It sounds interesting, so if you are able to expand a bit on it and feel it’s worthwhile, it would be nice to hear more about it.

I think a simple way - although maybe not perfect (?) and maybe not the end solution used - is the median of the section (or the approximate network median, as per neighbour gossip). As long as the majority of users are distributed around some fairly reasonable value, extreme bidding will have no influence at all. Introducing punishment and so on would be a less preferred way I think, because of the added complexity. But maybe it ends up being a good solution as well.

This is part of RFC57 IIRC, and I left it out on purpose, so as to not complicate things too much in the beginning, looking at these specific dynamics in isolation to begin with. But I think the foundations of the rationale for this suggestion are sound: they are doing extra work, age is a fundamental factor in the system’s security algorithms, and so on, and there is a reason to reward it.

Yep, I think it’s likely as well, without good measurements. And yup, both of those might be good. So one thing to look for is whether these, in isolation or in combination, are good enough, or if something else is needed in addition or could replace them.

With RFC57, all GET earnings are divided among the vaults of the section according to age.

2 Likes

Ah yes, I remembered it was something like that; I didn’t know whether speed of delivery had been excluded altogether though.

So, essentially, the parameters chosen determine what we incentivise. Age is probably quite good to include. Above, I explore the value of speed and market valuation as well.

Edit: The simplest thing would probably be to divide the remaining 60 % among the 6 non-winners according to age.
That makes speed and price discovery immediately profitable services to provide to the network (still at least 2 times more than any non-winner, regardless of its age), while also including the incentive for aging reliably on the network.

It seems to me that would work towards this aim:

Again though, the exact distribution is in the details, maybe not double, maybe dynamic, etc. etc.

1 Like