Exploration of a live network economy

Yes I think you have.

In any case, my point was that there was a calculation for FR (= 1/FD), and the effective accounting for scarcity (unissued coins) came after that.

So, without modeling or trying to justify the FR calculations, I was pointing out the effect of having the scarcity calculation applied after FR is calculated.

Then store cost did not apply the scarcity effect, but rewards did. And the assumption is that as the coin becomes more and more scarce there is more and more data stored and more and more GETs, and with the scarcity of new coins the fiat price will creep up. Thus farmers can still farm economically.

The point of what I said was not to drive the store cost according to scarcity of the coin (making it less likely people will store), but rather that adjusting the farming rewards according to the number of coins issued solves the scarcity problem in a gradual way.

Your idea for scarcity has these effects:

  • fiat price rises because new coin is more scarce
  • storecost in safecoin rises, and doubly rises in fiat terms (which is what people will typically use)
  • as storecost rises the amount of uploads drops off (quicker as price rises more).
    • there is no relief for this, because as uploads drop off there is less coin coming in, but rewards are still being paid for GETs of previous data. So coin becomes even more scarce, and the rise in storecost and price means even lower uploads and even scarcer coin.
  • new data stored drops off (drops off much more as cost increases more)
  • the majority of data retrieved is newer data; at 3 months old this is quite noticeable
  • demand for data increasingly drops off because there is less new data to look at

And this is a death spiral

By keeping storecost driven by the network’s need for storage space, with pre-adjusted farming rewards as well, we can handle the scarcity problem in a much smoother way. If human nature and history can be relied upon, it should not spiral downwards - and even if it did, it’s not an uncontrolled spiral.

  • storecost based on factors not (significantly) including scarcity.
  • farming rewards calculated accordingly, not (significantly) including coin scarcity
  • farming rewards are adjusted for scarcity before payment (e.g. FR * not issued/total - see the sketch after this list).
  • network will be mature before scarcity is an issue (unless FR & SC are badly done)
  • fiat value of coin will increase as the scarcity increases over the life of the network.
  • fiat cost of storing will potentially increase gradually due to the coin value increasing, but if FR & SC are done right then farmers increase, which would counteract this: lower cost in safecoin, lower farming rewards given out (but higher fiat value).
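A minimal sketch of how that pre-payment adjustment could look (all names here are illustrative, not from any actual implementation):

    // Sketch: scarcity applied to an already-calculated reward at payment time.
    // 'reward' is the FR-based payout, 'unissued' and 'total' are coin amounts.
    decimal ScarcityAdjustedReward(decimal reward, decimal unissued, decimal total)
    {
        // FR * not issued / total: payouts shrink as fewer coins remain unissued
        return reward * (unissued / total);
    }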

You must remember that the life of the network is dependent (yes, on other things too) on people uploading, and if you double-whammy them with increased SC and the resulting fiat increase per coin, then you throttle that input. This reduces GETs, which along with reduced farming rewards means farmers leave. Neither helps the network to recover. It just keeps getting worse: fewer uploads, less returned coin, still rewarding but a lot less due to fewer GETs and a lower FR.

2 Likes

Imo, as far as the network is concerned all events are the same, as they should be. The passage of “Time” is the sequentially ordered change of network state. Change of state is measured by parsec consensus events/blocks. There will be lots of gossip and chatter between events, but that is just subscale noise with respect to the concept of network time.

Per above, there is no jitter with respect to network “time”. A section that has no new consensus events, and thus does not change state, does not “age”.

caveat: It’s been a while since I looked at parsec, so I’m sure there are more devils in the details. Experts feel free to interject/correct.

1 Like

You must remember what I wrote, so that you know what I remember and what not :slight_smile:
I don’t know why you are talking as if somehow I would not remember or understand this?

  1. This was mentioned already in that post, in the risks section.
  2. Then you said it.
  3. Then I reminded you that yes, I already mentioned it. And I came up with a solution to that specific issue.
  4. Now you say it again, as if 1-3 never happened. (Including the solution).

Look, instead of us repeating all of that (again), why don’t you explain this part instead:

That would be much more helpful, sincerely :slight_smile:

You have totally misunderstood what it does.

Before introducing the cost multiplier, store cost C worked similarly to RFC0012, in that it was relatively close to farming reward R.
Because of that, reaching 50 % issued coins was not happening (i.e. it was way too slow).
(The RFC0012 implementation didn’t even manage to receive all the uploads from the initial 5 k users, as it would recycle more than it farmed. Max supply was overflowed; the system failed.)

After introducing the cost multiplier, C is much lower than R, gradually increasing until it reaches R at 50 % and staying there in relation to R (it doesn’t have to be exactly 1 at that point, but that is beside the point).
So any increase in price is from very low levels.

Totally contrary to what you say, storing is immensely cheap with this, in comparison to RFC0012, and assuming the storage works as it should, demand should be really high.

This way, the unfarmed coin actually does move towards 50%. But it still takes many years.

What the increase in price does is taper off this discount, thus easing the curve down to 50 %.
You are just looking at the fact that there is an upward change in store cost. Yes, but from very low levels towards the normal level - not from the normal level and up above it. That makes the whole difference between your conclusion being applicable and not.

These results are both with the 80:20 read-write ratio and the 98:2 ratio, which gives less recycling than 80:20, and a faster decrease of unfarmed coins.

This is what I was working with and not

I was looking at the specifics of most to all coins issued.

This is contrary to my simulations a couple of years ago for that outdated algorithm

Yep I know, this is the solution introduced right after mentioning that

In effect it looks like this:

        // Discounts C while more than 50 % of coins are unfarmed:
        // at u = 1.0 the multiplier is 0, rising to 1 as u falls to 0.5.
        decimal CostMultiplier(Network network)
        {
            if (network.UnfarmedCoins > 0.5m)
                return (1 / network.UnfarmedCoins) - 1;
            else return 1;
        }
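Presumably (this part is not shown here) the multiplier is then applied when deriving C from R, something like:

    // hypothetical usage: store cost discounted while unfarmed coins > 50 %
    var storeCost = farmingReward * CostMultiplier(network);

which is what makes C start far below R and converge towards it as unfarmed coins approach 50 %.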

I haven’t gone so many years in that I am even close to 50 % yet, though.

The sought effect, though, is to skew the ratio of reward - during the time u is very high - towards being paid more by the network, and then gradually ease over to being paid more by the actual consumers of storage (i.e. with the PUTs). Another way of saying it: we want to start out by issuing more coins from the network so as to reach that desired balance, not find a stable state at high levels of unissued coins (or leave that zone too slowly).

I’m guessing it’s the initial parameters then. I can look closer at this later by tweaking those, and see if it lifts off the ground.

But here is the code, if you can maybe spot what works differently
    public class RFC12 : FarmingAlgo
    {
        /// <summary>
        /// From RFC12:
        /// Each successful GET will generate a unique identifier.
        /// This identifier will be used as the dividend in a modulo operation 
        /// with the FD as the corresponding divisor. 
        /// If this operation results in 0 then farming attempt is successful.
        /// </summary>
        /// <param name="network"></param>
        /// <returns></returns>
        public override Coins FarmingReward(Network network)
        {
            var fr = FarmingRate(network);
            var fd = (ulong)(1 / fr); // note: when fr == 0 this is 1/0.0 = infinity, and the ulong cast is undefined

            // From RFC0012:
            // 1. Get request for Chunk X is received.
            // 2. The DataManagers will request the chunk from the ManagedNodes holding this chunk.
            // 3. The ManagedNodes will send the chunk with their wallet address included.
            // 4. The DataManagers will then take the address of each DataManager in the QUORUM.
            // 5. This is hashed with the chunk name and PmidHolder name.
            // This is basically the same as returning a random number.
            var xor = new XorName();
            if (!BigInteger.TryParse(fd.ToString(), out BigInteger fdAsBigInt))
                throw new Exception();
            var remainder = BigInteger.Remainder(xor.Address, fdAsBigInt);
            if (remainder == 0) // RFC0012: If this result % farming divisor (modulo divides) yields zero then
            {
                //if (network.UnfarmedCoins > (decimal)StaticRandom.NextDouble()) // only necessary when coin is not divisible
                //    return Coins.One;

                return network.UnfarmedCoins * Coins.One; // coin implementation is divisible, so this gives statistically the same result
            }

            return Coins.Zero;
        }

        public override Coins StoreCost(Network network)
        {
            var fr = FarmingRate(network);
            var sc = Coins.NANOS_PER_COIN * fr * network.Sections.Count; // RFC0012: StoreCost = FR * NC / GROUP_SIZE (Sections.Count presumably stands in for NC / GROUP_SIZE, i.e. the number of groups)
            return new Coins((ulong)sc);

            // return new Coins((ulong)(1 / Math.Pow(fr, 0.2))); // RFC0005: int cost_per_Mb = 1/(FR^(1/5));
        }

        /// <summary>
        /// if TP > TS {
        ///     FR = 1 - (TS / TP)
        /// } else {
        ///     FR = approximately 0
        /// }
        /// TP > TS means PercentFilled > 50 %
        /// </summary>
        /// <param name="network"></param>
        /// <returns></returns>
        double FarmingRate(Network network)
        {
            //var fd = TP > TS ?
            //    1 - (TP / (TP - TS)) : 0;
            // var fr = 1 / (double)fd; // RFC0012: FR = 1 / FD

            var fr = network.PercentFilled > 0.5m ?
                0 : 1 - network.PercentFilled;

            return (double)fr;
        }
    }

The next thing I am about to try is to model change in the growth rate of vaults in response to fluctuations in farming reward, and changes to the non-vault client growth rate in response to store cost.
It will be hard (not to say impossible) to make a truly realistic representation, but it will at least introduce that influence - in the simulation as well - on the rate of users coming / leaving, which is what the price changes are meant to do.

From another thread:

Oh, btw, to address this part also. I interpret the list of points as requirements, followed by their results. I don’t actually know if you mean that these results are reached only by fulfilling these requirements. But I assume that you mean to say that the current model being worked with here does not reach these results, or for that matter fulfill these requirements.

I disagree, so I’ll comment on that. First the points:

Recap of the variables:
d = Data stored percent
u = Unfarmed coins percent
R = Farming reward
C = Store cost

Why I disagree:

  1. d is the significant factor in this current calculation of C, i.e. storage used. C does not significantly include scarcity. The fact that we want to reach 50 % issued within a reasonable time makes us use the error to influence C, as a way to create a gradually declining net issuance of coins.
    Scarcity is not the point. Reaching the target of 50 % issued coins is the point.
    If you have a suggestion for how we can steer the level of issued coins without including the actual value of issued coins (that which you call ‘including scarcity’), I’d be happy to hear it.
  2. d is the significant factor also in current calculation of R (which is why it is so in C, since C is basically R, adjusted by u).
  3. This is actually done. R is farming reward which is equivalent to what you call payment. FR means farming rate, which is not equivalent to R. I’m not sure I get what you are trying to say with this, please if you can elaborate on the rationale here, the intended results, why specifically by that method.
  4. In the model currently worked with in this topic, the network will be mature before scarcity is an issue. In fact, in all models I’ve looked at so far, actually reaching scarcity within a reasonable time is the real issue.
  5. This particular point seems like a general observation on any model. Nothing to disagree with.
  6. Yeah, I think this is at the core of what is being discussed in the theory for the current model, since post one. It’s all about getting C and R right so as to counteract the potential effects on them as factors change. It’s also about the problem of how to model fiat value change. It is being done indirectly here, by assuming that some characteristics of the network would imply something about the fiat value of the coin. That might be wrong, but it’s part of the exploration.

If you have a more concrete idea you could put it here and I’ll feed it into the simulation. It would be interesting to see.
It only needs to implement FarmingReward() and StoreCost(). So you provide formulas for those two. I’d be able to simulate a couple of years in a few minutes, given that there is not a very large population increase.
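For reference, a minimal skeleton of such a contribution, using the same FarmingAlgo base class as the RFC12 code posted earlier (the bodies are placeholders, to be replaced by your formulas):

    public class MyProposal : FarmingAlgo
    {
        public override Coins FarmingReward(Network network)
        {
            // placeholder: replace with your reward formula, e.g. based on
            // network.UnfarmedCoins, network.PercentFilled, network.Sections.Count
            return network.UnfarmedCoins * Coins.One;
        }

        public override Coins StoreCost(Network network)
        {
            // placeholder: replace with your store cost formula
            return new Coins(Coins.One.Value / 10000);
        }
    }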


Actually, no time is needed. Neither is using parsec consensus events as ticks of a clock necessary.
The sampling is done at every n GETs, where we could set n = 1 if performance is not a problem, or increase it otherwise.
Since this is calculated at section level, and there should be a roughly uniform random distribution of data, thus also of GETs, each section would get a proportional part of the total activity at a roughly average rate.
This depends on the random distribution being roughly uniform though. I have not yet had it confirmed (I have asked about it before) that it will not follow some normal distribution, for example - in which case a certain number of sections would give wildly different values for C and R (among them), and we would also see jitter if using a PID control mechanism.

(There are some complicating factors to consider though, such as: will older sections have older members? In that case, maybe holding a higher proportion of older data, and by that receiving fewer GETs? Maybe something else as well.)

1 Like

I would disagree. The number of GETs and PUTs per temporal duration is required to form the integral and derivative components of the PID controller. Your modeling is essentially a ‘P’, or proportional, component of the PID. The other question that comes to mind is optimization of the controller. The farming rates and PUT cost need to be modified/updated by the network with respect to its objectives. The main objectives that come to mind are maximum growth for minimum cost.

1 Like

Nopes. The integral part does not need time at all; it just needs the previous errors.
The derivative part could use time - some implementations do - but it is not at all necessary. Every GET is the tick. So the dt is 1 GET. If sampled every n GETs, it is n ticks.

If we assume a uniform distribution of xor addresses, then a uniform distribution of data and of GETs follows, and the GET frequency will be averaged out over sections, forming a quite steady tick.

Now, I have often wondered if it is not rather a normal distribution on the xor addresses. It would still work, but with more variation.
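To make that concrete, here is a sketch of a discrete PID step where the time base is simply the number of GETs since the last sample (names and gains are illustrative only, not from any vault code):

    class GetTickPid
    {
        const double Kp = 0.5, Ki = 0.05, Kd = 0.1; // illustrative gains
        double _integral;
        double _previousError;

        // error = setpoint - measured value; getsSinceLastSample = n (>= 1)
        public double Step(double error, int getsSinceLastSample)
        {
            _integral += error * getsSinceLastSample;   // I: accumulates previous errors
            var derivative = (error - _previousError) / getsSinceLastSample; // D: change per GET
            _previousError = error;
            return Kp * error + Ki * _integral + Kd * derivative;
        }
    }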

1 Like

Then how would the PID adjust for when GETs are ramping up fast, such as huge spikes in traffic?
GETs alone are not a sufficient tick tock.

1 Like

Well, I guess that’s true. One other event then, to get GETs relative to that event. They should preferably not be correlated. But a small correlation would still work.

EDIT: No, I take that back :smile: just woke up and did not have coffee yet. We’re not controlling the GETs here. So if GETs ramp up, it is still the change in whatever value we are monitoring that we look for. So the unit of t is 1 GET.
The value we monitor is x. So dx/dt is the derivative, and it will be displayed at maximum granularity of ‘time’. (Unless we only sample every n GETs - then we pass in n as the time passed.)

Do you have any idea specifically what would be monitored and what the PID output would be fed into?

1 Like

Yes, relative to the tick tock of parsec consensus events.

But we are. The rate of GETs and PUTs is a measure of network popularity/use by clients. The network needs to grow to survive. GET rewards and PUT costs are the incentives it provides to users to achieve maximum growth.
If we come up with some exponential function as a targeted network growth rate, then a classical pid controller would manipulate the pricing and reward rates in real time to minimize the error between the current network growth rate and the targeted growth rate.

I may have gone too far with the pid controller analogy. There are many ways the pricing/reward algos can be done to try and achieve optimal performance given multiple objectives. I just thought that good place to start drawing inspiration was the classic textbook pid controller.

2 Likes

Not having read too much here (sorry, very busy lately) - only saw something about PID and missing parts - just wanting to throw in that if the thing you want to control has integrating properties, you don’t necessarily need an I part for eliminating remaining deviations. The other question would be: what harm does it do to have a remaining deviation? And just getting a non-oscillating P controller (or pure I) should be way easier than including all 3 parts … We’re talking about a crazily complex system here…

3 Likes

Okay, well that would be one thing we could work with. But GETs cannot be the SetPoint or process variable, because we can’t control them - only weakly and indirectly through PUTs (which would supposedly lead to new content that attracts GETs).

What would encourage or discourage GETs? Hard to control that. What is even a target value for GETs?
PUTs are easier to control, by increasing and decreasing C.

But growth would be measured in client and vault count I think, as well as provided storage size and used storage size.

Node count we can control on one end (upper end). Storage size as well. Not on the lower end.
It seems to me that it is very hard to have a SetPoint in the form of a growth rate or size of the network.
One thing I have discovered is that to achieve balance of, for example, storage used, several things must balance:

  • PUT rate (in terms of 1Mb chunks for simplicity)
  • New nodes rate
  • Storage size per node
    The two latter simplify into:
  • Storage added / removed rate

If these do not eventually match over time, the storage used will not settle at 50 %, and C and R will be affected accordingly.
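As the simplest possible illustration of that balance: for storage used to hold steady at 50 %, total capacity must at all times be twice the used storage, so capacity has to be added at twice the rate at which PUTs (times any replication factor) consume it. Any persistent mismatch pushes the filled percent away from 50 % and drags C and R along with it.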

In my simulations the hard part is to reach that balance. I have to model a behaviour that adapts to price changes and supplies the right amount of new storage to match the PUT rate, and vice versa - in a way that seems realistic, like a market. In reality we can only trust the market to do this for us. But if the market is erratic, or dysfunctional, the system will spiral off. And it could give self-reinforcing effects that break it. That, I see as something that must be avoided. The system must be resilient to it. Whether that is even possible, I don’t know yet.

I think it was a great idea for getting inspiration or variation in the thinking around this problem. I agree that it might not be optimal or useful here, because I don’t yet see how it would be used.

Am a bit distracted here so a bit short / incomplete.

3 Likes

PtP rewards and PtC rewards. Apologies, I started thinking off topic and out of scope. Lately I see farming rewards (PtF), producer rewards (PtP), and consumer/client rewards (PtC) as a package deal for optimal network growth…

True, we cannot control the GET rate directly, but rather indirectly and only weakly - but it is a process input. True that things like vault count and free space are the primary variables, but if a trend emerges where GET rates reduce, then there is a problem that the network will need to adapt to. Maybe this is more of an edge case, but the scenario should not be overlooked.

One simple way to handle reduced GETs is to implement the recently proposed ‘Pay on PUT’ reward method. Pay on PUT and pay on GET can work together to keep the farmers happy at both edge cases (99:1 PUT/GET ratio and 1:99 PUT/GET ratio). It reduces the need for the network to dip into its savings account to handle imbalances.

2 Likes

(Edits made to update for an error in the data)

Next iteration

Each iteration of models will be given a sequential number, and we will simply call them Version 1 .. n. The first proposal was Version 1.

Version 1

Results from v1 simulations are available online in this excel.

After days of exploring the area around where the needle was dropped on the map, I have gathered some insights about that specific solution which make me believe that there is some fundamental problem with it. So, by that, it is time to move on to some other location on the map.

Insights

It was interesting to try out what effect these interrelations would have on the system. Digging down into this model, and working on implementing the simulation, gave a better understanding of the various parts. The untried combinations of variables and tunings of values are still vast; however, as initially suspected, one of the problems was that there were so many variables included, which led to difficulties in tuning and - as it seems - difficulties in reaching stability of the system.
For that reason, moving on to another version, we want to try something more lightweight.
I do not rule out the usefulness of some iteration of Version 1, but I feel I want to continue the exploration, and perhaps move back in this direction at some other point in time.

Version 2

(Results from v2 simulations can be found online in this excel.)

Browsing earlier suggestions, I decided to implement a suggestion for StoreCost from way back in 2015.
For FarmingReward, I did a modification of RFC0012. The reason I did not take RFC0012 directly is that I have not been able to reproduce a random success rate - from the modulo of a random hash and the FarmingDivisor - that both corresponds to the RFC and gives a result that does not blow up. I can discuss this topic further if someone is interested. But basically, since coins are now divisible, all I needed was the sought success rate, by which I could just multiply, so as to get the equivalent of the probabilistic reward.

So, let’s look at the modified R.

First, a recap of variables:
s = Sections count
f = Filled storage percent (formerly d)
u = Unfarmed coins percent
R = Farming reward
C = Store cost
b = nanosafes per coin

FarmingReward R

The reward is composed as follows:

R = (u * f * b) / s

And the code:

public override Coins FarmingReward(Network network)
{
    // R = (u * f * b) / s
    return network.UnfarmedCoins * network.PercentFilled * (1 / (decimal)network.Sections.Count) * Coins.One;
}
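As an illustrative plug-in of numbers (not taken from the simulation): with u = 0.85, f = 0.30, b = 10^9 and s = 100, R = (0.85 * 0.30 * 10^9) / 100 = 2 550 000 nanosafes, i.e. 0.00255 safecoin per rewarded GET.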

Farming reward in RFC0012 includes f in calculation of FarmingRate, and then does a probabilistic execution based on a random hash and the inverted FarmingRate (i.e. FarmingDivisor), and then another probabilistic execution by checking if the coin exists or not, so basically based on u.

The difference now is that we have replaced the first probabilistic part with a success rate that is proportional to how much storage is used. They are not equivalent, but RFC0012 does base this part on the same variable. However, since I was not able to properly estimate the statistical outcome of that solution, this change was introduced instead.
The second probabilistic part is equivalent; it gives a success rate based on the percent of unfarmed coins, which over time should yield about the same results as trying to create a coin at an address not already existing.
In addition to these, the element of network size has been included, so as to give a gradual decrease of R as the network grows. Simply, the reward is divided by the number of sections. The motivation for this is the same as in Version 1.

StoreCost

This is an exact implementation of a suggestion made by @Seneca in 2015: Early demand, and effect on storage costs, after launch - #24 by Seneca

The following is an excerpt:

  1. Close groups must track the total amount of safecoins issued as rewards, and must track the total amount of safecoins absorbed from PUTs
  2. Based on the amount of issued safecoins and the total amount of safecoins in circulation, a target figure of total amount of safecoins absorbed can be computed
  3. On every PUT, if the actual amount of safecoins absorbed is lower than the target figure, the PUT cost is increased, else if the actual amount of safecoins absorbed is higher than the target figure, PUT price is decreased.

Let’s put it together. Steps 1 and 2:

I = total SafeCoins issued
A = total SafeCoins absorbed
TA = target total SafeCoins absorbed
S = supply of SafeCoin (0.0-1.0)

TA = I * S

S makes sure that the rate of increase in SafeCoin supply (i.e. inflation) tapers off as we approach the cap. At the cap, S == 1.0, so then the target total SafeCoins absorbed is exactly equal to total SafeCoins issued. Since there may be times when the farming rate algorithm suddenly has to increase rewards, we probably want to keep a buffer of reserve SafeCoins for such times. If we want to keep 10% of SafeCoins in reserve, the formula becomes:

TA = I * (S + 0.1)
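As a purely illustrative example: with I = 600 M safecoins issued and S = 0.25, the target becomes TA = 600 M * (0.25 + 0.1) = 210 M safecoins absorbed.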

Step 3:

MB/SC = Megabytes bought per SafeCoin

if (TA > A) {            //fewer SafeCoins have been absorbed than we want
    MB/SC--;             //So increase the PUT price to start absorbing more of them
} else if (TA < A) {     //More SafeCoins have been absorbed than we want
    MB/SC++;             //So decrease the PUT price to start absorbing less of them
}

And the code implementation for this is:

public override Coins StoreCost(Network network)
{
    // Mapping to the proposal (as I read it): I = network.TotalFarmed (coins issued
    // as rewards), A = network.TotalPaid (coins absorbed from PUTs), and
    // S = network.CoinsSupply (0.0-1.0). TA = I * (S + 0.1), i.e. a 10 % reserve buffer.
    var targetTotalSafecoinRecycled = (network.CoinsSupply + 0.1m) * network.TotalFarmed;
    if (targetTotalSafecoinRecycled > network.TotalPaid)
        --_chunksPerSafecoin;   // fewer coins absorbed than target: raise the PUT price
    else if (network.TotalPaid > targetTotalSafecoinRecycled)
        ++_chunksPerSafecoin;   // more coins absorbed than target: lower the PUT price
    return new Coins(Coins.One.Value / _chunksPerSafecoin);
}

where _chunksPerSafecoin is initialised to 11 134 chunks per safecoin, i.e. about 89 800 nanosafes per MB (based on preliminary voting results in Polls: How much will you spend? How much storage do you need? etc)

You might spot the constant 0.1 on the first line in the code block; this says that we are aiming for a 10 % buffer of unfarmed coins. It is from the proposal, but we will also try this with the previously stated goal of 50 % unfarmed kept as buffer.

Here is the rest of the post:

A great benefit of this approach is that we actually have control over inflation now. Unlike in BTC where the inflation rate is a function of time (block count), with this algorithm the inflation rate is a function of usage of network resources. More usage (growth of the network) increases the inflation rate, less usage decreases the inflation rate.

Since we start with 30%(?) of SafeCoins already in existence, I should be initialized at 0.3 * 2^32, and A should probably be initialized so that TA == A where S = 0.3.

MB/SC can be initialized at a guesstimate number; the algorithm would quickly correct it to the right value.

Method

Initial values

InitialUsers: 5000
TotalSupply: 4294967296:0
InitialSupply: 644245094:367787776
InitialUserCoins: 128849:18873557
Unfarmed: 3650722201:632215000
InitialUserChunks: 100000
InitialUsersPerVault: 3
ReadWriteRatio: ReadWriteRatioNo3
UsersPerVaultRatio: UsersPerVaultRatioNo3
ActionsPerUserDay: 100
GrowthRate: DemandBasedNo6
FarmingAlgo: RFC12Seneca
CachedFarmingAlgo: True
VaultSize: 500000
DaysSimulated: 3650

Simulation code

    public void Start()
    {
        var days = Parameters.DaysSimulated;
        var actionsPerUserDay = Parameters.ActionsPerUserDay;
        var growthRate = Parameters.GrowthRate;

        var sw = new Stopwatch();

        Report(-1, 0);

        for (int i = 0; i < days; i++)
        {
            sw.Restart();

            var nodeCount = _network.TotalNodeCount;
            var newVaults = (int)(nodeCount * growthRate.GetRateFor(i, _network));
            for (int j = 0; j < newVaults; j++)
                _network.AddVault();

            var totalVaults = nodeCount + newVaults;
            var usersPerVault = Parameters.UsersPerVaultRatio.GetRatioFor(i, _network);
            var totalUsers = (int)(usersPerVault * totalVaults);

            Parallel.For(0, totalUsers, s => Action(i, actionsPerUserDay));

            sw.Stop();
            
            Report(i, sw.ElapsedMilliseconds);

            TryResetCache();
        }

        Output();
    }

and

    void Action(int day, long actionsPerUserDay)
    {
        var ratio = Parameters.ReadWriteRatio.GetRatioFor(day, _network);
        if (ratio > StaticRandom.NextDouble())
            _network.Get(actionsPerUserDay);
        else _network.Put(actionsPerUserDay);
    }

Market model

Growth rate

public class DemandBasedNo6 : GrowthRate
{
    readonly UsersPerVaultRatioNo3 _usersPerVaultRatio = new UsersPerVaultRatioNo3(3);
    const int year = 365;

    public override double GetRateFor(int day, Network network)
    {
        var d = (double)network.PercentFilled;
        var f = Math.Pow(d + 1, 2) - 1;
        var disallowMultiplier = d * f;
        return disallowMultiplier * GrowthRateMultiplier(day, network) * DailyRate(day);
    }

    double GrowthRateMultiplier(int day, Network network)
    {
        var c = network.StoreCost();
        var r = network.FarmingReward();
        var t = (double)(c / r);
        var m = 1 - t;
        var u = _usersPerVaultRatio.GetRatioFor(day, network);
        return day >= 365 ? m * u : m * (365 - day);
    }

    double DailyRate(int day) => YearlyRate(day) / year;
    double YearlyRate(int day)
    {
        if (365 > day) return 0.20;
        else if (730 > day) return 0.16;
        else return 0.12;
    }
}

Users per vault

class UsersPerVaultRatioNo3 : UsersPerVaultRatio
{
    readonly InitialRatioTimeChangeNo1 _ratioTimeChange;
    readonly ReadWriteRatioNo3 _demand = new ReadWriteRatioNo3();

    public UsersPerVaultRatioNo3(double initialRatio)
        => _ratioTimeChange = new InitialRatioTimeChangeNo1(initialRatio);

    public override double GetRatioFor(int day, Network network)
    {
        var c = network.StoreCost();
        var r = network.FarmingReward();
        var demandWeight = (double)(r / c);
        var g = GrowthRateMultiplier(day, network);
        return demandWeight * g * _ratioTimeChange.GetRatio(day);
    }

    double GrowthRateMultiplier(int day, Network network)
        => 1 - _demand.GetRatioFor(day, network);
}

class InitialRatioTimeChangeNo1
{
    readonly double _initialRatio;

    public InitialRatioTimeChangeNo1(double initialRatio)
        => _initialRatio = initialRatio;

    public double GetRatio(int day)
    {
        if (180 >= day) return _initialRatio;
        else if (day > 180 && 365 > day) return 2.2 * _initialRatio;
        else return 3.3 * _initialRatio;
    }
}

Read-write ratio

class ReadWriteRatioNo3 : ReadWriteRatio
{
    public override double GetRatioFor(int day, Network network)
    {
        var c = network.StoreCost();
        var r = network.FarmingReward();
        var w = (double)(c / r);
        var t = w * Math.Pow(1 + w, 2);
        return Sigmoid(t);
    }

    double Sigmoid(double x) => 1 / (1 + Math.Exp(-x));
}

Results

Data points can be found online in this excel.

Discussion

Storage

Filled storage percent f rose quite sharply to near 45 % and then stayed roughly there all through the 10 years simulated. A slight decline was seen from that point, which is not in accordance with the target of balancing around 50 %, so this indicates that the models need further work.

Market

Even though the models used for GrowthRate, UsersPerVault and ReadWriteRatio had been fine-tuned in Version 1 (so as to model something resembling a realistic market, responding to changes in the price of storage as well as reward), it was surprising to see that the system was very stable already in the very first simulation, with no tweaking done. It seems likely this can be attributed to the previous work on fine-tuning the models.

Growth rate

A growth rate of 12 % per year, after passing the very early stages of the network (2 years), seems like a reasonable rate. This would be roughly the growth in internet users seen between 2005 and 2007.
Later stages of the network (>10 years) would probably, much like internet adoption, see an additional decline in the yearly growth rate.

Clients

The number of clients rising so sharply, and then quite steadily falling, is a result of the initially very cheap store cost - as determined by the ratio of C to R (this is how the model expresses how cheap C is) - and a gradual evolution of the network with lots of uploads, in combination with a modeled decline in the users-per-vault ratio.
One could argue that the decline in users per vault is not realistic. The idea has been that it is due to increased adoption of the practice of running a vault. It would seem that there should instead be a transition of that initial mass of clients into vaults, which is not what we see. Instead, these clients are, in the later stages of the simulation, no longer users of the network. A maybe far-fetched after-the-fact explanation could be that these are users who took advantage of the very cheap storage costs to upload a lot of data, but who later do not use the services of the network to any larger extent. Rather, they just keep their backups for some undefined point in the future.

Store cost

This number is quite steady, and all the time significantly below the farming reward. If we assume the farming reward to be a baseline indicator of safecoin fiat value (meaning that as it decreases, it indicates that the fiat value increases), we can conclude that Version 2 - as intended - gives an initially very cheap store cost, and sees a gradual increase towards the real market value.

Farming reward

An initial spike up to 6 million nanosafes per GET (0.006 safecoin) at day 7 after launch is followed by a sharp drop to 218,000 nanosafes after about 323 days - a 96 % drop in less than a year. We then see a surge, which is the result of the market model, after which a slower, steady decline is seen until the end of the simulation.

Unfarmed coins

Also in this model, it seems it will take many, many years before we get close to 50 %. This simulation used 10 % as the buffer target, which would take even longer to reach.

Vaults

It would seem that 810,000 vaults in 10 years is a bit of a pessimistic estimate of growth.
Previous simulations have gone up to a maximum of 24 million vaults (and 1.2 billion clients) in only 2 years. Simulations take much longer at that size, and reaching 10 years with such a large population would probably take weeks, maybe even months or more with continued growth. It is hard to say what a realistic adoption rate is. But it seems fair to believe the number is somewhere between these two values.

Further work

Improvements on the models of user behavior and the market are definitely needed; these are very primitive models. Preferable would be models based on various observations, data sources and perhaps existing work in a similar domain. It would also be desirable to try various levels of irrationality and dysfunction in the market, so as to determine the resilience of the economy model.

9 Likes

So after ten years (the current lifetime of bitcoin) it looks like there’s still around 85% unfarmed coins… so only 5% are farmed in 10 years (10% ICO + 5% farmed)… Am I reading the chart correctly? Seems like a very low reward rate.

Would be good to see a chart with total rewards issued as well as unfarmed coins remaining. This would give some visual indication of the recycling concept.

Interesting investigations thanks for posting!

5 Likes

Might velocity of tokens be useful somehow?

From that you get this equation for the price of a coin

C = TH / M

C = price
T = transaction volume
H = average holding time per coin before making a transaction 
M = total number of coins

The velocity of the coin is inversely proportional to the value of the token, i.e. the longer people hold the token, the higher the price of each token. This is intuitive: if the transactional activity of an economy is $100 billion (for the year) and coins circulate 10 times each over the course of the year, then the collective value of the coins is $10 billion. If they circulate 100 times, then the collective coins are worth $1 billion.

2 Likes

Actually, there is a simple mistake here :confused: , which is why we see 93.333 % at the start. It should say slightly more than 85 % at the start. I didn’t check that number, as I thought it was due to initial uploads, but far from it :frowning_face:.
I fixed it and ran again, and placed the results in a new sheet in the excel. The previous one is kept (called Faulty) for reference.
OK, so that’s good, because these results are better; I was a little bit confused and concerned about that ratio. Not much changed in the results, other than the unfarmed percent now being down to 83.37 % after 10 years.

(I updated the post above with the correct data, as to minimize confusion for readers. Also including the requested data on rewards issued.).

Here are the updated graphs, including data about total paid and total farmed, as well as a chart of a single vault’s (500 GB) revenue per day for the first 30 days. (Complete data is found in the excel.)

So, it starts out with 15 % from the ICO (I don’t have the details, but it seems there were some additional 5 % issued). So we start at 85 % unfarmed. Then the 5 k users upload their 100 GB worth of data, at a starting price of almost 0.09 safecoin per GB (which is due to _chunksPerSafecoin being initialised to 11 134 chunks per safecoin - about 89 800 nanosafes per MB - a value I extracted from the poll here on the forum, on what users are prepared to pay per GB for their first data on the network).

A vault that joins on day one will earn about 2,416 safecoins / TB over 10 years (of which 2/3 is earned in the first 30 days).

I also think that the reward rate seems a bit low, mostly because we only net farm 1.63 % of all safecoins during the first 10 years (with a very low population though).
I don’t know what a desirable rate would be, but since the rate of net farming would supposedly decline as we get closer to 50 %, it seems we can allow for a somewhat speedier initial issuance. While that might affect inflation, it would also motivate farmers to join, I think?
If we consider the C / R ratio an indicator of how cheap storage is, it would perhaps also indicate that storage is even cheaper initially, which would presumably also attract more data uploads. (Not entirely sure about that assumption, i.e. the C / R ratio as an indicator of how cheap storage is, since we also increased inflation, but that is how the market is modeled here anyway.)

So, I’m just going to guesstimate a desired net issuance of 10-20 % in 10 years? The next 10-20 % might take 20 years.

Another thing is that I wonder if we really need 50 % unfarmed, considering the scarcity and the declining farming rewards. 10 % might be a plenty good buffer for the network (as per the proposal by Seneca) - or maybe 20 %. But 50 % seems too much, I think.

It might. I was actually going to try that idea out later, as it was also mentioned by @digipl back in 2015: Safecoin VS SAFE Storage - #31 by digipl, and again in the same topic as Seneca’s store cost idea, Early demand, and effect on storage costs, after launch - #34 by digipl, making the point that the two ideas are actually similar.

I’m not sure though if we will be able to implement it that way. This part … :
H = average holding time per coin before making a transaction
… seems like it could be a bit tricky to track.

5 Likes

Version 2, iterations 2-5

s = Sections count
f = Filled storage percent ( formerly d )
u = Unfarmed coins percent
R = Farming reward
C = Store cost
b = nanosafes per coin

Increasing Farming reward

In the first iteration (disregarding the one with faulty data), we saw a low net farmed after 10 years: only 1.63 %.
It was noted that the farming rate seemed too low.

An approximation of desired net farmed was set to 10-20 %.
The calculation of R was then changed slightly, so as to achieve this.

From iteration 1 (i1) we had:

R = (u * f * b) / s;

To increase R we make the divisor smaller, while still keeping it tied to s; so in iteration 5 (i5) we make it a function of s:

q = ln(s)^3.5
R = (u * f * b) / q;

in code:

var divisor = (decimal)Math.Pow(Math.Log(network.Sections.Count), 3.5);
return network.UnfarmedCoins * network.PercentFilled * Coins.One * (1 / divisor) ;
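As an illustration of the effect at scale (numbers mine): at s = 10 000 sections, q = ln(10 000)^3.5 ≈ 2 372, roughly a quarter of the divisor s = 10 000 used in i1, making R about four times higher at that size.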

Since the market model is designed such that a higher R to C ratio leads to an increase in vault population, these simulations take a longer time to complete.

Discarded iterations

Iterations 2-4 contained optimizations that did not work out well.
The optimizations consisted of batching all users’ reads and writes of a day, without recalculating the costs after each individual user. (It had previously been optimized so that the actions of a single user were batched per day, without calculating the cost for each single action.)
This, however, turned out to give an initial store cost of 89 k nanosafes instead of 61 k. Some, but not all, curves looked similar afterwards. (These sheets are kept for reference in the doc, names appended with (optm).)
No attempt to solve this has been made yet, but one is desired, since the time to simulate 10 years increases from about 30 minutes (less than 1 M vaults) to a few hours when we see populations of several million vaults.

Results

(Results from v2 simulations can be found online in this excel.)

Iteration 5

Comparisons of iterations 1 and 5

Discussion

Accrued safecoins per vault are lower in i5 (61.6 %), although total farmed is much higher (7 times). This is due to the much higher number of vaults.
Total vault count is 6.87 M in i5, which is almost 8.5 times higher than i1, while client count is 18.5 times higher. While i1 had a pessimistic outcome of usage, i5 usage is perhaps best described as conservative.
Storage percent filled is slightly higher than in i1, at 49 % instead of near 48 %. The curve is otherwise more or less identical, still having a steady but small decline after reaching about 50 %.
Interestingly, store cost is almost identical between i1 and i5.

The main goal of this iteration was to speed up net farming, and this was achieved, as i5 reached 74 % unfarmed - a net farming of almost 11 % of coins, compared to 1.63 % for i1.
This is not mainly a result of the higher farming reward (+24 %), but of the market model, which assumes store cost to be cheaper in fiat terms when the ratio C / R is smaller, as in i5. (Edit: To clarify, the outcome is affected three-fold: the cheaper C is set to trigger higher client on-boarding, it is also set to decrease the read-write ratio of clients, and ultimately it is set to increase the growth rate of vaults, so as to match the increase in clients with the increased write ratio.)

In other words, what we have seen is that it is the increased user base that gives the higher net farming. That the user base is larger in i5 than in i1 is merely a result of this specific market-model design. We would see a similar increase in net farmed if we, for example, increased users directly (instead of indirectly via a slightly increased farming reward).

8 Likes