I agree in terms of optimal token economics for the network.
But, if the network creators feel it’s important to ensure early investors / holders’ SNT wealth is distributed over time to those who are farming, it may be considered worthwhile to sacrifice some network growth to achieve it.
Yes. With the doubling of StoreCost, the rate of depletion of the reserve / full dilution of SNT would depend on the velocity of SNT through the system. E.g. as soon as a total of 4.068bn (I think?) SNT have been spent on storage / farmed, the dilution will be complete.
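As a toy illustration of the velocity point (the 4.068bn cap and the per-period spend figures below are assumptions from this discussion, not confirmed network parameters):

```python
# Hypothetical sketch: full dilution depends only on cumulative SNT spent
# on storage, so a higher velocity of SNT through the system depletes the
# reserve proportionally sooner. RESERVE_CAP is the figure guessed at
# above, not a confirmed constant.

RESERVE_CAP = 4.068e9  # total SNT spent on storage before dilution completes (assumed)

def periods_until_diluted(snt_spent_per_period: float) -> float:
    """Periods until cumulative storage spend exhausts the reserve."""
    return RESERVE_CAP / snt_spent_per_period

# Doubling the velocity of SNT halves the time to full dilution:
slow = periods_until_diluted(1e6)  # 1M SNT spent per period
fast = periods_until_diluted(2e6)  # 2M SNT spent per period
assert fast == slow / 2
```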
That shouldn’t ever be necessary, as StoreCost will automatically adjust pricing for network users, and therefore the reward for providing resources, so that the network never fills up too much (if the network is getting too full, StoreCost goes up, so demand for storage is a little lower and rewards to farmers are a little higher, so it should balance out).
That just stops people uploading as much, further hampering the network and putting it out of reach of those with less money, while doing nothing to encourage extra storage to be added.
It doesn’t affect the reward; the reward is the farming rate, which can’t go up once the spare coins are all depleted.
Not really - that mechanism simply finds the market price where users are willing to pay for storage, and farmers are willing to supply storage.
The actual demand for the network’s services, and actual cost of supplying resources to the network are external to the network’s internal economics and can’t be affected by the network, or at least not in the long term.
If, at a given price, people are storing too much data and farmers are supplying too little capacity, the price needs to go up for things to balance, so it should be adjusted. If you wish farmers would provide more storage at a price below their actual willingness to provide resources, you could temporarily subsidise them, but that won’t be a long-term solution. Why not just let the market price be found?
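To illustrate the point about letting the market find the price, here’s a toy adjustment loop. The linear demand and supply curves are invented purely for illustration; they are not network parameters.

```python
# Illustrative only: nudge the price toward the point where storage
# demanded equals storage supplied, and a market-clearing price emerges.

def demand(price: float) -> float:
    return max(0.0, 100.0 - 10.0 * price)  # users store less as price rises

def supply(price: float) -> float:
    return 20.0 * price                     # farmers offer more as price rises

price = 1.0
for _ in range(200):
    excess = demand(price) - supply(price)
    price += 0.01 * excess                  # raise the price when storage is scarce

# Clearing price for these curves: 100 - 10p = 20p  =>  p = 100/30
assert abs(price - 100 / 30) < 0.01
```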
The farming rate is directly related to the StoreCost, which will always be adjusted to ensure sufficient available resources.
From the RFC:
Establishing StoreCost
We’ll make the following definitions related to the numbers of nodes within a single section:
N = total number of nodes
F = number of currently full nodes (those whose last Put request failed because they’re full)
G = number of good nodes = N - F
We want to reduce the cost to store (and hence also the farming reward) when the number of good nodes increases, and also when the proportion of full nodes decreases. To that end, we’ll use the following formula:
StoreCost = 1/G + F/N
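Read literally, the formula is a one-liner. Here is a minimal sketch; variable names follow the RFC’s definitions above.

```python
# StoreCost per the RFC formula, taking the section-level node counts as inputs.

def store_cost(n: int, f: int) -> float:
    """StoreCost = 1/G + F/N, where G = N - F (good nodes)."""
    g = n - f
    return 1.0 / g + f / n

# More good nodes and fewer full nodes -> cheaper storage:
assert store_cost(n=100, f=0) < store_cost(n=100, f=50)
# A section with no full nodes prices storage at 1/N:
assert store_cost(n=8, f=0) == 1 / 8
```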
Currently, I think the Farming Reward is 2x StoreCost, but once the reserve is depleted it’d be equal to StoreCost, though it would still vary according to supply / demand via the StoreCost formula.
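If my reading is right, the reward schedule would look something like this. The 2x multiplier and the cutover at depletion are my assumptions from the discussion, not confirmed parameters.

```python
# Assumed two-phase reward schedule: 2x StoreCost while the reserve lasts,
# falling to 1x StoreCost (recycled payments only) once it is depleted.

def farming_reward(store_cost: float, reserve_remaining: float) -> float:
    multiplier = 2.0 if reserve_remaining > 0 else 1.0
    return multiplier * store_cost

assert farming_reward(0.01, reserve_remaining=1e9) == 0.02
assert farming_reward(0.01, reserve_remaining=0) == 0.01
```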
Here’s where I got it. It says it’s current and not superseded, though it does say it’s fairly simple and will be reviewed based on test net results etc:
Stated by whom, though? I don’t remember any dev or founder explaining the reasoning for the reserve; I’ve only heard opinions of forum users who think a reserve has a place because they believe following the market price would cause problems.
I think the RFC is the best place to look to guide a likely implementation, even if it doesn’t match up with what some community members think should be the case.
Again though, the RFC is clear that things could change, and it’s from August 2019, so who knows what the current thinking of the devs is on this issue.
When a client pays to store or mutate data, the payment will immediately be divided amongst farmers
I can’t currently imagine any benefit of moving away from a mechanism that lets a true market price develop around a decided optimal level of spare capacity.
If farming rewards are ever disconnected from StoreCost, it will lead to an imbalance in the pricing & over / under supply of storage.
So, I quite like the RFC vs the common wisdom on the forum, except for the dilution, which I think may end up being counterproductive for network growth due to it reducing the expected future value of farming rewards.
We will update RFC 0012 where we alter the calculations on sacrificial chunks to that of relocated chunks.
Giving farming rewards as 2x store cost, or even 1x store cost, will never work, since data can be retrieved more than once. Any reasonable expectation is that there will be more GETs than PUTs on a network.
There is, by necessity, a logical disconnect between storing costs and farming rewards. Yes, there is a mathematical connection, since they use many of the same parameters to calculate the amounts. But the logical disconnect is forced by the facts that:
Storing occurs prior to getting.
Storing a piece of data can happen minutes, days, months, or years before that data is read.
Getting the stored data can range from never to millions of times.
The getting of said data will be spread out over time.
Thus the network will receive payments in advance and keep them in a store.
And the network will pay rewards when they are earned, which is not tied to any predictable time period.
And your point from before
I think this makes the most sense to me as a valid reason for the supply dilution; to ensure early investors / holders don’t hog a big quantity of the token supply over time, and those who contribute to the ongoing health of the network eventually get the biggest share.
is extremely important.
Also, a buffer must exist at some point, even if all coins are given out beforehand. Otherwise some farmers will receive zero rewards for GETs. Giving all coins out up front requires that more is received in store costs than is given out in rewards (over the entire network or section).
How could anyone guarantee that more will be received in store costs than is needed for rewards in a given time period? What if storing drops for a period, say due to an event, while people are retrieving more data because of the nature of that event? Are farmers’ rewards going to drop to near zero, or actually zero, at times?
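A toy sketch of that scenario (all numbers invented): a section receives StoreCost payments on PUTs and owes rewards on GETs. Without a buffer, a retrieval spike during a storing lull means rewards go unpaid.

```python
# Hypothetical model: rewards can only be paid from incoming PUT payments
# plus whatever buffer the section holds.

def pay_out(put_income, get_rewards_due, buffer=0.0):
    """Rewards actually paid each period from PUT income plus the buffer."""
    paid = []
    for puts, gets in zip(put_income, get_rewards_due):
        available = buffer + puts
        pay = min(available, gets)
        buffer = available - pay
        paid.append(pay)
    return paid

put_income      = [10, 10, 0, 0, 10]  # storing drops during an "event"
get_rewards_due = [5, 5, 15, 15, 5]   # retrieval spikes during the event

hand_to_mouth = pay_out(put_income, get_rewards_due, buffer=0.0)
well_buffered = pay_out(put_income, get_rewards_due, buffer=100.0)

# Without a buffer, rewards fall short mid-event and hit actual zero;
# with a large buffer, every period's rewards are paid in full.
assert hand_to_mouth[2] < get_rewards_due[2]
assert hand_to_mouth[3] == 0
assert well_buffered == get_rewards_due
```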
The fact that a buffer is needed for these times raises the question of why not keep what is current, with a buffer able to handle decades of massive ups and downs, rather than having the network live hand to mouth.