(This post was originally in RFC 57 Safecoin revised topic, but since it is not about RFC 57, it gets its own topic).
As you might already know, the temporary farming algorithm proposed for test-networks is not suited for the live network. It is good enough for the MVP and tests and so on, but I’ve always wanted to delve into real attempts at devising an elegant and intelligent rewarding system.
An intelligent rewarding system could perhaps be described like this:
- Gives the right incentives
- Has the right amount of buffering capacity for various events
- Has compensatory effects that set in at just the right pace
Here I start out somewhere, drop the needle on the map, and work from there.
Some things from the original Safecoin RFC are brought back in, such as recycling of coins, and some are based on the newer RFC 57.
I think that what we need to do is enable the network to easily adjust reward R and store cost C as outside markets decide on a fiat value of safecoin. I think that increasing the weight of data stored d improves the responsiveness in this regard, which should make it easier for the network to adapt to such market valuation changes, as it can then better incentivise people to add or remove storage.
Exploration of a live network farming reward
d = data stored, as percent of available storage
u = unfarmed coins percent
s = sections count
m = 1 / s
n = neighbour sections' median vault count, in [1, 200]
x = 1 / n
p = ds + 1
g = ln(p^3.2)
R = gxmd(d + u + 1)
C = 2R
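Spelled out in code, the two formulas look like this (a Python sketch for illustration only; the function names are mine, the symbols are those defined above):

```python
import math

def farming_reward(d, u, s, n):
    """Farming reward R per GET, per the draft formula R = g*x*m*d*(d + u + 1).

    d: fraction of available storage used   [0, 1]
    u: fraction of coins still unfarmed     [0, 1]
    s: number of sections in the network
    n: neighbour sections' median vault count (defined on [1, 200] in the post)
    """
    n = min(max(n, 1), 200)   # clamp n to its defined range
    m = 1 / s                 # inverse section count
    x = 1 / n                 # inverse median node count
    p = d * s + 1
    g = math.log(p ** 3.2)    # natural log, i.e. g = 3.2 * ln(d*s + 1)
    return g * x * m * d * (d + u + 1)

def store_cost(d, u, s, n):
    """Store cost C per PUT; the draft simply sets C = 2R."""
    return 2 * farming_reward(d, u, s, n)
```

For example, `farming_reward(1, 1, 1, 1)` gives the theoretical maximum of about 6.654 safecoins discussed further down.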
The farming reward is paid out on GETs.
PUT payments go directly to the section of the associated data. This is a recovery of farmed coins, into the unfarmed balance. (It is the first part of having the network keep a buffer for balancing dynamics).
This requires a simple and minimal algorithm, for rebalancing between sections. It can be done in many ways, but for example:
Whenever a section cannot afford to reward a GET, it will request that the N/3 of its neighbours with the highest unfarmed balances each send 1/4 of their unfarmed balance (where N = neighbour count).
This would supposedly be a relatively rare operation, and have a low impact both at execution and on the general workload.
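A minimal sketch of such a rebalancing step, assuming sections are represented only by their unfarmed balances (all names here are illustrative, not from any actual vault implementation):

```python
def rebalance(poor_section_balance, neighbour_balances):
    """When a section cannot afford a GET reward, ask the N/3 neighbours
    with the highest unfarmed balances to each send 1/4 of theirs.

    Returns (new_balance, transfers), where transfers maps neighbour
    index to the amount it sent.
    """
    n = len(neighbour_balances)
    k = max(1, n // 3)  # N/3 neighbours, at least one
    # indices of the k richest neighbours
    richest = sorted(range(n), key=lambda i: neighbour_balances[i], reverse=True)[:k]
    transfers = {i: neighbour_balances[i] / 4 for i in richest}
    return poor_section_balance + sum(transfers.values()), transfers
```

With six neighbours holding balances [100, 40, 10, 0, 60, 20], the two richest (100 and 60) would each contribute a quarter, topping the poor section up by 40.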
The farming reward
R = gxmd(d + u + 1) can be tweaked in endless ways to achieve the desired properties; this here is a first draft.
Same goes for C = 2R, which was additionally just an arbitrary value. Most consideration went into devising R.
Unfarmed coins percent u
The fewer there are left, the lower R should be.
Data stored percent d
The higher the percentage filled, the higher R should be.
Sections count s
The more sections there are, the more valuable the network (and thus its currency) is, and the lower R becomes. Due to the rules of split and merge (bounded range for section size), the number of sections is a good indicator of network size.
Neighbour median node count x
Approximates node count per section. R increases as the approximated number of nodes per section decreases.
Reward R
R is the reward unit for a rewardable operation. Currently the only such operation is a GET, and one GET is rewarded with R safecoins.
Store cost C
A PUT operation will return 2R to the network, and thus increase unfarmed coins u and data stored d.
This reward algo assumes that network size is an indicator of Safecoin value. When few machines are running, we assume there is little economic margin associated with running one. Many machines running would indicate that it is very attractive to run a machine, and thus we assume that it is lucrative, with high margins.
Additionally, we assume that a larger network is an indicator of increased adoption and breakthrough of the technology, and evidence of its larger usefulness and value. We don't need to know in what way; the sheer increase in number indicates that it is more useful in any number of ways, collectively making it more valuable to society, and thus its currency more valuable.
This metric is essential to include in the farming algorithm, to properly balance the payout with regard to the inherent value of the currency, and also to maintain the value of unfarmed coins. (This is the other part of the network keeping a buffer of value for various dynamics with regards to members and performance, instead of depleting and losing that influence.)
Median section node count
The variable x acts as a stimulant of security; by this we can motivate an inflow of new nodes when nodes for some reason are leaving the sections. When sections decrease in size (i.e. n decreases), their security also decreases; it is therefore motivated to inversely correlate the reward R with n (R grows with x = 1/n), so as to stimulate new nodes to join.
Weights of u and d
It seems likely that u would stabilize around some value.
That might be a reason to give u a higher weight, so that its smaller fluctuations can still be reflected in the farming reward (and store cost).
Another way to look at it would be that since d would probably at times see larger deviations from any stable point than u does, it is an influence on the network that needs a larger compensatory capacity, hence a greater weight to d.
After trying both, I have for now settled on giving d the greater weight.
Having the store cost use a constant multiplier of 2 is an arbitrary choice for the moment. Much more consideration could be put into the consequences of this, as well as of any other constant or variable used instead.
The behavior of the algorithm
R is less responsive to variation in u than to variation in d.
The influence of s is large at very low values, and small at very large values. This translates into a maximum of R = 6.654 safecoins when section count s = 1, node count n = 1, and d = u = 1 (the single node is full and no coin has been farmed).
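A quick numeric check of that maximum, assuming the formula as given above:

```python
import math

# R = g*x*m*d*(d + u + 1) at s = 1, n = 1, d = u = 1
s, n, d, u = 1, 1, 1.0, 1.0
m, x = 1 / s, 1 / n
g = math.log((d * s + 1) ** 3.2)   # 3.2 * ln(2)
R = g * x * m * d * (d + u + 1)
print(round(R, 3))  # → 6.654
```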
Day of launch
Let's see what vaults could perhaps expect to earn on the day of launch with this function.
About 10% of coins already issued gives u = 0.9. The number of vaults we expect to be roughly the number of members on the forum, which now is around 5,000. That gives s = 50, as we set n = 100. Since it's early, we set d = 0.2.
This gives R = 0.00064 (640,000 nanosafes).
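That figure can be checked numerically (the parameter values are the launch-day assumptions stated above):

```python
import math

# Assumed launch-day parameters: u = 0.9 (about 10% of coins already issued),
# s = 50 sections, n = 100 nodes per section, d = 0.2 of storage used.
s, n, d, u = 50, 100, 0.2, 0.9
g = math.log((d * s + 1) ** 3.2)   # 3.2 * ln(11)
R = g * (1 / n) * (1 / s) * d * (d + u + 1)
print(R * 1e9)  # reward per GET in nanosafes, roughly 640,000
```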
With safecoin at $1 and a 1 TB vault filled to 20% with 1 MB chunks, perhaps this access pattern:
10% of stored data gets 1 GET / day
1% gets 10 GETs / day
0.1% gets 100 GETs / day
For a full 1 TB that is 300k GETs per day. With 20% filled, that is 0.2 * 300k = 60k GETs per day.
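The arithmetic behind those GET counts, for the record (1 TB at 1 MB per chunk is 1,000,000 chunks):

```python
chunks = 1_000_000                     # 1 TB at 1 MB per chunk
full_tb_gets = (chunks // 10) * 1 \
             + (chunks // 100) * 10 \
             + (chunks // 1000) * 100  # 10%, 1%, 0.1% tiers -> 300,000 GETs/day
daily_gets = full_tb_gets * 20 // 100  # 20% filled -> 60,000 GETs/day
print(full_tb_gets, daily_gets)
```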
This is about 38.2 safecoins.
The reason to give such a high reward initially is to maximize early growth of the network, which is key to ensuring its security. If safecoin is at $10 at that time, it means an even higher inflow of people starting vaults.
If we say that the first week will see a doubling of vaults every day (perhaps optimistic, but not entirely impossible), going from 5,000 to 640,000 vaults. That gives, for a 1 TB vault at 20% filled:

| Day | Safecoins per day | Total |
|---|---|---|
| 1 | 38.2 | 38.2 |
| 2 | 24.6 | 62.8 |
| 3 | 15 | 77.8 |
| 4 | 9 | 86.8 |
| 5 | 5.1 | 91.9 |
| 6 | 2.88 | 94.78 |
| 7 | 1.62 | 96.4 |
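Assuming the section count doubles each day from s = 50 while d = 0.2, u = 0.9, n = 100 and the vault's 60k GETs per day stay fixed, the figures can be approximately reproduced (small differences against the numbers above come down to rounding):

```python
import math

def reward(d, u, s, n):
    """R = g*x*m*d*(d + u + 1) with g = ln((d*s + 1)^3.2), x = 1/n, m = 1/s."""
    return math.log((d * s + 1) ** 3.2) / (n * s) * d * (d + u + 1)

d, u, n, daily_gets = 0.2, 0.9, 100, 60_000
rows, total = [], 0.0
for day in range(1, 8):
    s = 50 * 2 ** (day - 1)               # sections double daily: 50, 100, ... 3200
    earned = daily_gets * reward(d, u, s, n)
    total += earned
    rows.append((day, round(earned, 2), round(total, 2)))
for row in rows:
    print(*row)
```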
(Now, in reality, these numbers would be affected by the decreasing u as coins are farmed, and the decreased d as new storage is added, but we simplify and assume constant d. It gives a rough picture.)
Even if a safecoin is at $10 at the time of launch, it is reasonable I think, as it would give an absolutely insane initial growth of new nodes.
The first week would then give $964 for a 1 TB of storage provided. (In reality, probably less.)
This seems like a very good motivator for network growth.
World wide adoption
With 100 billion nodes, R = 1 nanosafe is reached when d = 70% and u = 50%.
(Expecting world wide adoption to be closer to 100 billion nodes than 10 billion, when considering IoT etc.)
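A numeric check of the 1-nanosafe claim, assuming sections sit at the median size cap n = 200:

```python
import math

# World-scale assumptions: ~100 billion nodes, n = 200 nodes per section
# (so s = 100e9 / 200 sections), d = 0.7, u = 0.5.
nodes = 100e9
n = 200
s = nodes / n                      # 5e8 sections
d, u = 0.7, 0.5
g = math.log((d * s + 1) ** 3.2)
R = g * (1 / n) * (1 / s) * d * (d + u + 1)
print(R)  # on the order of 1e-9, i.e. about 1 nanosafe per GET
```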
When we are storing all data in the world, we probably have a much less intensive access pattern to our vault; let's update it to:
1% of stored data gets 1 GET / day
0.1% gets 10 GETs / day
0.01% gets 100 GETs / day
For a full 1 TB that is 30k GETs per day. With 70% filled, that is 0.7 * 30k = 21k GETs per day.
@Sotros25 made an estimation of some $4k per safecoin at absolute world dominance (link).
That would give 0.000000001 * 21000 * 4000 = $0.084 per day.
About 10 cents per day and TB. Not really exciting.
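The same arithmetic as a check (the $4k valuation is @Sotros25's estimate, not mine):

```python
R_nanosafe = 1e-9                   # reward per GET at world scale (from above)
daily_gets = 30_000 * 70 // 100     # 70% of 30k GETs/day for a 1 TB vault
usd_per_safecoin = 4_000            # @Sotros25's estimated valuation
usd_per_day = R_nanosafe * daily_gets * usd_per_safecoin
print(usd_per_day)  # about $0.084 per day
```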
But then, on the other hand, storage capacity might be much better, so a TB then could be the equivalent of a GB today. Providing the equivalent of today's 1 TB would then mean a PB, which gives $84 per day and PB.
These are mere speculations. But we are not many orders of magnitude off.
The question is how close we need to be, and what will compensate variations to this? Would it happen already with the way this works, or is something additional (or totally different) needed?
- If we let 1 nanosafe be the smallest unit, we would need to go either probabilistic or accumulative on rewards when at 100 bn nodes: only approximately every n:th GET would then actually be paid out as a whole nanosafe.
- Many speculations lead to the desired numbers. What compensatory effects will we see in real life? Will the economy of the farming work out with this, or is something additional/different needed?
- Can R be too small as the network grows? With a network size of 100 billion nodes, 50% unfarmed coins u remaining and 70% used data storage d, we can still reward 1 nanosafe per GET. Would we introduce probabilistic rewarding whenever R < 1 nanosafe?
- Could we simplify and remove node count, and only use section count? Could the decreasing of rewards as sections split encourage unwanted stalling of a split event, would it be too coarse grained to properly respond to network health?
- How should store cost relate to farming reward?