RFC 57: Safecoin Revised

I think maybe I went too quickly to the mathematics and the model is poor.

The intention was to take one step past the ‘pay double’ idea in rfc-0057. To me, that one extra step was ‘pay half sometimes’. The trouble begins in trying to decide ‘when to pay double or half’. (Tyler’s model is equivalent to saying ‘pay nothing sometimes’, since HM goes to 0 instead of 0.5, which I guess is more sensible when nearly all coins are issued.)

Let’s take a step back and look further into the original rfc-0057 idea of paying double the PUT cost and eventually depleting all coins.

Rewards would still happen forever, but would suddenly halve when all coins are depleted. This is a bit of a shock but would probably not be the end of the network. Some preparation by farmers as the event neared would be enough to get through it. Maybe StoreCost would be unstable for a while as everyone comes to a new understanding.

There are a few paths to consider from here.

One is ‘maybe StoreCost could become very small so rewards would also be very small and it would become very hard to deplete all coins’. I think there’s a lot of merit in pursuing this; it involves changing the StoreCost algorithm but let’s leave it at that for now.

Another path is ‘maybe the extra reward becomes less and less until at the very end it becomes nothing’, sort of like bitcoin. This is a pretty simple change and retains the idea of fully depleting. It has the benefit that the reward won’t stop as a shock but as a gradual shrinking.

Expanding on the ‘one extra step’ idea, still just doing one extra thing but going into more detail about it… where the one extra thing is allowing the network to take coins back for itself.

My assumption is a) the security of the network depends on whether it can manipulate behaviour to its own needs and b) the amount of manipulation the network can do is proportional to the amount of reward it can offer. Maybe that’s not true, and rather than aim for the reward to be very flexible, maybe something else could be, like the StoreCost or maybe the fiat exchange rate. I’m totally open to challenges to this assumption.

These assumptions lead to the idea that if rewards are depleted the network has less ability to stay secure because it can’t be flexible in how much reward is offered during times of stress.

I think the broad idea I’m trying to get at with ‘one extra step’ is to explore whether it’d be ok to never fully deplete the rewards. Have a target for when ‘things are fine’ (maybe target 50% of coins issued) which leads to a reward buffer to use when ‘things are crazy’. The buffer may get as low as only 1 coin issued (so there are heaps of coins to farm) or the buffer may get as high as 2^32 coins issued (so farming should slow right down and consolidate to the most efficient nodes). But if the load stays steady for a while and ‘things are fine’, the network eventually comes back to some ideal amount of reward so the next storm of activity when ‘things are crazy’ can be managed with maximum force of rewarding.

I dunno. Maybe fully depleting is better? I’m not really convinced either way yet.

10 Likes

I think your intuition is good on this one. The network will always need a buffer of coins under its control to account for extreme events. What the optimum setpoint is at equilibrium is anyone’s guess depending on network objectives. The simple fact is that the network will always need to be the biggest whale in the pond so that no single human entity or colluding group could take a majority share of the safecoin economics. If one considers extreme edge cases then the network’s goal will need to be greater than 51%. Transient spikes in activity could reduce the ratio but the network’s goal would always be to have at least x% in reserve.

4 Likes

Excellent :slight_smile: you bring up things that I was thinking about @mav and was about to post here:

Letting the reward curve never hit 0 (but there’s another limit, I’ll get to that), and also having the network recover coins.

I was working this morning on a proposal. I worked it out on my phone with the Desmos app, so it’s still crude:

Exploration of a live network farming reward

d = data stored as a percent of available storage
u = unfarmed coins as a percent of total coins
s = section count
m = 1 / s
n = neighbour sections' median vault count [1, 200]
x = 1 / n

Farming reward

R = xmu(d + u + 1)

Store cost

C = 2R
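
As a quick sanity check, here is the same pair of formulas as a minimal Python sketch (just an illustration of the definitions above, nothing normative):

```python
def farming_reward(d: float, u: float, s: int, n: int) -> float:
    """Farming reward R = x*m*u*(d + u + 1).

    d: data stored as a fraction of available storage [0, 1]
    u: unfarmed coins as a fraction of total coins [0, 1]
    s: section count
    n: neighbour sections' median vault count [1, 200]
    """
    m = 1 / s
    x = 1 / n
    return x * m * u * (d + u + 1)


def store_cost(d: float, u: float, s: int, n: int) -> float:
    """Store cost C = 2R (the constant 2 is an arbitrary first draft)."""
    return 2 * farming_reward(d, u, s, n)
```

For example, farming_reward(1, 1, 1, 1) gives the maximum of 3 safecoins per GET mentioned in the results further down.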

General

The farming reward is paid on GETs.
The PUT payments go directly to the section of the associated data. This is a recovery of farmed coins into the unfarmed balance. (It is the first part of having the network keep a buffer for balancing dynamics.)
This requires a simple and minimal algorithm for rebalancing between sections. It can be done in many ways, but for example:

Whenever a section cannot afford to reward a GET, it will request that the N/3 of its neighbours with the highest unfarmed balances each send 1/4 of their unfarmed balance (where N = neighbour count).

This would supposedly be a relatively rare operation, and have a low impact both at execution and on the general workload.
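
Here is one way that rule could look, as a rough sketch only; the function and parameter names are hypothetical and just make the rule above concrete:

```python
def rebalance_requests(own_unfarmed: float, reward: float,
                       neighbour_unfarmed: dict[str, float]) -> dict[str, float]:
    """If the section cannot afford the GET reward, ask the N/3 neighbours
    with the highest unfarmed balances to each send 1/4 of their balance.
    Returns the amount requested from each chosen neighbour."""
    if own_unfarmed >= reward:
        return {}  # the section can pay the reward itself, no rebalancing
    n = len(neighbour_unfarmed)  # N = neighbour count
    richest = sorted(neighbour_unfarmed, key=neighbour_unfarmed.get,
                     reverse=True)[:n // 3]
    return {name: neighbour_unfarmed[name] / 4 for name in richest}
```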


The farming reward R = xmu(d + u + 1) can be tweaked in endless ways to achieve the desired properties; this is a first draft.
Same goes for C = 2R, which was additionally just an arbitrary value. Most consideration went into devising R.

Breakdown

Unfarmed coins percent u

The fewer there are left, the lower R should be.

Data stored percent d

The higher the percentage filled, the higher R should be.

Sections s

The more sections there are, the more valuable the network (and thus its currency) is, and the lower R becomes. Due to the rules of split and merge (a bounded range for section size), the number of sections is a good indicator of network size.

Neighbour median node count n

n approximates the node count per section and enters the reward through x = 1/n. R increases as the approximated number of nodes per section decreases.

Reward R

The reward unit for a rewardable operation. Currently the only such operation is a GET, and one GET is rewarded with R safecoins.

Store cost C

Every PUT operation will return 2R to the network, and thus increase unfarmed coin u and data stored d.

Motivation

Safecoin value

This reward algo assumes that network size is an indicator of Safecoin value. When few machines are running, we assume there is little economic margin associated with it. Many machines running would indicate that it is very attractive to run a machine, and thus we assume that it is lucrative, with high margins.
Additionally, we assume that a larger network is an indicator of increased adoption and breakthrough of the technology, and evidence of its larger usefulness and value. We don’t need to know in what way; the sheer increase in numbers indicates that it is more useful in any number of ways, collectively making it more valuable to society, and thus its currency more valuable.

This metric is essential to include in the farming algorithm so as to properly balance the payout with regard to the inherent value of the currency, and also to maintain the value of unfarmed coins. (This is the other part of the network keeping a buffer of value for various dynamics with regard to members and performance, instead of depleting it and losing that influence.)

Median section node count

The variable x acts as a stimulant of security; with it we can motivate an inflow of new nodes when nodes are for some reason leaving their sections. When sections decrease in size (i.e. n decreases and x = 1/n increases), their security also decreases, so it makes sense to inversely correlate the reward R with n, stimulating new nodes to join.

Store cost

Having the store cost use a constant multiplier of 2 is arbitrary for the moment. Much more consideration could be put into the consequences of this, as well as of any other constant or variable used instead.

Results

The behavior of the algorithm

R is less responsive to variation in d than to variation in u.
The influence of s is large at very low values, and small at very large values. This translates into a maximum of R = 3 safecoins when section count s = 1, node count n = 1, d and u = 1 (node is full and no coin has been farmed).

Day of launch

Let’s see what vaults could perhaps expect to earn at day of launch with this function.
Initial coins ~10% gives u = 0.9
We expect the number of vaults to be roughly forum size, which is now 5k.
That gives s = 50, as we set n = 100.
Since it’s early we set d = 0.2.
This gives R = 0.000378 (378 000 nanosafes)

With safecoin at $1 and 1 TB of storage filled to 20% with 1 MB chunks, and perhaps this GET distribution:

10% of stored gets 1 GET / day
1 % gets 10 GETs / day
0.1 % gets 100 GETs / day

For 1TB that is 300k GETs per day. With 20% filled, that is 0.2 * 300k = 60k GETs per day.
This equals 22.68 safecoins.
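
For anyone who wants to reproduce these numbers, here is the same back-of-envelope calculation in Python (the GET distribution is the one assumed above):

```python
# R = x*m*u*(d + u + 1) with d = 0.2, u = 0.9, s = 50, n = 100
R = (1 / 100) * (1 / 50) * 0.9 * (0.2 + 0.9 + 1)   # = 0.000378 safecoin per GET

chunks = 1_000_000                         # 1 TB of 1 MB chunks
gets_per_day = (0.10 * chunks * 1          # 10% of chunks get 1 GET/day
                + 0.01 * chunks * 10       # 1% get 10 GETs/day
                + 0.001 * chunks * 100)    # 0.1% get 100 GETs/day -> 300k total
gets_served = 0.2 * gets_per_day           # vault 20% full -> 60k GETs/day
print(gets_served * R)                     # ≈ 22.68 safecoins per day
```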

Even if a safecoin is at $10 at the time of launch, it is reasonable, as it would give an absolutely insane growth of new nodes.

World wide adoption

With 10 billion nodes, R = 0.11 nanosafe is reached when d = 70% and u = 50%.
(100 million nodes, d = 70% and u = 50% give R = 11 nanosafes.)

When we are storing all data in the world, we probably have a much less intensive access pattern to our vault; let’s update it to:

1% of stored gets 1 GET / day
0.1 % gets 10 GETs / day
0.01 % gets 100 GETs / day

For 1TB that is 30k GETs per day. With 70% filled, that is 0.7 * 30k = 21k GETs per day.

@Sotros25 made an estimation of some $4k per safecoin at absolute world dominance (link).
That would give 0.00000000011 * 21000 * 4000 = $0.00924 per day.
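
The same calculation in code, assuming n = 100 per section as in the launch scenario (so 10 billion nodes gives s = 10^8) and the $4k figure cited above:

```python
# R = x*m*u*(d + u + 1) with d = 0.7, u = 0.5, s = 1e8, n = 100
R = (1 / 100) * (1 / 1e8) * 0.5 * (0.7 + 0.5 + 1)   # = 1.1e-10 safecoin = 0.11 nanosafe

chunks = 1_000_000                          # 1 TB of 1 MB chunks
gets_per_day = (0.01 * chunks * 1           # 1% get 1 GET/day
                + 0.001 * chunks * 10       # 0.1% get 10 GETs/day
                + 0.0001 * chunks * 100)    # 0.01% get 100 GETs/day -> 30k total
gets_served = 0.7 * gets_per_day            # vault 70% full -> 21k GETs/day
print(gets_served * R * 4000)               # ≈ $0.0092 per day per TB
```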

1 cent per TB per day. Not really exciting.

But then on the other hand storage capacity might be much better, so a TB then could be the equivalent of a GB today. That would give $10 per day per TB-equivalent. These are mere speculations, but they show that there is a problem here.

Problems

  • If we let the nanosafe be the smallest unit, we would need to go either probabilistic or accumulative on rewards at 10 bn nodes. Approx. 1/10th of the GETs would actually be paid out as 1 nanosafe.
  • World wide adoption probably is more like 100 billion nodes, when considering IoT etc.

Thoughts

  • Can R be too small as the network grows? With a network size of 1 billion nodes, 0.1% unfarmed coins u remaining and 0.1% used data storage d (i.e. extreme conditions giving a very low reward), we can still reward 1 nanosafe per GET. Would we introduce probabilistic rewarding whenever R < 1 nanosafe (see the sketch after this list)?
  • Could we simplify and remove node count, and only use section count? Could the decrease in rewards as sections split encourage unwanted stalling of a split event? Would it be too coarse-grained to properly respond to network health?
  • How should store cost relate to farming reward?
  • The weight of d and u should be switched, so that we get d squared instead of u squared.
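
On the probabilistic rewarding mentioned in the first bullet, a minimal sketch of how it could work (my own illustration, not part of the proposal): whenever R falls below one nanosafe, pay a whole nanosafe with probability R / 1 nanosafe, so the expected payout still equals R.

```python
import random

NANOSAFE = 1e-9  # assumed smallest unit, in safecoins


def payout(r: float) -> float:
    """Pay r directly when it is at least one nanosafe; otherwise pay one
    nanosafe with probability r / NANOSAFE, keeping the expected value at r."""
    if r >= NANOSAFE:
        return r
    return NANOSAFE if random.random() < r / NANOSAFE else 0.0
```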

Thunder is rumbling and I need to disconnect the router, will have to refine this later…

4 Likes

Can we get this while excluding cheating nodes, or would we only count a full / not-full ratio?

Why count nodes and sections separately, when there is no difference for the network between 1000 sections each with 50 nodes and 5000 sections each with 10 nodes?

1 Like

That is not a topic I’ve delved into, but from what I have read in previous discussions, no one is saying it is impossible and no one has any clear idea how exactly it should be done.

If sacrificial chunks etc. are not viable, then perhaps something like this:
A new vault has to sync A amount of data. This is supposed to already be happening in the network. Some penalty will be doled out if it cannot serve any of it at a later point. The key here is to fill the store entirely at sync time, as has been proposed elsewhere.

So, with this as background, the Elders could book-keep percent stored by simply adding and subtracting the size of whatever goes in and is deleted. Over the entire network this should be accurate enough, since any failure to actually provide the data supposedly stored is penalised.

Clarified: if the current network logic is good enough to actually make sure the data exists, it is good enough for estimating percent used, by simply trusting that logic and adding book-keeping of changes.
If a section has 100 vaults, that would be 100 rows with 2 columns: vault ID and current percent full (where the full part is the minimum data it has to hold apart from the fill-out duplication).
So that book-keeping does not take up much space, and adds few operations while handling the requests.
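
A sketch of that book-keeping table, with hypothetical names just for illustration: one row per vault, updated as data goes in or is deleted, from which the section can read off d directly:

```python
class SectionLedger:
    """Per-section record of stored bytes per vault (vault ID -> usage)."""

    def __init__(self, vault_capacity: dict[str, int]):
        self.capacity = dict(vault_capacity)              # vault ID -> bytes available
        self.stored = {vid: 0 for vid in vault_capacity}  # vault ID -> bytes stored

    def record_put(self, vault_id: str, size: int) -> None:
        self.stored[vault_id] += size

    def record_delete(self, vault_id: str, size: int) -> None:
        self.stored[vault_id] -= size

    def percent_full(self, vault_id: str) -> float:
        return self.stored[vault_id] / self.capacity[vault_id]

    def d(self) -> float:
        """Section-level estimate of data stored over available storage."""
        return sum(self.stored.values()) / sum(self.capacity.values())
```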

It doesn’t matter there; it’s just there for visibility, and the first iteration had only sections. But these variables could be used independently as well (giving a different result than a single variable of total node count), so I have not consolidated them. I also ponder there whether it could be done with sections only and still have a good enough outcome.

2 Likes

There is a difference (but I’m not sure if it has economic consequences).

More sections means more hops so more overall work

More sections means more elders so better distribution of consensus and workload (assuming the number of elders per section is constant, not proportional to section size)

More sections means fewer coins reside in each section, so each is safer from attack and there is less need to expand

More sections means more total age (more events to age from) which affects security and reward distribution

There is a difference between total nodes and total sections. I think in reality this won’t be significant due to the rules for splitting (all sections will probably have pretty equal number of vaults). So I think it’s worth retaining the distinction for the purposes of reasoning, but the eventual consequence will be hard to notice.

Seems very nice. I like the intuition behind how it moves as each parameter changes.

Good to also account for splitting the reward among several vaults and weighting reward by age. Both make the reward even smaller. I think probabilistic rewarding is a good idea.

It would be fun to model how many nodes would be possible. I’m always amazed at how big the bitcoin hashrate got. If you’d asked anyone from 2010 to predict the hashrate in 2020 I doubt they’d have predicted such rapid growth. So yeah, I think it’s worth testing the best/worst in these models. For example, what if 90% of all hard drive manufacturing ended up being used for SAFE? What if hard drive manufacturing increases tenfold to account for the new demand? That sort of thing…

4 Likes

OK, so after thinking some more and looking at the extremes, I tried out some changes:

  • Heavier weight for d instead of u
  • Make R result in at least 1 nanosafe at 100 billion nodes, d = 0.7 and u = 0.5

It seems likely that u would stabilize around some value.

It might be that this is a reason to give u a higher weight, so that its smaller fluctuations can still be reflected in the farming rewards (and storage costs).
Another way to look at it would be that since d would probably at times see larger deviations from any stable point than u does, it is an influence on the network that needs a larger compensatory capacity, hence a greater weight for d.

R = xmu(d + u + 1) can be changed to R = xmd(d + u + 1)

Additionally, to raise the node count where R = 1 nanosafe, I added ln( ( ds + 1 ) ^1.3 ), or
p = ds + 1
g = ln(p^1.3)

and

R = gxmd(d + u + 1)
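
In code, the revised reward looks like this (same variable definitions as before; the 1.3 exponent is just the hand-tuned value above):

```python
import math


def farming_reward_v2(d: float, u: float, s: int, n: int) -> float:
    """Revised reward R = g*x*m*d*(d + u + 1), with g = ln((d*s + 1)^1.3)."""
    m = 1 / s
    x = 1 / n
    p = d * s + 1
    g = math.log(p ** 1.3)    # equivalently 1.3 * math.log(p)
    return g * x * m * d * (d + u + 1)
```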


I think that what we need to do is make the network able to easily adjust R and C as outside markets decide on a fiat value of safecoin. I think that increasing the weight of d improves the responsiveness in this regard, which should make it easier for the network to adapt to such market valuation changes, as it can then better incentivise people to add or remove storage on the network.

I’ll continue exploring this specific path.

5 Likes

@anon86652309, there is one question that occurred to me and others, concerning the smallest unit of safecoin that can be sent in one transaction.

So while the smallest unit is 10^-9 or 10^-27 depending on 64 or 128 bits, is there a dust level being implemented?

That is, if a request is sent to transfer less than the dust level then the request is rejected. The reason for doing this is to limit the number of transactions an attacker can instigate in a set period of time.

Maybe this dust level can be dynamically set by the number of sections in the network (just look at section code/number to know)

So on a small network the dust could be 10^-5, at 10 times that size it is 10^-6, etc.

And of course the amount being sent can still be specified down to the minimum unit (e.g. 0.123456789 or 0.123456789012345678901234567), it just cannot be below the dust amount.
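
A sketch of how such a dynamically set dust level could look; the one-decimal-per-10x rule and the 10^-5 starting dust are the example figures from this thread, while the 100-section starting point is my own placeholder:

```python
import math

NANOSAFE = 1e-9  # assumed smallest unit


def dust_level(sections: int, initial_sections: int = 100,
               initial_dust: float = 1e-5) -> float:
    """Drop the dust level one decimal place for every 10x growth in
    section count, never going below the smallest unit."""
    growth = max(sections / initial_sections, 1)
    dust = initial_dust / 10 ** math.floor(math.log10(growth))
    return max(dust, NANOSAFE)


def accept_transfer(amount: float, sections: int) -> bool:
    """Reject any transfer below the current dust level."""
    return amount >= dust_level(sections)
```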


Another thought: if 128 bits is used now to save any future upgrade to it and the code, then you could just use 96 bits of that and have practically infinite division.

3 Likes

Good thinking, that would be a nice and easy solution for the dust transaction problem.

1 Like

Isn’t there a queue of sorts for the number of transactions in flight from any source address? Perhaps this could be capped at something low enough to help prevent abuse of the network’s free transactions? IIRC, this approach was discussed for dealing with email/message spam.

2 Likes

So then they send to each account being used in the attack 0.000000100 safe. Each account then sends the 100 nanos to 100 coin balances, so each computer now has 100 coin balances; it then sends 100 transfers to another set of 100 coin balances, then to yet another 100 coin balances, and so on.

Now if there are 1000 computers in this attack, only 0.000100000 safe is needed to do the attack. Even 1 million computers would only need 0.1 safe. Yes, I know 1 million isn’t happening, but the point is that it requires very little safe in order to do the attack. And if 10,000 computers do this with 10,000 x 1 nano coin-balance sets then it’s almost equivalent to the 1 million with 100 x 1 nano coin-balance sets.
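
The arithmetic of that fan-out, just to make the scale concrete:

```python
NANOSAFE = 1e-9  # 1 nano = 1e-9 safecoin


def seed_cost(computers: int, balances_per_computer: int = 100) -> float:
    """Safecoin needed to give every attacking computer one nanosafe
    per coin balance it will fan out from."""
    return computers * balances_per_computer * NANOSAFE


print(seed_cost(1_000))            # 0.0001 safe for 1000 computers
print(seed_cost(1_000_000))        # 0.1 safe for a million computers
print(seed_cost(10_000, 10_000))   # 10,000 computers x 10,000 balances -> also 0.1 safe
```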

1 Like

If we think in terms of binary growth (analogously for decimal), with the 2^32 whole safecoin representing the dust level at the network’s launch size, we get a sense for practical dust-level divisibility. So let us consider the following scheme: each time the network size doubles, an extra bit of divisibility is accessible.

How much larger will the network (nodes, processing power, bandwidth) be in 100 years?

Can we assume a doubling every two years on average?

It would seem that 96-bit Safe Coin accounts (32-bit units, 64-bit parts) would be good for 100 years, whereas 128 bits are needed after 200 years.

Flawed logic? I know it’s impossible to predict the future.
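
Under those assumptions (one extra bit of divisibility per network doubling, one doubling every two years), the longevity of a given bit depth is a simple multiplication; a quick sketch:

```python
def years_of_divisibility(total_bits: int, unit_bits: int = 32,
                          years_per_doubling: float = 2.0) -> float:
    """Years until the divisibility bits below the 32-bit whole-coin range
    are exhausted, at one extra bit of divisibility per network doubling."""
    return (total_bits - unit_bits) * years_per_doubling


print(years_of_divisibility(96))    # 128 years for 96-bit accounts
print(years_of_divisibility(128))   # 192 years for 128-bit accounts
```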

This graphic from singularity.com shows a doubling of processor power every 1.2 years.

So you are saying that to get to nano we need 30 doublings; that would be 1 billion SECTIONS.

Also the capabilities would increase faster than the doubling rate. Any attack is going to be a smaller percentage of the network as the network grows.

Also the effect of any attack is not proportional to the size of the network, since the attack does not increase at the same rate as the network does. Ten times the size of the network may only see a doubling of the attack. Thus the capabilities of the network increase faster than the section count does.

If you were to double the number of sections and halve the dust size then you’d need to start with dust much smaller than 1 safecoin. I suggest at least as small as 0.01 no matter what, but that still requires the network to grow 10 million times (1 billion sections) to get to a 1 nano dust size, which it likely needs to be once the network is complete.

We’d really need dust to be 0.0001 to allow only a 100,000x increase in the number of sections (i.e. 10 million sections to get to 1 nano).

NOTE: It’s not processor speed that matters but transaction rate, which involves a lot more than processor speed since you cannot change the speed of light. Lag between computers in a section limits consensus speed.

No, not necessarily. I was thinking in general terms of computational capacity. Harder to pin down than simple node count, but the resource proofs should give a sense of the safe equivalent to flops on a supercomputer. As you mentioned, the global or sectional “transaction rate” might be a good indicator. That should increase with both tech improvements and network adoption.

1 Like

As I said, it’s not computational ability; you only need to reach a certain level. The real limiting factor is transaction rate. Once the CPU is fast enough (e.g. 64-bit ARM) the rate is dependent on the consensus rate, which is dependent on communications, where lag time is the largest component.

Thus the speed of light and the number of sections will be the major factors in the transaction rate. To increase the transaction rate you need more sections, NOT faster CPUs.

1 Like

Ok, so plenty of sections. The above was just an example. Pick your own initial dust level of divisibility and let it grow from there.

1 Like

You said double the size, halve the dust.

I said 10 times the size, 1/10 the dust.

Basically the same thing, and it expresses the same principle.

EDIT: Were you talking of the dust size or actual amount of division?

If you were talking of division then I disagree, as having a dust level solves any problems with any amount of division, and allows farming rewards to be sent with a privileged status of no dust limit.

The point of the exercise was to get a sense for the longevity of a given bit depth of divisibility.

Not sure why you need to be concerned with this. Just have the max divisibility and apply a minimum transaction amount (i.e. the dust level).

If the dust level is 0.0001 then you can send 0.0001 or 0.00012345, but not 0.000099999.

Sorry, dust size. A while back I was pushing for the “quasi-infinite” divisibility of 128-bit SC… can’t believe you are considering it. Wanted to recheck some premises…