RFC 57: Safecoin Revised

Rust has official support for 128-bit ints, so it generates all the plumbing for you on architectures that do not natively support 128-bit ints, just as it does with 64-bit ints on systems that can’t do 64-bit math.

edit: Rust’s support for different platforms: https://forge.rust-lang.org/platform-support.html

9 Likes

Nice. Rust! However, there might be future bugs in rusty plumbing. IMO it would be beneficial to have a low-level, dedicated/minimalist safecoin add-and-subtract library.

1 Like

You mean for other langs? Or do you think you can make a third-party 128-bit int math package for Rust that has higher code quality than Rust itself? For Rust this should already be handled by the NanoCoin, MilliCoin, … types.

1 Like

In any case, making a simple 128-bit add/subtract routine is easy.
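
For illustration only, here is a minimal sketch of what such a dedicated routine could look like, built from two u64 limbs with explicit carry/borrow handling. The type and method names are made up; in practice Rust’s native u128 with checked_add/checked_sub already gives you this.

```rust
// Minimal sketch (not the actual safecoin implementation): a 128-bit
// balance built from two u64 limbs, with overflow-checked add/subtract.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Balance {
    hi: u64,
    lo: u64,
}

impl Balance {
    fn checked_add(self, other: Balance) -> Option<Balance> {
        let (lo, carry) = self.lo.overflowing_add(other.lo);
        let hi = self.hi.checked_add(other.hi)?.checked_add(carry as u64)?;
        Some(Balance { hi, lo })
    }

    fn checked_sub(self, other: Balance) -> Option<Balance> {
        let (lo, borrow) = self.lo.overflowing_sub(other.lo);
        let hi = self.hi.checked_sub(other.hi)?.checked_sub(borrow as u64)?;
        Some(Balance { hi, lo })
    }
}

fn main() {
    let a = Balance { hi: 1, lo: u64::MAX };
    let b = Balance { hi: 0, lo: 1 };
    // Carry propagates from the low limb into the high limb.
    assert_eq!(a.checked_add(b), Some(Balance { hi: 2, lo: 0 }));
    // Underflow is reported rather than wrapping.
    assert_eq!(b.checked_sub(a), None);
}
```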

2 Likes

No. 20 chars…

3 Likes

Some more ideas; both are similar, just different ways of conceptualising it.

Pressure Release Valve

The network operates smoothly under ‘normal’ variability but changes behaviour under stress conditions. I think this sudden ‘valve’ style change in network behaviour is not great but it does lead to consideration of what counts as dangerous stress. It also might encourage less sudden changes in client behaviour if clients know their changes might make their experience worse.

Queue

The network processes requests during busy times based on some queue mechanism, like bitcoin does with the transaction fee. In bitcoin the transaction fee is used to put the queue of transactions in some order of priority, and during times of stress when the queue is very long the network can still function predictably.

In SAFE maybe the queue is ordered by the type of event (vault join vs GET request vs vault relocate vs PUT request vs split event vs hop message relay etc) but there may be other factors that contribute to when and why events are processed. What could the network do when the queue gets very long?
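
As a rough illustration of the ordering idea only (the event kinds and their relative priorities below are invented, not from any RFC), a queue keyed by event type might look like:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Hypothetical event kinds, ordered by how urgent they might be under load.
// The kinds and their relative priorities are illustrative, not from the RFC.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Event {
    VaultRelocate,
    SplitEvent,
    VaultJoin,
    Get,
    Put,
    HopRelay,
}

fn main() {
    // Earlier-declared variants compare as smaller, so wrapping in Reverse
    // turns the max-heap into a min-heap: most urgent events pop first.
    let mut queue: BinaryHeap<Reverse<Event>> = BinaryHeap::new();
    for e in [Event::Put, Event::Get, Event::SplitEvent, Event::HopRelay] {
        queue.push(Reverse(e));
    }
    while let Some(Reverse(next)) = queue.pop() {
        println!("processing {:?}", next);
    }
}
```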

6 Likes

This is my attempt at working out the longest/shortest time to issue all coins for RFC-0057.

Longest time will be when the StoreCost is smallest, thus reward is also smallest.

Smallest StoreCost is 0.005 (see this post for calcs/reasons)

New safecoins issued for each put will be 0.005. The total reward is 0.01 which consists of 0.005 existing coins paying for the PUT and 0.005 new coins for the farm reward.

So total reward events possible (ie puts) is 2^32 / 0.005

Using a suitable put rate we can see how long it takes to complete the total reward events.

A ‘low load’ might be simulated with the normal ongoing load. Average phone storage is 64 GB with an average lifetime of 2 years; let’s say it gets full after 1 year… this means about 64 GB of PUTs per user per year. (There’s a ‘heavy’ load used later in the shortest-time calc.)

Assume every put is exactly 1 MB.

This has a put rate of 180 puts/day/user (64*1024/365)

The number of users might be the number of users on this forum, which is about 9000 (from the about page of the forum).

So the total put rate is 1,615,956 puts/day (9000*(64*1024/365))

This gives us 531,569 days (2^32/0.005)/(9000*(64*1024/365)) or 1456 years to fully deplete the rewards.

Shortest time

Largest StoreCost is 0.51 which will result in the biggest reward and fastest rate of coin depletion.

Total puts possible is 2^32 / 0.51

initial load: average 100 GB per user in the first month (from this topic)

Assume 30 days in a month.

This gives a put rate of 3413 puts/day/user (100*1024/30)

Let’s keep 9000 users

So total put rate is 30,720,000 puts/day (9000*(100*1024/30))

And time to fully deplete all rewards is 274 days (2^32/0.51)/(9000*(100*1024/30))
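
For anyone who wants to play with the numbers, here is the same arithmetic as a small Rust sketch; the parameters (StoreCost bounds, 9000 users, the two upload rates, 1 MB per PUT) are just the assumptions above, not values from the RFC.

```rust
// Reproduces the rough depletion-time arithmetic above.
fn days_to_deplete(store_cost: f64, gb_per_user_per_day: f64, users: f64) -> f64 {
    let total_coins = 2f64.powi(32);
    // New coins issued per PUT equal the StoreCost, so the number of PUTs
    // until depletion is 2^32 / StoreCost.
    let reward_events = total_coins / store_cost;
    // 1 MB per PUT, so GB/day * 1024 = PUTs/day per user.
    let puts_per_day = users * gb_per_user_per_day * 1024.0;
    reward_events / puts_per_day
}

fn main() {
    // 'Low load': 64 GB per user per year, StoreCost = 0.005
    let slow = days_to_deplete(0.005, 64.0 / 365.0, 9000.0);
    // 'Heavy load': 100 GB per user per month, StoreCost = 0.51
    let fast = days_to_deplete(0.51, 100.0 / 30.0, 9000.0);
    println!("longest: {:.0} days (~{:.0} years)", slow, slow / 365.0);
    println!("shortest: {:.0} days", fast);
}
```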

Summary

Key parameters are:

Upload rate (maybe puts per user per day, maybe total puts per day, I don’t know the easiest unit here)

StoreCost (maybe derived from % full vaults)

These two parameters allow us to calculate the total time to issue all coins for RFC-0057.

For 9000 users uploading between 64 GB per year and 100 GB per month the time to deplete is between 274 days and 1456 years.

But let’s not forget that after all coins are rewarded farmers still get paid the PUT fees so farming always has some reward so long as PUTs are being done.

Any thoughts?

2 Likes

Which is why I’ve always been a fan of farming on GETs. Even if uploads are slow coming, as long as people are active on the network (using your resources) you will be paid. I think it will produce better mid-term results and help new farmers weather any lulls in uptake on the network. People always want to consume. They don’t always want to post. Or rather, many more consume than post.

2 Likes

I also prefer to farm in gets since it preserves the value of ‘the data is stored’ rather than ‘the data was stored’.

But I want to clarify I feel there are two issues being merged here: whether new coins can be fully depleted vs what actions lead to rewards.

My post was really trying to get at whether new coins can/should be fully depleted (like bitcoin) or whether new coins should always be available and never fully depleted. (Ethereum is never fully depleted, but that’s because there’s no cap; does anyone know of a coin that’s capped but also never fully depleted?)

5 Likes

I vote solidly in favor of capped but not fully depleted, using a health modifier as @mav proposed before. I’ve tinkered with it some and this is my favourite version:

HM = 2(1 - FC)^e

The red line is mav’s original modifier; the blue is my proposed modifier. It gives a nice, even rate of change as supply goes down. This will drive the price higher naturally with no bumps, but supply will never run out even though it’s capped. The graph uses a linear scale so the curve appears to go to zero, but on a log scale you would see that it never reaches zero.
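
For anyone who wants to try it, here is a tiny sketch evaluating the proposed modifier; my reading (an assumption) is that FC is the fraction of coins already issued.

```rust
// Sketch of the proposed health modifier HM = 2 * (1 - FC)^e, where FC is
// taken to be the fraction of coins already issued. Purely illustrative.
fn health_modifier(fc: f64) -> f64 {
    2.0 * (1.0 - fc).powf(std::f64::consts::E)
}

fn main() {
    for fc in [0.0, 0.25, 0.5, 0.75, 0.99] {
        println!("FC = {:.2} -> HM = {:.4}", fc, health_modifier(fc));
    }
    // HM starts at 2 with no coins issued and shrinks smoothly,
    // but stays above zero for any FC < 1.
}
```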

I don’t know of any, which would give Safecoin another unique quality over other coins: a plus for marketing and for those who are looking for a non-inflationary coin.

6 Likes

From RFC : “We also update the terminology of both of these RFCs and substitute ClientManager, DataManager and CoinManger with Elder.”

Will Elders have different personas? A single elder operating as all three personas within the same section… Seems like this would be a security problem???

1 Like

It would be better just to prolong Mav’s one to zero. He has HM = 1 at 50%, which looks better.
Actually, it could be linear, straight from HM = 2 at 0% to HM = 0 at 100%.
Original Mav post

2 Likes

Mav’s approach is more linear, but that’s bad, not good. We want to limit the number of new coins minted and promote the value of each coin. Hence I believe a logarithmic approach is better.

4 Likes

The linear formula could be HM = 2 - 2*FC, or equivalently FC = (2 - HM)/2.
It would still be very hard to farm the last available Safecoin.
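
A quick side-by-side of the linear and curved modifiers, purely illustrative:

```rust
// linear  HM = 2 - 2*FC        (hits 0 exactly when all coins are issued)
// curved  HM = 2 * (1 - FC)^e  (approaches 0 but stays positive for FC < 1)
fn main() {
    for fc in [0.5, 0.9, 0.99, 0.999] {
        let linear = 2.0 - 2.0 * fc;
        let curved = 2.0 * (1.0f64 - fc).powf(std::f64::consts::E);
        println!("FC = {:>5}: linear HM = {:.4}, curved HM = {:.6}", fc, linear, curved);
    }
}
```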

1 Like

I think maybe I went too quickly to the mathematics and the model is poor.

The intention was to take one step past the ‘pay double’ idea in rfc-0057. To me, that one extra step was ‘pay half sometimes’. The trouble begins in trying to decide ‘when to pay double or half’. (Tyler’s model is equivalent to saying ‘pay nothing sometimes’, since HM goes to 0 instead of 0.5, which I guess is more sensible when nearly all coins are issued.)

Let’s take a step back and look further into the original rfc-0057 idea of paying double the PUT cost and eventually depleting all coins.

Rewards would still happen forever, but would suddenly be half when all coins are depleted. This is a bit of a shock but would probably not be the end of the network. Some preparation by farmers as the event neared would be enough to get through it. Maybe StoreCost would be unstable for a while as everyone comes to a new understanding.

There’s a few paths to consider from here.

One is ‘maybe StoreCost could become very small so rewards would also be very small and it would become very hard to deplete all coins’. I think there’s a lot of merit in pursuing this; it involves changing the StoreCost algorithm but let’s leave it at that for now.

Another path is ‘maybe the extra reward becomes less and less until at the very end it becomes nothing’, sort of like bitcoin. This is a pretty simple change and retains the idea of fully depleting. It has the benefit that the reward won’t stop as a shock but as a gradual shrinking.

Expanding on the ‘one extra step’ idea, still just doing one extra thing but going into more detail about it… the one extra thing being allowing the network to take coins back for itself.

My assumption is that a) the security of the network depends on whether it can manipulate behaviour to its own needs, and b) the amount of manipulation the network can do is proportional to the amount of reward it can offer. Maybe that’s not true, and rather than aiming for the reward to be very flexible, maybe something else could be, like the StoreCost or maybe the fiat exchange rate. I’m totally open to challenges to this assumption.

These assumptions lead to the idea that if rewards are depleted the network has less ability to stay secure because it can’t be flexible in how much reward is offered during times of stress.

I think the broad idea I’m trying to get at with ‘one extra step’ is to explore whether it’d be ok to never fully deplete the rewards. Have a target for when ‘things are fine’ (maybe target 50% of coins issued) which leads to a reward buffer to use when ‘things are crazy’. The buffer may get as low as only 1 coin issued (so there are heaps of coins to farm) or the buffer may get as high as 2^32 coins issued (so farming should slow right down and consolidate to the most efficient nodes). But if the load stays steady for a while and ‘things are fine’, the network eventually comes back to some ideal amount of reward so the next storm of activity when ‘things are crazy’ can be managed with maximum force of rewarding.

I dunno. Maybe fully depleting is better? I’m not really convinced either way yet.

10 Likes

I think your intuition is good on this one. The network will always need a buffer of coins under its control to account for extreme events. What the optimum setpoint is at equilibrium is anyone’s guess depending on network objectives. The simple fact is that the network will always need to be the biggest whale in the pond so that no single human entity or colluding group could take majority share of the safecoin economics. If one considers extreme edge cases then the network goal will need to be greater than 51%. Transient spikes in activity could reduce the ratio but the networks goal would always be to have at least x% in reserve.

4 Likes

Excellent :slight_smile: you bring up things that I was thinking about, @mav, and was about to post here:

Letting the reward curve never hit 0 (but there’s another limit, I’ll get to that), and also having the network recover coins.

I was working on a proposal this morning. I worked it out on my phone with the Desmos app, so it’s still crude:

Exploration of a live network farming reward

d = data stored, as a fraction of available storage
u = unfarmed coins, as a fraction of total supply
s = section count
m = 1 / s
n = neighbor sections' median vault count, in [1, 200]
x = 1 / n

Farming reward

R = xmu(d + u + 1)

Store cost

C = 2R

General

The farming reward is paid on GETs.
The PUT payments go directly to the section of the associated data. This is a recovery of farmed coins into the unfarmed balance. (It is the first part of having the network keep a buffer for balancing dynamics.)
This requires a simple and minimal algorithm for rebalancing between sections. It can be done in many ways, but for example:

Whenever a section cannot afford to reward a GET, it will request that the N/3 neighbours with the highest unfarmed balances each send 1/4 of their unfarmed balance (where N = neighbour count).

This would supposedly be a relatively rare operation, with a low impact both at execution time and on the general workload.
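
A rough sketch of that rule, with invented types and example balances, might look like this:

```rust
// Sketch of the rebalancing rule described above: when a section cannot pay a
// GET reward, ask the N/3 neighbours with the largest unfarmed balances to
// each hand over a quarter of theirs. Types and names are made up.
#[derive(Debug)]
struct Neighbour {
    id: u64,
    unfarmed: u64, // in nanosafes
}

fn rebalance_requests(neighbours: &mut Vec<Neighbour>) -> Vec<(u64, u64)> {
    // Sort by unfarmed balance, largest first.
    neighbours.sort_by(|a, b| b.unfarmed.cmp(&a.unfarmed));
    let take = (neighbours.len() / 3).max(1);
    neighbours
        .iter()
        .take(take)
        .map(|n| (n.id, n.unfarmed / 4)) // (neighbour id, amount requested)
        .collect()
}

fn main() {
    let mut neighbours = vec![
        Neighbour { id: 1, unfarmed: 400 },
        Neighbour { id: 2, unfarmed: 1000 },
        Neighbour { id: 3, unfarmed: 100 },
        Neighbour { id: 4, unfarmed: 700 },
        Neighbour { id: 5, unfarmed: 50 },
        Neighbour { id: 6, unfarmed: 900 },
    ];
    // With 6 neighbours, the top 2 (ids 2 and 6) are each asked for 1/4.
    println!("{:?}", rebalance_requests(&mut neighbours));
}
```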


The farming reward R = xmu(d + u + 1) can be tweaked in endless ways to achieve the desired properties; this is a first draft.
The same goes for C = 2R, which is additionally just an arbitrary value. Most consideration went into devising R.
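
To make the draft concrete, here is a small sketch of R and C as defined above. The variable names follow the post; nothing here is a final design.

```rust
// Sketch of the draft reward/store-cost formulas above.
struct NetworkState {
    d: f64, // fraction of available storage that is used
    u: f64, // fraction of coins still unfarmed
    s: f64, // section count
    n: f64, // neighbour sections' median vault count, in [1, 200]
}

fn farming_reward(state: &NetworkState) -> f64 {
    let m = 1.0 / state.s;
    let x = 1.0 / state.n;
    x * m * state.u * (state.d + state.u + 1.0)
}

fn store_cost(state: &NetworkState) -> f64 {
    2.0 * farming_reward(state)
}

fn main() {
    // Maximum case: one section, one node, full store, no coins farmed.
    let max = NetworkState { d: 1.0, u: 1.0, s: 1.0, n: 1.0 };
    assert!((farming_reward(&max) - 3.0).abs() < 1e-12);

    // 'Day of launch' example from later in the post: u = 0.9, s = 50, n = 100, d = 0.2.
    let launch = NetworkState { d: 0.2, u: 0.9, s: 50.0, n: 100.0 };
    println!("R = {:.6} safecoin, C = {:.6}", farming_reward(&launch), store_cost(&launch));
    // Prints R = 0.000378, i.e. 378,000 nanosafes.
}
```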

Breakdown

Unfarmed coins percent u

The fewer there are left, the lower R should be.

Data stored percent d

The higher the percentage filled, the higher R should be.

Sections s

The more sections there are, the more valuable the network (and thus its currency) is, and the lower R becomes. Due to the rules of split and merge (a bounded range for section size), the number of sections is a good indicator of network size.

Neighbour median node count n

n approximates the node count per section, and x = 1/n. R increases as the approximated number of nodes in a section decreases.

Reward R

The reward unit for a rewardable operation. Currently the only such operation is a GET, and one GET is rewarded with R safecoins.

Store cost C

Every PUT operation will return 2R to the network, and thus increase unfarmed coin u and data stored d.

Motivation

Safecoin value

This reward algo assumes that network size is an indicator of Safecoin value. When few machines are running, we assume there is little economic margin associated with running one. Many machines running would indicate that it is very attractive to run a machine, and thus we assume that it is lucrative, with high margins.
Additionally, we assume that a larger network is an indicator of increased adoption and breakthrough of the technology, and evidence of its larger usefulness and value. We don’t need to know in what way; the sheer increase in number indicates that it is more useful in any number of ways, collectively making it more valuable to society, and thus making its currency more valuable.

This metric is essential to include in the farming algorithm so as to properly balance the payout with regard to the inherent value of the currency, and also to maintain the value of unfarmed coins. (This is the other part of the network keeping a buffer of value for various dynamics with regard to members and performance, instead of depleting it and losing that influence.)

Median section node count

The variable x acts as a stimulant of security; with it we can motivate an inflow of new nodes when nodes are, for some reason, leaving the sections. When sections decrease in size (i.e. n decreases and x = 1/n increases), their security also decreases; therefore it makes sense to inversely correlate the reward R with n, so as to stimulate new nodes to join.

Store cost

Having the store cost use a constant multiplier of 2 is arbitrary for the moment. Much more consideration could be put into the consequences of this, as well as of any other constant or variable used instead.

Results

The behavior of the algorithm

R is less responsive to the variation of d than that of u.
The influence of s is large at very low values, and small at very large values. This translates into a maximum of R = 3 safecoins when section count s = 1, node count n = 1, d and u = 1 (node is full and no coin has been farmed).

Day of launch

Let’s see what vaults could perhaps expect to earn on the day of launch with this function.
Initial coins of ~10% gives u = 0.9.
We expect the number of vaults to be roughly forum size, which is now 5k.
That gives s = 50, as we set n = 100.
Since it’s early, we set d = 0.2.
This gives R = 0.000378 (378,000 nanosafes).

With safecoin at $1, 1 TB of storage filled to 20% with 1 MB chunks, and perhaps this GET distribution:

10% of stored gets 1 GET / day
1 % gets 10 GETs / day
0.1 % gets 100 GETs / day

For 1TB that is 300k GETs per day. With 20% filled, that is 0.2 * 300k = 60k GETs per day.
This equals 22.68 safecoins per day.
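
The same estimate as a quick sketch; the chunk count, GET distribution and R value are the assumptions above.

```rust
// Rough check of the 'day of launch' earnings estimate above.
fn main() {
    let chunks_per_tb = 1_000_000.0; // 1 TB of 1 MB chunks, roughly
    // 10% of chunks get 1 GET/day, 1% get 10, 0.1% get 100.
    let gets_per_tb_per_day =
        chunks_per_tb * (0.10 * 1.0 + 0.01 * 10.0 + 0.001 * 100.0); // = 300_000
    let filled = 0.2;
    let gets_per_day = filled * gets_per_tb_per_day; // = 60_000
    let r = 0.000_378; // safecoin per GET, from the reward formula above
    println!("{} GETs/day -> {:.2} safecoin/day", gets_per_day, gets_per_day * r);
    // ~22.68 safecoin per day for a 1 TB vault filled to 20%.
}
```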

Even if a safecoin is at $10 at the time of launch, this is reasonable, as it would give absolutely insane growth in new nodes.

World wide adoption

With 10 billion nodes, R = 0.11 nanosafes is reached when d = 0.7 and u = 0.5.
(100 million nodes, d = 70% and u = 50% give R = 11 nanosafes.)

When we are storing all the data in the world, we will probably have a much less intensive access pattern per vault, so let’s update it to:

1% of stored gets 1 GET / day
0.1 % gets 10 GETs / day
0.01 % gets 100 GETs / day

For 1TB that is 30k GETs per day. With 70% filled, that is 0.7 * 30k = 21k GETs per day.

@Sotros25 made an estimation of some $4k per safecoin at absolute world dominance (link).
That would give 0.00000000011 * 21000 * 4000 = $0.00924 per day.

1 cent per day per TB. Not really exciting.

But then, on the other hand, storage capacity might be much better, so a TB then could be the equivalent of a GB today. That would give roughly $10 per day per TB-equivalent. These are mere speculations, and they show that there is a problem here.
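
For reference, the world-adoption arithmetic as a sketch, using the post’s own figures (R = 0.11 nanosafes, 21k GETs/day, $4,000 per safecoin):

```rust
fn main() {
    let r_safecoin = 0.11e-9; // 0.11 nanosafes per GET, expressed in safecoin
    let gets_per_day = 21_000.0; // 1 TB, 70% full, low-intensity access pattern
    let usd_per_safecoin = 4_000.0;
    let usd_per_day = r_safecoin * gets_per_day * usd_per_safecoin;
    println!("${:.5} per day per TB", usd_per_day); // ~$0.00924
}
```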

Problems

  • If we let the nanosafe be the smallest unit, we would need to go either probabilistic or accumulative on rewards when at 10 bn nodes; approx. 1/10th of the GETs would actually be paid out as 1 nanosafe (see the sketch after this list).
  • Worldwide adoption is probably more like 100 billion nodes, when considering IoT etc.
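
A sketch of what probabilistic payout could look like when R drops below one nanosafe: pay a whole nanosafe with probability R (in nanosafes), so the expected payout still equals R. The function name is invented, and the example assumes the rand crate’s 0.8 API.

```rust
use rand::Rng; // assumes the `rand` crate (0.8 API)

// Illustrative only: probabilistic payout for sub-nanosafe rewards.
fn probabilistic_nanosafe_reward(r_nanosafes: f64, rng: &mut impl Rng) -> u64 {
    if r_nanosafes >= 1.0 {
        r_nanosafes as u64 // whole nanosafes; the fraction could also be rolled
    } else if rng.gen::<f64>() < r_nanosafes {
        1
    } else {
        0
    }
}

fn main() {
    let mut rng = rand::thread_rng();
    // R = 0.11 nanosafes: roughly 1 in 9 GETs pays out a single nanosafe.
    let paid: u64 = (0..100_000)
        .map(|_| probabilistic_nanosafe_reward(0.11, &mut rng))
        .sum();
    println!("paid {} nanosafes over 100000 GETs (expected ~11000)", paid);
}
```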

Thoughts

  • Can R be too small as the network grows? With a network size of 1 billion nodes with 0.1% unfarmed coin u remaining and 0.1% used data storage d (i.e. extreme conditions giving very low reward) we can still reward 1 nanosafe per GET. Would we introduce probabilistic rewarding whenever R < 1 nanosafe?
  • Could we simplify and remove node count, and only use section count? Could the decrease in rewards as sections split encourage unwanted stalling of a split event, and would it be too coarse-grained to properly respond to network health?
  • How should store cost relate to farming reward?
  • The weight of d and u should be switched, so that we get d squared instead of u squared.

Thunder is rumbling and I need to disconnect the router, will have to refine this later…

4 Likes

Can we get this while excluding cheating nodes, or only count a full / not-full ratio?

Why count nodes and sections separately, when there is no difference for the network between having 1000 sections with 50 nodes each and 5000 sections with 10 nodes each?

1 Like

That is not a topic I’ve delved into, but from what I have read in previous discussions, no one is saying it is impossible and no one has any clear idea how exactly it should be done.

If sacrificial chunks etc. are not viable, then perhaps something like this:
A new vault has to sync A amount of data. This is supposed to already be happening in the network. Some penalty will be doled out if it cannot serve any of it at a later point. The key here is to fill the store entirely at sync time, as has been proposed elsewhere.

So, with this as background, the Elders could bookkeep the percent stored by simply adding and subtracting the size of whatever goes in and is deleted. Over the entire network this should be accurate enough, since any failure to actually provide the data supposedly stored is penalised.

Clarified: if the current network logic is good enough to actually make sure the data exists, it is good enough for estimating percent used, by simply trusting that logic and adding bookkeeping of changes.
If a section has 100 vaults, that would be 100 rows with 2 columns: vault ID and current percent full (where ‘full’ refers to the minimum data it has to hold apart from the fill-out duplication).
So that bookkeeping does not take up much space, and it doesn’t add many operations to the handling of requests.
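
A minimal sketch of that bookkeeping; the ledger type and field names are invented for illustration.

```rust
use std::collections::HashMap;

// Per vault, track bytes stored against a known capacity, updated as chunks
// are added or deleted.
#[derive(Default)]
struct SectionLedger {
    // vault id -> (bytes stored, bytes of capacity)
    vaults: HashMap<u64, (u64, u64)>,
}

impl SectionLedger {
    fn record_put(&mut self, vault: u64, chunk_size: u64) {
        if let Some((stored, _cap)) = self.vaults.get_mut(&vault) {
            *stored += chunk_size;
        }
    }

    fn record_delete(&mut self, vault: u64, chunk_size: u64) {
        if let Some((stored, _cap)) = self.vaults.get_mut(&vault) {
            *stored = stored.saturating_sub(chunk_size);
        }
    }

    // Section-wide percent full, i.e. the `d` used in the reward formula.
    fn percent_full(&self) -> f64 {
        let (stored, cap) = self
            .vaults
            .values()
            .fold((0u64, 0u64), |acc, &(s, c)| (acc.0 + s, acc.1 + c));
        if cap == 0 { 0.0 } else { stored as f64 / cap as f64 }
    }
}

fn main() {
    let mut ledger = SectionLedger::default();
    ledger.vaults.insert(1, (0, 1_000_000));
    ledger.vaults.insert(2, (0, 2_000_000));
    ledger.record_put(1, 250_000);
    ledger.record_put(2, 500_000);
    ledger.record_delete(2, 100_000);
    println!("section ~{:.1}% full", 100.0 * ledger.percent_full());
}
```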

It doesn’t matter there; it’s just there for visibility, and the first iteration had only sections. But these variables could be used independently as well (giving a different result than a single variable of total node count), so I have not consolidated them. I also ponder there whether it could be done with sections only and still have a good enough outcome.

2 Likes

There is a difference (though I’m not sure if it has economic consequences).

More sections means more hops so more overall work

More sections means more elders, so better distribution of consensus and workload (assuming the number of elders per section is constant and not proportional)

More sections means less coins reside in each section so is safer from attack and less need to expand

More sections means more total age (more events to age from) which affects security and reward distribution

There is a difference between total nodes and total sections. I think in reality this won’t be significant due to the rules for splitting (all sections will probably have a pretty equal number of vaults). So I think it’s worth retaining the distinction for the purposes of reasoning, but the eventual consequence will be hard to notice.

Seems very nice. I like the intuition behind how it moves as each parameter changes.

It’s good to also account for splitting the reward among several vaults and weighting the reward by age; both make the reward even smaller. I think probabilistic rewarding is a good idea.

It would be fun to model how many nodes would be possible. I’m always amazed at how big the bitcoin hashrate got. If you’d asked anyone from 2010 to predict the hashrate in 2020 I doubt they’d have predicted such rapid growth. So yeah, I think it’s worth testing the best/worst in these models. For example, what if 90% of all hard drive manufacturing ended up being used for SAFE? What if hard drive manufacturing increases tenfold to account for the new demand? That sort of thing…

4 Likes