Thought this might fit here
The problem is that u128 is not well supported by compilers. It could instead be represented as a set of four 32-bit unsigned integers: 128 bits in total, divisible much like an IPv6 address.
Why not just go crazy and do 8 unsigned ints (insignificant compared to total user storage)? This allows each wallet balance to work in binary divisions or decimals depending on the user's preference: bankers can do decimal, philosopher geeks get to have fun with binary. This also satisfies the original statements made at the crowdsale that there would never be more than 2^32 SC and each would be divisible to at least 2^32.
If we do get rid of PUTs and use micro/nano safecoin deductions, I'd like to control my spending and still ring-fence safecoins as a PUT balance.
In this way, if my “PUT balance” still has safecoins, I could sell them.
Rust has official support for 128-bit ints, so it generates all the plumbing for you on architectures that do not support 128-bit ints natively, just as it does with 64-bit ints on systems that aren't able to do 64-bit math.
edit: rusts support for different platforms: https://forge.rust-lang.org/platform-support.html
Nice. Rust! However, there might be future bugs in rusty plumbing. IMO it would be beneficial to have a low level dedicated/minimalist safecoin add and subtract library.
You mean for other langs? Or do you think you can make a third-party 128-bit int math package for Rust that has higher code quality than Rust itself? For Rust this should already be handled by the
MilliCoin, … types
In any case making a simple 128 bit add/subtract routine is easy.
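To illustrate how simple it is, here is a minimal sketch of such a routine, representing a 128-bit balance as two u64 limbs. The names and representation are hypothetical; Rust's built-in u128 already generates equivalent code.

```rust
// Minimal 128-bit add/subtract sketch: a balance is (hi, lo) u64 limbs.
// Illustration only; Rust's u128 does this for you.

fn add128(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let (lo, carry) = a.1.overflowing_add(b.1);
    let hi = a.0.wrapping_add(b.0).wrapping_add(carry as u64);
    (hi, lo)
}

fn sub128(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let (lo, borrow) = a.1.overflowing_sub(b.1);
    let hi = a.0.wrapping_sub(b.0).wrapping_sub(borrow as u64);
    (hi, lo)
}

fn main() {
    // u64::MAX + 1 carries into the high limb...
    assert_eq!(add128((0, u64::MAX), (0, 1)), (1, 0));
    // ...and subtracting 1 borrows back.
    assert_eq!(sub128((1, 0), (0, 1)), (0, u64::MAX));
    println!("128-bit add/subtract ok");
}
```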
No. 20 chars…
Some more ideas; both are similar but different ways of conceptualising it.
Pressure Release Valve
The network operates smoothly under 'normal' variability but changes behaviour under stress conditions. I think this sudden 'valve' style change in network behaviour is not great, but it does lead to consideration of what counts as dangerous stress. It also might encourage less sudden changes in client behaviour, if clients know their changes might make their experience worse.
The network processes requests during busy times based on some queue mechanism, like bitcoin does with the transaction fee. In bitcoin the transaction fee is used to put a queue of transactions in some order or priority, and during times of stress when the queue is very long the network can still function predictably.
In SAFE maybe the queue is ordered by the type of event (vault join vs GET request vs vault relocate vs PUT request vs split event vs hop message relay etc) but there may be other factors that contribute to when and why events are processed. What could the network do when the queue gets very long?
This is my attempt at working out the longest/shortest time to issue all coins for RFC-0057.
Longest time will be when the StoreCost is smallest, thus reward is also smallest.
Smallest StoreCost is 0.005 (see this post for calcs/reasons)
New safecoins issued for each put will be 0.005. The total reward is 0.01 which consists of 0.005 existing coins paying for the PUT and 0.005 new coins for the farm reward.
So total reward events possible (ie puts) is 2^32 / 0.005
Using a suitable put rate we can see how long it takes to complete the total reward events.
A 'low load' might be simulated with the normal ongoing load. Average phone storage is 64 GB with an average lifetime of 2 years; let's say it gets full after 1 year… this means about 64 GB of PUTs per year. (There's a 'heavy' load used later in the shortest-time calc.)
Assume every put is exactly 1 MB.
This has a put rate of 180 puts/day/user (64*1024/365)
The number of users might be the number of users on this forum, which is about 9000 (from the about page of the forum).
So the total put rate is 1,615,956 puts/day (9000*(64*1024/365))
This gives us 531,569 days (2^32/0.005)/(9000*(64*1024/365)) or 1456 years to fully deplete the rewards.
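The longest-time figure can be reproduced with a few lines, using exactly the inputs above:

```rust
// Longest time to deplete rewards under RFC-0057, per the figures above.
fn main() {
    let total_coins = 4294967296.0_f64;                // 2^32
    let reward_per_put = 0.005;                        // smallest StoreCost
    let total_puts = total_coins / reward_per_put;     // total reward events
    let puts_per_day = 9000.0 * 64.0 * 1024.0 / 365.0; // 9000 users, 64 GB/yr
    let days = total_puts / puts_per_day;
    assert_eq!((days / 365.0).round(), 1456.0);        // ~1456 years
    println!("{:.0} days (~{:.0} years)", days, days / 365.0);
}
```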
Largest StoreCost is 0.51 which will result in the biggest reward and fastest rate of coin depletion.
Total puts possible is 2^32 / 0.51
initial load: average 100 GB per user in the first month (from this topic)
Assume 30 days in a month.
This gives a put rate of 3413 puts/day/user (100*1024/30)
Let’s keep 9000 users
So total put rate is 30,720,000 puts/day (9000*(100*1024/30))
And time to fully deplete all rewards is 274 days (2^32/0.51)/(9000*(100*1024/30))
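And the same check for the shortest-time case:

```rust
// Shortest time to deplete rewards: largest StoreCost, heavy initial load.
fn main() {
    let total_coins = 4294967296.0_f64;                  // 2^32
    let reward_per_put = 0.51;                           // largest StoreCost
    let total_puts = total_coins / reward_per_put;
    let puts_per_day = 9000.0 * 100.0 * 1024.0 / 30.0;   // 100 GB/user/month
    let days = total_puts / puts_per_day;
    assert_eq!(days.round(), 274.0);                     // ~274 days
    println!("{:.0} days", days);
}
```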
Key parameters are:
Upload rate (maybe puts per user per day, maybe total puts per day, I don’t know the easiest unit here)
StoreCost (maybe derived from % full vaults)
These two parameters allow us to calculate the total time to issue all coins for RFC-0057.
For 9000 users uploading between 64 GB per year and 100 GB per month the time to deplete is between 274 days and 1456 years.
But let’s not forget that after all coins are rewarded farmers still get paid the PUT fees so farming always has some reward so long as PUTs are being done.
Which is why I've always been a fan of farming on GETs. Even if uploads are slow coming, as long as people are active in the network (using your resources) you will be paid. I think it will produce better mid-term results and help new farmers weather any lulls in uptake of the network. People always want to consume. They don't always want to post. Or rather, many more consume than post.
I also prefer to farm in gets since it preserves the value of ‘the data is stored’ rather than ‘the data was stored’.
But I want to clarify I feel there are two issues being merged here: whether new coins can be fully depleted vs what actions lead to rewards.
My post was really trying to get at whether new coins can/should be fully depleted (like bitcoin) or whether new coins should be always available and never fully depleted (ethereum is never fully depleted but that’s because there’s no cap, anyone know of a coin that’s capped but also never fully depleted?)
I vote solidly in favor of capped but not fully depleted. Using a health modifier as @mav proposed before. I’ve tinkered with it some and this is my fav version:
The red line is mav’s original modifier, the blue is my proposed modifier. It gives a nice even rate of change as supply goes down. This will drive the price higher naturally with no bumps but supply will never run out even though capped. This is a linear graph so it appears to go to zero, but on a log scale graph you would see that it never reaches zero.
I don’t know of any - which would give Safecoin another unique quality over other coins - a plus for marketing and those who are looking for a non-inflationary coin.
From the RFC: "We also update the terminology of both of these RFCs and substitute ClientManager, DataManager and CoinManger with Elder."
Will Elders have different personas? A single Elder operating as all three personas within the same section seems like it would be a security problem?
It would be better just to extend Mav's one to zero. It has HM=1 at 50%, which looks better.
Actually it could be linear straight from HM 2 at 0% to HM 0 at 100%.
Original Mav post
Mav’s approach is more linear, but that’s bad, not good. We want to limit the number of new coins minted and promote the value of each coin. Hence I believe a logarithmic approach is better.
The linear formula could be FR=(HM-2)/(-2) or HM=(-2)*FR+2.
It would be still very hard to farm last available SafeCoin.
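The linear version can be written directly from those endpoints (a sketch; FR here is the fraction of coins already farmed):

```rust
// Linear health modifier: HM = 2 at 0% farmed, HM = 0 at 100% farmed.
// HM = (-2) * FR + 2, equivalently FR = (HM - 2) / (-2).
fn hm(fr: f64) -> f64 {
    -2.0 * fr + 2.0
}

fn main() {
    assert_eq!(hm(0.0), 2.0); // nothing farmed: double reward
    assert_eq!(hm(0.5), 1.0); // half farmed: normal reward
    assert_eq!(hm(1.0), 0.0); // fully farmed: no new-coin reward
}
```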
I think maybe I went too quickly to the mathematics and the model is poor.
The intention was to take one step past the 'pay double' idea in rfc-0057. To me, that one extra step was 'pay half sometimes'. The trouble begins in trying to decide when to pay double or half. (Tyler's model is equivalent to saying 'pay nothing sometimes', since HM goes to 0 instead of 0.5, which I guess is more sensible when nearly all coins are issued.)
Let’s take a step back and look further into the original rfc-0057 idea of paying double the PUT cost and eventually depleting all coins.
Rewards would still happen forever, but would suddenly be half when all coins are depleted. This is a bit of a shock but would probably not be the end of the network. Some preparation by farmers as the event neared would be enough to get through it. Maybe StoreCost would be unstable for a while as everyone comes to a new understanding.
There’s a few paths to consider from here.
One is ‘maybe StoreCost could become very small so rewards would also be very small and it would become very hard to deplete all coins’. I think there’s a lot of merit in pursuing this; it involves changing the StoreCost algorithm but let’s leave it at that for now.
Another path is ‘maybe the extra reward becomes less and less until at the very end it becomes nothing’, sort of like bitcoin. This is a pretty simple change and retains the idea of fully depleting. It has the benefit that the reward won’t stop as a shock but as a gradual shrinking.
Expanding on the ‘one extra step’ idea, still just doing one extra thing but going into more detail about it… where the one extra thing is allowing the network to take coins back for itself.
My assumption is that a) the security of the network depends on whether it can manipulate behaviour to its own needs, and b) the amount of manipulation the network can do is proportional to the amount of reward it can offer. Maybe that's not true, and rather than aiming for the reward to be very flexible, maybe something else could be, like the StoreCost or maybe the fiat exchange rate. I'm totally open to challenges to this assumption.
These assumptions lead to the idea that if rewards are depleted the network has less ability to stay secure because it can’t be flexible in how much reward is offered during times of stress.
I think the broad idea I’m trying to get at with ‘one extra step’ is to explore whether it’d be ok to never fully deplete the rewards. Have a target for when ‘things are fine’ (maybe target 50% of coins issued) which leads to a reward buffer to use when ‘things are crazy’. The buffer may get as low as only 1 coin issued (so there are heaps of coins to farm) or the buffer may get as high as 2^32 coins issued (so farming should slow right down and consolidate to the most efficient nodes). But if the load stays steady for a while and ‘things are fine’, the network eventually comes back to some ideal amount of reward so the next storm of activity when ‘things are crazy’ can be managed with maximum force of rewarding.
I dunno. Maybe fully depleting is better? I’m not really convinced either way yet.
I think your intuition is good on this one. The network will always need a buffer of coins under its control to account for extreme events. What the optimum setpoint is at equilibrium is anyone's guess, depending on network objectives. The simple fact is that the network will always need to be the biggest whale in the pond so that no single human entity or colluding group could take a majority share of the safecoin economy. If one considers extreme edge cases then the network's goal will need to be greater than 51%. Transient spikes in activity could reduce the ratio, but the network's goal would always be to have at least x% in reserve.
Excellent, you bring up things that I was thinking about, @mav, and was about to post here:
Letting reward curve never hit 0 (but there’s another limit, I’ll get to that), and also that network recovers coins.
I was working this morning on a proposal. I worked it out on my phone with desmos app, so still crude:
Proposal for farming reward
d = data percent stored of available
u = unfarmed coins
s = sections count
m = 1 / s
n = neighbor sections' median node count
x = 1 / n
R = xmu(d + u + 1)
C = 2R
The farming reward is paid out on GETs.
PUT payments go directly to the section of the associated data. This is a recovery of farmed coins, into the unfarmed balance. (It is the first part of having the network keep a buffer for balancing dynamics).
This requires a simple and minimal algorithm, for rebalancing between sections. It can be done in many ways, but for example:
Whenever a section cannot afford to reward a GET, it will request the N/3 neighbours with the highest unfarmed balance to send over 1/4 of their unfarmed balance (where N = neighbour count).
This would supposedly be a relatively rare operation, and have a low impact both at execution and on the general workload.
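A sketch of that rebalancing rule, with invented types and plain in-memory balances standing in for real sections:

```rust
// Hypothetical sketch: when a section can't pay a reward, it asks its
// N/3 richest neighbours (by unfarmed balance) for 1/4 of their balance.
fn rebalance(section: &mut u64, neighbours: &mut [u64], reward: u64) {
    if *section >= reward {
        return; // can afford the GET reward, nothing to do
    }
    let n = neighbours.len();
    // Rank neighbour indices by balance, descending, and take the top N/3.
    let mut idx: Vec<usize> = (0..n).collect();
    idx.sort_by(|&a, &b| neighbours[b].cmp(&neighbours[a]));
    for &i in idx.iter().take(n / 3) {
        let transfer = neighbours[i] / 4;
        neighbours[i] -= transfer;
        *section += transfer;
    }
}

fn main() {
    let mut section = 0u64;
    let mut neighbours = [400, 100, 800, 200, 100, 100]; // N = 6, so N/3 = 2
    rebalance(&mut section, &mut neighbours, 50);
    // The two richest (800 and 400) each send a quarter: 200 + 100.
    assert_eq!(section, 300);
    assert_eq!(neighbours[2], 600);
}
```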
The farming reward
R = xmu(d + u + 1) can be tweaked in endless ways as to achieve the desired properties, this here is a first draft.
Same goes for
C = 2R, which was additionally just an arbitrary value. Most consideration went into devising
Unfarmed coins percent u
The fewer there are left, the lower R should be.
Data stored percent d
The higher the percentage filled, the higher R should be.
Sections count s
The more sections there are, the more valuable the network (and thus its currency) is, and the lower R becomes.
Neighbour median node count x
Together with s gives network size, and the bigger the network the more valuable it (and thus its currency) is, and the lower R becomes.
The reward unit for a rewardable operation. Currently the only such operation is a GET, and one GET is rewarded with R safecoins.
Store cost C
A PUT operation will return 2R to the network, and thus increase unfarmed coins u and data stored d.
This reward algo assumes that network size is an indicator of Safecoin value. When few machines are running, we assume there is little economic margin associated with running one. Many machines running would indicate that it is very attractive to run a machine, and thus we assume that it is lucrative, with high margins.
Additionally, we assume that a larger network is an indicator of increased adoption and breakthrough of the technology and evidence of its larger usefulness and value. We don’t need to know in what way, the sheer increase in number indicates that it is more useful in any number of ways, collectively making it more valuable to society, and thus its currency is more valuable.
This metric is essential to include in the farming algorithm so as to properly balance the payout with regards to the inherent value of the currency, and also maintain the value of unfarmed coins. (This is the other part of the network keeping a buffer of value for various dynamics with regards to members and performance, instead of depleting and losing that influence.)
Having a store cost use a constant multiplier of 2, is an arbitrary value for the moment. Much more consideration could be put into the consequences of this, as well as of any other constant or variable used instead.
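The draft formula transcribed into code, nothing more (parameter names follow the definitions above):

```rust
// Draft farming reward R = x * m * u * (d + u + 1), with m = 1/s, x = 1/n.
// d = fraction of available storage used, u = fraction of coins unfarmed,
// s = section count, n = neighbour sections' median node count.
fn reward(d: f64, u: f64, s: f64, n: f64) -> f64 {
    (1.0 / n) * (1.0 / s) * u * (d + u + 1.0)
}

fn store_cost(d: f64, u: f64, s: f64, n: f64) -> f64 {
    2.0 * reward(d, u, s, n) // C = 2R, an arbitrary first-draft multiplier
}

fn main() {
    // Maximum R = 3 safecoins: one section, one node, storage full,
    // all coins unfarmed.
    assert_eq!(reward(1.0, 1.0, 1.0, 1.0), 3.0);
    assert_eq!(store_cost(1.0, 1.0, 1.0, 1.0), 6.0);
    // R falls as the network grows (s and n increase).
    assert!(reward(0.7, 0.5, 1e6, 100.0) < reward(0.7, 0.5, 1.0, 1.0));
}
```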
The behavior of the algorithm
R is less responsive to variation in d than to variation in u.
The influence of s is large at very low values, and small at very large values. This translates into a maximum of R = 3 safecoins when section count s = 1 and node count n = 1.
With 10 billion nodes, R = 1 nanosafe is reached when d = 0.1% and u = 0.1 %.
At same node count and d = 70% and u = 50%, R = 1100 nanosafes.
R = 11000 nanosafes with 100 million nodes and d = 70% and u = 50%.
- Can R be too small as the network grows? With a network size of 10 billion nodes, 0.1% unfarmed coins u remaining and 0.1% used data storage d (i.e. extreme conditions giving very low reward), we can still reward 1 nanosafe per GET. Would we introduce probabilistic rewarding whenever R < 1 nanosafe?
- Could we simplify and remove node count, and only use section count? Could the decreasing of rewards as sections split encourage unwanted stalling of a split event, would it be too coarse grained to properly respond to network health?
- How should store cost relate to farming reward?
Thunder is rumbling and I need to disconnect the router, will have to refine this later.