Potential misconceptions with the safecoin economy

I am thinking that there will be an economy of people building vaults into SBCs and selling them for a reasonable price. Easier to set up than any home router, and the user can be farming within minutes if they already have an account, or within about 10 minutes if they need to set one up.

This solution keeps electricity use below 10 watts, more like 5 watts. Two models - magnetic drive or SSD - with the magnetic drive the cheaper option, at the cost of a 5 to 10% slower overall round trip.

This will likely be cheaper for the home user who only has their PC on for a limited time (say a few hours) each day. Some have their PCs on for 16 to 24 hours per day, and the extra running time to make up 24 hours is possibly cheaper than the couple of hundred for a fast SBC unit.

I still believe we need to reward all who have successfully held the requested chunk and supplied the signature in reasonable time, because they have all done what is needed in good faith and are benefiting the network by doing so.

This also reduces the benefit of expensive hardware/datacentres acquired just to gain rewards from that slight advantage. In the long run an “arms race” would drive up the actual cost of storing data if more powerful equipment had a significant benefit.


Just popping in … may not read replies, so take this for what it’s worth.

Aside from crypto-currencies, which are self-regulating through code that everyone can see, the general marketplace has many hidden politicised factors. E.g. Jeff Bezos goes to Washington to complain that he’s not being given a fair shake by mom and pop on main street, so write a law to make it fair for me - and by the way, here’s some money for your re-election campaign.

We all sense that existing politicized markets are not ‘fair’, because we know that people use power and influence to bias things in their favor. But with crypto and also with the Safe Network, we have proof through code. As such I think we should not worry about centralization. As we know, centralization can bring efficiencies of scale - which is good for the network. So long as the network can respond to bad actors, then that should be fine as even the little guy is going to benefit from it. Too often the word ‘fair’ is abused to represent one’s own subjective viewpoint. The network needs to be neutral to be successful and incorporating subjective viewpoints on fairness is troubling.

On another point, in dealing with storage of low-popularity data, I wonder if the network could ‘bill’ a farmer for not having what they should have. E.g. a request for a chunk is rejected by a farmer who should have it, so instead of paying a reward, a bill is issued. So carrot and stick, not merely carrot. The bill would likely come off future rewards, since ripping it out of their account should be impossible. Taken a step further, if there is some metric of popularity (I assume there is, as the network increases copies with popularity?) then adjust the reward/bill appropriately. Or just use the reward mechanism, adjusting the reward for popularity - more popular, less reward.
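As a toy sketch of this carrot-and-stick idea (all names and numbers here are illustrative, not from any actual SAFE design):

```python
BASE_REWARD = 100  # illustrative reward unit per served chunk

def settle_chunk_request(served: bool, popularity: float) -> int:
    """Return a positive reward or a negative 'bill' for a chunk request.

    popularity is a 0.0..1.0 estimate of how often the chunk is requested.
    Popular chunks earn less (many copies exist anyway); failing to serve
    an unpopular chunk costs more, since those copies are the ones at risk.
    """
    if served:
        # More popular -> smaller reward.
        return round(BASE_REWARD * (1.0 - popularity))
    # Farmer should have had the chunk but didn't: issue a bill,
    # deducted from future rewards rather than an existing balance.
    return -round(BASE_REWARD * (1.0 + (1.0 - popularity)))
```

The asymmetry (a failed request on rare data bills up to twice the base reward) is just one possible tuning; the point is that reward and bill can both scale off the same popularity metric.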

Thanks for reading and thanks for the discussion. Most interesting.


It may not be particularly useful or insightful, but this reminded me of (idealised) competitive ideal free foraging, which I read about eons ago in pigeons, and cooperative foraging more generally. In one study on starlings, dominant individuals (in our case, well-bandwidth-resourced farmers, for lack of a better name) were less able to defend their competitive advantage against groups of subordinates above a certain number. Along these lines, the parallel in our case would perhaps be bittorrent-like sub-chunk swarm delivery, letting bandwidth-limited farmers challenge much better resourced farmers, giving a stable competitive system.


Small miners often (usually) mine as speculation because they believe the price will go up 10x, 100x, or even 1000x plus in the future. In my experience, much (if not most) small-time mining is actually unprofitable or only marginally profitable at current prices. Many people have mined altcoins from day one, even when there is no existing market and only a dim hope for one in the future. Some do so ideologically, others purely as speculation, some a mix. And this is a rational thing to do, so long as there is an expectation the coin’s value will likely increase in the future. A small loss/investment today can lead to relatively huge gains later on. So I think that a simplistic cost/benefit analysis based on current prices misses a big piece of the picture with respect to the long tail of contributors and their motivations/behavior.


I agree with this. Considering the utility of the network, it seems users will farm to protect network data (continued access to the data is their main reward; farming protects that access, and safecoin is a side effect).

For example, archive.org received $11.5M USD in contributions in 2018 (see p9); at an average of $45 per contribution that’s about 250K donors. I know giving cash is not the same as starting a vault, but this gives me reason to think there are a lot of people who will care about the non-farming aspects of running a vault.


How about a new request type that only vaults can make (i.e. not clients): starting at <xorname>, return the first N chunk names you have stored.

This would allow vaults to prefill their cache (by requesting chunks by name like a normal client would) and also allow some degree of auditing since multiple vaults would overlap the name range and should all respond the same way.

It’d be cool to be able to do audits without impacting the workload of elders.
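A rough sketch of what such a vault-only range request could look like (the function names and the audit check are hypothetical illustrations, not a proposed API):

```python
from bisect import bisect_left

def chunk_names_from(stored: list[bytes], start: bytes, n: int) -> list[bytes]:
    """Hypothetical vault-only request: starting at XOR name `start`,
    return the first n stored chunk names at or after it.
    `stored` must be kept sorted so the range scan is a binary search."""
    i = bisect_left(stored, start)
    return stored[i:i + n]

def audit_agrees(vault_a: list[bytes], vault_b: list[bytes],
                 start: bytes, n: int) -> bool:
    """Two vaults whose responsibility overlaps the same name range
    should answer identically - a mismatch flags one of them for a
    closer look, without involving elders."""
    return chunk_names_from(vault_a, start, n) == chunk_names_from(vault_b, start, n)
```

The auditing angle falls out for free: any peer holding an overlapping range can cross-check another’s answer, and the prefill case is just a vault issuing normal GETs for each returned name.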


Some interesting info about the Moore’s law of bandwidth and the potential gains of ASIC / custom hardware for farming:

Quoting from this conversation about ultra-dense optical data transmission

“Depending on the time period under study, bandwidth increases at between 20% and 100% per year. … the large bulk of improvement over time is due to breakthroughs in manufacturing, materials science, semiconductor optics, and signal processing.”

The source is Figure 3 in this 2017 IEEE paper (the whole paper is really fascinating, especially when taken from a SAFE network perspective).

Quoting from the paper itself

“it is not unreasonable to assume that the DSP underlying a 10-Tb/s interface will be able to fit within a single (or at most within a small number of) CMOS ASICs by 2024.”

The paper talks a lot about historical and expected increase in demand as well as how it affects supply of bandwidth. “network traffic has often been driven by totally unanticipated disruptions”

Seems like the gains from ASICs are not as significant as with bitcoin, but there are several avenues to explore. The paper talks about “Optical Transport Systems in 2024” in Section III; the list of potential areas to explore is:

  • Scaling System Capacity Through Improved Fiber
  • Scaling Interfaces and Capacity by Advanced Modulation
  • Scaling System Capacity Through Wavelength Parallelism
  • Scaling System Capacity in Both Wavelength and Space
  • Spectral Superchannels
  • Spatial Superchannels
  • Transponder Integration

I wonder if the gains will be small enough that, by ‘fuzzing’ the fastest-response calculation, slower vaults can be kept in the picture without significantly affecting performance.
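A minimal sketch of what such ‘fuzzing’ could mean in practice, assuming measured response times in milliseconds and an illustrative jitter parameter (nothing here is a real SAFE mechanism):

```python
import random

def pick_rewarded_vault(response_ms: dict[str, float], fuzz_ms: float,
                        rng: random.Random) -> str:
    """Add uniform jitter of up to fuzz_ms to each measured response time
    before choosing a 'fastest' winner. Vaults within fuzz_ms of the true
    fastest then win a share of the time; vaults far slower still never do."""
    fuzzed = {v: t + rng.uniform(0.0, fuzz_ms) for v, t in response_ms.items()}
    return min(fuzzed, key=fuzzed.get)
```

With a 50 ms fuzz, a vault 5 ms behind the leader wins roughly 40% of requests, while one a second behind wins none - so the jitter window directly sets how much raw speed is worth.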


I still cannot see why all the vaults that have successfully kept the chunk and can verify it in reasonable time cannot be rewarded.

It solves

  • the need for “fuzziness” in timing.
  • the incentive for gaming the speed is greatly reduced.
    • people with reasonable home computers will get rewarded, which reduces the desire to pimp out their PC just to compete
    • the cost-benefit of pimping out their system and connection is much lower, since the returns for the fastest system/connection over a decent home system are not as great. It would be expected that pimping out would actually mean earning less profit (and ROI) than a home computer using spare resources.

The reasons to reward all who successfully store and retrieve (sending the signature) in reasonable time are:

  • The network needs multiple copies
  • The network does not care if one machine is always the fastest for a particular chunk
  • The network needs the other copies for when the fastest fails or is offline (i.e. data security)
  • Worldwide latency actually helps the home computer, since it’s unlikely that pimped-out machines would hold every copy of a chunk, especially as being pimped out is not as great a benefit.

The most important point is that the chunk needs to be stored on multiple machines, so it is only fair that all machines that do so successfully be rewarded, since all were needed.

My 2 cents again; in my opinion this solves a number of the issues with changing technologies (bandwidth + PC/storage) and reduces the benefits of pimping out machines, which is limited to the richer group of society (the rich become richer).

The distribution of the rewards is the important thing to get right. For instance

  • an equal reward almost ruins any incentive for people to improve their machines/connections.
  • rewarding only the fastest means there will be a move towards only fast machines on fast connections (centralisation in areas of high speeds), as those who spend the money increasingly become the only ones rewarded.
  • the right mix means that
    • home users will get rewarded
    • the rich will find it more difficult to break even using pimped-out machines/connections. They were always fighting an uphill battle to compete with home users, and home users being rewarded for doing the right thing means vaults will be mostly in the hands of home users.

The reason is that we do want to have pressure/reward/incentive towards better performance.
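One way to picture “the right mix” (purely illustrative; `fastest_bonus` is a made-up tuning knob, not a proposed network parameter):

```python
def reward_shares(response_ms: dict[str, float],
                  fastest_bonus: float = 0.25) -> dict[str, float]:
    """Every vault that served the chunk in time splits a base pool
    equally, while the fastest responder gets an extra bonus fraction.

    fastest_bonus = 0.0 -> equal reward (no incentive to improve)
    fastest_bonus = 1.0 -> winner-takes-all (arms race toward pimped-out rigs)
    Something in between keeps home users rewarded while still paying
    a premium for speed."""
    vaults = list(response_ms)
    base = (1.0 - fastest_bonus) / len(vaults)
    fastest = min(vaults, key=response_ms.get)
    return {v: base + (fastest_bonus if v == fastest else 0.0) for v in vaults}
```

The whole centralisation debate above then collapses into choosing that one ratio: how much of the pool the fastest machine takes versus what every successful holder is guaranteed.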


That makes sense to me. You want competitiveness between vaults so the network gets top performance, but you don’t want centralisation created by ‘super vaults’ (in size or performance). Could a vault time-out be an option, where a vault that has successfully delivered x chunks within y seconds gets a time-out ‘penalty’, or pause, of a few seconds? That would also affect network speed a bit, but make the vault system fair for all users.
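A toy sketch of such a time-out, with all parameter names and numbers made up for the example:

```python
class VaultThrottle:
    """Sketch of the proposed time-out 'penalty': once a vault has delivered
    max_chunks within window_s seconds, it sits out pause_s seconds."""

    def __init__(self, max_chunks: int, window_s: float, pause_s: float):
        self.max_chunks = max_chunks
        self.window_s = window_s
        self.pause_s = pause_s
        self.deliveries: list[float] = []
        self.paused_until = 0.0

    def may_serve(self, now: float) -> bool:
        """True if the vault is allowed to answer a GET at time `now`."""
        if now < self.paused_until:
            return False
        # Drop deliveries that have aged out of the sliding window.
        self.deliveries = [t for t in self.deliveries if now - t < self.window_s]
        return len(self.deliveries) < self.max_chunks

    def record_delivery(self, now: float) -> None:
        self.deliveries.append(now)
        if len(self.deliveries) >= self.max_chunks:
            # Hit the cap: start the pause and reset the window.
            self.paused_until = now + self.pause_s
            self.deliveries = []
```

The network-speed cost mentioned above shows up directly as `pause_s`: the larger it is, the fairer to slow vaults but the more often the fastest copy is unavailable.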


Or possibly expand this on a larger timescale…

Your home PC vault has been chuntering away for months, 24/7. You have served at least n chunks per day for m months. Your hard-earned vacation is coming up soon. You earn a 2-week break with no loss of status. Presumably with some extra security checking on restart…


It’s all in the ratio of the reward for the fastest compared to the rest. This provides an incentive for people to improve their home computers and, if not set too high, discourages the race to the most pimped-out machines/connections (and thus centralisation).


Sorry @neo, didn’t have time to read your post closely so I missed that.


Just a quick note here: if we were to store SafeCoin amounts as arbitrary-precision rational numbers (storing numerator and denominator) then we could operate with exact amounts when dividing up farming rewards (e.g. 1/3 instead of 0.3333333333333), and would not bump into the division limitations of fixed decimal point when paying out a small reward to, say, billions of contributors. So there is no “tiny remainder” like in Superman III or Office Space. Each contributor would receive the theoretically exact amount.

There are at least 2 Rust libs for mathematical operations with rational numbers, one of them a wrapper around GMP. I’m talking about the internal representation only, used for math operations… user-facing display amounts could still be decimal.

Math ops with rational numbers are slower compared to integer ops, but maybe worth it in the long run…?

I haven’t heard of any cryptocurrency doing this yet, btw. anyone know of one?


BUT you get rounding errors.

There is nothing wrong with doing financial calculations with fixed-point integers. It is a well-established practice in computer science and in financial institutions.

We discussed the issues with rounding errors, and they are one reason the industry uses fixed-point integers.


BUT you get rounding errors.

I think perhaps you are confusing rational numbers with floating-point numbers. The former represent fractions exactly by storing both the numerator and the denominator, and perform division as fractions, giving exact answers. The latter have rounding errors and approximations.

One would use a rational number data type exactly because one wants to avoid rounding errors when dealing with ratios/division (storing amounts as integers does not fix the division problem).


We don’t need the former; the latter is enough. It is exactly 333333333 nanos in the fixed-point integer representation implemented for safecoins. As said by @neo, there is no risk of rounding errors with this representation.

Yes, this isn’t 1/3, but we don’t need this value. And then why not also square_root(2), PI, e, ln(2), …, which are not even rational numbers? All these are values that can be expressed in some math packages like Maple, but they are not needed in the financial domain.

Slower yes, worth no.

Not me, not even in the more general financial domain.


@tfa yes, I understand perfectly well the integer representation, where decimals are calculated only for display. I have built financial apps. I understand that is what cryptocurrency developers are comfortable with. I am simply thinking outside the box for a moment, and expressing the idea so at least it is “out there”.

I brought up the rational number data type because it is seemingly the best fit with a perfectly ideal reward distribution to all that contributed, no matter how small the contribution – as I described above. I am not saying we should use them, only that it is interesting to think about in that context.

A rational number reward is provably more equitable, even for a small number of participants. Say we have a reward of 100 SafeCoin to distribute amongst 3 farmers. Using rational numbers, each receives 33 1/3, and all is well. Using the integer representation, rounding must occur. I will use 2 decimal places for brevity. So the first two receive 33.33. The last one may receive either 33.33 or 33.34. If the former, that is “equitable”, but each party has been robbed of a tiny amount they earned, and a unit of currency has been lost/dropped - how do we account for it? If the latter, it is simply not equitable, as one party received more than the other two for the same contribution.

This may seem an inconsequential amount, particularly with 64 bit (or larger) ints, but to me it is still interesting to think what if we can solve it exactly instead of constantly fudging the numbers at various levels of code.
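For illustration, here is the 100-coin / 3-farmer split in Python, using the stdlib `fractions` module as a stand-in for an arbitrary-precision rational type (the 10^9-nano sub-unit scale matches the “nanos” mentioned above; everything else is just for the example):

```python
from fractions import Fraction

def split_rational(total: Fraction, parties: int) -> list[Fraction]:
    """Exact split: each party gets precisely total/parties, no remainder."""
    return [total / parties] * parties

def split_fixed_point(total_nanos: int, parties: int) -> list[int]:
    """Fixed-point split in integer nanos: the division remainder has to
    go somewhere - here it is simply dropped (the 'tiny amount' above)."""
    return [total_nanos // parties] * parties

shares = split_rational(Fraction(100), 3)
assert sum(shares) == 100                      # nothing lost
assert shares[0] == Fraction(100, 3)           # exactly 33 1/3

nanos = split_fixed_point(100_000_000_000, 3)  # 100 coins at 10^9 nanos each
assert sum(nanos) == 99_999_999_999            # 1 nano dropped
```

With 64-bit nanos the dropped remainder is at most `parties - 1` nanos per payout, which is the “fudging at various levels of code” the rational representation would eliminate.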

Slower yes, worth no.

That seems a value judgement / bias. To me, having exact numbers just feels “right”. The only reason NOT to would be if we simply cannot due to a technical constraint, the most obvious being performance. But it is also quite possible that the overhead of other operations (such as signature checking, hashing, etc.) will be comparatively so huge that int vs rational makes approximately zero difference in practice for transactions. We wouldn’t really know until we benchmark both options, other things being equal.


Not as a direct response to this quote, more to indicate the scale at which I’m thinking: Spread Networks is many orders of magnitude more significant than a ‘pimped-out machine’, having built a $300M fibre cable directly between Chicago and New York for high-frequency trading, for a ‘tiny’ 1.5 ms advantage (taking a 14.5 ms trip to 13 ms).

This is the sort of thing that will arrive at the SAFE network (i.e. large-scale, industry-level developments). I’m not saying it’s a problem, but it’s worth considering and designing the economy to work with these people so that their developments end up benefiting everyone, not just the operators. I’m not suggesting it’s easy, but it is a really interesting thing to think about.

I’m sure there are many other examples, especially in microwave technology, which doesn’t suffer from latency losses the way fibre does.

Interesting idea to think about, it’s a nice perspective to look at the problem from.

Not sure how this would work in the user interface.

Also, numerators and denominators have some limit, e.g. u64, so a wallet holding amounts with very large unequal denominators requires rounding: e.g. 1/MAX and 1/(MAX−1) cannot be added exactly, because the common denominator MAX·(MAX−1) cannot fit in a u64.
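The overflow concern can be made concrete with Python’s unbounded `Fraction` standing in for a hypothetical u64-backed rational type:

```python
from fractions import Fraction

U64_MAX = 2**64 - 1

# Python's Fraction uses unbounded integers, so the sum is exact - but its
# reduced denominator shows why a u64-backed rational type could not hold it.
s = Fraction(1, U64_MAX) + Fraction(1, U64_MAX - 1)

# 1/M + 1/(M-1) = (2M-1) / (M*(M-1)), already in lowest terms
# since gcd(2M-1, M) = gcd(2M-1, M-1) = 1.
assert s == Fraction(2 * U64_MAX - 1, U64_MAX * (U64_MAX - 1))
assert s.denominator > U64_MAX   # would overflow a u64 denominator
```

So a fixed-width rational coin would need rounding rules of its own the moment denominators stop sharing factors, which undercuts the “always exact” selling point unless the numerator/denominator are arbitrary precision.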