RFC 57: Safecoin Revised

Looking at the total amount of BTC, assuming receivers managed to be pure hodl all the time, yes, the amount is huge.
On the other hand, if you assume this amount is spent continually, the effective value would be waaaay lower. It's worth mentioning that most bitcoins were mined at a time when they had almost no value.

1 Like

Good point, however I think that is OK as long as both do what they should. Node age and relocate etc. will separate them into different sections, Elders in their own section will separate them as well and so on. It is related to Sybil prevention and works in this case as well. If nodes do that I think we are fine, they may be taking risks though.

I think it could be seen from different angles. I am replying to a huge number of posts right now (internally) so wanted to give some context here. Hope it helps.

3 Likes

More than 10% of them, at least a few years back, IIRC

1 Like

But in SN we have a more stable reward, there’s no halving.

4 Likes

Good point, and it should be considered more. Perhaps dev rewards should be related to the available coins left to be farmed, or similar? Worth digging into that.

10 Likes

This post was a lot longer, but I found a lot of stuff answered itself as I wrote it. At the risk of stating the obvious, this is a very detailed and nuanced RFC!


From Unresolved questions:

“we currently require payment for the creation of a new CoinBalance instance, whereas here the vault would need to create one without being able to pay for its creation.”

Some options

  • Elders pay for it. Elders already have both higher income and higher costs. It’s a tiny amount of extra work/cost to pay for a new wallet for the new vault as part of the overall burden of a new vault joining. It could be baked in that if the new vault’s CoinBalance already exists, the elders credit it for being a well-prepared citizen.

  • Have the ‘from’ field be the aggregate section BLS key, which is accepted even though it has no balance. The check for valid creation becomes a two-stage test: a) ‘from’ is able to pay, or b) ‘from’ is a section.

  • Have elders act as an escrow, and when the new vault has earned enough to pay for CoinBalance creation, the elders create it. This may involve having an actual CoinBalance for the aggregate section BLS key. In this way Elders act as a ‘farming pool’ for very young vaults, aggregating funds like a bitcoin mining pool. But the purpose of the pool is not to reduce variation in rewards, more to ensure a correct sequence of operations in the vault setup phase.

I think the second option is simplest. Would be interested to hear more ideas.
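A rough sketch of how that two-stage test might look; the `Payer` enum and the names here are my own illustration, not RFC 57 types:

```rust
// Illustration of the second option's two-stage validity check.
// `Payer` and these names are assumptions for the sketch, not RFC types.
enum Payer {
    /// An ordinary CoinBalance paying for the creation.
    Account { balance: u64 },
    /// The aggregate section BLS key, accepted even with no balance.
    Section,
}

/// Creation is valid if a) 'from' is able to pay, or b) 'from' is a section.
fn creation_is_valid(from: &Payer, creation_cost: u64) -> bool {
    match from {
        Payer::Account { balance } => *balance >= creation_cost,
        Payer::Section => true,
    }
}
```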


In the Safecoin Transfer section, how is the uuid for a CoinTransfer.Credit.transaction_id determined?


In the Account creation section, it seems most of the text belongs in an ‘Account Structure’ RFC. These details seem to have no impact on safecoin or farming. I think some mention of this data structure is useful to add context to CoinBalance, but the details don’t seem to belong here. Probably the first two sentences of this section are all that’s needed, plus “any updates are free of charge.”

A minor point for sure, but the document is pretty large and any way to reduce cognitive burden is a win.


A question to ponder about the StoreCost algorithm…

Should the network measure “full vaults” or “spare space” and what’s the difference?

On one hand, full vaults are really what needs to be avoided, but on the other hand, spare space is the metric with the best resolution for allowing them to be avoided. Full Vault is post-problem, Spare Space is pre-problem.

Another question…

Should StoreCost for a network of 10 sections with 1 full vault each be the same or different to a network of 10,000 sections with 1 full vault each?

On one hand the portion of full vaults is the same for both networks, but on the other hand the larger network has a thousand times more spare space than the smaller one.


With the overall 2 × UploadCost reward, it could be that vaults would want to upload data to their own section to get more rewards. If every vault did this it might pay for itself, but it becomes a prisoner’s dilemma problem. Just touching this to see if it tickles anyone.


There could be a maximum reward scenario when one vault is not full and all others are (maybe only briefly? maybe all only 1 chunk over capacity?). This would mean nearly 2 safecoin per PUT, with the 2× reward totalling nearly 4 safecoin reward per PUT to be divided among the section. Not sure how achievable or dangerous this scenario is, but again wanted to touch on it to see if it prods anyone.


“reward will be divided as follows”

single_node_age = if no associated CoinBalance::owner { 0 }
                  else if flagged as full { node's age/2 }
                  else { node's age }

I think an extra line might be good here to account for the additional workload of elders that adults don’t have (especially important for tie-breakers among adults for eldership):

single_node_age = if no associated CoinBalance::owner { 0 }
                  else if flagged as full { node's age/2 }
                  else if elder { node's age + 1 }
                  else { node's age }

I’m interested in the Full Vault details…

Will chunks always be attempted to be stored in full vaults, to give them a chance to become recognised as unfull again? Or does a full vault get automatically bypassed and the chunk redirected to the next-nearest non-full vault?

Will there be an option for a full vault to ask for relocated chunks to be returned to it, e.g. an Unrelocate request?

Is there some time after which the full vault is killed? Or can full vaults live on the network forever? Or until their next relocation?

Can the list of relocated chunks keep growing forever or is there some limit to how many chunks can be relocated?


Has the disallow rule been replaced by number of full vaults?

Seems to be covered in Future enhancements section:

“accepting a node to a section will revolve around 100 nodes capable of storing data, full nodes will not be counted as part of the section recommended size of 100”

and

“Each section will aim to maintain a minimum ratio of 50% good nodes”

So… full nodes don’t count toward the 100 target, but they do count toward the 50% good.

Does this pseudocode seem right?

allow new vault?
    less than 100 good -> yes
    more than 50% full -> yes
    else -> no
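If I've read it right, the pseudocode above could be sketched like this (the 100 and 50% figures are the RFC's suggested targets, assumed here, not fixed protocol constants):

```rust
/// Sketch of the admission rule above: accept a new vault when the
/// section has fewer than 100 good (non-full) nodes, or when more
/// than 50% of its nodes are full. Thresholds are the RFC's suggested
/// targets, assumed for illustration only.
fn allow_new_vault(good_nodes: u32, full_nodes: u32) -> bool {
    let total = good_nodes + full_nodes;
    good_nodes < 100 || full_nodes * 2 > total
}
```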

“accepting a node to a section will revolve around 100 nodes capable of storing data, full nodes will not be counted as part of the section recommended size of 100”

“Each section will aim to maintain a minimum ratio of 50% good nodes”

This gives some expected ranges of section size.

100 good nodes is the target. At least 50% good nodes means between 0 and 100 full nodes at that target.

200 good nodes is roughly when a split will happen, which means between 0 and 200 full nodes in a big section.

So the likely biggest section would be 400 nodes (200 good, 200 full).

Some StoreCost calculations

| G | F | N | StoreCost |
|----:|----:|----:|--------:|
| 100 | 0 | 100 | 0.0100 |
| 100 | 100 | 200 | 0.5100 |
| 200 | 0 | 200 | 0.0050 |
| 200 | 200 | 400 | 0.5050 |

StoreCost will almost always be roughly between 0.005 and 0.5 safecoin per PUT.

Does this summary sound right? Am I missing some details?
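For what it's worth, all four rows of the table are consistent with StoreCost = (1 + F/2) / G. This is my reconstruction inferred from the numbers, not necessarily the RFC's exact formula:

```rust
/// StoreCost reconstruction inferred from the table above:
/// cost = (1 + F/2) / G, with G good nodes and F full nodes.
/// It matches all four rows, but is an assumption rather than
/// a quote of the RFC's definition.
fn store_cost(good: u32, full: u32) -> f64 {
    (1.0 + full as f64 / 2.0) / good as f64
}
```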

13 Likes

As I said, it's later, when underflows/overflows occur from/to the parts.

And yes, I was using “operation” to mean a set of instructions performed on each variable, and that set of instructions can include the overflow/underflow handling.

Whereas the single fixed-point integer needs basically one instruction, not just one operation.

No, as this is an avenue for attack: draining the elders by spamming.

The solution is to have zero-cost creations (PUTs) for certain privileged data, like the creation of a coinBalance.

A coinBalance is created when the vault has earned enough to put into it. That prevents spamming of coinBalance creation.

“Full Vaults” seems odd to me too since we never want full vaults. And you seem to agree.

My thinking is that one full vault indicates that other vaults may also be near full, with the potential for a cascading effect: a small amount of uploads may cause many times more vaults to become full, and the consensus chain (is it still called the datachain?) grows rapidly, since going full causes more decisions to be made and more consensus events, which in turn causes almost-full vaults to become full. Worst case is that 5% of vaults are full, 50% are one block/chunk away from being full, and say another 40% are very close. The random distribution has worked very well to cause even distribution (rare, I know), and thus vault after vault goes full with each chunk uploaded.

The sacrificial chunks allowed for a much better indicator, since it is harder to successfully fool the network as to how much free space is (almost) guaranteed to be there. There may be a lot more, but the network only needs to know that there is enough spare space.

6 Likes

I’ve been asked to limit my time in this thread, so I’m afraid I won’t be able to answer individual points that I’ve seen in here. But I’ll try and get across a couple of points while I can.

Firstly, I already answered @happybeing on the dev forum so that might give some pointers :slight_smile:

@riddim On the issue of the type of encoding, I would probably favour plain base-32 myself over z-base-32, since it’s a better-known standard and less likely to have buggy implementations, but that’s just a personal opinion.

On the issue of the Coin structure, I’ve created an example to try and get my point across more clearly since I think a few people were misunderstanding what would be exposed to app devs or users. Please have a peek at the generated docs and let me know if you still see problems there. As of now I haven’t had time to include functionality to allow for simple arithmetic operations on the various types, and those will likely be worth discussing too (e.g. should we have a couple of ways to add two Coins, one which panics if the result is too large and another which returns a Result?)
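To make the arithmetic question concrete, here is a hypothetical sketch of the two flavours of addition; the real Coin type and its API may look quite different:

```rust
/// Hypothetical Coin wrapper to illustrate the two addition styles
/// discussed: one that panics when the result is too large, and one
/// that returns a Result. Not the actual Coin type from the docs.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Coin(u64); // value in the smallest unit

impl Coin {
    /// Panics if the result is too large.
    fn add(self, other: Coin) -> Coin {
        Coin(self.0.checked_add(other.0).expect("Coin overflow"))
    }

    /// Returns an Err instead of panicking on overflow.
    fn checked_add(self, other: Coin) -> Result<Coin, &'static str> {
        self.0.checked_add(other.0).map(Coin).ok_or("Coin overflow")
    }
}
```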

Interesting ideas, all three! I suggested another alternative in my reply on the dev forum, but the gist was: I imagine we could have a different RPC to create a CoinBalance which would be used for the sole purpose of receiving farming rewards and which would be free to create. While it has earned less than the amount it normally costs to create a new CoinBalance it’s flagged as receive-only or something like that.

This is probably more like your third option, which appeals to me too. The main downside I can see of using an aggregate section key at the start is that once the balance is high enough, the section would have to transfer ownership of the CoinBalance to a singly-owned key which the new vault has created. The section would also need to keep transferring ownership to the new aggregate section key as the section changes.

The idea is that it’s up to the clients to decide. If e.g. a vendor wants to just derive it from a sequential invoice or order number, it can. The vendor provides the required UUID to the customer along with his CoinBalance address and invoice amount, and the customer does the transfer using those values. Using that known UUID, both the vendor and the customer can get a “receipt” from the network by calling get_transaction(vendor's address, uuid).

If you’re just transferring an amount of safecoin between two of your own CoinBalances and don’t care about a “receipt”, the UUID can be completely random.

I agree about the appeal of using Spare Space, but it has its own problems. It’s easy for a malicious vault to lie about that, and even for good vaults, they’d have to control the amount of space they’re advertising or else the amount of disk space they can actually use could drop to zero more or less instantly (e.g. the user fills his disk by copying over a bunch of videos).

7 Likes

All good :slightly_smiling_face: glad to hear this (… because the only upside I am seeing is a (maybe) better readability of the last character… depending on the byte length of the encoded string (I won’t count, because I refuse to invest more thinking on this…)… while the first characters are fixed because of the cid, and depending on the ‘readability score’ for them, that might even in the logic of base32z make base32-encoded xor names more readable than base32z-encoded ones… So at the latest, after talking a bit about what it is and where it applies, it should be clear now that it’s a bit of a questionable choice… [and the fact that there are even encoding errors in the example in the paper doesn’t make it look better…])

… In the end people will just use it, and only the ones implementing language-specific bindings will be asking stupid questions, and all others will just use it and not even notice… So I will try to really say no more about this topic at all now…
… So this is a topic that (by far) doesn’t deserve all this thought I guess :slightly_smiling_face: there are way more important choices that need to be made…

1 Like

@Fraser, just in case you did not realise, fixed-point integer maths is just integer maths. The fixed point only applies when the value is to be displayed, and fixed-point operations are just integer operations, no difference. The only difference is when the value is to be displayed or converted to/from strings.

I still do not understand why you think it’s better to hold the coin balance as 2 distinct variables. Adding with fixed point becomes as easy as c = a + b plus a check for overflow; no need to check for milli, or micro, or nano.

For a string being supplied: fixed point looks at the string, adds zeros to fill 9 decimal places, and removes the decimal point, and you have the string version of the fixed-point number; then convert that string to an integer value to store.
E.g. supply 1.2345, add zeros to fill 9 places => 1.234500000, then remove the decimal point => 1234500000, and convert that to an integer and store it as the coin value in a u64.

For a string to be returned, convert the integer to a string and add the decimal point at the 9-digit point.
Coin value integer is 1234567890; convert to string => 1234567890; add the decimal point at 9 digits => 1.234567890

Let the front end accept or supply an integer variable value, and there is no need for the back end to know about strings. Have a string interface if you feel it’s necessary, along the lines of the above, and if you want milli, micro, or nano, then simply truncate the string and check whether the cut-off digits are zero; if not, it’s an error.

Then the back end is doing integer maths on the fixed-point integer, which is exactly the same as just doing integer maths. So please explain why keeping the coin balance as 2 separate values is easier or quicker or conceptually better than just storing the coin value as one integer that represents value in nanos? (for both back and front ends)

2 separate values means multiple operations just to add two coin values together. An integer value in nanos is one ALU add operation, a single machine-code addition. 2 separate values will be conceptually strange and a large number of machine operations.
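A minimal sketch of the conversions described above, storing the coin value as a single u64 in nanos (function names are mine; the 9-decimal-place choice is from the post):

```rust
/// Fixed-point helpers as described above: store the value as one u64
/// counting nanos; strings only matter at the edges. These names are
/// illustrative, not a proposed API.
const NANOS_PER_COIN: u64 = 1_000_000_000;

/// "1.2345" -> Some(1_234_500_000): pad the fraction to 9 digits,
/// drop the decimal point, and parse as an integer.
fn parse_coins(s: &str) -> Option<u64> {
    let (whole, frac) = match s.split_once('.') {
        Some((w, f)) => (w, f),
        None => (s, ""),
    };
    if frac.len() > 9 {
        return None; // finer than a nano: reject rather than truncate
    }
    let whole: u64 = whole.parse().ok()?;
    // pad the fraction out to 9 digits, e.g. "2345" -> "234500000"
    let frac: u64 = format!("{:0<9}", frac).parse().ok()?;
    whole.checked_mul(NANOS_PER_COIN)?.checked_add(frac)
}

/// 1_234_567_890 -> "1.234567890": insert the decimal point 9 digits in.
fn format_coins(nanos: u64) -> String {
    format!("{}.{:09}", nanos / NANOS_PER_COIN, nanos % NANOS_PER_COIN)
}
```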

13 Likes

Great job getting this out! It’s taken a while for me to digest this, but a few comments/questions:

  1. Mav touched on this, but the store costs are too high if we are still talking about 1MB PUTs. At 0.005 (the cheapest they can be) that is 0.5c/MB at $1 SAFE. $5/GB will only allow very limited use cases, and worse, it is linearly dependent on price, which we all want to be higher and not artificially capped by killing use cases. Note this is the best possible case, and it can only be cheapened by a lower market price for SAFE, not by adding capacity, which seems wrong at a fundamental level.

  2. I think the formula fails because of 1, and because the minimum is set entirely by the exogenous choice of the number of nodes per section, not by any performance considerations. As Mav noted, (Full/notFull) doesn’t give enough fine-grained information about available space (and could only be used to infer actual available space if the size distribution were known and the section was almost full already).

  3. The price may jump quickly (double) during section splits…

  4. Having a section-specific price seems to add issues with ‘shopping’ while the network is young (it should even out in a big network)… why not simply keep re-encoding data with some kind of new nonce (and getting a new section) until some section reports a low price? I could be off here…

It seems a design principle for the formula should be as independent as possible from the choice of number of nodes/section, and be able to go arbitrarily low (and possibly high). To achieve this feels like it requires greater insights into the amount of data storage available on the nodes… possibly a success/fail message could also include the used/remaining space. There may also have to be a thought for if and how vaults will be able to dynamically add more space. One possible solution from control theory is to target the rate rather than the value… if you had an estimated available space you could calculate the time to fill it at storage rates over say the last week (a reasonable time frame to add capacity) and base the price off that… some thinking needs to be done here in any case. We probably need a simulator to calculate pricing for a number of formulas at different network sizes/conditions.
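To make the rate-targeting idea slightly more concrete, a sketch; the one-week target window, the names, and the linear scaling are all assumptions of mine:

```rust
/// Sketch of rate-targeted pricing as suggested above: estimate how
/// long the section's spare space lasts at the recent fill rate, and
/// scale the price up as that horizon drops below a target window
/// (one week here, a plausible time frame to add capacity).
/// Everything in this function is an assumption for illustration.
fn rate_based_price(spare_bytes: f64, fill_bytes_per_sec: f64, base_price: f64) -> f64 {
    const TARGET_SECS: f64 = 7.0 * 24.0 * 3600.0; // one week
    if fill_bytes_per_sec <= 0.0 {
        return base_price; // nothing being stored: no premium
    }
    let secs_to_full = spare_bytes / fill_bytes_per_sec;
    // price grows as the time-to-full shrinks below the target window
    base_price * (TARGET_SECS / secs_to_full).max(1.0)
}
```

The appeal is that the price can go arbitrarily low when space is plentiful and rise smoothly as the fill rate outpaces capacity, rather than jumping on a full/not-full flag.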

The design for safecoin is in any case almost simple compared to getting the network incentive right. The switch to ‘section’ farming rather than individual farming has pros and cons… pro: less reward variability, simplicity in implementation. Con: it seems to eliminate some of the incentives to provide the fastest nodes possible and rather incentivizes uptime exclusively. Maybe this is good, maybe bad, but there is in any case one incentive lost.

Great work. Can’t wait to follow the development here.

11 Likes

The target should be to set the algorithm so that in the early stages we have about 1TB per 1 Safecoin. As that looks like a very tiny reward for farmers, it should lead to “re-investing” all earnings back into the network, while the $ price rises and the 1TB/1Safecoin ratio gets lower.
If that were set before the beta release, we should also see an adequate market price rise to reflect the real price of 1TB of forever storage with unlimited bandwidth.

Edit: And as a side effect there should be fewer vaults from big datacentres, since the profit would be very speculative.

1 Like

I’m struggling to understand this too.

One u64 integer is just super simple and industry standard. Easy to store, easy to represent, easy to sum. Why complicate things?

8 Likes

If SAFEcoin ever got to $10000 per coin, then a nano-safecoin is 1 milli-cent.

Does anyone have any objections to this smallest unit of SAFEcoin if it ever got to $10000?

I am thinking of those who want micro tipping or transactions.

It’s doubtful it will get to this value in the expected lifetime of this evolution of the SAFE Network (say 20-50 years).

Personally this is good enough for me.

9 Likes

Digging more into the RFC, I feel the farming rewards formula is also likely to need revision. Aside from my previous criticism that response speed (and bandwidth) are unrewarded, it also appears that provided storage has no impact. Thus a vault providing the minimum to meet the resource test and throttling its bandwidth thereafter gets the same reward as a multi-TB array at the same uptime. Incentivizing more vaults adds decentralization, but at the expense of increasing the expected number of hops and thus likely network responsiveness for messaging and other applications. I thus posit that there should be some counterweight, at least rewarding either offered or used size. Perhaps a multiplier of ln(offered_size/minimum_for_resource_test) applied to the weighting.

It may add a bit of complexity, but should be fairly simple to monitor… and I suspect the storage reporting is necessary for an efficient and stable StoreCost algorithm anyway. If a node advertises x amount of space then it should never reject an (x-1)-sized request to store, or it should be punished.

5 Likes

“The target should be to set algorithm to be able in early stages to have about 1TB per 1SafeCoin.”

I’m thinking of a couple of alternative formulations, but suspect the store cost should be of the form min_SAFE_divisibility*(1+calculated_network_premium), i.e. the minimum cost is whatever SAFE’s equivalent of a satoshi is (nanoSAFE sounds good to me).

5 Likes

I have to agree. This seems to be a weird relic of the past, almost like some unholy marriage between the old type of farming (discrete coins) and the new type of storage (balances).

Talking about the basic units of safecoin, I’m not sure how I feel about this particular detail:

The parts field represents a multiple of “250 pico-safecoins”, i.e. the number of 250 * 10^-12 -th parts of a single safecoin. The total value of the parts field will be required to be less than a single safecoin, i.e. it will always be less than 4 billion.

250 pico-safecoins, seriously? What sort of black magic is that? Four billion is the largest round number that can be represented on 32 bits, right? So, just a random artifact, elevated to a position it clearly doesn’t deserve. Ugly.

But I get it. It must be an attempt to ensure the total market cap remains as advertised, right? I say, screw that. A single number with 2^64 units makes a lot more sense, it’s easier to work with, and I bet my uncle’s top hat that we’ll end up with it even if starting out with the split balance because the MaidSafe folks have always been ready to throw away way less idiosyncratic things with way more work already put into them, and I don’t think this will be the exception.

Maidsafecoins could just be exchanged for 2^32 unit-safecoins each to get the same value. Some children will throw a tantrum, so what.

10 Likes

I agree that non-decimal units are probably a bad idea, certainly in a user interface.
But if they said, instead of 250 pico-coins, “a quarter of a nano-coin”, that sounds at least a bit less strange.

1 Like

With all due respect, that’s still just a stupid artifact, not a conscious design decision. I mean, nobody wakes up and goes, "Hey, wouldn’t it be great to set our basic unit of value at 250 pico-something?"

3 Likes

I dunno. It has a nice ‘Old Money’ nostalgia about it for Brits of a certain age. I can just about remember this.

Money was divided into pounds (£ or l in some documents) shillings (s. or /-) and pennies (d.). Thus, 4 pounds, eight shillings and fourpence would be written as £4/8/4d. or £4-8-4d. The “L S D” stands for the Latin words “libra”, “solidus” and “denarius”.

There were
20 shillings in £1 - a shilling was often called ‘bob’, so ‘ten bob’ was 10/-
12 pennies in 1 shilling
240 pennies in £1
Pennies were broken down into other coins:
a farthing (a fourth-thing) was ¼ of a penny
a halfpenny (pronounced ‘hay-p’ny’) was ½ of a penny
three farthings was ¾ of a penny (i.e. three fourth-things). There was no coin of this denomination, however…
3 Likes