Fraser's Safecoin Alternative Design (Postponed State)


An alternative SafeCoin RFC

GitHub Discussion

A bite of the safecoin

First, let’s define the form of these balances:

struct Coin {
    units: u32, // whole safecoins (supply capped at 2^32)
    parts: u32, // number of 2^-32 fractions of one safecoin
}

The units field will represent whole safecoins, and since the defined upper limit of issuable safecoin is 2^32, this need be no bigger than a u32.

The parts field represents the number of 2^-32-th parts of a single safecoin. Since the maximum value of a u32 is (2^32)-1, parts will always represent less than a single safecoin.


So does this mean that the parts of a safecoin use binary division, just as a typical programmer would choose?

As a programmer who has been around the block with both highly technical programming and business programming, I'd like to say in the strongest possible terms that binary division will be a thorn in the side of SAFE.

People do not think in binary and do not want 77309411/4294967296th of a safecoin. They want 0.018 instead.

77309411 parts in binary is 0.01799999992363154888153076171875 of a safecoin. So the receiver loses some of their 0.018, and real-life, ordinary non-technical people will cry foul again and again.
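A quick sketch of the round trip (the encoding step is my own illustration, not from the RFC): 0.018 safecoin truncates to 77309411 binary parts, which decodes back to slightly less than 0.018.

```rust
/// Decode a count of binary 2^-32 parts back to a decimal fraction.
fn parts_to_decimal(parts: u32) -> f64 {
    parts as f64 / 4_294_967_296.0
}

fn main() {
    // Encoding 0.018 safecoin as binary parts truncates to 77_309_411,
    // which decodes to 0.01799999992... -- not 0.018.
    let parts = (0.018_f64 * 4_294_967_296.0) as u32;
    println!("{} parts = {:.20} safecoin", parts, parts_to_decimal(parts));
}
```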

People do not think in binary, they want decimal. Whereas for technical programming binary is exactly what was wanted.

Can I suggest that the parts field use 1 billion parts (a decimal fixed-point value) rather than 2^32 parts?
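With a decimal fixed point (a sketch of the suggestion above, not the RFC's design), 0.018 safecoin is exactly 18,000,000 parts out of 1 billion, and repeated payments sum with no precision loss:

```rust
const PARTS_PER_COIN: u64 = 1_000_000_000;

/// Sum `count` identical decimal fixed-point payments, returning
/// (whole coins, remaining parts).
fn total(parts_each: u64, count: u64) -> (u64, u64) {
    let total = parts_each * count;
    (total / PARTS_PER_COIN, total % PARTS_PER_COIN)
}

fn main() {
    // 1000 sales at 0.018 safecoin (18_000_000 parts) come to exactly
    // 18 whole safecoin, with nothing lost to rounding.
    assert_eq!(total(18_000_000, 1000), (18, 0));
}
```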


This part is worrisome

The section’s farmed value will never be allowed to exceed the amount of coins for which that section is responsible. Ideally no section should ever get close to “running out” of farmable coins; getting the farming rate algorithm correct should ensure that. However, in the case that a section has farmed all of its coins, it will stop issuing any more until the farmed value reduces again.

If all coins were farmed, you would run the risk of farmers becoming disillusioned and leaving that section. If no coins belonging to that section are recycled for any length of time, you may end up in an irrecoverable situation where too many farmers leave because there are no rewards, assuming either that the network is broken/failing or that it's not worth farming any more.

Yes, I know it is unlikely that all coins will be farmed, but the solution should also account for what people will do when that situation happens.

The old scarcity in RFC 12 made running out of coins a virtual impossibility, and if it did happen, the network would be very old and large. But it still did not address the above; it was just many times less likely to happen.

Nitpick here

When handling a received Credit , if the specified CoinAccount doesn’t exist, the coin will be recycled by decreasing that destination section’s farmed value by the specified Credit::amount . It would perhaps seem more intuitive to return a failure message to the source section, since that’s the farmed value which was increased, and hence it seems fairer to recycle the coin back into that source section. However, handling this would involve more traffic and more code, and such “unfairness” is likely to become fair overall when applied equally across all sections.

While highly unlikely, there is a possibility that the “farmed” value in the receiving section could go negative. What happens then?

  • Does the “farmed” value stay at zero and the sent amount is just “lost” forever?
  • Does it indeed go negative and just wait till it goes positive after some farming is done from that section?
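One way the network might guard against this (purely my own sketch; the function name and the forwarding idea are assumptions, not from the RFC) is to clamp the farmed value at zero and hand any surplus elsewhere:

```rust
/// Hypothetical sketch: recycle `amount` against a section's farmed
/// value without letting it go negative, returning the surplus that
/// could not be recycled locally (e.g. to forward to a neighbour).
fn recycle(farmed: &mut u64, amount: u64) -> u64 {
    let recycled = amount.min(*farmed);
    *farmed -= recycled;
    amount - recycled
}

fn main() {
    let mut farmed = 10;
    // Recycling 25 against a farmed value of 10 leaves 15 unhandled.
    let surplus = recycle(&mut farmed, 25);
    assert_eq!((farmed, surplus), (0, 15));
}
```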

On receipt of a CoinTransfer request (after passing out of Parsec), the CoinManagers will deduct the amount specified in credit from the source account. If the account’s balance doesn’t permit this, the request will be silently dropped.

Why silently drop the request? If my wallet sends to another address and the request is not fulfilled, shouldn't a message be sent back to the requester (source ID) saying that it failed?

Related to the above: simply taking the sent value is another thorn in the side of SAFE.

If the account to be credited doesn’t exist, the destination CoinManagers will recycle the coin by deducting the value from their section’s farmed value. The client will receive no notification that the transaction failed. (We could look to refund the source CoinAccount in such cases, but this would require more effort by the network. Well-designed client applications should be able to reduce the risk of accidental loss of coins in this way to zero.)

There is no ethical way that SAFE can just recycle the sent amount if the receiving ID doesn’t exist. (or even if there is another reason for failure)

The network cannot just recycle coins in these sorts of situations. The ethical way is to return the sent amount back to the sender. It wasn’t their fault that the receiving ID didn’t exist. They were given a payment ID to send the coins to and the one who gave them the receiving ID could be incompetent or malicious.

Each CoinAccount will have an associated fixed-length FIFO queue

Is this FIFO length able to be set by the user? I am thinking that a shop selling a lot of items may need a large FIFO, but a person may want to remain more anonymous and only want a FIFO that is one transaction long.

Although for the person wanting to remain anonymous this point might be their solution/saviour

as the recipient will likely not be aware of the actual CoinAccount used by the sender to credit its own; only the transaction_id is visible.


If payment rates weren’t variable from section to section, we could omit the Coin field from the requests which MaidManagers send to DataManagers.

Actually, won't app-dev rewards, maintainer rewards, and pay-the-provider rewards potentially have different amounts, making the field necessary anyhow?

Instead of GetTransaction and Transaction , we could possibly use push notifications to notify the sending and receiving clients of a completed transaction.

This would solve the high volume shop and the FIFO size.

It’s unclear at the moment how to discourage spamming the network with CoinTransfer requests for tiny amounts

You could “lock” up the coin account for sending till the transaction is complete. This introduces a delay and for legitimate uses should not be a problem since the user is really doing only one payment at a time and the second or two delay before another payment is hardly going to be noticed.

That would limit the transaction rate for a coin account to somewhere on the order of 1800 per hour.
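As a sanity check on that figure (assuming the "second or two" lock mentioned above means roughly two seconds per transfer):

```rust
fn main() {
    // A ~2 s lock per transfer caps a single coin account's throughput
    // at 3600 / 2 = 1800 transactions per hour.
    let lock_seconds = 2u32;
    let per_hour = 3600 / lock_seconds;
    assert_eq!(per_hour, 1800);
}
```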

It's not like before, when each coin was an MD and sending 100 coins meant 100 transactions. With your method, sending 100 coins is only one transaction.


Unless, of course, they were doing it to spam the network. So I think if it was to be returned, it would need to be minus a transaction fee.


Spamming can be done anyhow by sending the coin to another coin address, which really is about the same work. If the coin address does not exist, then the request is changed to send to itself and continues. Same work.

We cannot become unethical simply because there are those who can spam. So ethically it should still be returned and another method be worked out to stop spammers. If any fee is charged then it should be charged to all.

But if the sending coin account is locked while the transaction is in progress, then returning failed sends would make no difference to the spam at all; the locking slows it all down anyhow.


Here’s my interpretation by way of analogies (might be a bit loose in parts, suggestions welcome):


The ledger is like a set of 2^32 pigeonholes. Each pigeonhole may contain a single safecoin. Each pigeonhole may have a lock on it which only the owner can unlock. Some pigeonholes have no locks and contain no coins. When the network is satisfied that a new coin has been earned, it puts a new coin in an empty pigeonhole and allows the new owner to put their lock on it. The current owner can remove their lock and change it to a new owner’s lock at any time with no history kept. Sometimes owners remove their lock, take out the coin in their pigeonhole and give it back to the network in exchange for storage space.

Fraser Alternative

The ledger is like a set of pigeonholes that each represent a user account. Every pigeonhole has a lock that only the owner has a key for, and a piece of paper inside recording the number of coins owned by the user. There are no empty or unowned pigeonholes (but some may have zero balance). More participants means more pigeonholes to look after. Some pigeonholes have a big balance, some have a small balance, it’s just a number written on the piece of paper that lives in the pigeonhole. A transaction involves reducing the number on the paper in one pigeonhole, creating a new piece of paper with the amount to transact, sending that new paper to the recipient pigeonhole, then increasing the number on the recipient’s piece of paper. The transaction paper may be kept by the recipient or can be discarded by them any time. There are rules about how much the numbers can be changed so they never all add up to more than 2^32. The sum of the balance in all pigeonholes is stored in each block of the datachain for each section, but contains no identifiable or verifiable history. Responsibility for pigeonholes is split among vaults depending on the name of the owner of each pigeonhole.

Blockchain (for completeness!)

The ledger is like a chain with links. Each new transaction is linked to some previous transactions, forming a linked chain of transactions. The total of the latest transactions gives the current balances of all participants. Existing links cannot be removed or changed. New links are added in batches every ten minutes.


Each section has its datachain and only has a record of the “farmed” coins and the total coins it is responsible for. And the section only operates on the “pigeon holes” that fall in the section’s address range.

The way you said it sounded like you might have meant that each section has a sum of the balance for all (network wide).

Yes, it is changing the coin from being stored in 2^32 data records (MDs) to being recorded as values in coin accounts, plus a “farmed coin” value in the section.

The one issue that concerned me when reading the proposal was: what if a single mistake is made in the “farmed value”? This could be due to network segmentation for a period, or a compromised section, or a dreaded bug, or …


Great way to digest this, @mav! I think it shows how scalable the solution is and that it is not just a blockchain strapped into SAFENetwork - it would be far more scalable than that.

I will try and find some time today for my own feedback, but I see @neo is all over this already! :slight_smile:


I think we have to be conscious of how small any fees would be, relative to blockchain fees. The amount of resource required to complete and persist the transaction would be several orders of magnitude smaller.

Would any fees simply be in the noise unless you were undertaking 10,000s of transactions per day? If so, they would be as good as free for most users and it would be quite easy to illustrate that. It would be very much in the spirit of the original free transactions claim, imo.

Locks may be a solution, but they don't stop people sending lots of dust transactions, triggering lots of lock checks and still putting load on the network.

I believe some networks are increasing the client side effort required, which may be a way to dodge fees and avoid locks. In short, making it computationally expensive to create a transaction. Perhaps having to get a resource off the network, then do something with it, would be a sufficient deterrent? E.g. get data at x, hash it, then write it to y.

Thinking more on the data hash, you could use this to throttle transactions if y is checked by the network before it completes the transaction; trying to rush further transactions through would just cause rejected transactions, in addition to the resource used to create the transaction request.

Edit: obviously any write would have a cost so maybe combining them with the transaction request would be useful. However, it may then be harder to get the client to do something genuinely challenging. If they just fake it, it will still cause a check overhead too at the network end. I can see why fees are the default go to! :slight_smile:
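A minimal sketch of the client-side-effort idea (entirely my own illustration: the function names, the trailing-zero-bits target, and the use of the standard library's DefaultHasher are all assumptions, and a real deployment would use a cryptographic hash):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a transaction request together with a candidate nonce.
fn puzzle_hash(data: &[u8], nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

/// Find a nonce whose hash has `bits` trailing zero bits. The client
/// pays this cost up front; the network verifies with a single hash.
fn solve(data: &[u8], bits: u32) -> u64 {
    let mask = (1u64 << bits) - 1;
    (0u64..).find(|&n| puzzle_hash(data, n) & mask == 0).unwrap()
}

fn main() {
    let request = b"transfer 0.018 safecoin to <dest>";
    let nonce = solve(request, 12); // ~4096 tries on average
    // Cheap check: the network recomputes one hash to accept the request.
    assert_eq!(puzzle_hash(request, nonce) & 0xFFF, 0);
}
```

The asymmetry is the point: solving takes thousands of hashes, checking takes one, so dust-spamming becomes expensive for the client without charging a fee.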


Yes I agree.

Just that fee free safecoin transactions is a stated goal.

Computationally fast machines will reduce this delay and may make, say, an RPi very slow at ordinary transactions. Remember the very wide range of computational power between an RPi or phone and a 10-core i7.


I hear you, but vanishingly small for average users is pretty close and it is simple to implement. Still, maybe there is a better way!


I hadn’t considered that issue @neo - thanks. If the frontend guys can’t just handle this rounding issue (and the marketing guys can’t get folk to think in binary :smile:), I don’t see a problem at all with just making the parts field represent billionths of a safecoin.


Why not just use a single u64, call the basic unit “nanoSafeCoin”, and declare a billion of these is a SafeCoin? There’s no rounding issue anymore and it’s much cleaner.
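A sketch of that single-u64 layout (the constant and helper names are my own, for illustration):

```rust
/// One safecoin expressed in nano-safecoin, per the single-u64 idea.
const NANOS_PER_COIN: u64 = 1_000_000_000;

/// Split a nano balance into (whole coins, nano remainder) for display.
fn display_parts(nanos: u64) -> (u64, u64) {
    (nanos / NANOS_PER_COIN, nanos % NANOS_PER_COIN)
}

fn main() {
    // 2.018 safecoin is exactly 2_018_000_000 nanos; balances stay exact.
    let balance: u64 = 2 * NANOS_PER_COIN + 18_000_000;
    assert_eq!(display_parts(balance), (2, 18_000_000));
    // The full 2^32-coin supply still fits comfortably in a u64.
    assert!(1u64 << 32 <= u64::MAX / NANOS_PER_COIN);
}
```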

That aside, I still like “coin as data” better.


No no no, it's more than a rounding issue.

If I sell items for 0.018 safecoin and get paid for 1000 sales, then I do not have 18 safecoin. And rounding is not going to help, because when I see 0.018 it's not really that.

Key fact: people are taught to think in decimal from the time they are knee-high to a grasshopper, and it's really only us programmers and the like who can work with binary and not see a problem. BUT it is a real problem.

It is so simple to just use a fixed-point representation for the parts and have 1 billion of them (which fits in a u32). In other words, you just have an integer from 0 to 999,999,999.

But if you do binary and rounding then it will be a thorn in SAFE’s side forever.

There was a reason banks used decimal fixed-point representation for ALL their financial transactions. A very good reason: people cannot think in binary OR in rounded binary. It's CS101, too: rounding cannot represent decimal fractions accurately.


Or 4 billion instead of 1 billion, with 1 unit being a quarter of a nano-safecoin.
Then you use 93% of the u32 range (0-4.3 billion) instead of 23% with 1 billion, while staying human-readable.


I think this approach also has the flexibility to cater for a similar farming rate algorithm so that as the number of farmed coins heads towards the upper limit, successful farm attempts become increasingly unlikely.

I’d thought I’d mentioned how to handle that case in the document, but I can’t find it with a quick skim! I’ll check more thoroughly later and update if need be, but essentially the answer is that a section’s farmed value will never be allowed to go negative.

I’m hoping that we might not have to deal with the scenario at all (if we get a good algorithm for farming rate/farming attempts), but in case we do, I was thinking that a section with farmed == 0 receiving a payment from a client would just send its own section-to-section message to pass the coin to a neighbour section which does have a non-zero farmed value to handle.

Basically because it’s cheaper for the network to handle it that way. I did consider that it’d be nice for client apps to be given a response, but it should be easy for them to make valid requests, not requiring the network to always send responses.

Also, having a client relying on a response isn’t robust enough here I think, since if the client happens to disconnect before the response is passed to it, the network doesn’t make any effort to hold onto the response for the next time the client does connect.

I think this won’t be an issue if the frontend apps “do things right”. If I give you an address to transfer to, it’s easy for you to check that a CoinAccount exists there before making the transfer. Accounts can’t be deleted, so if you check first you’re good to go. So, from that perspective, I’d say that it is their fault if the receiving ID doesn’t exist. Apps which permit such sorts of mistakes should be as popular as ones which just steal their users’ coins.

If there’s a lot of support for your view, then I think it could be implemented (I don’t see any technical reason blocking it in other words), but I think it’s overhead we could do without. For example, we’d also need to handle the case of a spammer just sending to non-existent addresses for fun.

Yes, I suppose it could be. I was thinking that the network might actually be able to handle setting the lengths of these dynamically eventually, but initially just having a hard-coded value (I know - I hate magic numbers too!). I'll keep a note of this, thanks.

I was more thinking of the variations from section to section. They should be slight, but I expect we may end up charging based on a calculation local to the section.

I’d prefer to avoid that if we could, since as well as the delay you mention, it would also require the extra step of the receiving section sending a confirmation response to the source section in order to unlock the account.

Thanks for the feedback @neo - some very useful food for thought there!


I wouldn’t be against this either, but I think it’s probably an implementation detail (it would be cheaper to serialise and parse and would result in slightly smaller messages for example). However, for the purposes of the RFC and being explicit, I prefer the Coin struct personally.

Is that because it’s such a differentiating feature which sets the SAFE network apart from others, or…?


Well, that’s why I was calling it a rounding issue. You’d actually have 17.99999992363154888153076171875 safecoin, so a friendly client app would just show that as 18.00 :smiley:

But I do agree with your point - I think you’re right.


It is up to the client to create and manage the key pairs of all CoinAccount s it owns. These could be stored on the network as part of its encrypted MAID account, or managed by a standalone wallet application.

Allowing the possibility for existing hardware wallets to manage our Safecoin keys is a HUGE plus.