This would seem to have more to do with transaction overhead than with denomination size, though. It still isn’t economical for miners to transfer single satoshis.
That’s right. It’s a spamming issue, and something the SAFE Network will have to overcome if it goes to nano safecoin.
I think @neo has covered this in his reply, but basically we’re on the same page about being uncomfortable with a magic number.
Potentially the queue could be replaced entirely if we implement push notifications or something like instant-messaging for clients.
(I’d actually expected there might be some push-back against using just a random integer as the identifier for a given transaction - maybe too weak. I’d wondered about making this e.g. a short string and MAID signature of this, but that seemed like it might be overkill, and could reduce anonymity slightly)
I’ve actually removed this as a drawback now, since I don’t think it is
It came up when briefly discussing this proposal with the routing team (I believe it was @qi_ma who had the concern) and I thought I should list it since it had been mentioned. I’m happy for it to be re-added if we can justify it.
An alternative is to allow sections’ farmed values to go negative. In theory it shouldn’t affect the network as a whole, since the total which all clients can spend shouldn’t ever be able to exceed the total farmed by the network. However, I’d prefer to try and avoid allowing large imbalances between sections in terms of their farmable amounts.
I agree, and would perhaps suggest this RFC should depend on such a device in the network. It will make things much simpler and less magical.
Bitcoin is an example for the fixed point solution we’re discussing, but you brought it up as an example for how binary floating point math can screw things up for the foreseeable future. I just corrected you about that. Bitcoin in fact got it right.
Now, you’re right that we can’t utilize all that precision, but that’s for other reasons, the main one being bitcoin’s low value compared to the cost of mining. If bitcoin’s value rises, we can expect single satoshi transactions. (In fact, the Lightning Network works with 1/1000th satoshi precision, so you may already be able to transfer values smaller than 1 satoshi. I’m not following bitcoin’s development, so I’m not sure if the LN actually works at this moment.)
This is a wallet issue, because at the protocol level amounts are expressed in satoshis.
Yes you are right, I stand corrected. And thank you for that correction, it is always good to learn new things.
Maybe make it customisable via the API call that creates the coin-account. A shop could then set it according to its expected maximum volume, while an ordinary person who isn’t expecting many transactions could set it to just a few.
The cost (PUT cost) to set up the coin-account then becomes dependent on the FIFO length if it exceeds one MD. Mind you, what’s the network processing effort of having the FIFO span multiple MDs?
An alternative is to allow sections’ farmed values to go negative. In theory it shouldn’t affect the network
As long as no extra (hacked) coins are created, the negative amount cannot exceed the total farmed coins (plus pre-existing ones).
But I’d expect any imbalance to be minimal, with maybe a few exceptions at any one time: if the chunk spread is fairly even then the spread of farmed coins should be relatively even too, so every section holds a healthy amount of farmed coins and never goes negative. The exception, of course, is the statistically tiny percentage of sections that might go close to, or even below, zero.
I’d prefer to try and avoid allowing large imbalances between sections in terms of their farmable amounts.
This may happen anyway in those statistically rare cases at some stage in the life of the network.
Yeah, you can certainly chalk one up against me.
So yes it is a good example of the use of decimal rather than a hybrid.
Also it is an example of having the whole value as one variable and not two. I still feel that having the whole value in one variable (u64 or even u128) is the best way to have it for all the simplification it brings.
I still agree too! I just think choosing the right order of magnitude for the amount we call safecoin is important, as Bitcoin shows.
The advantage over bitcoin is that we don’t have the problems that bitcoin might face with there being a ledger.
Version 1 of coin-account can be
u64 - amount of safecoin parts owned by this coin account
Version 2 of coin-account can be
u128 - amount of (smaller) parts owned by this coin account
API can use the version number in the coin account to know if the amount stored is u64 or u128. And the API can have a version number in the calling parameters to know if the value specified in the call is u64 or u128.
The code can easily work with that by simply converting version 1 values to version 2 in the operations and always storing version 2 values back into the coin-account.
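A minimal sketch of that versioning idea. All names here (`CoinAccountValue`, the 10^9 scale factor between versions) are illustrative assumptions, not an actual SAFE Network API: the stored version is a tagged enum, and operations upgrade everything to the widest representation before doing arithmetic.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum CoinAccountValue {
    V1(u64),  // version 1: nano-safecoin parts
    V2(u128), // version 2: smaller parts (assumed 10^9 sub-parts per V1 part)
}

// Assumed scale factor between the two versions.
const V1_TO_V2_SCALE: u128 = 1_000_000_000;

impl CoinAccountValue {
    // Upgrade any stored value to the version-2 representation.
    fn as_v2(self) -> u128 {
        match self {
            CoinAccountValue::V1(parts) => parts as u128 * V1_TO_V2_SCALE,
            CoinAccountValue::V2(parts) => parts,
        }
    }
}

fn main() {
    let old = CoinAccountValue::V1(42); // 42 nano-safecoin, version 1
    // Operate in V2 units, then store back as a version 2 value.
    let upgraded = CoinAccountValue::V2(old.as_v2());
    println!("{:?}", upgraded); // V2(42000000000)
}
```

The API version number in the calling parameters would select which variant to construct; the core only ever computes in the V2 representation.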
Or simply use u128 from the start and never need to change ever again unless we increase from 2^32 coins.
If we use u128 then I doubt anyone could ever want any value below the 18th decimal place.
A UX problem with the u128 proposal could be that some people will use/show all 18th decimal places even if they don’t need them. You can’t show what isn’t there (yet), if you use the u64 proposal…
Well then, just have 12 places. You get your pico and, shoot me, we waste some bits.
If you are talking of the 2 versions, then the UI will show zeros after the 9th place when showing a version 1 value. No problem: nothing is lost or added.
As long as the first, shorter version also exists?
Because, to come back to my divide by 3 example: it could be difficult to prevent bad implementers from dividing without stopping at a certain place.
You could get a 0.333333333333333333(0…) with the u128, but only a 0.333333333(0…) with the u64, if I’m not mistaken.
You could round that down and show that 0.3333… as 0.333 in the UI, and for most people (including me) that’s no problem. There will probably be a couple of people complaining, however: why is there something more behind that 0.333?
If that isn’t at least a small issue for you, then you’re inconsistent in my opinion.
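The divide-by-3 point can be seen directly with integer division, which truncates in both representations; the u128 simply exposes nine more digits of the residue to the UI. A quick sketch (the nano/atto scales are the assumed 10^9 and 10^18 parts per coin from this discussion):

```rust
fn main() {
    // One safecoin as nano parts (u64 proposal) and as atto parts (u128 proposal).
    let one_coin_nano: u64 = 1_000_000_000;
    let one_coin_atto: u128 = 1_000_000_000_000_000_000;

    // Integer division truncates: a third of a coin is inexact either way,
    // but the u128 representation has nine more "3" digits to display.
    println!("{}", one_coin_nano / 3); // 333333333
    println!("{}", one_coin_atto / 3); // 333333333333333333

    // The remainder is what a careless app silently drops.
    println!("{}", one_coin_nano % 3); // 1
    println!("{}", one_coin_atto % 3); // 1
}
```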
I don’t understand your issue. When you increase precision then no one loses anything they already have and all new calculations are done with the higher precision in the core code.
So then it’s up to the APPs to upgrade. If they don’t, people have to live with the precision the APP gives them.
The problem you perceive also exists if an APP just uses 2 decimal places for all its operations, e.g. a shop APP that only uses 2 (or 3, or 4, or more) decimal places.
But the core is still operating with the full precision and automatically upgrades the coin-accounts to the new version.
The user loses nothing and can only blame the APP and not the SAFE network.
That is why I am still being consistent: I am only interested in what the core code provides, and your quarter-nano brings the problem into the core code itself, forcing even the best APPs to be weird in their precision.
You argued for 2 better UX solutions:
- 10^9 parts instead of 2^32 parts: to prevent unnecessary rounding errors.
- don’t use nano quarters.
A third possible (probably lesser) UX issue, which for me is similar to the 2 previous ones, could be that there will be a lot of situations where too much precision is used or forced upon the user in practice.
If you’re going to compare with real fiat money: you only had 1 Eurocent (nothing smaller), and now the production of new 1 and 2 Eurocent coins is prohibited, I assume because a lot of time is wasted at the shop registers.
In this Bitcoin discussion I also see ‘people should never need any currency division smaller than a US penny’.
So, one way to deal with this possible issue is to prevent that much precision in the first place (as long as there is no practical need for it). You can always introduce a u128 later. That way most UI is already established showing only to the nano level, and the extra effort will only be made if the added precision is really necessary. There would also be less protest, because of the need/pressure for extra precision at that moment.
Maybe it is better to use a u128 from the start, but some more discussion about this topic could be interesting. Also, a 128-bit integer could be a problem for some libraries (probably not a big issue).
If you’re going to compare with real fiat money: you only had 1 Eurocent (nothing smaller)
Not financial systems though. They have much more precision. I buy electronic parts from distributors and often lower price parts have precision to 1/1000 and even 1/10000 of a dollar per unit.
We are in a world now that increasingly is seeing call for microtransactions.
So why not?
So one way to deal with this possible issue is to prevent that much precision in the first place
So are you going to call for 2 decimal places? What if safecoin reaches $10? Then you cannot transact 1 cent worth of fiat. What about 3 places then? But what if safecoin goes over $100? Where do you stop?
My personal thought is that for a decade or two nano safecoin transactions will still allow micro fiat-equivalent transactions, but I may be wrong. So I am agreeing that we may have to allow further (true) decimal places, and in answering that we have 2 options: 2 versions, or just going 12 or 15 places up front. Maybe even the whole 18.
Maybe it can’t work this way or there are better ways to deal with this problem.
But if you create an u64 coin account, you enforce that your account won’t have unnecessary precision (to a certain extent).
For the ones who really need the extra precision: they can use the u128 coin account.
Although it will not be up to the user. It’ll be up to the current version.
Display of the amount is up to the “wallet” APP, which can show as many or as few decimal places as the user wants. A properly written APP will have user display options; otherwise it is just a trivial APP.
For a user who only wants to see 6 places, it’s just a matter of rounding down (i.e. just show the first 6 places). If you round up then the user may think they have more than they have and try to spend more than they have (e.g. they have 1.999999999999 and try to spend 2.00). Farming can still deposit minuscule amounts (below the display precision) and they will simply add to the coin-account balance.
EG if display is set to 2 places.
- account == 12.345678900 display == 12.34
- Farming reward == 0.0001
- account == 12.345778900 display == 12.34
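The example above can be sketched in code. This assumes the u64 “nano parts” representation (1 safecoin = 10^9 parts); `display` and `PARTS_PER_COIN` are illustrative names, not an actual API. The key point is that the fraction is truncated, never rounded up:

```rust
const PARTS_PER_COIN: u64 = 1_000_000_000; // assumed: 10^9 nano parts per coin

// Show `parts` truncated (rounded down) to `places` decimal places, 1..=9.
fn display(parts: u64, places: u32) -> String {
    let whole = parts / PARTS_PER_COIN;
    let frac = parts % PARTS_PER_COIN;
    // Keep only the first `places` digits of the 9-digit fraction.
    let shown = frac / 10u64.pow(9 - places);
    format!("{}.{:0width$}", whole, shown, width = places as usize)
}

fn main() {
    let mut account: u64 = 12_345_678_900; // 12.345678900 coins
    println!("{}", display(account, 2));   // 12.34

    account += 100_000;                    // farming reward of 0.0001
    println!("{}", display(account, 2));   // still 12.34; nothing lost, balance grew
}
```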
If SAFE decides to go the way of 2 versions, then the data structure for the coin-address only has to ensure there is room for expansion. I’d say this won’t be a problem, since so little data is needed and there is plenty of room in the coin-address area (e.g. an MD).
Why not just use a single u64, call the basic unit “nanoSafeCoin”, and declare a billion of these is a SafeCoin? There’s no rounding issue anymore and it’s much cleaner.
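A quick sanity check that this works: the full possible supply of 2^32 safecoins, expressed in nanoSafeCoin, still fits comfortably in a u64.

```rust
fn main() {
    // Total possible supply: 2^32 whole safecoins, each split into 10^9 nano parts.
    let total_nano: u64 = (1u64 << 32) * 1_000_000_000;
    println!("{}", total_nano); // 4294967296000000000
    println!("{}", u64::MAX);   // 18446744073709551615

    // Roughly a factor of 4 of headroom remains in a single u64.
    assert!(total_nano < u64::MAX);
}
```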
This argument posted today makes the same case from a usability perspective (using a Satoshi as the base unit and then SI prefix denominations in the Bitcoin world). Quote from article:
The Satoshi as the base unit allows us to scale it up and denominate everything in Satoshis. More importantly, it provides us with a good reference point as humans are able to better comprehend larger integers than tiny decimals. It is simply easier to understand 500K sats than 0.00005 BTC or 15M sats than 0.015 BTC (or 15 mBTC). As Bitcoin’s price increases, it makes even more sense to adopt the Satoshi base unit. Today 1 USD is equivalent to 15,000 sats or 15K sats meaning a cup of coffee is 30K to 45K sats. Tomorrow these numbers may reach the 100s.
We should work towards a single base unit and colloquially denominate in SI prefixes (10³) upwards when appropriate. We already do this everywhere!
It is a good argument and I think we have witnessed Bitcoin denomination naming getting itself into knots by not doing this. A multiple notation of a single, small, denomination is just much easier to articulate and comprehend.
Edit: is a good argument I meant! Ha!
Whenever one strays too far to the left or right of the decimal place the human mind gets befuddled and the notation clumsy. The SI pre-fixes help fix this. I will admit that moving to greater magnitudes up from a base unit (kilo < mega < giga < tera) may be more appealing and convenient than moving down (milli > micro > nano > pico) due to the fact that computer hardware lingo has burned the kilobyte, megabyte, gigabyte, terabyte terminology into the public mind.
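A sketch of what SI-prefix display over a single small base unit might look like (the base unit name and `si_format` are illustrative assumptions; this only handles the upward prefixes discussed here):

```rust
// Format a count of base units (e.g. nano-safecoin) with an SI prefix,
// truncating to the nearest whole multiple of the prefix.
fn si_format(base_units: u64) -> String {
    // (threshold, prefix) pairs, largest first.
    const PREFIXES: [(u64, &str); 4] = [
        (1_000_000_000_000, "T"),
        (1_000_000_000, "G"),
        (1_000_000, "M"),
        (1_000, "k"),
    ];
    for (scale, prefix) in PREFIXES {
        if base_units >= scale {
            return format!("{}{}", base_units / scale, prefix);
        }
    }
    base_units.to_string()
}

fn main() {
    println!("{}", si_format(500_000));    // 500k
    println!("{}", si_format(15_000_000)); // 15M
    println!("{}", si_format(42));         // 42
}
```

This mirrors the “500K sats” style of notation from the quoted article: one base unit, scaled upwards, rather than a zoo of named denominations.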
The single uint64_t is the simplest way to maintain the concept of 2^32 whole “safecoins” with a reasonable level of divisibility as Joe and others have pointed out. The problem however is the real world / intuitive analogy to real physical coins, which once minted are never intended to be physically divisible, unless they are melted down and reminted into smaller denominations from time to time. (One of the appealing things about the original safecoin whitepaper is that it made use of this familiar/intuitive concept.)
I’ve mentioned this before, but I think an alternative viewpoint which melds well with Fraser’s use of “units” and “parts” is to think of “safecoin” as a physical material like tungsten or gold, where divisibility comes from the measures used to describe the amount of this safecoin material one owns. For example, instead of owning 10.102 “safecoins” a user would own 10.102 “units of safecoin”, much like how one can have 10.102 kgs of potatoes, 10.102 grams of gold, or 10.102 liters of water. From this perspective, it is easier to use all of the SI prefixes and talk about sending someone 10 pico-units of safecoin, or 10 mega-units of safecoin. This too isn’t without problems, since the name of the currency is typically used to define the unit measure, such as 1 S, or 1 BTC, or 1 USD. Sending “10 pico-safecoin of safecoin” is kludgy and doesn’t quite have the right ring to it. The term “safe” isn’t quite so bad as a unit measure… I wouldn’t mind receiving 10 kilo-safe or mega-safe of safecoin, would you? I’ll readily admit that this type of terminology tends toward being overly complicated though… just wanted to throw it out in the brainstorm.
A multiple notation of a single, small, denomination is just much easier to articulate and comprehend.
This is exactly what he is arguing for: mBTC, Finney, etc. are a confusing mess and hard to comprehend in comparison to using a single small denomination with SI prefixes.