A bite of the safecoin

There’s been a lot of discussion about the divisibility of safecoin, but instead of starting from the position that there is a fixed maximum number of safecoins (2^32), there could also be other options, perhaps ones even more intrinsic to the SAFEnet architecture.

Key definition: money = accounting unit.

What is the main thing that safecoin measures and keeps account of? To my understanding, primarily the physical memory available to the network, as well as its use.

Key question: how self-aware is the SAFE network? Does it have, at a given time, at least some rough estimate of the total amount of physical memory available to the network? Or are the “visibility” and consensus negotiations of nodes restricted, with no self-knowledge beyond the 2^32 address space? A noob question to which I’m very curious to hear the answer.

OK, if the answer to the key question is ‘yes’, i.e. SAFEnet is self-aware of the total amount of available memory and can compute a moving average of it, we could tentatively suggest that a safecoin would be the fraction 1/m (instead of 1/2^32), where m = memory available to SAFEnet. Or rather, this would be the starting point for fine-tuning the safecoin algorithm, with different values derived from the theoretical maximum of 1/m (with the required scalars): one, I assume, for the farming algorithm, and another for safecoins in active circulation, with information about owner and previous owner attached.

Having m as a dynamic variable instead of a fixed constant, if possible, would IMHO solve the problem of divisibility in a dynamic and elegant manner, as growth in users and in available memory (and other resources) would be directly reflected in the basic safecoin algorithm.
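
To make the idea concrete, here is a minimal Python sketch, assuming (purely hypothetically) that the network could publish a smoothed estimate of its total capacity; nothing like `smoothed_capacity` exists in the actual codebase:

```python
# A rough sketch of the 1/m idea, NOT an existing API: assume the network
# could expose a (hypothetical) smoothed estimate of total memory.

def smoothed_capacity(samples, window=30):
    """Moving average over recent capacity estimates, to damp fluctuation."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def coin_unit(total_bytes):
    """One safecoin as the fraction 1/m of total network memory."""
    return 1.0 / total_bytes

# Example: capacity grows from 1 EB upward over successive samples.
samples = [10**18 + i * 10**16 for i in range(40)]
m = smoothed_capacity(samples)
print(f"m = {m:.3e} bytes, coin unit = 1/m = {coin_unit(m):.3e}")
```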

Even if it’s workable, I’m not certain what the advantages and disadvantages are. However, Maidsafe has already made promises to its stakeholders with regard to the finite number of safecoins, so I reckon that part is off the table - although one could always fork the network if this turned out to be a really advantageous proposition - I don’t know myself whether it is.

A section only has knowledge of the space used by itself. It uses this as a global indicator for the purposes of determining the farming rate etc. Even the spare space in a section is not completely known; the current RFC has sacrificial chunks, which give an indication that there is at least a certain amount of spare space.

Once you have a variable total possible number of coins, the market has no certainty about the fiat cost of storage. If I hold 100 (whatever), what is that worth when, a month later, the total possible amount of coin is much larger? So what would be the use of holding the coin? And of course those who bought MAID believing it would be 10% of the total possible supply might find early on that a scammer has flooded the market with storage, so the total possible coin supply massively increases; the scammer buys up all the coins they can cheaply, then removes their storage so the total possible supply drops dramatically, and they profit big time.

Once you set the total possible coins to something that people can manipulate (even by 0.1%), they can game the system.

BTW Fraser provided a good way of implementing divisibility to 9 places
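
For a sense of scale, the arithmetic behind 9 decimal places works out roughly like this (a back-of-envelope sketch, not Fraser’s actual proposal code):

```python
# Back-of-envelope arithmetic for divisibility to 9 decimal places.
MAX_COINS = 2**32          # 4,294,967,296 whole safecoins
PARTS_PER_COIN = 10**9     # 9 decimal places
total_parts = MAX_COINS * PARTS_PER_COIN
print(f"{total_parts:.2e} smallest units")   # ~4.29e18
```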

4 Likes

I don’t know either, and the main reason to post this idea is to learn more and find out what others think about its doability and theoretical pros and cons. If and when we get a good system of data transfer, I see no problem with a whole ecosystem of SAFEnets with more or less different parameters coexisting, communicating and coevolving. I don’t think the ultimate purpose of the decentralization movement is to forge the One Ring To Rule Them All… :wink:

The most obvious pro of 1/m vs 1/2^32 is that you don’t lose meaningful information bits per user as the number of users grows and total memory capacity and usage grow from exas to zettas and yottas etc. Another pro that comes to mind is that this sort of accounting unit could keep continuous track of available memory and its usage in a very clear (and economically meaningful) form for all users.
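
A rough illustration of the “information bits per user” point, assuming for the sketch that balance resolution is measured as log2 of the average number of units per account:

```python
# Rough arithmetic behind the "information bits per user" point: average
# balance resolution if 2^32 whole coins are spread over N accounts.
import math

for users in (10**6, 10**9, 10**10):
    bits = math.log2(2**32 / users)
    print(f"{users:>14,} accounts -> ~{bits:5.1f} bits per account")
# Negative bits means fewer coins than accounts. A dynamic denominator m
# growing with the network would keep this resolution roughly constant.
```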

As for the economic terms inflation and deflation: if and when they are used separately from per-capita figures, hoarding levels, active circulation (aka velocity) etc., I don’t find them useful at all, but mostly ideological blinders of the seriously mathematically handicapped.

This is related to a noob question concerning the “permanent storage” of data. I consider it safe to assume that in practice the available data storage is always a finite number, and therefore infinite growth of permanently stored data (however well compressed) is not theoretically possible. And various catastrophes can drop storage capacity very quickly, below 1/8, 1/64 etc. of what was available a millisecond ago. What happens to SAFEnet data in cases where there are more data bits than storage bits available?

What’s the use of holding a coin, generally? From a systemic point of view, the use and purpose of a coin/token is to lubricate the system and inform participants of supply and demand in the most efficient, distributed and decentralized way possible; the informative value of an accounting unit is tied to its velocity. The Scrooge McDuck fetishism of usurious coin holding by a subject of the system is a psychological product and mechanism of the old system of monopolistic state fiats, whose primary function is taxation and redistributing wealth from the productive classes to the elites who control centralized social bookkeeping (money creation etc.). Scrooge McDuck hodlism is a product of the artificial scarcity of money/tokens/accounting units in centralized systems.

That said, of course in a human-scale system some level of inertia (holding and diversity) conceivable to the human psyche is necessary. If we have a rational fraction informing participants of the total average available memory over a given time span (hour/day/week/month), relating that rational fraction to the cost of buying storage is a matter of rather simple arithmetic. I’m not sure, but a reasonable hypothesis is that a market which is better informed about the fundamental properties of the market platform, including e.g. historical data on the growth of total memory capacity as well as projected future growth via simple enough open-source algorithms, is a much more reliable and predictable market for the great majority of participants.

Standing by previous commitments is of course required. But we can at least ask: could it be possible to stick to the 10% of 2^32 at launch but gradually, or after a while, move from a fixed constant to a system-inherent variable fraction, if there is consensus agreement that this would benefit all participants in a win-win manner (e.g. an increased rate of spread and efficiency of SAFEnet)? I don’t see this as an unsolvable math problem.

But if this issue is already settled and decided, we can keep this discussion purely theoretical and hypothetical, no problem. :slight_smile:

Well, if capacity drops to 1/8 or lower then there is going to be data loss, since there are not enough chunk copies left to assume many files won’t be broken. A planned feature is that the chunks in a vault are verified for nodes that go offline for some reason and come back online intact, thus restoring their chunks; in the case of a massive loss of nodes this should mean all files will be restored.

If vaults become full then they refuse to store data. So in a massive outage like you suggested, the remaining chunks may very well not have their 8 copies, since all the vaults will become full and no one can store data.

A lot. I thought this would be the general case.

For instance, if I want coins to store my collection, which could take months, then it makes sense to purchase what I consider a suitable amount in one go.

There are those who purchased MAID and most likely will not use up all their coins for years. And some will be selling their coins to those who are new and need coins to store data.

The coins will be used for buying and selling whatever on the network, and some stable knowledge of supply is needed, one that cannot be manipulated by simply spending 1000’s in some data centres and adding a few PB of storage for a month, thus making a mockery of a coin whose max supply is determined by available storage.

Definitely, and welcomed if done as you’ve done it: in a reasoned and constructive manner.

In this case I believe that the crowdsale terms will require that it be a fixed amount. Now I think it’s possible, if something like @anon86652309’s idea for the coin is adopted, that the max could be any suitably high number of coins as long as the crowdsale people get the appropriate number of coins.

2 Likes

Here’s the basic problem. Let’s assume, for the sake of simplicity, a system with a total of 32 coins and a rule that each user needs to be able to hold a coin and earn a coin from farming. What is then the upper limit of different accounts in a 32-coin system?

As farmer accounts are not unique humans, but rather a multitude of different devices, and I assume that we prefer much more than an average of 2 meaningful bits per account, the fixed upper limit of 2^32 is really not that big, and 9 more digits doesn’t change much if the total amount of data in the world keeps growing at more than 2x/year.

Another math question: what do you consider a reasonably good average amount of meaningful coin bits per user (0 to hecto-, kilo-, mega-, or giga-safecoins per account?), and if the number of farmer accounts doubles every month (or year, take your pick), how fast does the fixed limit of 2^32 (or 2^35 or even 2^64) safecoins throttle and stop the overall growth of the system?
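
Under one possible reading of the toy rule (each account reserves one held coin plus one coin left free to farm), the arithmetic is trivial; this is just my sketch of the thought experiment:

```python
# My sketch of the toy 32-coin thought experiment, under one reading of the
# rule: each account reserves one held coin plus one coin free to farm.
def max_accounts(total_coins):
    return total_coins // 2

print(max_accounts(32))      # 16 accounts in the 32-coin toy system
print(max_accounts(2**32))   # ~2.15 billion under the same toy rule
```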

I think we need to give very careful consideration to the fact that we are not building another bitcoin clone, but a new Internet with secure access for everyone. We are aiming for, expecting, and preparing for a rate of growth of transactions unimaginable for any crypto before, and in this respect a fixed constant as the upper information limit of the token lube, instead of a variable fraction, seems like shooting ourselves in the foot as we prepare to make a giant leap for all of humanity.

The coins will be used for buying and selling whatever on the network, and some stable knowledge of supply is needed, one that cannot be manipulated by simply spending 1000’s in some data centres and adding a few PB of storage for a month, thus making a mockery of a coin whose max supply is determined by available storage.

That is a very fair point, and no such mockery has been suggested. There is already in place the sigmoid-curve (IIRC?) algorithm for a more even distribution of farming yield, and even with that, the total amount of storage is just the starting point for carving out and fine-tuning a win-win algorithm for a dynamic total of token lube which doesn’t become a self-inflicted obstacle to growth for the system as a whole.

What is the level of self-awareness of the autonomous SAFEnet, and what kind of meaningful information can it offer about its holistic state that can be tied to a dynamic safecoin algorithm that best supports the system as a whole in a win-win manner?

1 Like

The mockery is by the scammers who can manipulate the coin’s total supply, not by anyone in the forum etc.

Also, have you considered that safecoin is destroyed every time it is spent, so that in effect there will be many, many times 2^32 coins created over the lifetime of the network?

The actual number of coins created is connected to the used space on the network. This is important, since it covers some of the things involved in your suggestion. Every time someone buys resources, the coin (or part of a coin) is returned to the network and destroyed. So as more data is stored, more coin is destroyed, reducing the coin in existence, and more coin can be created (again and again and again :slight_smile: ).

Also, it is reasonable to consider that 4 x 10^18 parts of a safecoin should be enough for the population of this world, even at 50 billion (Fraser’s idea of divisibility).

Now the discussion on divisibility is also considering using 64 bits for the division, which means there would be 4x10^27 parts. Now tell me again how many atoms there are in the earth :slight_smile:
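
Checking those quoted figures as order-of-magnitude arithmetic (the 18-place variant is the one that fits in 64 bits):

```python
# Checking the quoted figures (order-of-magnitude only).
parts_9  = 2**32 * 10**9    # 9 decimal places  -> ~4.29e18 parts
parts_18 = 2**32 * 10**18   # 18 decimal places -> ~4.29e27 parts
per_person = parts_9 / 50_000_000_000   # at 50 billion people
print(f"{parts_9:.2e}  {parts_18:.2e}  {per_person:.1e} parts each")
```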

3 Likes

Really interesting topic. While answering I thought about it a bit more and deleted all I had written.

I think the point of divisibility is to make it practical for every-day use for small-value transactions.

Maybe looking at bitcoin could give us some insight into what to expect for safecoin values (roughly).

If we look at bitcoin, consider its smallest unit, the satoshi.
There’s an upper limit of 2.1*10^7 bitcoins and each bitcoin has 10^8 satoshis. Thus, bitcoin consists of 2.1*10^15 satoshis.

One satoshi is currently worth 6.34 * 10^-5 USD.

How many bitcoin users (actual regular users) do we have? Maybe a million.

So if bitcoin goes to 1 billion users (a factor-1000 increase), how will that affect the value of the satoshi? It’ll likely go up by way more than 1000. Let’s say it goes up by 100,000 (1*10^5); then 1 satoshi would be worth 6.34 USD. So as we approach full adoption, the satoshi would likely be impractical.

Maybe we should take that as an initial estimate and add a couple more digits to the safecoin supply to future-proof it for a long time to come. Is there any disadvantage to having too big an initial supply?
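
The estimate above, spelled out as arithmetic (the appreciation multiplier is of course a pure guess):

```python
# The adoption-scaling estimate above, as arithmetic.
satoshi_usd = 6.34e-5     # quoted spot value of one satoshi
appreciation = 1e5        # assumed price multiplier near full adoption
print(f"{satoshi_usd * appreciation:.2f} USD per satoshi")  # 6.34: too coarse
```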

1 Like

Yes, and that is why it is good to have these different perspectives.

(saying it slightly differently, if I may)

And if adopted globally by, say, 10 billion (10^10) users, then that allows 0.0021 BTC (2.1x10^5 satoshis) per person. For Fraser’s initial suggestion we get 4x10^8 parts of a safecoin per person.

I’d suggest, as a 1st approximation, that 0.4 billion parts per person is good, as that allows a lot of variability in holdings.

As I mentioned before in another topic, changing the division to a greater amount is easy in the core code. Just like IPv4 and IPv6 they can co-exist, and the software simply upgrades the safecoin divisibility from version 1 (9 places) to version 2 (18 places) whenever any activity occurs on that coin account. It’s the API and the coin account structure that control which version it is, and whenever the core code sees a version-1 account structure it rewrites it with the extra bytes to make it version 2. There is no loss of coinage, since only lesser decimal places are being added.

So we could start with 9 places and migrate to 18 places if needed.
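
A hedged sketch of that lazy migration; the `CoinAccount` struct and field names here are invented for illustration and are not the real account format:

```python
# Sketch of the lazy version upgrade described above; the struct and
# field names are made up for illustration.
from dataclasses import dataclass

@dataclass
class CoinAccount:
    version: int
    units: int   # integer count of smallest parts for this version

def upgrade(acct: CoinAccount) -> CoinAccount:
    """Rewrite a version-1 (9-place) account as version-2 (18-place)."""
    if acct.version == 1:
        # 9 extra decimal places: scale the unit count, value unchanged
        return CoinAccount(version=2, units=acct.units * 10**9)
    return acct

old = CoinAccount(version=1, units=2_500_000_000)   # 2.5 coins at 9 places
print(upgrade(old))   # 2.5e18 units at 18 places, same coinage
```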

Of course, if using Fraser’s idea, we could also start with 10^12 whole coins, with crowdsale people getting an exchange rate of 1 MAID = (10^12/2^32) coins.

5 Likes

I’ve understood that the fixed upper limit of safecoins was chosen to fit a 32-bit register. What fraction of the 4,294,967,296 coins would be “active” in the sense of having an owner, and what amount would stay in the potential pool of available memory allocations, is another interesting question to which I don’t know the answer.

Positive integers representable in a 32-bit register, having owner data repeatedly attached to and removed from them over the “lifetime” of those integers, are not much of a consolation for e.g. 5 billion unique human beings wishing to own at least one coin representable in a 32-bit register. In that game of musical chairs, 5,000,000,000 - 4,294,967,296 (about 705 million) people will always be excluded from the possibility of having even a single 32-bit integer representation in their possession.

When I was young, one Christmas my cousin and I calculated that the solution to the classic chess story of the 64-bit register (the wheat-and-chessboard problem) would amount to enough rice grains to turn the Baltic Sea into a nice rice porridge :slight_smile: . Integers representable in a 64-bit register are actually not so very different from a 32-bit register on a logarithmic scale, which seems to be how we usually perceive quantifiable relations. The usual estimate of the number of atoms in observable space is about 10^80, which is, say, a bit more… but still much less than the theoretically and pragmatically possible limits of encoding biggish and bigger numbers. :slight_smile:

I’m the first to admit that my own mathematical handicaps are nothing short of spectacular and grow exponentially etc. as we get to biggish and bigger and then some numbers, but I try to learn; here’s one nice introduction to the theme (and much more from the same guy, starting at about Wild Math Foundations 173):

What I’m trying to get at, in this age of information revolution, is that information is a much wider and more (r)evolutionary concept than e.g. a fixed “pretty big” number of all available Planck areas in all possible universes, which, mind-boggling as it is, boils down to the much more limited question of the theoretical and pragmatic possibilities of mathematical representation of information. And because mathematical representations and computations of information give much form and meaning also to our qualitative behaviour and experience, a constant striving for more holistic comprehension of mathematical representation is no trivial matter. And as we strive for that more holistic comprehension, and for making the math of the information society our friend rather than a tyrant, it is very clear that the magic is in dynamic algorithms and their interrelations, not so much in fixed integer values.

It is no small wonder how people who appear to me to be math geniuses, e.g. coders of new consensus algorithms, show much less algorithmic and dynamic comprehension when it comes to matters of economy and money. Or maybe that’s the glaring achievement of what they call an “education” in economics, i.e. hypnotically stupefying indoctrination of underlings into the easily exploitable.

My working hypothesis is that programming mathematically sound (i.e. distributed and decentralized) economic systems is not a much more complex task than developing better and better consensus algorithms for decentralized networks; in many respects it can be much simpler, and in all complex systems simplicity and communicability have much inherent value. But neither can we afford to be naive about the dynamic algorithmic aspects of mathematized economics.

I don’t expect SAFEnet to get the economic aspects of the whole equation “right” on the first go, or that there is a single, final “right”. But it would be nice if safecoin did not stay at the same level of naivety as BTC and the current general state of affairs in the crypto scene, but strove for the next step of pathfinding and evolution also in the innovative building of the economic architecture. That part is no less important, but in many ways more challenging, because of the long history of conditioning by various religio-politico-ideological blinders, which often hinder informative and innovative intelligent discussion of the mathematical modelling and programming of actually practical economic systems. Of course I have carried, and keep on carrying, my share of these blinders. :slight_smile:

Not an issue with division, which is planned. No need to ever own a whole coin.

What division is really planned? When the main plan is that data stays forever, to the end of time, whatever that means, I accept that we take that plan seriously and do our best to also plan the mathematical structure of the SAFEnet token lubricants according to the main plan.

It doesn’t really matter where the comma is put in the decimal string, or what we in ordinary language call a “whole coin” or “satoshi” or whatever. What matters is how much information a string of zeroes and ones can represent, and what kind of strings enable preserving data even when, possibly, SAFEnet evolves and extends into a multigalactic civilization and whatnot. :stuck_out_tongue: :alien: :robot: :dolphin: :mouse2:

2 Likes

I think one point is that more decimal places can be added later, so it really can accommodate growth for a long, long time to come. Of course, even without that, if the value appreciates because it has other uses beyond storage, and the cost of storage drops dramatically because of technological innovation, then we have many levels of flexibility.

Great, thanks for the link. :slight_smile:

From a first quick perusal, a couple of comments. According to @mav:

Fraser Alternative

The ledger is like a set of pigeonholes that each represent a user account. Every pigeonhole has a lock that only the owner has a key for, and a piece of paper inside recording the number of coins owned by the user. There are no empty or unowned pigeonholes (but some may have zero balance). More participants means more pigeonholes to look after. Some pigeonholes have a big balance, some have a small balance, it’s just a number written on the piece of paper that lives in the pigeonhole. A transaction involves reducing the number on the paper in one pigeonhole, creating a new piece of paper with the amount to transact, sending that new paper to the recipient pigeonhole, then increasing the number on the recipient’s piece of paper. The transaction paper may be kept by the recipient or can be discarded by them any time. There are rules about how much the numbers can be changed so they never all add up to more than 2^32. The sum of the balance in all pigeonholes is stored in each block of the datachain for each section, but contains no identifiable or verifiable history. Responsibility for pigeonholes is split among vaults depending on the name of the owner of each pigeonhole.
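
For illustration only, a minimal sketch of the pigeonhole picture as quoted; the real implementation would live in the Rust vault code with section consensus, and the names here are made up:

```python
# Minimal sketch of the pigeonhole ledger picture (illustration only).
MAX_SUPPLY = 2**32

class Ledger:
    def __init__(self):
        self.holes = {}   # owner -> balance written on the paper

    def transfer(self, sender, recipient, amount):
        if amount <= 0 or self.holes.get(sender, 0) < amount:
            raise ValueError("invalid transfer")
        self.holes[sender] -= amount
        self.holes[recipient] = self.holes.get(recipient, 0) + amount
        # transfers conserve the total, so the cap invariant holds
        assert sum(self.holes.values()) <= MAX_SUPPLY

ledger = Ledger()
ledger.holes["alice"] = 100
ledger.transfer("alice", "bob", 40)
print(ledger.holes)   # {'alice': 60, 'bob': 40}
```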

If this description is accurate, the Fraser Alternative is a dynamic fraction n/m (if I understand correctly), where n is some amount of safecoins and m is the total number of user accounts. What doesn’t really add up is the now seemingly arbitrary external condition that mn < 2^32 (compared with pigeonholes for a total of exactly 2^32 coins, with or without owner data). Maybe there’s a misunderstanding somewhere? Does this imply that the number m of pigeonholes can be any positive integer and the content n of a pigeonhole any positive or negative rational number, as long as the total stays below 2^32, taking a step back towards the positive-and-negative money of central-bank fiats, away from the positive-only balances of most crypto accounts? In any case I fail to see how this upper limit makes sense; maybe I’m just too dumb.

As for the decimal-system representations of what goes on under the hood in binary, I do agree that how the (preferably dynamic) binary fraction is presented to the human psyche with standard conditioning is very important, and in this respect a simple integer is better than a decimal string with an arbitrary comma somewhere, which does not add any meaningful information but just adds unnecessary complication to basic arithmetic operations, e.g. when doing mental calculations or typing strings. However, translations between different bases are not a biggie, and the real question is the algorithmic setup and process that goes on under the “hood”.

An account balance expressed as a simple integer for a human user can be derived from a complex dynamic algorithm that computes a systemically meaningful fraction which, at least system-internally, is stable enough and does not fluctuate too wildly in terms of human psyche. We can think of many ways to smooth out the rate of change in, e.g., the ratio of coins in circulation : number of accounts : total pool of coins, without permanently fixing any of those values as a constant. In a multicoin system (crypto and old fiats etc.) fluctuating chaotically in relation to itself, naturally no such promises can be given, but a very interesting theoretical question is whether a dynamic fraction could behave in a relatively more stable and predictable manner in a multicoin system than a rigid fraction with a constant denominator.

Personally, my psyche finds it easiest and most natural to think in terms of Egyptian fractions, with the numerator as 1 and the denominator as a variable: e.g. the content of a personal account as n*(1/m), with n just expressing how many coins I own at a given moment and m some systemically meaningful and benevolently fluctuating (preferably steadily growing) positive integer.
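
A sketch of that n*(1/m) view, entirely hypothetical since no such algorithm has been specified: the user sees the plain integer n, while the system interprets it against a smoothed, growing m:

```python
# Sketch of the Egyptian-fraction view: the user sees a plain integer n,
# the system reads it as n * (1/m) with m a smoothed, slowly growing total.
def share(n, m_history, window=90):
    recent = m_history[-window:]
    m = sum(recent) / len(recent)   # smoothing damps day-to-day swings
    return n / m

m_history = [10**12 + day * 10**9 for day in range(365)]   # steady growth
print(share(5_000, m_history))        # the holder's share today
print(share(5_000, m_history[:180]))  # the same n was a larger share earlier
```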

I don’t understand the basic structure of SAFEnet well enough to say anything definite, but it seems that the basic approach is far from a good match with “add a couple more decimal places” a la the BTC blockchain.

In my current state of comprehension, the basic conundrum, initially from the viewpoint of a single user, is to meaningfully differentiate and express the ratio of resources offered (amount of data storage, modified by bandwidth etc.) to resources used. The first problem is that the ratio is fundamentally asynchronous: the amount of resources offered is mostly a synchronous, discrete value, but, as planned, the amount of resources used is a diachronic, “ad infinitum”, non-discrete value.

AFAIK even the most basic safecoin algorithms to express even the local node’s ratio of resources offered and used have not been drafted, not to mention algorithms for arriving at consensus estimates of the global network ratio.

I may be totally wrong, but as far as I can think at the moment, for a user node to be able to report, with some degree of accuracy, e.g. the size of hard-disk space given and the amount of global storage used by whatever amount of e.g. audiovisual data the user uploads/PUTs, it would seem that the global number of safecoins needed for informative differentiation would be much, much closer to e.g. n x 2^64 x the total number of user accounts than a measly 2^32. In other words, it would seem that for the long-term sustainability of the network, all users would need to start as at least multibillionaires.

Maybe there are smarter ways to express the ratio of resources given and used by a node, and I sincerely hope so. Maybe using both negative and positive numbers (or, more generally, numbers and antinumbers which cancel each other out in total) offers a partial solution to expressing the ratio of producing and consuming resources, and maybe that’s what the XOR refers to. But I still can’t see how any fixed upper limit of the accounting-unit lubricant could do the job in the long term, in the way that SAFEnet has been dreamed.

I think you might be wrong here. This really doesn’t make sense in the currently proposed system for resource costs and farming. It is a lot simpler than you seem to have worked it out to be.

XOR is the addressing scheme rather than simple linear addressing.

Division is simply there to allow more people to have an amount of coin, and to allow the fiat cost to rise while people can still get an appropriate amount of coin. Without division they would need a minimum of one whole coin.

The coin spent buys an amount of “PUT” balance (held in a 64-bit field), and with division the purchase of “PUT” balance could be made with a much smaller amount of coin for a correspondingly smaller “PUT” balance. When you do a “PUT”, the current PUT cost is subtracted from that balance. The “PUT” cost is dynamically calculated per section and is what is charged when the data is “PUT” in that section. An interesting consequence is that in theory the “PUT” balances allow a coin’s worth of PUTs to be paid for but not used, while the coin amount is destroyed; meaning that if no GETs are done, then all the coins could be spent and yet there are still plenty of PUTs that can be done because of all those PUT balances. When a coin amount is destroyed, the balance from which the network can pay rewards increases.
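
As I read that description, the flow could be sketched like this; the names and the coin-to-PUT exchange rate are illustrative, not the actual vault logic:

```python
# Sketch of the PUT-balance flow as described above; names and the
# exchange rate are illustrative, not the actual vault logic.
class ClientAccount:
    def __init__(self):
        self.put_balance = 0   # described as a 64-bit field in the design

    def buy_puts(self, coin_parts, puts_per_part):
        # the coin (or part of a coin) is returned to the network and
        # destroyed; the client is credited with PUT balance in exchange
        self.put_balance += coin_parts * puts_per_part

    def do_put(self, section_put_cost):
        # each PUT is charged at the section's current dynamic cost
        if self.put_balance < section_put_cost:
            raise RuntimeError("insufficient PUT balance")
        self.put_balance -= section_put_cost

acct = ClientAccount()
acct.buy_puts(coin_parts=1, puts_per_part=2**20)
acct.do_put(section_put_cost=512)
print(acct.put_balance)
```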

Since the vaults use spare resources, there is no need to specially buy equipment and disks to run vaults, although some will.

But I cannot see how this straightforward system would require people to be multibillionaires from the start. To me this indicates some fundamental misunderstanding of how it is proposed to work, perhaps read into discussions on the matter.

1 Like