RFC 57: Safecoin Revised

Just a random thought about biases:

farmers are interested in higher payout;
users are interested in lower payout;
but developers are perhaps most interested in the health of the network.

So, would it be possible to poll Safe Network developers for info on farming costs? Or would that influence PtD as well - i.e. is PtD some fixed percent of farmer payout?


The developer rewards negate this supposition.
Developers will want good returns too, and cheats will want the maximum. There will probably be more cheaters than legitimate app developers, since they will spread their cheats across many sock accounts.


I have mentioned this before and it sounds like a problem well suited for system dynamics modelling via tools like Minsky or similar, especially when more “almost real” Safecoin-related data starts to come in from beta network tests.


Should the network be trying to maximise growth, or enable optimal pricing to balance supply & demand? Why would having supply / demand out of balance improve growth prospects?

I guess subsidising farmers temporarily during an expected short term drop in demand could prevent a few farmers leaving the network, making the network more able to respond smoothly if demand picks up again.

Though it will mean the network isn’t able to find equilibrium as quickly in the event of any longer term drop in demand.

I think it’d be viable to allow pricing to shift even with short-term spikes / drops in demand or supply, while making sure the price isn’t overly sensitive to them. It shouldn’t be, since short-term spikes are not going to represent a big portion of overall network capacity (which will determine price), except when the network is very small.

Why do you think this might be beneficial or necessary, and how would the network know if any change in supply or demand is likely to be transient rather than sustained?


Yes, the network must grow faster than it decays to survive. How do you define optimal pricing? I would consider optimal pricing to be that which ensures continuity of perpetual data and the perpetual web. Only the network will be able to determine what that price is at any moment.

It’s not necessarily a “subsidy”. The network just temporarily determined that it is worth paying more for GETs than it charges for PUTs to ensure its own survival and a bright future for itself.

Because I am including the network as a market player with its own self-interest (survival, growth) and contribution to supply and demand. When I say beneficial, I am referring to network performance and sustainable growth to guarantee perpetual data. Transients are identified by rates of change, and higher derivatives thereof, relative to past or current conditions. However, if the control algorithms are really effective, there shouldn’t be any significant transient spikes. Minimizing those is a main objective of a good control algorithm.


I had a brief look but nothing too deep; it looks really fascinating. Have you used this software? Would you be willing to have a go at creating a model for safecoin? Even something absurdly basic would still be fascinating to see.

I would have thought farmers would know their current and projected costs (and thus necessary price) better than the network?

The network issuance of coin plays a huge role in determining/setting the price itself through its bias toward either creation or destruction. I think it’s unfortunate that the network needs to manage this at all and that it’s not simply a market mechanism between storage users and storage suppliers. But it seems baked into the cake at this point.

What we have is a giant buffer of network ‘coins’, and the devs are saddled with determining the bias of the network: is it to issue to the point of keeping only a few in reserve, or to hold back as many as possible to reduce monetary inflation (not to be confused with price inflation)?

This buffer of safecoin is not purely like the bitcoin protocol, wherein issuance acts as coin origination: coins will exist from the start, and coins will be recycled regardless of the buffer.

I’m rambling here, and possibly I’m just being clueless and missing something key. So, let me just ask the question: what is the bias of the community? Push out, over time, almost all the coins in the network purse, or just use them as a buffer and try not to use them at all?

I feel like we aren’t in agreement on the underlying philosophy of the purpose of the buffer/coin-pool; I suspect some of us aren’t even thinking about it. So that’s why I’m going back to the beginning with this question.


Which price are you talking about? Fiat or SAFE? I usually only consider SAFE in these discussions since Fiat is external to the SAFE ecosystem. Fiat information is indirectly transferred to SAFE via section sizes and rates of farmer entry or exit, or other health metrics. The farmers are SAFE’s first defense/buffer against speculative Fiat market movers. Some farmers will know their current and projected costs and not care if they lose money now since they are hoping for a future reward as we’ve seen with BTC miners. Some will infer/detect future issues and leave the network prematurely. SAFE only needs to care about actual changes in network hardware availability, not Fiat to SAFE exchange rates.

IMO this loose coupling is genius and one of the best features of the SAFE market model.

Nope. You get it.

They should be pushed out or pulled in completely as needed based on network conditions with a target “setpoint” of 50% ownership by the network. In other words there could be times during extreme events when the network has the option to spend all its coins for resources, and other times when it has the option to save all the coins it receives. These are extreme edge cases that may rarely if ever occur (ex: a hundred year flood) but represent the allowable range of operation and max/min constraints. The network should always “seek” to own 50% over the long term. This represents an average network “greediness”, minimizes whale manipulation, and allows for flexible adaptation without resorting to extreme levels of divisibility or extreme changes in PUT prices. However, it is reasonable to assume that PUT costs should also be pushed to extreme levels during such crises as another control mechanism.
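Just to make the idea concrete, here is a minimal Rust sketch of that 50% setpoint. Everything in it (the constants, the gain, the `spend_bias` function) is a hypothetical toy of mine, not anything specified in RFC-0057:

```rust
/// Fraction of the total coin supply the network "wants" to hold long term.
const SETPOINT: f64 = 0.50;
/// Proportional gain: how hard the network leans against deviation.
const GAIN: f64 = 2.0;

/// Returns a spend/save bias in [0.0, 1.0]:
/// 1.0 = spend freely on rewards, 0.0 = hoard everything received.
fn spend_bias(network_owned: u64, total_supply: u64) -> f64 {
    let owned_fraction = network_owned as f64 / total_supply as f64;
    // Above the setpoint, lean toward spending (paying rewards);
    // below it, lean toward saving (keeping PUT fees).
    let bias = 0.5 + GAIN * (owned_fraction - SETPOINT);
    bias.clamp(0.0, 1.0) // extreme events saturate at spend-all / save-all
}

fn main() {
    let total: u64 = 1 << 32; // 2^32 coins
    for owned in [total / 10, total / 2, total / 10 * 9] {
        println!(
            "network owns {:.0}% -> spend bias {:.2}",
            100.0 * owned as f64 / total as f64,
            spend_bias(owned, total)
        );
    }
}
```

Note how the clamp gives exactly the hundred-year-flood behaviour: far enough from the setpoint, the network saturates at spend-everything or save-everything, while near 50% it behaves moderately.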

A lot of people might not be aware of the opportunities/possibilities offered by the SAFE market model vs. the BTC/blockchain predestined coin issuance rate model. It’s more or less self-evident depending on your background. With SAFE it is easy to see ways for the network to self-regulate, i.e. stimulate, contract, or adapt its own growth and performance in response to external stimuli and operating conditions using the pricing and reward algorithms. It is an autonomous system.

In the Perpetual Auction thread (and before) @mav has started asking the right questions with regard to the hard numbers we need to define in order to build the necessary control parameters/algorithms for this autonomous system.


I agree. I expect this can be achieved through having a target level of available resource, and pricing that keeps the network close to this target.

The farmers who provide resources and users who pay Safecoin to upload are the only participants who can determine what resources they provide or consume at any price level. The network needs to continually discover the correct pricing to balance supply & demand to achieve a target level of resource availability.

It may be semantics, but perhaps there’s an important conceptual difference seeing the network as ‘discovering’ pricing, rather than ‘determining’ pricing.

Can you suggest any situation where the performance of the network would be threatened by letting an effective market operate between resource suppliers and resource consumers with the aim of balancing available resources with available demand?

Why would the network benefit from being a market player, rather than facilitator? What plays would it make away from equilibrium that benefit it?

Paying farmers above the market rate would only lead to an over-supply of resources, and vice versa.

Charging below market rate for PUTs would only lead to consumption outstripping supply, and vice versa.

I agree that significant transient spikes should be easy to avoid through the pricing mechanism design, except perhaps when the network is very small (which shouldn’t really matter - it’ll be fun!).


Yes, I think it’s mostly semantics/language. However, in the process of optimal price ‘discovery’, the network must first ‘determine’ a dynamic price and read the response of GETs and PUTs, then it can correct this price to ‘determine’ a new direction. It’s a prediction -> correction type scheme. The farmers and users are also reacting to the price change in addition to externalities so it is a fully coupled system.
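As a toy illustration of that prediction -> correction cycle, here is a sketch in Rust. The one-line “market response” model inside the loop is entirely invented; only the predict/read/correct structure is the point:

```rust
fn main() {
    let target_spare = 0.30; // aim: 30% of capacity kept spare
    let gain = 0.5;          // correction strength per cycle
    let mut price = 1.0;     // current PUT price (arbitrary units)
    let mut spare = 0.10;    // start under-provisioned

    for cycle in 0..10 {
        // "Read the response": farmers and uploaders react to the
        // published price (made-up model: higher price -> more spare).
        spare = (spare + 0.2 * (price - 1.0)).clamp(0.0, 1.0);

        // "Correct": adjust the price against the remaining deviation,
        // as a percentage so it works at any price level.
        price *= 1.0 + gain * (target_spare - spare);

        println!("cycle {cycle}: spare {spare:.2}, price {price:.3}");
    }
}
```

Because the price update and the market response each depend on the other's last value, even this toy shows the fully coupled behaviour: the system spirals toward equilibrium rather than jumping to it.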

Only if the system approaches a static/steady state. This is obviously not a preferred situation. The ebb and flow of the system dynamics alter this outcome.

Resource suppliers get greedy, and uploaders get stingy. In a scenario where the lowest ask price for resources is higher than the highest offer price, you have an unsustainable stalemate and the network suffers.

The network must always grow and never contract, this is a ‘demand’ on the system you are not accounting for.


That sounds good. Before making a price adjustment, the network will know the direction the increment needs to go in if it’s comparing available resources to a target level, and it needs to know an appropriate increment / decrement in pricing based on the amount of deviation from the target.

Obviously the further away the network is from the target, the larger the increment / decrement in pricing will need to be. Perhaps it needs to be in terms of a percentage of the current price to remain relevant at all price levels.
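A minimal sketch of that percentage-based adjustment, assuming a proportional gain `k` and some measure of available resource vs. target (the function name and parameters are my own, not from any RFC):

```rust
/// Adjust the price by a percentage proportional to the relative
/// deviation of available resource from its target, so the step stays
/// meaningful whether the price is 0.0001 or 1000 coins.
fn adjust_price(price: f64, available: f64, target: f64, k: f64) -> f64 {
    let relative_deviation = (target - available) / target;
    price * (1.0 + k * relative_deviation)
}

fn main() {
    // Scarcity (below target) pushes the price up; a glut pulls it down.
    println!("{:.4}", adjust_price(1.0, 50.0, 100.0, 0.1));  // 1.0500
    println!("{:.4}", adjust_price(1.0, 150.0, 100.0, 0.1)); // 0.9500
}
```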

Relating to this, I’d be interested in hearing thoughts / answers to this question from Mav:

If resource suppliers get greedy / uploaders stingy, it’s going to need to affect the price. The network can’t do anything about it.

It seems very unlikely that farmers would suddenly all demand higher compensation, or users would suddenly decrease their demand for uploading… but even if they did, the network would need to adapt - it can’t magically operate with a long-term difference between supply and demand. Perhaps it’d be better for price signals to get through directly rather than be hidden?

I doubt there’s ever a minute of a day where nobody uploads data to Google Drive. While the pricing is different, I expect the Safe Network will have a similar constant flow of data being uploaded to it, and farmers bringing online more capacity. Given this, I can’t imagine any unsustainable stalemate as you described occurring, unless the network somehow messed up the pricing and made uploading too expensive, and farming not rewarding enough. If the network’s job is to balance supply and demand, this never needs to happen.

I’m assuming a flow of data will constantly be added to the network. If the network can’t delete data, this means there is always growth, and not contraction in the quantity of data stored.

I’m sure that the rate of data / network growth will fluctuate over time based on the speed of adoption. Balancing this out with the correct incentives to ensure farmers are providing sufficient resource is the job of the pricing mechanism.

If people stop uploading to the network (network stops growing), it’s dead. In this situation, increasing payments to farmers of a coin that pays for resources nobody wants wouldn’t be much help.


Will sections create coins ‘from thin air’ for rewards? Or will the first section ‘own’ all 2^32 coins in a shared wallet from which rewards are paid (and evenly split for section splits)?

Reason I ask is it seems easier to do accountability if all coins are existing from the start, ie ‘network owned’ coins are actually owned by all elders and are only used for paying rewards.

If coins are created from thin air for rewards, it seems harder to argue about incorrect rewards being issued, but if they come from a shared section wallet there should be some accountability.

Am I addressing a non-issue or can you see this as being something to consider further?

RFC-0057 is not too clear on the details of creating new coins. This is the most relevant info I could find:

Each section’s Elders will maintain a record farmed (of type Coins) of the amount of safecoin farmed at that section.
The section’s farmed value will never be allowed to exceed the amount of coins for which that section is responsible.

The Elder group will be responsible for managing all aspects of farming within their section. This will include among other things:
sending Credits to these CoinBalances when a farming attempt is successful, and increasing the farmed total for the section

Seems natural to me to use a section wallet for all unfarmed coins and “sending CoinTransfers (rather than Credits) to these CoinBalances when a farming attempt is successful”
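To make the suggestion concrete, a rough Rust sketch of a section wallet where a reward is just an ordinary transfer. The type names echo RFC-0057, but the wallet itself is the hypothetical part:

```rust
/// Nano-safecoin balance, purely for illustration.
struct CoinBalance(u64);

/// The hypothetical section wallet holding all unfarmed coins the
/// section is responsible for (RFC-0057 instead tracks a `farmed`
/// counter with no actual wallet).
struct Section {
    wallet: CoinBalance,
}

impl Section {
    /// A successful farming attempt becomes a plain transfer out of the
    /// section wallet, verified the same way as any p2p CoinTransfer.
    fn pay_reward(&mut self, amount: u64) -> Result<u64, &'static str> {
        if amount > self.wallet.0 {
            return Err("no unfarmed coins left in this section");
        }
        self.wallet.0 -= amount; // auditable: the wallet can never go negative
        Ok(amount) // amount to credit to the farmer's CoinBalance
    }
}

fn main() {
    let mut section = Section { wallet: CoinBalance(1_000) };
    assert_eq!(section.pay_reward(100), Ok(100));
    assert!(section.pay_reward(10_000).is_err());
}
```

The accountability falls out of the arithmetic: the wallet balance bounds total rewards in exactly the way the RFC’s farmed counter is meant to, but with only one transfer type to verify.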


I would argue that it is a play on words. The amount that a section can create could be a figure it creates out of thin air, or one it deducts from its coin balance.

In the end it’s the same thing, because both approaches meet the accountability requirement.


It’s not, because in RFC-0057 there’s a need to be able to verify CoinTransfer for ‘normal’ p2p safecoin transfers, and also to verify (in a second, different way) Credit for newly rewarded coins. If sections had a wallet then everything would just be the same CoinTransfer (and the implementation would also be simpler). Why not just use CoinTransfer for everything?

edit: as for it being the same level of accountability in both situations, maybe you’re right. But having a source wallet rather than just ‘from the section’ seems more accountable to me. Maybe it’s not.


I don’t think so. There are multiple reasons to farm, and they aren’t all related to earning safecoin; you’ve stated that, but I’m not following the reasoning. It seems to me that this indirect information will give an over-par value for safecoin (by the network), and if we don’t query farmers directly to gain further information then larger commercial farms may be put off joining the network. Of course, they may also assume that the coin will be worth more later on and hence farm anyway if they have spare resources (as you say), but this just affirms that the network will overvalue safecoin relative to market value. Does it not?

IMO, it’s hubris to assume genius at this stage. Evidence and reason … and we don’t have evidence yet.

You are being charitable … I’m not convinced. :wink:

Network conditions will be changing all the time, yet the number of coins we are referring to is in the range of 10x(?) the total coins at start. If the network is push-pulling that kind of amount during fleeting market changes, via speculative moves and manipulation, then the network itself will be played to exacerbate this push-pull effect, whether for profit or to attack and damage the network (example further below).

IMO, no matter the philosophy here (push the majority out, or hodl tightly indefinitely), the one thing the network shouldn’t do is push and pull hard on the reins. The problem is that the network has no sense of time without getting information from traders, so how does it know how fast or slow to push or pull, and how can it refrain from causing major market swings in price? Anything set in advance with regard to total storage capacity, PUT rate, etc. seems arbitrary and hence not really adaptive; it isn’t an AI, just a ‘smart’ system, and it can be gamed. E.g. the NSA uses Amazon cloud for x days to turn on massive storage for the network, then shuts it off for x days; rinse, wash, repeat. How does the network deal with that sort of attack, without outside information to stabilize the issuance, so that real commercial farmers don’t quit the network?

IMO, all of this reminds me of the “Economic calculation problem”: no matter how we ‘plan’ this out, without direct market information that includes both value and time, the network is left to speculate on both, and I suspect that it will be manipulated for profit.

Makes sense to me if it’s more efficient code-wise – also easier for auditing the network with regard to the issuance of coins, maybe?

KISS is always best unless there is a strong reason to increase complexity.


Yeah, but that’s just the processing power, not the number of devices. That’s like saying the processing power of smartphones doubles every 2 years, so everyone is using 2 phones in 2 years, 4 in 4 years, …


Are you sure about that? In Parsec there can be multiple “actions” running concurrently, so by increasing the txes/s you would increase the number of in-flight actions. The time a tx needs to get finalized should be near constant (if enough CPU power is provided to the network). The interesting part is how the overhead scales up with more in-flight txes; it’s definitely not linear… hmm, I should read the Parsec paper in detail…
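For what it’s worth, the relation implied here is Little’s law: in-flight = throughput × finalization time. A quick back-of-envelope with made-up numbers:

```rust
fn main() {
    let finalization_secs = 5.0; // assumed near-constant, as argued above
    for tps in [10.0, 100.0, 1_000.0] {
        // In-flight actions grow linearly with throughput; the open
        // question is whether gossip overhead per action stays flat.
        println!("{tps} tx/s -> ~{} in-flight", tps * finalization_secs);
    }
}
```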


I’m not into IoT and I still voted for u128, because it’s future-proof at nearly no additional cost (the overhead of u128, at 16 bytes, over u64, at 8 bytes, is negligible). Also, we could just use 9 decimal places (for now) and store the value in a u128 for implementation simplicity.


Yes, but I would argue that current processors are more than capable of running the maximum concurrent transactions now. So we are limited by lag.

In any case, a change to the consensus mechanism (e.g. increased concurrency) is outside of my statement. My statement had the implied constraint of a particular consensus system/version.

I voted “no choice” so that we could see what happens.

I would hope, though, that we run these same polls again once the first version of safecoin has had time for analysis. Unless of course the devs take up the discussions themselves.

As you say, it is really only a variable-sizing decision, and the minimum unit could be at 96 bits (2^32 coins with 18 decimal places), as this would be practically infinite division. The other 32 bits could go unused, or maybe hold flags.
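A quick sanity check of that 96-bit figure (2^32 coins times 10^18 sub-units actually needs only 92 bits, so it fits within 96 with room to spare):

```rust
fn main() {
    // 2^32 whole coins, each divisible into 10^18 sub-units.
    let max_units: u128 = (1u128 << 32) * 10u128.pow(18);
    let bits_needed = 128 - max_units.leading_zeros();
    println!("{max_units} units need {bits_needed} bits"); // 92 bits
    assert!(bits_needed <= 96); // fits, leaving the rest of a u128 spare
}
```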


This is the only way that is reasonable to me. All coins exist from the start. Depending on how many initial sections X you seed the network with, the 2^32 is broken up evenly into X section purses. Each time a section splits, the purse is divided by 2, with the integer remainder of nano-safe or nano-pico-safe going randomly to one of the child sections. I’ve often thought that one could get better security if we hybridize the current RFC with the original Safecoin idea and have 2^32 safecoin addresses/wallets/purses, each holding 1 SC to start. These data objects would naturally transfer along with section splits based on data locality. You could still have divisibility etc. and keep nearly all the current code the same; the only difference is that the section wallet/purse used to pay out GET rewards or receive PUT fees would be randomly selected from one of the safecoin addresses in the section. However, I can understand why the simplicity of a single section purse is currently preferred by the team.
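A rough sketch of that purse-split rule in Rust; the randomness source is hand-waved here, and none of this is from the RFC:

```rust
/// Split a section purse (in indivisible units, e.g. nano-safe) between
/// two child sections. `coin_flip` stands in for some agreed, verifiable
/// source of randomness (e.g. derived from the new section prefixes).
fn split_purse(purse: u64, coin_flip: bool) -> (u64, u64) {
    let half = purse / 2;
    let remainder = purse % 2; // at most one indivisible unit
    if coin_flip {
        (half + remainder, half)
    } else {
        (half, half + remainder)
    }
}

fn main() {
    assert_eq!(split_purse(101, true), (51, 50));
    assert_eq!(split_purse(100, false), (50, 50));
}
```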

Never said they were. It’s just that all externalities are lumped into a single objective measurement - are farmers joining, staying, or leaving, and at what rates?

The point I have been trying to make is that with a little forethought about design decisions the network can be programmed to optimally push or pull depending on the situation.

Not true. ‘Time’ is any parameter used to signify an ordered series of events. An event is anything that changes the state of a system. One obvious option for the tick-tock of a clock is a PUT, since it literally changes the data in the network, but that alone is insufficient because we would like to view PUTs and GETs relative to some other independent measure. Gossip events within Parsec are too noisy and should be considered subscale activity. The number of Parsec consensus events within a section would seem to be the best candidate. For simplicity, I think it’s reasonable to consider any single PUT or GET as an “event” that signifies a network change of state and thus increments a network/section time counter, but I am unsure if it is planned for PARSEC to track GETs. Regardless, defining PUT rates and GET rates relative to a well-defined and consistent network “time” is not infeasible.
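A toy version of that event-counter clock, just to make the idea concrete (entirely illustrative; I don’t know whether PARSEC would expose anything like this):

```rust
#[derive(Default)]
struct SectionClock {
    ticks: u64, // total state-changing events seen (PUTs + GETs here)
    puts: u64,
}

impl SectionClock {
    fn on_put(&mut self) { self.ticks += 1; self.puts += 1; }
    fn on_get(&mut self) { self.ticks += 1; }

    /// PUT rate relative to network time: PUTs per event, not per second.
    fn put_rate(&self) -> f64 {
        self.puts as f64 / self.ticks.max(1) as f64
    }
}

fn main() {
    let mut clock = SectionClock::default();
    for _ in 0..30 { clock.on_get(); }
    for _ in 0..10 { clock.on_put(); }
    println!("PUT rate: {:.2} per network tick", clock.put_rate()); // 0.25
}
```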


Unfortunately I do not currently have the skill set (or the time to acquire it), but quite a few people here who are grappling in depth with trying to model a higher order stable Safe Network may be better positioned to do so.

I would just add how well this problem is suited to complexity modeling. I highly encourage a quick read of Why Economists Have to Embrace Complexity to Avoid Disaster by Aussie professor Steve Keen, the brains behind the Minsky software. A quote (of a quote):

The discovery that higher order phenomena cannot be directly extrapolated from lower order systems is a commonplace conclusion in genuine sciences today: it’s known as the “emergence” issue in complex systems (Nicolis and Prigogine, 1971, Ramos-Martin, 2003). The dominant characteristics of a complex system come from the interactions between its entities, rather than from the properties of a single entity considered in isolation.

We want to reach our goal of a highly stable, self-regulating, higher-order Safe Network ecosystem based on simple interactions (Safe Network space, coin issuance, push/pull supply-demand control parameters/algorithms). As Keen’s article highlights from complexity modeling, in an economic system like the Safe Network it is perfectly reasonable for storage space to become very scarce at the same time that bidding prices fall significantly, and “pushing” and “pulling” supply falsely assumes the ecosystem will be fluctuating around a stable equilibrium.

The failure of economics to develop anything like the same capacity [of accurate weather forecasting] is partly because the economy is far less predictable than the weather, given human agency, as Hayekian economists justifiably argue. But it is also due to the insistence of mainstream economists on the false modelling strategies of deriving macroeconomics by extrapolation from microeconomics, and of assuming that the economy is a stable system that always returns to equilibrium after a disturbance.

[edit] I will just end on a quote that possibly shows the way forward to designing a complex stable Safe Network ecosystem:

to derive a decent macroeconomics, we have to start at the level of the macroeconomy itself. This is the approach of complex systems theorists: to work from the structure of the system they are analysing, since this structure, properly laid out, will contain the interactions between the system’s entities that give it its dominant characteristics.


Not sure if by ”bidding prices” you are referring to the perpetual auction system. In case you are, I would say that is a misunderstanding of how that system works. The bids do not work as ”offers” in a supply/demand market. The way I read the article (I’m open to input on different interpretations), PAC doesn’t seem like the closest association; on the contrary, actually. Rather, the closest association I would say is the resource-as-a-proxy algorithm, as RFC-0012 and RFC-0057 work, since it very simplistically increases or decreases reward/store cost as a response to supply – regardless of the actual circumstances. The actual circumstances might be that decreasing reward/store cost has the opposite of the desired effect:

Since changes in relative prices change the distribution of income, and therefore the distribution of demand between different markets, demand for a good may fall when its price falls, because the price fall reduces the income of its customers more than the lower relative price boosts demand (I give a simple illustration of this in Keen, 2011 on pages 51-53).

This argument from my side takes into account that vault operators, or their subsidiaries, employees, contractors etc., might also be dependent on the usage of the network. So actors on the network who have an income based on the network are part of the economy as consumers as well, and price fluctuations therefore do not influence demand in a trivial manner, since they also reflect on income distribution.

This is entirely in line with what @mav and I have been saying. We cannot assume that this simplistic supply/demand approach actually corresponds to the stimuli that the macroeconomic movements at that point really need. It portrays itself as a very simple solution because it disregards any complexity, but these things are not simple. Resource-as-a-proxy, as RFC-0012 and RFC-0057 work, tries to map individual consumer behaviour onto the large-scale economy.

The discovery that higher order phenomena cannot be directly extrapolated from lower order systems is a commonplace conclusion in genuine sciences today: it’s known as the “emergence” issue in complex systems (Nicolis and Prigogine, 1971, Ramos-Martin, 2003). The dominant characteristics of a complex system come from the interactions between its entities, rather than from the properties of a single entity considered in isolation.

The entire article sums up to: complexity must be embraced.

This is the rationale for PAC, described in the original and following posts in that topic.

PAC bidding (which is not the same thing as an ”offer” in a supply/demand market) is fundamentally decoupled from the supply (in terms of network-implemented algorithms), and expresses the crowd knowledge of these emergent macroeconomic effects. This is not something resource-as-a-proxy à la RFC-0012 / RFC-0057 can do.

There are some formulations and citations in that article that are dead on point:

The fallacy in the belief that higher level phenomena (like macroeconomics) had to be, or even could be, derived from lower level phenomena (like microeconomics) was pointed out clearly in 1972—again, before Lucas wrote—by the Physics Nobel Laureate Philip Anderson:

The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a “constructionist” one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. (Anderson, 1972, p. 393. Emphasis added)

He specifically rejected the approach of extrapolating from the “micro” to the “macro” within physics. If this rejection applies to the behaviour of fundamental particles, how much more so does it apply to the behaviour of people?:

The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other. (Anderson, 1972, p. 393)


Now, we are still exploring more ways to build an economy other than the perpetual auction, since some part of the SAFENetwork community has so far shown reluctance to accept this inherent complexity in the economy, and still clings to the previous simplistic approach – many times with the argument that it is simpler. I do not believe we have yet seen the simplest possible idea that does what it is supposed to do.

As I’ve said before, I am at core a fierce proponent of keeping it simple, but…

”Everything should be made as simple as possible, but not simpler.”

What we’ve been arguing, and what I say this article supports, is that this economy is not simple. And reading deeper, I would say the article also supports the notion that it’s certainly not as simple as just adjusting reward/store cost up or down in response to supply.

There may be other ways than the PAC way of sampling the user population to assess the current complex situation, but currently it seems obvious to me that this is so far the closest thing we have to something that does not extrapolate a macroeconomic system from microeconomic theory.


@krnelson, I would be interested to hear your view on what the above would look like in the design of SAFENetwork economy, with my and your own interpretation of what the article is in essence saying about SAFENetwork economy (in case it seems to you that those differ).
