PUT price directly controlled by supply and demand


#1

I saw these other threads about PUT price:

A common problem is thinking it’s gameable. And what’s with the assumption that the network will try to set the PUT price low? The network is run by the vaults. Can we trust them to try to make less money?

Instead, vaults should try to maximize profits.

The correct PUT price is the price at which a vault can sell its space for the most coins:

PUT frequency x price => MAXIMUM POSSIBLE AMOUNT

Not gameable: Who would try to make less than possible?

A simple method that might work:

  1. Set the minimum price to match the marginal cost of operating a vault at the current exchange rate.
  2. Set a target period for filling the vault: 1 month, for example.
  3. Measure the average PUT frequency over a shorter period: 1–2 days, for example.
  4. Raise the price if the storage would fill faster than the target period.
  5. Lower the price if the storage would fill slower than the target period.

Note that vaults never actually get full: the target period is the carrot for the donkey, always in the future. As vaults fill up, the price approaches infinity. As more storage is added, the price gets lower.
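
A toy sketch of those five steps in Python (the constant names, the 10% adjustment step, and the function shape are all my own assumptions, not part of any SAFE design):

```python
# Hypothetical sketch of the proposed pricing method. A fixed-size vault
# measures its recent PUT frequency and nudges its price each window.

TARGET_FILL_DAYS = 30   # step 2: target period for filling the vault
ADJUST_STEP = 0.10      # 10% nudge per window (arbitrary assumption)

def next_price(price, min_price, free_chunks, puts_per_day):
    """Return the adjusted PUT price for the next window.

    price        -- current price per chunk (in coins)
    min_price    -- marginal cost of operating the vault (step 1)
    free_chunks  -- remaining capacity of the vault
    puts_per_day -- average PUT frequency over the last 1-2 days (step 3)
    """
    if puts_per_day <= 0:
        # No demand at all: drift down toward the cost floor.
        return max(min_price, price * (1 - ADJUST_STEP))
    days_to_fill = free_chunks / puts_per_day
    if days_to_fill < TARGET_FILL_DAYS:    # filling too fast -> raise (step 4)
        price *= 1 + ADJUST_STEP
    elif days_to_fill > TARGET_FILL_DAYS:  # filling too slowly -> lower (step 5)
        price *= 1 - ADJUST_STEP
    return max(min_price, price)           # never below marginal cost (step 1)
```

Note how the minimum price acts as a hard floor, so the feedback loop can only explore prices above the vault’s marginal cost.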

As more data is collected, the method can be refined to find the most profitable price faster.

Vaults in a group can agree on the correct price by taking an age-weighted median of votes. That way, outliers and bad vaults don’t affect the price at all. Different price-setting methods can work together too, but vaults would be smart to switch to the best method. Because: more money.
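
For illustration, an age-weighted median could be computed like this (my own toy construction; the thread doesn’t specify the exact aggregation rule):

```python
# Toy age-weighted median vote: each vault's vote is weighted by its age,
# so an outlier price from a young (low-weight) vault barely moves the result.

def age_weighted_median(votes):
    """votes: list of (price, age) pairs; age acts as the vote's weight.

    Returns the price at which half of the total age-weight lies on
    either side.
    """
    votes = sorted(votes)                  # sort by price
    total = sum(age for _, age in votes)
    acc = 0
    for price, age in votes:
        acc += age
        if acc * 2 >= total:
            return price

# An absurd outlier vote from a young vault has no effect on the result:
honest = [(1.0, 50), (1.1, 40), (0.9, 60)]
with_outlier = honest + [(100.0, 1)]
assert age_weighted_median(honest) == age_weighted_median(with_outlier)
```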

(This question is just one part of something I’m working on. Just for this conversation, please ignore farming rate, rewards, and so on.)

What do you think?


#2

I can’t say I have read those two threads.

But.

You (your vault) do not decide on a storage price.

If you join the network you store data, and at some point you should receive safecoin as payment.

The price to store and the farming payment made are to be decided by both the network algorithm and the open market price.

Edit.
As to ignoring farming rate, rewards, and so on.

You can’t really explain how it will work without utilising the parts that make it work.


#3

What is published?

The store cost is currently defined in RFC-0012 Safecoin Implementation

StoreCost = FR * NC / GROUP_SIZE

This is deterministic for an agreed farmrate (FR should be deterministic per section, see establishing farming rate). NC… hmmm… that’s a different topic.

If a vault uses the wrong StoreCost they’re punished by the section. Seems simple enough to me.

If my specific vault isn’t profitable under those conditions, I need to turn it off or upgrade it to become profitable.

Is it possible that nobody is profitable under those conditions? I’m not sure yet.

My own thoughts

I think the general idea of ‘market price for storage’ is fascinating. Does it mean users should ‘shop around’ different sections for the best price before converting their safecoin to PUT balance? Does it mean users should buy when it’s cheap even if they already have plenty of PUT balance? Do longer delays between buying and consuming PUTs create unusual mechanics in the safecoin reward mechanism?

Vaults don’t know what they’re missing out on by setting high prices. If they set their price to ‘luxury levels’ then they don’t know if operating at ‘peasant levels’ would end up earning more income. It seems for the sake of having maximum information, defaulting to ‘peasant levels’ is the best strategy (even if no pricing decision comes from it). Does it cost too much to operate like that? Maybe??

Storing chunks from PUT is a cost to my vault, serving GETs is income to my vault, and profit is the difference between the two (throw in volatile exchange rates for extra difficulty). But a higher PUT price set by my vault does not mean proportionally more income to my vault. Why should I care if a client buys their PUT balance from my vault rather than some other vault? I get no direct benefit from that purchase. I just care that clients do buy, since that recycles coins and increases the total pool of funds available for my vault to farm.

The described method is interesting but the disconnect between buying PUT in one section vs actually storing chunks in other sections causes me some conceptual difficulties. Maybe it just means the network converges on a price much better…?

Overall it’s a really interesting topic… I’ve been trying to grok the safecoin algorithm for a long time now but it’s not crystalline to me yet. Looking forward to more discussion on this topic.


#4

Yes. This is what I’m trying to improve. This is just the first step.

Absolutely not. Storage is selected deterministically. They can’t change that. And even if they could, they still couldn’t get a better price. Because: read on :slight_smile:

This method is based on the assumption that vaults fill up evenly across the network. This is reasonable because the XOR algorithm distributes storage evenly, so we can expect all vaults to have about the same % of their storage filled. So: they all arrive at a similar decision about the correct price. Vaults could also learn from each other’s votes about the correct price.

Important: the exact method of setting the price is less important than the goal: “Set the most profitable price.” Only this goal is not gameable, because it’s a selfish goal. Game it and you lose. Improve upon it and everybody wins.

The users win too! Because nobody can enjoy a network that is either a) too expensive or b) out of storage. If users think the price is high, they may think it’s profitable to start a vault! They make money, the network gains storage, the price goes lower. It’s a feedback loop. Note: the original idea is based on something similar; I just add more economics.

Note that young vaults that haven’t had time to fill up yet are no problem, for several reasons:

  • They don’t count much because votes are age-weighted median.
  • They can vote based on their eventual fill % or refrain from voting until they know.
  • They don’t have an incentive to try to game the system even if they could.

Not sure if you noticed: prices can go up and down. Vaults try to keep the time of filling up their storage at a set distance in the future. In the example: 1 month. If PUTs arrive too fast, the price needs to increase. If PUTs arrive too slowly, the price needs to decrease.

This must work slowly, because it takes time for a price change to affect user behavior. One month may be too short, but it could also be too long. Experience will tell, and vaults can experiment with it. The best period is the one where profit is maximized. Remember: outlier votes can’t hurt because they are ignored. They may also be punished by other vaults in the section.

When the network starts, vaults are empty. The price is set at the marginal cost of operating a vault. If no PUTs arrive at that price, operating a vault is impossible without a loss.

But as I said, we get the same information at all times: how long will the vault take to fill up at the current price? Too short or too long means a higher or lower price is necessary.

It’s distributed price discovery. It does not bring immediate income, but by voting for the right price, the vault maximizes the income for the network.

  • Vaults can’t misbehave to make more money: they don’t vote for money for themselves, so immediate gain is impossible. Voting high or low only trickles down. The goal is to maximize profits: the more selfishly they act, the better they follow the correct protocol.
  • Vaults can’t misbehave without hurting themselves: voting low in the hope of more GETs hurts the vault itself, because “price too low” is defined as “a price that will stop the network from working because storage gets exhausted”. Voting too high reduces demand too much: less money.

#5

Vaults in a section decide on the price together.
That affects the payment the vaults receive.
Not immediately, but indirectly.

Two things must be reconciled so it’s not gameable:

  • What’s good for network? Keep it working.
  • What’s good for individual? Make more money.

Solution: set the price high enough that storage doesn’t fill up, but low enough that people are willing to pay for storage.

I can ask to focus on one part; maybe I’ve already considered the rest.
I don’t want to go there before asking for opinions on this part.


#6

The first part, about filling up evenly, is probably true. The second part, about the same % of storage filled, is not: the amount of spare space at each vault will probably vary a lot. Not that it necessarily matters for the rest of the idea; I just wanted to point this out. (It does matter for calculating the farm rate, which is off-topic for this thread.)

The third part, I don’t quite understand. How do vaults in a section ‘aggregate’ the different individual vault pricings? I understand and like the individual pricing mechanism but I don’t understand how that is used to form a final price for the user.

“Exact method of setting price is less important than the goal” is a good heuristic for this discussion and I agree in the context of this thread, but implementation details do matter.

It feels like something similar to ‘pool hopping’ in bitcoin would happen: large, rapid oscillations in hash power depending on which pool or coin is most profitable at any point in time. I reckon there would be price oscillations if users can pick their periodicity etc., since it’d be optimal to price gouge in busy times. Just a hunch.

My main question is how the individual pricing mechanism is converted to a final ‘section price’ or ‘network price’ for the end user. There’s a mention of voting but I don’t get how that actually works. Can you give some more detail about the mechanism to convert from the individual vault price into a price for the end user?


#7
  1. Farmers don’t care about the PUT price, they care about the GET price.
  2. Clients don’t care about the GET price, they care about the PUT price.
  3. The Network cares about the size of the supply/reservoir of chunks that is being offered by the Farmers, the demand/load of data chunks being consumed by the Clients, and the amount of safecoin / buffer available to itself for facilitating the data chunk exchange by selling PUTs to the client and buying GETs from the farmer.

The past discussions I’ve read were more casual, simply stating that the network will control the GET prices and PUT prices in order to a) maximize the health of the network while b) minimizing the costs for clients and maximizing the rewards to farmers in an automated fashion without human intervention. I think your basic idea supports that philosophy with regard to maximizing the reward to farmers and it is good to brainstorm more about this.

Having an algorithm that ensures Farmer participation in SAFE by providing them with maximal safecoin-flow under variable network conditions is important, as you pointed out. However, PUT prices and GET prices are indirectly coupled, subject to the amount of safecoin available to the network (PUT Income Rate + Initial Buffer). In order for the network to be healthy early on it needs to grow, so the algorithm may decide to keep PUT costs lower than GET rewards in order to facilitate user uploads/participation and pay out from its “Initial Buffer”, analogous to “startup funds” for any new venture. The initial buffer will shrink over time and approach some equilibrium, perhaps close to zero, but I guesstimate (pure conjecture on my part) that testsafecoin will show it is important to keep the buffer target at some non-zero level (~20% to 51%) in order to account for uncertainty, minimize volatility, and protect against attempts from adversarial whales who would try to attack the network economics.

Not true. You forgot about the Network’s initial purse. Although, maybe I’m not understanding you correctly.
Not sure if you mean safecoin loss or fiat loss with respect to the Farmer, or Network paying out from its purse.

Yes, the time window will be interesting to experiment with. Inherent nonlinearity aside, I’ve wondered how well simple PID controls / a PID Controller would fare. The Wikipedia article for Optimal Control theory uses an example that is pertinent to this thread.
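
As a sketch of what that could look like, here is a generic discrete PID controller driving the price toward a target fill time (the gains are arbitrary placeholders and the class is my own construction; nothing here comes from an RFC):

```python
# Generic discrete PID controller nudging PUT price toward a target fill
# time. Tuning the gains (and coping with the system's nonlinearity) is
# exactly the hard part alluded to above.

class PriceController:
    def __init__(self, kp=0.05, ki=0.01, kd=0.02, target_days=30):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_days
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, price, days_to_fill, dt=1.0):
        """Return the next price given the current estimated fill time."""
        # Error > 0 means the vault is filling too fast -> price should rise.
        error = (self.target - days_to_fill) / self.target
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        correction = (self.kp * error
                      + self.ki * self.integral
                      + self.kd * derivative)
        return max(0.0, price * (1 + correction))
```

The proportional term reacts to the current fill-time error, the integral term removes steady-state offset, and the derivative term damps oscillations, which is relevant given the ‘pool hopping’ style swings worried about earlier in the thread.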

Yes.

No. Assuming vaults are fixed size (I think they should be), my understanding is that older vaults will be more filled than younger vaults, assuming that the entire 256-bit or 512-bit XOR space cannot be mapped 1:1 to existing storage chunks at launch (which it can’t). XOR space is a big tree (i.e. a sparse matrix), not a dense array.

Yes. Although I think this has been the original idea / philosophy from the start. It’s good to explicitly define these things though. Beyond safecoin concerns, this is also a good read if you haven’t gone through it yet describing vault managers and client managers and such. It probably needs a slight update here and there, but a good read nonetheless.

https://blog.maidsafe.net/category/technical/vault/

Yes.


#8

It’s actually as easy as follows. The more plentiful the accessible space is, the less Safecoin you need to upload the same amount of data. The more data you can afford for one Safecoin, the more valuable the Safecoin is. The price of one Safecoin can be ridiculously high while still providing the best price of storage on the market.

The other scenario is of course that there is scarce free space on the network, with a high fiat price for uploads. The price of one Safecoin will be low because of the uncompetitiveness of the SAFE Network. At the same time, vaults get a generous amount of newly created Safecoins: dilution (inflation). This situation is a gold mine for vaults. Even though the price of Safecoin in fiat terms is low, vaults get so many Safecoins that, in fiat terms, they are strongly motivated to invest in expanding hardware. In a short period of time, high farming profits attract more vaults, increasing the available space. The price of uploads in fiat terms returns to reasonable levels and the price of Safecoin goes the other way. With extensive free space in the network, the dilution (inflation) stops or even reverses (deflation), with uploaders spending more coins than vaults are able to earn.

TLDR: The current system is self-regulating and has nice price discovery built in. IMHO, it will provide a much more stable price than most crypto today.


#9

Nice explanation. Just the claim about price stability is hard to prove. I do believe that if the network works without major bugs and the whole economy works as you described, the price of the coin will explode. Safecoin will have so many uses then. Imagine there had been an Internet money for the old Internet with all the features of Safecoin. I am pretty sure that paying for hardware usage would be less than 0.01% of the whole Internet economy. In my view, Safecoin will be worth either $0 or thousands of dollars.


#10

When discussing price stability with respect to safecoin, one needs to be specific with respect to what they are referring to. I usually refer to PUT and GET prices denominated in safecoin because that is all the network algorithms deal with when it comes to storing and retrieving data.


#11

I wanted to do some more exploration of the existing PUT price, so here it is


Summary

I’m going to bias this by saying my conclusion first - I have doubts the safecoin algorithm will work as intended with the current design.

There’s a lot of good design in the algorithm, and I’m not suggesting it be totally thrown away, but it’s hard to say whether it leads to a viable and sustainable economic model for the network.

Defining Store Cost

Store Cost is currently defined in RFC-0012 Safecoin Implementation as

StoreCost = FR * NC / GROUP_SIZE

which is a value measured in safecoins per chunk.

Farm Rate (FR) is between 0 and 1. A lower value means it’s harder to farm safecoin (ie the network has many spare resources).

Number of clients (NC) - presumably per section - let’s just leave it fixed for now at 1000 since over any short period of time this number will be pretty stable.

Group Size is 8

So the simplified formula for store cost in this explanatory post becomes

StoreCost = FR * 125 safecoins per chunk

There’s also a disclaimer in the rfc: “The calculation therefore becomes a simple one (for version 1.0)” - which indicates this is likely to change.

Possible Costs

As more resources become available (FR goes down) the cost to store goes down. That sounds like a reasonable relationship. And vice versa - less resources available means the cost goes up.

What is a reasonable range of values for storecost?

The first ever PUT on the network will be FR=1 and NC=1 so that means a cost of 0.125 safecoin per chunk or 8 chunks per safecoin.

But let’s look at when the network is in use and NC = 1000.

One limit is where resources are scarce and farming has become so easy that every farm attempt is successful, ie FR = 1

In this case, store cost is 125 safecoin per chunk, which is quite expensive! Scarce resources = expensive storage. Nice.

Now consider when farming is popular and there are many spare resources. The reward has become one million times more difficult to get, ie FR = 1 / 1,000,000.

In this case, store cost is 0.000125 safecoin per chunk or 8000 chunks per safecoin. That seems like pretty good value.

Consider if farming is extremely popular, a billion times more difficult to get, ie FR = 1 / 1,000,000,000

In this case, store cost works out to being 8M chunks per safecoin.
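
The figures above can be checked directly against the RFC-0012 formula (a quick sanity-check script, nothing more):

```python
# Sanity-checking the store cost examples against RFC-0012:
# StoreCost = FR * NC / GROUP_SIZE, in safecoin per chunk.

GROUP_SIZE = 8

def store_cost(fr, nc):
    return fr * nc / GROUP_SIZE

# First ever PUT: FR = 1, NC = 1 -> 0.125 safecoin per chunk (8 chunks/coin)
assert store_cost(1, 1) == 0.125

# Network in use, NC = 1000:
assert store_cost(1, 1000) == 125.0                      # scarce: expensive
assert abs(store_cost(1e-6, 1000) - 0.000125) < 1e-12    # 8,000 chunks/coin
assert round(1 / store_cost(1e-9, 1000)) == 8_000_000    # 8M chunks/coin
```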

I think the general direction of safecoin is correct, but the magnitudes are hard to come to terms with. Will it result in an economically viable and sustainable system? It’s hard to say.

Limits

What’s the expected bound on farm rate? How difficult will farming become? Farmers naturally want it to be most profitable so they’ll aim to keep FR high (ie not much spare resources). But it’s balanced by not wanting to be punished for having too few resources. And it’s also balanced by users (who may be farmers) wanting cheap storage so they just provide a lot of spare resources to achieve that. There is a lot of work still to do to fully grasp the interplay of this mechanism.

Does it lead to problems with the storecost where users end up with ‘storage for life’ which then further leads to safecoin supply problems because very few are recycling any more?

How is farm rate actually going to be calculated? The section establishing farming rate is crucially dependent on Sacrificial Chunks, which no longer exist in the network. This creates a big hole in the ability to reason about safecoin distribution. I’ve substituted Spare Resources as an equivalent idea to Sacrificial Chunks.

Balancing Act

As it becomes harder to earn safecoin (ie too many spare resources) the least profitable resources will be removed. This increases the farm rate, which then also allows existing resources to earn more safecoin. So it’s rational for farmers to create scarcity.

As this happens storage costs go up. So more safecoin must be spent and supply increases.

These factors create a natural limit on how hard it is to farm safecoin, but I don’t know what that is likely to be.

This also means there’s a natural limit to the cheapness of storecost. It probably won’t ever get to ‘storage for life’ since farmers would remove resources and increase store cost before that happens.

Number Of Clients (NC)

I think storecost will be dominated by farmrate so I sorta gloss over NC. The reason I think this way is because farmrate is set by storage capacity which farmers are going to be aggressive about, but number of clients won’t be aggressive, it’ll just be normal people doing normal ‘webby’ stuff. Perhaps a naive perspective?!

The effect of NC is more clients = more expensive storage.

What is the expected range of NC?

Will farmers create many empty accounts just to make it more expensive and thus create more safecoin supply?

To my mind the NC parameter can be manipulated too easily and this could become a problem. It should not be part of this calculation. It creates an incentive for farmers to generate empty accounts so they can earn more by increasing costs and thus increase safecoin supply. Whether farmers end up seeing NC as the most effective lever to achieve that goal is not clear, but I’m wary of it.

Summary

The algorithm is very clever and elegant and has the right ‘direction’ about it, but I’m not convinced whether it will work as intended and create a viable and sustainable economic model for the network. There’s currently too many unknowns. Looking forward to tests and simulations but that’s a long way off so for now it’s all just thinking and talking.


#12

I need pictures! @JPL fancy doing some graphs for us? :grin:


#13

I would if only I could get my head around it!


#14

They can, although NC is intended to count active client accounts, i.e. ones doing PUTs and paying. Quiescent clients should not count. Empty accounts are likely to be purged (a create-and-charge-up-with-a-safecoin-or-lose-it type approach). Not set in stone, as you will see, but I feel there is still wiggle room for bad behaviour there, although it may be minimised.

This ^^ 100% needs clarification. It’s not hard to code, but it is certainly an area to consider: if client accounts can be freely created and used, then it’s not so great. There is perhaps a way to average these across neighbours or similar. Potentially, though, it may not matter too much at all and could just be disregarded as a measurement. Clients cause work, but much of that can now be offloaded to Adults (which didn’t exist when the safecoin RFC was written). So I expect a tweak or two there.


#15

I was wondering whether some bad behaviour by farmers could lead to making a big “satellite” network with a low-quality connection to the rest of the network, while owning plenty of adults and elders connected with low latency and fast connections.

He would make plenty of fake clients to upload a lot of 1MB files and then try to download only the ones located in the “satellite”, to make a big profit for the farmer.

I think the only possibility is to reduce the farmer’s rating to reduce his GETs.


#16

If the “satellite” is part of the SAFE network, then this will defeat them, because the nodes would eventually be downgraded or rejected.


If you mean they leave the SAFE network and become their own network, then the coins will not be usable outside of that alt net.


The idea of a farmer attempting to find the chunks being stored in his own set of vaults and then GETting them for profit has been discussed a lot, and basically the outcome seemed to be that a few factors will work against this being any more than an interesting exercise: it’s not profitable enough to continue. Caching and bandwidth are major drawbacks, and then there are the costs to actually do this, be it their own electricity, cloud processing costs, or the cost of buying a botnet, etc.


#17

I mean that they would not leave the SAFE network, and the low-quality connection would be necessary only when uploading.

I will try to find this discussion.

Files smaller than 1MB are cut into 3 chunks. But is 1MB 1 chunk?


#18

Any file over 3KB is split into at least 3 chunks. A chunk is a maximum of 1MB

  • 3KB ==> 3 x 1KB chunks
  • 60KB ==> 3 x 20KB chunks
  • 1MB ==> 3 x 1/3MB chunks
  • 50MB ==> 50 x 1MB chunks
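
The split rule above can be modelled with a little arithmetic (a simplification; the real self-encryption implementation has more edge cases, and the handling of files under 3KB here is my own guess):

```python
# Simplified model of the chunking rule described above: files of 3KB or
# more are split into at least 3 chunks, each at most 1MB.
import math

MAX_CHUNK = 1024 * 1024  # 1MB

def chunk_layout(size_bytes):
    """Return (chunk_count, chunk_size) for a file of size_bytes."""
    if size_bytes < 3 * 1024:
        return (1, size_bytes)  # tiny files handled differently (assumption)
    count = max(3, math.ceil(size_bytes / MAX_CHUNK))
    return (count, math.ceil(size_bytes / count))

# Matches the examples above:
assert chunk_layout(3 * 1024) == (3, 1024)        # 3KB -> 3 x 1KB
assert chunk_layout(60 * 1024) == (3, 20480)      # 60KB -> 3 x 20KB
assert chunk_layout(50 * MAX_CHUNK)[0] == 50      # 50MB -> 50 x 1MB
```

So to the question above: a 1MB file is not 1 chunk; it still gets split into 3 chunks of roughly 1/3MB each, because 3 is the minimum.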

OK, I don’t see the need for that. It’s not like any of their uploaded chunks will end up in their own vaults unless they upload enough chunks to put more than one chunk in every vault in the network, and to be sure of getting one hit they need to upload a lot more than that. So if there are 100,000 vaults, then to get one chunk into their vault they need to upload > 50,000 chunks, and to be sure they need to upload 200,000 chunks.

So now, after uploading 200GB of data, they have 1, 2, or 3 chunks they can request. So they request them, but hey, caching kicks in and their vault is no longer supplying the chunks; one of the 6-to-10 hop nodes is supplying them instead.

If they had a slow link, it would be too hard to upload 200GB, and they would likely find their vaults barred for being so slow. So just go at normal speed and don’t try going slow; it won’t help.

Now, to get enough chunks into your vaults to try to circumvent caching (so chunks expire before being requested again), you are going to need thousands and thousands of chunks in your vault that you uploaded. If each cache is 500MB, and allowing for 100 different routes for your thousands of chunks and 8 hops per path, then there is 400GB worth of cache in the network for your chunks. This means you need to get >400GB of the data you upload into your vaults. With random distribution, for the 100,000-node (vault) network you will need to upload a total of 0.5 x 400GB x 100,000 ==> 20,000TB of data in order to attempt this feat.
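
For what it’s worth, the back-of-envelope arithmetic above works out as follows (just reproducing the post’s own assumed numbers):

```python
# Reproducing the rough numbers above. All assumptions are the post's own:
# 100,000 vaults, 500MB cache per node, 100 routes, 8 hops per path.

vaults = 100_000
cache_per_node_mb = 500
routes = 100
hops = 8

# Cache capacity your chunks could occupy along all delivery paths:
cache_gb = cache_per_node_mb * routes * hops / 1000   # -> 400 GB

# With random chunk placement, getting >400GB of your own data into your
# own vault means uploading roughly half the vault count times that:
upload_tb = 0.5 * cache_gb * vaults / 1000            # -> 20,000 TB

print(cache_gb, upload_tb)  # 400.0 20000.0
```

At 20,000TB (20 petabytes) of uploads just to attempt the attack, the economics clearly don’t favour the attacker, which is the point of the post.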