Perpetual Auction Currency

I suspect there need to be network-determined bounds (upper and lower) if there is to be bidding. What happens when the network comes close to the coin production limit? The bidders may have some indication of this if they monitor the global supply, and may bid accordingly, but those who don’t choose to track this may be taken advantage of - especially if the supply is pushed hard toward the ceiling.

Hence I think some sort of hybrid approach is needed - for the sake of giving the network the most information possible but also to conservatively manage the network.

3 Likes

Yeah, I think one interesting result of this is that your ability to participate in voting is increased the more popular data you hold.
So, the more data you hold, and the more popular it is, the more GETs you receive, and with every GET you are able to include your votes.

So, basically, the more valuable you are to the network, the faster vote updates you get to do. Quite cool IMO. [had to edit that, to be more precise, it can be a very different thing]

Now, it is not entirely clear at the moment how valuable it is to have higher rates of voting. But one thing at least, is that you will be able to follow market sentiment better (less delay), that way having a better chance of being close to an NB when it arrives, thus getting higher rewards.


Reward distribution graph

I was playing around with using a Probability Density Function for reward distribution.
I made a simulation at Desmos that you can find here: https://www.desmos.com/calculator/uwmvssaism

The simulation allows you to loop through the size of a section (60-120 nodes) and watch their (somewhat) random bids and rewards plotted out as (x,y)-coordinates, with the x-axis being the bid and the y-axis being the reward.
Remember that the Neighbour Bid (NB) is what they want to get close to, and the NB is then split up according to the reward distribution (the sum of all rewards plotted, will be the NB).

There is a slider for the NB as well.

If you want to try a steeper or flatter distribution curve, go down to the Probability Density Function folder and adjust u with the slider.

There are a couple of other bid distributions that can be used as well, where the majority go above or below NB. The one that is used has a large part centered around NB, though still quite a few out at the edges. They all deviate by at most +/- 10% from NB.


Here are some notes from when I implemented it in code:

    // Sorting bids into exponentially differentiated buckets:
    // take diff between bid and NB
    // pipe through tanh (a zero centered "sigmoidal" function)
    // sort into buckets using PDF function
    // the bucket represents a share of the reward
    // every participant in the bucket splits the share between them

    // The aim of using bid-NB diffs is to equally favor closeness, regardless of sign.
    // The aim of piping through tanh is to map all possible bid-NB diffs into the PDF argument range.
    // The first aim of the PDF is to make reward proportional to closeness.
    // The second aim of PDF is to establish an exponential and continuous distribution of reward.
    // The aim of sharing in buckets is to keep bids from clustering.

    // The collective results of the above aims are:
    // - promotes keeping close to the common sentiment (favors passive bidders)
    // - promotes unique bids by decreasing the reward per bidder as bids cluster in buckets (favors active bidders) 
    // - promotes defectors when there is collusion
    // -- (ie. a close participant is rewarded most, when all the others are far away)

    // ***
    // Higher rewards give more participants
    // but skewing highest reward away from closeness, promotes bid movement - which eventually affects NB and through that attracts or repels participants.
    // So.. it seems skewing is just an indirect way of directly weighting reward?
    // The difference is that skewing promotes those who at that time are helping the network,
    // while directly adjusting rewards for all, relatively, rewards those who are less aligned with network needs.
    // The skewing does not impact the NB as fast as the weighting does.
    // So maybe the best result is achieved by combining reward weight with distribution skew, 
    // as to rapidly affect NB, as well as promote those who are aligned with network needs.
    // (Could the combination of the two reinforce the effects too much?)
    // The bucketing is more attenuated when NB is lower.
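
To make those notes a bit more concrete, here is a minimal sketch of the bucketing as I read it. The bell-shaped weight, its width, and the bucket count are arbitrary placeholders rather than the values used in the Desmos sheet, and a real implementation would need to handle the empty-bids case:

    // Sketch only: distribute the Neighbour Bid (NB) among bidders by bucketed closeness.
    fn distribute_rewards(bids: &[f64], nb: f64, num_buckets: usize) -> Vec<f64> {
        // Weight peaking at zero distance; plays the role of the "PDF" in the notes above.
        let weight = |x: f64| (-x * x / 0.02).exp();

        // 1. Relative distance from NB, squashed into (-1, 1) by tanh; abs() treats
        //    over- and under-bids the same, then we bucket by distance.
        let bucket_of = |bid: f64| {
            let t = ((bid - nb) / nb).tanh().abs(); // 0 = exactly at NB
            ((t * num_buckets as f64) as usize).min(num_buckets - 1)
        };

        // 2. Count members per bucket and compute each bucket's share of NB.
        let mut counts = vec![0usize; num_buckets];
        for &b in bids {
            counts[bucket_of(b)] += 1;
        }
        let mut shares: Vec<f64> = (0..num_buckets)
            .map(|k| if counts[k] > 0 { weight(k as f64 / num_buckets as f64) } else { 0.0 })
            .collect();
        let total: f64 = shares.iter().sum(); // assumes at least one bid
        for s in shares.iter_mut() {
            *s *= nb / total; // shares of non-empty buckets now sum to NB
        }

        // 3. Members of a bucket split its share equally (discourages clustering).
        bids.iter()
            .map(|&b| {
                let k = bucket_of(b);
                shares[k] / counts[k] as f64
            })
            .collect()
    }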
4 Likes

Wow, you’ve been busy @oetyng. A lot to go through here since my last post. A few comments/thoughts:

The network doesn’t need to know “why?”, it only needs to know whether the farmer resources (storage,bandwidth,latency, compute, elder counts, etc.) are increasing, decreasing, or constant/steady and what the current quantity is relative to system load or other targeted setpoints.

More is not necessarily better if it is just noise from farmers playing games. A “hard-coded” farming rate algorithm can be adaptive and flexible.

It might be fine to start with. In my view all major resource categories required to run the network should have their own reward rate. These include storage, bandwidth, latency, memory, and compute. In other words, if there is a resource proof for some farmer/vault performance trait, then the network should be offering a price for it.

True. Specifying a target growth rate from the beginning is the naive approach, but it offers a facade of predictability that is attractive to those in the crypto space, and offers a simple way to motivate the network pricing algorithms. The optimal way is to have a means for objectively computing the current network growth rate, and then vary all inputs to the pricing function in real time in order to maximize growth at this instant. In the first scenario the best you will ever achieve is what you’ve selected as your setpoint, but you’ll likely fall short of it. You may not care if your goals were high enough, “shoot for the moon, at least you’ll hit the stars… etc”. In the second case, you’re adaptively determining what the absolute best is, so “hakuna matata”. Regardless, having a bidding process driven by the farmers is not the way to make any of this work. Instead, you would want to give the bidding power to the network. The network could have a range of “ask” prices for resources, and farmers would reactively bid to accept those prices for a certain amount of network time, or leave. In a sense this is a fuzzy “take it or leave it” approach.

Not true. It is biomimetic and mathematical. Consider Fibonacci’s rabbits; they are a perfect analogy for section splits. It’s just what happens when you have successive binary divisions with no loss. That’s why it’s considered optimal growth in living systems. A few billion years of evolution has shown Fibonacci growth to be favored for the survival of living things. No need to reinvent the wheel here for synthetic life, just include it as part of the design. From my perspective a target growth rate is how SAFE establishes its own environment. We know that network growth and size are critically important to the success of the network. Some security issues that would require a lot of effort to mitigate in a small network become insignificant for a large network. Specifying a targeted network growth rate from the beginning is a simple way to give purpose to all the pricing algorithms that determine PUT and GET costs. A crude analogy is the cruise control in an automobile. You set the desired speed, and the throttle is increased or decreased to match the wind load or hills you encounter.
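
Purely as an illustration of the rabbit analogy (not a claim about how sections actually split): if a freshly split section needs one period to fill before it can split again, while mature sections keep splitting, the target section count follows the Fibonacci recursion:

    // Illustrative only: Fibonacci-style target section counts per "network period".
    fn fibonacci_section_targets(periods: usize) -> Vec<u64> {
        let (mut mature, mut young) = (0u64, 1u64); // start with one newly formed section
        let mut targets = Vec::with_capacity(periods);
        for _ in 0..periods {
            targets.push(mature + young);
            let newly_mature = young;
            young = mature;         // every mature section spawns one new (young) section
            mature += newly_mature; // last period's young sections have now matured
        }
        targets
    }
    // First few targets: 1, 1, 2, 3, 5, 8, 13, ...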

I think that as a general rule we need to pick the right battles and always give SAFE the high ground. For example, consider two options for a perpetual auction. A) the farmers bid to determine what the farming reward should be and SAFE needs to give them what they ask for, or B) SAFE decides a range of prices at different volumes and the farmers bid to accept one of those or leave. For option A, no solid constraints will protect you from edge cases. In contrast, option B keeps SAFE in control while also maximizing farmer participation beyond the non-fuzzy take it or leave it scenario.

Yes, see above. Non-linear controls optimization, multi-objective optimization to maximize the current growth rate or other objectives, subject to the constraint that it cannot exceed a target growth rate etc. Possible to eliminate the constraint and let the network growth rate be unbounded, but might not be prudent…

No.

No. I just think a framework where the farmers have direct control over the pricing is not as beneficial to the network as one where the network directly controls the price.

None of those things matter with regard to the farming reward. The network can’t offer to go to one’s home and fix the computers or restore power (yet :wink: ). All it can do is raise the price it offers higher and higher to incentivize as much participation as possible. If those scenarios happen, farmers aren’t going to be sitting at their computers demanding more safecoin from the network before they come back online. They won’t be online, period. The network always has to be operating, waiting, keeping all the data safe and secure. Which is why it needs to be in direct control of pricing in coordination with all its other tasks, and the only farmer provided information it can really count on is resource availability - right now.

11 Likes

I think there’s some value to knowing why a node has departed. If the network is going to look after itself it could do that best with high quality communications from the participants. How that exact messaging is done, I dunno yet. Lots of options.

Should the network only value things it can measure?

This touches on a very important point - promises. Bitcoin promises digital scarcity (in this case 21M coins max but that’s just an implementation detail). Basically everything else in the design of bitcoin stems from the promise of digital scarcity. That’s their core unique offering. The implementation of difficulty adjustment periods, mining, block times, fee market etc all exist only because of the scarcity promise.

What promises should SAFE be making? To my thinking the key promise is Perpetual Data. That’s unique to SAFE. Nothing else offers that. So the economy should be designed to give confidence to that feature. This matters because a fixed growth rate of resources is probably a stronger promise for the goal of perpetual data than a variable growth rate. I think fixed growth rate probably gives sub-optimal growth, but it does increase confidence in the promise.

Digital scarcity is another promise being made by SAFE. Is there a potential conflict between these two promises? How can we address that? Who decides?

On the topic of PAC, the promises become … weaker? stronger? It’s a really hard question to answer.

I don’t use storj or IPFS because the promise of data retention is too weak. The growth of SAFE is going to be very strongly tied to the promises it chooses to make.

I think it’s a good idea for us (both sides of the debate) to establish

  • is fibonacci growth the right growth for SAFE?
  • would bidding evolve into fibonacci growth?
  • if bidding results in different growth why is that better or worse than fibonacci growth?

The simple argument I would start with is that data is growing exponentially, not at a Fibonacci rate. So why use Fibonacci growth for the network?

Just testing the waters here, should people decide the growth rate or the network? Maybe another way to ask the same question is what’s more important, cheap abundant storage or a predictable growth rate?

What are the edge cases? Genuine question.

I feel a dystopia meme is needed here…

I don’t think having the network in control is necessarily better. If the world wants to migrate to SAFE asap the network should not be able to say ‘wait a sec’.

A fixed algorithm is necessarily exclusive rather than inclusive. I lean toward inclusive every time. Yeah we’ll have to include the malicious people but I accept that (kinda the point of SAFE isn’t it).

Which framework is more beneficial to the end users? A fixed algorithm or bidding? Really tough question I know, because it’s about security as well as growth, so maybe we should also explore how fast can the network grow before it becomes unsecure growth? Is slow growth more secure than fast growth? Is growth correlated to security at all? Why is fixed growth desired? This is a big zoom-out on the topic but I think it’s needed. Maybe I’ll expand on this later.

I don’t want to benefit the network, I want to benefit users. They feed into each other but in the end I have confidence that users are always in a better position to address their problems than the network is. Why do users start using the network in the first place? As a way to address their problems. The network is for the users, not the other way around.


Hopefully this is a coherent response but I’ll have a deeper think about it and come back to you with some more strongly distilled ideas :slight_smile:

7 Likes

Just a comment on this.
Ultimately we would still be in control because we update the s/w and can change the algo through updates.

Actually the network needs to be able to say “wait a sec” because too fast an influx could make the network unstable or attacked easier. I understand you were meaning something slightly different but yes the network needs a measure of control over rates of increase etc.

7 Likes

Growth curves are particularly well understood in biology population stats … and probably a lot of related fields. I wonder if it is possible for the network to use some sort of curve-fitting algorithm to determine approximately where on a standard growth curve the network is, and then use that information as part of this farming reward determination mechanism?

As opposed to just using a fixed curve assumption.

E.g. taking the data up to “now” and then fitting it to a standard growth curve.
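
As a rough sketch of that idea (the logistic model, the grid-search ranges, and the sample data below are all assumptions, not anything the network currently does), the fit could be as simple as a coarse least-squares search:

    // Sketch: fit node-count history to a logistic curve n(t) = k / (1 + exp(-r(t - t0)))
    // by coarse grid search, then read off where on the curve the network sits.
    fn fit_logistic(samples: &[(f64, f64)]) -> (f64, f64, f64) {
        let model = |k: f64, r: f64, t0: f64, t: f64| k / (1.0 + (-r * (t - t0)).exp());
        let (mut best, mut best_err) = ((0.0, 0.0, 0.0), f64::INFINITY);

        let max_n = samples.iter().map(|&(_, n)| n).fold(0.0, f64::max);
        let max_t = samples.iter().map(|&(t, _)| t).fold(0.0, f64::max);

        for ki in 1..=50 {
            let k = max_n * (1.0 + ki as f64 / 10.0); // capacity guesses above current size
            for ri in 1..=50 {
                let r = ri as f64 * 0.01;             // growth-rate guesses
                for ti in 0..=50 {
                    let t0 = max_t * ti as f64 / 25.0; // inflection-point guesses
                    let err: f64 = samples.iter()
                        .map(|&(t, n)| (model(k, r, t0, t) - n).powi(2))
                        .sum();
                    if err < best_err {
                        best_err = err;
                        best = (k, r, t0);
                    }
                }
            }
        }
        best
    }

    fn main() {
        // Hypothetical node-count observations indexed by "network time".
        let history = [(0.0, 100.0), (1.0, 180.0), (2.0, 320.0), (3.0, 560.0), (4.0, 900.0)];
        let (k, r, t0) = fit_logistic(&history);
        println!("estimated capacity {k:.0}, rate {r:.2}, inflection at t = {t0:.1}");
        println!("current position on curve: {:.0}% of capacity", 100.0 * history[4].1 / k);
    }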

5 Likes

That information is being piped through the chosen economic model. So if you want the network to know the details of exactly why a node leaves, then you need to have a detailed pricing structure for each characteristic (storage, bandwidth, compute, etc.)

Two types of information are available to the network, explicit and implicit. The explicit is that which is directly measurable, given a metric, or dictated in code. These are easy to work into a pricing structure. The second are implicit relationships or metadata inferred from the explicitly measured information. These are harder to quantify. I think it’s ok to say that if it can’t be numerically measured, then it doesn’t exist as far as the network is concerned. At least for starters…

Your analogy to Bitcoin was spot on here. The digital scarcity, the halvening, these are driving functions where Bitcoin has set the stage and created the environment within which market participants interact.

This is exactly what I was getting at, but you said it better. Picking a target growth rate is sub optimal, but it is transparent and easy for the layman to understand, like “the halvening”.

Weaker.

If you recall I said “pick your poison”. Fibonacci seems reasonable, but I’m not married to it. Continuous optimization for max growth is nice, but it needs to be properly constrained. Harder to explain and not as transparent for a promise. It could be, with the right marketing. The problem is that it is completely unpredictable. Setting a target growth rate that the control algorithms continuously push and pull towards is more conservative and builds trust.

Again, pick your poison. (Linear, hyperbolic, geometric, Fibonacci, exponential etc.) An exponential growth rate has interesting properties as well, but could be a hard master to satisfy. I suspect (pure conjecture) the unconstrained optimization scenario I mentioned would probably end up giving exponential or double exponential growth… at least over short periods. The point of the argument is not what the target growth curve is, but that it might be good to design the economic model around one.

An analogy is asking the question, “Should we decide what speed to set the cruise control to when driving from point A to B down the highway?” If your answer is yes, then you have a very good idea of how long it will take to reach your destination. My hypothesis is that a predictable growth rate will ensure cheap, abundant storage.

  1. collusion to increase cost of storage above optimal.
  2. farmer or client stupidity/inability to consider all of the network’s responsibilities and determine a GET or PUT pricing strategy that maintains the network.

The first might be overblown in this topic, the second not so much.

I disagree. The network has a responsibility to the data it contains first, and the whims of the world second. The network needs to be able to say ‘wait a sec’ if that is what it needs to do in order to keep all the data safe and secure.

If by users you mean clients, then a ‘fixed’ algorithm will offer the lowest PUT price.

That’s why you specify a target growth rate from the beginning to constrain the optimization.

No. I think you need a middle ground that is “more than fast enough”.

Yes, absolutely. Grow, fast enough, or die.

Promises.

Not really. It is a symbiotic relationship. Or, more accurately, a cybernetic symbiosis. Users and farmers are at different ends of the supply chain and the network is the middle man to end all middle men. The farmers feed the network with a flow of resources, the devs feed it capabilities, the producers feed it content. The network feeds those resources, capabilities, and content to the clients. Safecoin flows in reverse. The network is the market maker and taker and the only entity that observes both sides of all transactions, in addition to all other network conditions. For these reasons it needs to be the ultimate authority on PUT and GET prices to ensure its own survival in the market.

9 Likes

Sadly, I’m a bit busy these days and I don’t have time to read everything so I’m not sure if this was addressed yet, but my impression is that “bidding” as a category is a viable method when one side is buying something from another side.

In our case however, we’d have to trust a possibly incomparably stronger side with deciding how much “charity” to hand out, not even in exchange for something but as a nominal “thank you” for a GET that’s already fulfilled.

4 Likes

I’m a bit busy as well, have a few things to respond to in this topic :slight_smile:

But, I wanted to just say shortly, that “bidding” is not a term I consider 100% accurate for what we’re doing here. This is a new type of interaction, in a new type of environment. For that reason, I don’t think you can say it is more or less viable based on what it has been used for previously.
When repurposing something, or inventing, you just find the way that it could work in the new setting, and it becomes a new thing.

I would call this phenomenon more of an “estimation”. What is it that is being estimated? We are estimating what we believe to be everyone else’s belief about what everyone else believes. It is very similar to a Keynesian beauty contest.

And so, really, this is more of a contest, than an auction, and as you might well know, there are no limits to what games we can create.
I feel that kind of mindset is more powerful when we try to find new ways in a new system and concept.

4 Likes

With bitcoin there’s a property of it being ‘only money’. So people who buy and sell on exchanges and never touch the blockchain are still ‘doing bitcoin’ (for their purposes). Bitcoin would be turning away unbelievable amounts of people if it wasn’t for the possibility of offchain activity.

But SAFE is not ‘only money’, it’s data, and we can’t really expect that to move offchain like with bitcoin. That’s the whole point of the network, to put everything onchain, and to really suck it all in, not leave any reason to stay on clearnet.

If growth is predefined (even flexibly predefined) there’s a chance SAFE will not become a storage layer but just a coordination layer. It will be too time consuming or expensive to get data on the network so people will use it mainly for coordinating direct data transfer between each other.

Like bitcoin hash rate? There’s no problem there, so why for us?

This is the bitcoin mining growth curve (source):

I think this curve is a) incredibly difficult to predict if you’re in 2009, both the shape and the magnitude and b) indicative of possible growth in SAFE (ie uploads, downloads, storage, bandwidth).

Right… we can’t know or agree on the growth beforehand, so let’s design around predetermined growth. Sounds a bit paradoxical.

My hypothesis is that a floating growth rate will ensure cheap abundant storage. We are at an impasse…


A controversial way of framing the network-as-an-actor is, what if MaidSafe prefarmed all 2^32 coins and handed them out in some specific way to network participants. Why is replacing ‘MaidSafe’ with ‘The Network’ a better result?

8 Likes

Predictable may be a poor choice of wording, but it is somewhat accurate. A better term is “target” growth rate. Much like how bitcoin determines the difficulty, there is flexibility in looking at current conditions over a certain period (e.g. a number of PUTs to represent a duration of “network time”) and adjusting the target growth over the next period based on these or longer-term observations. In the next period (which could span weeks, months, or many years) you use these targets to drive the control algorithms that adjust prices/rewards. The growth becomes predictable to the extent these controls are effective. I agree with you that it would be extremely difficult, if not impossible, to predict that curve in 2009. However, this unpredictability is a feature of the BTC mining algorithm, and so the same unpredictability is not necessarily SAFE’s destiny.

There is a big difference between SAFE and Bitcoin with regard to mining/farming control algorithms. Bitcoin is analogous to a rudimentary “open loop” control system in that it adjusts the mining difficulty according to the hashrate to maintain a predictable/target rate of coin discovery, but ignores all other factors. SAFE could be designed to operate the same way with a predictable rate of coin transfer to the farmers. That option is pretty boring and low performing. With SAFE we have the opportunity to form a far more powerful “closed loop” self-exciting/self-inhibiting control system. There are a lot more levers to pull with regard to PtF, PtD, PtP, PtC, GET and PUT rates. And these offer serious potential for maximizing growth in a way BTC or any other project never could. When designing these systems, decisions need to be made as to what the objectives are, and what the constraints are. Otherwise you end up needing 10,000 monkeys with typewriters and a lot of time on your hands.
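
As a toy illustration of the closed-loop idea (all names and numbers here are hypothetical; a real controller would use more inputs and proper tuning), each control period the network could compare measured growth against its target and nudge the reward rate by the error:

    // Sketch: a proportional controller adjusting the farming reward toward a growth target.
    struct GrowthController {
        target_growth: f64, // desired relative growth per period, e.g. 0.02 = 2%
        gain: f64,          // how aggressively the reward responds to the error
        reward_rate: f64,   // current reward multiplier offered to farmers
    }

    impl GrowthController {
        fn update(&mut self, nodes_prev: f64, nodes_now: f64) -> f64 {
            let measured_growth = (nodes_now - nodes_prev) / nodes_prev;
            let error = self.target_growth - measured_growth;
            // Growth below target -> raise rewards; above target -> lower them.
            self.reward_rate = (self.reward_rate * (1.0 + self.gain * error)).max(0.0);
            self.reward_rate
        }
    }

    fn main() {
        let mut ctrl = GrowthController { target_growth: 0.02, gain: 5.0, reward_rate: 1.0 };
        let node_counts = [1000.0, 1005.0, 1030.0, 1035.0, 1080.0];
        for w in node_counts.windows(2) {
            println!("reward rate -> {:.3}", ctrl.update(w[0], w[1]));
        }
    }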

BTC was given a prime directive of predictable coin release rate far into the future. All other BTC network properties such as fiat price, hash rate, difficulty etc. were either intentionally designed to enforce this predictability or emerge as a result of it. In my opinion, maximum network growth of SAFE is the objective we need to be looking at, with control algorithms that adjust reward or cost rates accordingly. In this scenario SAFE becomes an intelligent agent that is capable of self-regulation.

Below is a modified version of the image you posted above. I used your description of BTC hashrate as an example growth metric. Rather than wall-clock time, the x-axis is BTC transaction count as a proxy for “network time”. An exponential curve (green) and a Fibonacci curve (orange) have been included as example target growth rates.
[Image: BTC hashrate (blue) against transaction count, with exponential (green) and Fibonacci (orange) target growth curves]

The BTC growth curve in blue shows a network controller having difficulty maintaining its growth target. Under this scenario, a hypothetical control system for SAFE would be pushing the economics to follow the target curve. Below 3e8 transactions, the network would have benefited greatly had the faulty controller offered steady stimulus. From 3e8 to 3.5e8 transactions things are getting too hot and the network controller should have been limiting growth to build up its reserves. From about 3.5e8 transactions onward it should be pedal to the metal since growth is faltering.

In the interest of time this is a rather simple example. As @TylerAbeoJordan intuited above, it’s far better to fit targets and adjust controls incrementally over shorter periods of “network time” to improve adaptability. The chosen duration of the period can also be adaptive and determined by the network. “Optimal Control” is a well studied field in academia.

2 Likes

Or could an Elliott wave prediction scheme be used?
Bitcoin’s looks like this (full-screen picture):
https://pbs.twimg.com/media/DkfND3OW4AIUrHy?format=jpg&name=large

Medium topic:
https://medium.com/@Magnr/can-elliott-waves-really-predict-the-price-of-bitcoin-970ca430c7ff

1 Like

At the risk of sounding arrogant while simply stating the obvious: in this domain, if we include the need for prediction or forecast, even if in a loose sense, we are doomed to failure.

As @mav already noted, bitcoin’s adoption curve was completely unforeseeable. Nobody can predict if its price will double next week and nobody can reliably ascertain how much more mining that would attract and how quickly. It’s plain impossible because even if we can predict 99.9% of the price moves, the 0.1% biggest ones (that we can’t predict) will be more consequential than the rest.

So, we may as well not waste time on something that’s impossible and instead go into a direction that can at least theoretically work: reacting to changes in supply and demand in a way that would constrain the network to stay within a healthy range of parameters. I already mentioned something like this here:

Let’s also divine Saturn’s influence for good measure :face_with_hand_over_mouth:

Have you seen proof (a record of trades) from anybody that they made money using that method, reliably, time and time again, year in and year out? If not, you have no reason to believe there’s any merit to Elliott’s idea. Basically, he was just another guy who thought he could find more information in the signal than was really there.

5 Likes

This is exactly what an optimal control algorithm (as alluded to in my comments above) does. The healthy range of parameters you describe are constraints. I never mentioned anything about price prediction. Growth maximization consists of not one but a set of objectives. Optimal control consists of maximizing objectives or minimizing the error between the current state of the system and your target state while staying within the limits posed by one’s constraints. Above I tried to describe things in an easy to digest manner. Looks like I’ve failed miserably.

3 Likes

I keep seeing this mention of health parameters (from many places, myself included, so not directed at you specifically, jpell) but I don’t see any attempt to put actual numerical bounds on the parameters.

When is this health parameters algorithm idea going to be taken seriously enough to produce some numbers?

Even the most basic parameter, spare space, has not been addressed in any tangible way. Should it be 2x stored space? 8x? 20x? 0.1x? Why? What number?

Should we look at frequency and magnitude of outages?
Should we look at expected vault size and bandwidth availability?
Should we look into energy consumption and waste?
What should we be investigating to put meaningful numbers on healthy vs unhealthy spare storage?

Even if you don’t ‘know’ please suggest a number and reason and confidence.

My number for spare storage is 10x (10GB spare for every 1GB stored). I’m about 20% confident that’s suitable. I feel it might lead to more waste than necessary. But I think it must be fairly high so there’s confidence that redundant copies will be retained perpetually even in extreme events such as 1/3 of the entire network going offline. Please tell me why I’m wrong.

5 Likes

I suggest moving this discussion to a new topic, perhaps a “SAFE Network Health Metrics” thread. That way we don’t take the PAC thread off-topic any further. The discussion in “Safecoin Revised” could also transition there. I could create the new thread now, but I think it is more fitting if you do considering the great google doc you put together on the subject.

2 Likes

I conveniently forgot about this thread and wrote about a Great New Idea in another that, now that I looked, is really just a variation on this one. Here it is:

  1. Sections keep (or select upon receiving a request?) a pool of M vaults, more than the necessary N to store a chunk, and get their bids.

  2. Requests to store a chunk will specify P, the largest acceptable price.

  3. The section assigns the chunk to the N vaults with the lowest bids if their sum is not larger than P.

BAM, FREE MARKET!
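
A minimal sketch of the selection rule in steps 1-3, with made-up types and no claims about how sections would actually implement it:

    type VaultId = u32; // placeholder for a real vault identifier

    // Out of the section's pool of M candidate vaults, take the N cheapest bids and
    // accept the store only if their sum does not exceed the client's maximum price P.
    fn select_vaults(mut bids: Vec<(VaultId, u64)>, n: usize, p: u64) -> Option<Vec<VaultId>> {
        bids.sort_by_key(|&(_, bid)| bid); // cheapest first
        if bids.len() < n {
            return None; // not enough candidates
        }
        let cheapest = &bids[..n];
        let total: u64 = cheapest.iter().map(|&(_, bid)| bid).sum();
        if total <= p {
            Some(cheapest.iter().map(|&(id, _)| id).collect())
        } else {
            None // bounce: P too low for the current bids
        }
    }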

Notes:

  • Overly greedy vaults would remain underutilized and poor.
  • Stupid vaults would fill up and make little money.
  • Vaults with less free space would increase their price. (Explained right below.)
  • Users unwilling to pay up would not get their data stored.

Potential extensions:

  • Sections would keep and split the change.
  • Vaults may be notified about the per-vault acceptable price received, as potentially useful information for placing their bids.
  • I’m not sure if sections work with the same set of vaults. If so, they can just keep track of the bids continuously.

Clever vaults, if they couldn’t grow, would set their prices so as to always keep some free storage around for times when the rest of the vaults are also getting full, lest they miss out when the price goes higher. It’s similar to airlines that charge more and more as they have fewer and fewer free seats left.

The strength of this model is that it isn’t sensitive to the exact method vaults use to set the price. The role sections play is also simple and minimal.

Provided vaults act in their self-interest, chunks will get stored at the lowest possible price and the network will be reasonably protected from getting full. As with any economic actor, the specifics may vary from vault to vault as each tries to game the system and come out on top, but the end result would still be a price set by the market somewhere around the “correct” value.

7 Likes

Aah, excellent! Will read more thoroughly later, but thanks for evolving the ideas!!

2 Likes

OK, let’s look at this.

  1. This breaks the current pattern of storing a chunk with the N vault addresses closest (XOR) to the chunk hash.
    While that is a simple and neat way to deterministically partition the data, the idea is interesting enough to try find some good compromise.
    If we want to keep this pattern, we need some indexing. For example, the same pattern could be used to store the map to the actual holders of the data. We add one indirection, with the resulting added latency. So, to explain: what is sent to the N closest vaults would not be the actual chunk, but a map of which vaults hold the chunk.
    That’s a pretty simple compromise, and the idea of indirection has been floated before in other contexts. The downsides are the extra hop and that it adds a little bit of metadata overhead to each stored chunk.

  2. This is interesting, and a little bit complicated, but not necessarily much. Most apps want to be able to expect a write to come through when requested. Having the request bounce because P is not high enough would not be acceptable in most cases.
    So, every app would need an additional layer implementing some strategy: the user could, for example, set a baseline P (or derive it as a percentage of some variable reference value) and a maximum P (or a maximum percentage above that reference), and the strategy would try the minimum possible P starting at the baseline, increasing P and retrying on each bounce until success or until the upper limit is reached (a rough sketch of this follows after this list).
    This stuff is plumbing that would be very tedious to reimplement for every app. Not sure it would be suitable for the core either. As with any such reusable code, there will probably be a library for every language that apps can simply use.

  3. This is cool.
    It would require N bids with a sum lower than or equal to P. The winning bidders get to hold the chunk, which is their ticket to rewards: every time they serve the chunk on a request, they will be rewarded.
    However… what exactly should they be rewarded? Let’s come back to that question, because it is very interesting, both in the context of this idea and otherwise.
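
Here is the rough sketch of the retry plumbing mentioned in point 2 above; the +25% step per attempt and the names are placeholders of mine, not part of the proposal:

    // Sketch: client-side retry strategy that raises the acceptable price on each bounce.
    fn put_with_retries<F>(mut try_put: F, baseline_p: u64, ceiling_p: u64) -> Result<(), ()>
    where
        F: FnMut(u64) -> bool, // true = stored, false = bounced because P was too low
    {
        let mut p = baseline_p;
        while p <= ceiling_p {
            if try_put(p) {
                return Ok(());
            }
            // Bounced: raise the acceptable price and try again.
            p = (p + p / 4).max(p + 1); // +25% per attempt, at least +1
        }
        Err(()) // upper limit reached without success
    }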


Yes, if they go too high with their bids, they won’t have any chunks and won’t receive rewards.

This is a great property, I don’t see it breaking any principles or values of the network. It drives vaults to place fair bids.

Do you mean if they underbid heavily, and thereby always win the chunks, until they are full?

They would have the expected number of GETs for the chunks they stored, regardless of how they got the chunks (many at once by underbidding, or some at a time by trying to place some higher bids). So, the question is about how the rewards are calculated. We were already at this question once above, let’s postpone it once again.

Mm, maybe. There may never come a time when the rest of the vaults are also getting full, so there’s an alternative cost to keeping that slot open. It’s a bit of a gamble.

For this reason, I think it’s maybe not so sure that vaults with less free space would actually increase their price. Maybe by filling it up, and receiving the rewards they would for those extra GETs, they earn more than waiting for the others to get full (which might not happen).

Yep. And this doesn’t actually change anything. There’s always a store cost, and how this cost is set shouldn’t be a concern for the end user. If they are not willing to pay the current store cost, there’s no storing.


I’m not sure I fully get how you mean that it isn’t sensitive to the exact method vaults use to set the price (the bid). I believe PAC isn’t either, so I guess the strength you mean is not compared to PAC, but then to what?

About the role sections play, we’ll look closer at that, I’m not so sure it is simpler than PAC in the end, but we’ll see.

I think it is a very good way of getting a price of the storage nearly aligned to fundamentals (and price refers to both network currency and fiat).

A wonderful aspect of this is that home vault operators are able to compete with commercial large-scale operators, since they don’t need to offset investments (to the same degree at least), and can go lower in the bidding. That in itself is a very powerful contributor to the core principles of the network: decentralization, and (if I may make a somewhat personal interpretation) power to the people :slight_smile:

Reasonably protected from getting full… I think what actually constitutes this protection, is that the rewards and store cost are closely aligned to fundamentals. If rewards are too low, not enough vaults will be available => network gets full. If store costs are too high, not enough data comes in, not enough data is then requested (since most of the requests are for relatively fresh data), i.e. not enough rewards paid out for the uptime and not enough vaults will be available => network gets full.

So, yes, the network should be reasonably protected from getting full, since it seems to me this system would ensure rewards and store cost being well aligned with fundamentals (fiat price of storage, bandwidth, electricity, etc. etc.).

This is however given that we have a good system for the rewards. So, now I get to the question I was postponing a couple of times earlier in this post.

Rewards

How do you see the payout of the rewards?

If we have a request to store a chunk, with a price P that the user is prepared to pay, and we have N vaults with a sum of bids that, for simplicity, equals P. Then what? At every GET request for this chunk, a reward is paid to these N vaults.

  • How much is paid?
  • How much is paid for the 1st GET, the 100th GET, the n-th GET?
  • Are the elders now supposed to track P for this chunk forever?

We don’t know how many times this chunk will be requested. Maybe it is part of the most popular piece of content in human history. Maybe the value of the network currency was much lower when the chunk was stored, than 20 years later. So if the sum of the winning bids was P, and the value of the coin has gone up 3000 times in 20 years… Are they still paid P, even though currently chunks are stored at ~P / 3000?

I don’t think it is a good solution that elders keep a map of every single chunk and their P.

In PAC, elders only need to keep track of the current bids of vaults in the section, which would be at most 120 entries. The number of chunks in a section can be very large. It doesn’t seem very efficient. We’ll have to get back to this question.

Then there is the question of long-term overall network balance. If the sum of all store costs is supposed to eventually equal the sum of all rewards (there’s no other way to eventually reach an equilibrium of issued coins), then the concept of a read:write ratio has to be used. How many GETs are there per PUT?

I have before mentioned one way to deal with this: always expect that the number of GETs a chunk will have during its lifetime corresponds to the current read:write ratio of the network.

If for every write, there are 100 reads, that means an average chunk can expect 100 GETs. Now, the distribution of popularity will make some chunks have 1 trillion GETs (or more…, just as an example) and some 0. For a vault with millions of chunks, this would however even out, so they would – probably with something like a normal distribution (especially since data is now not placed by hash, see @mav’s findings on chunk distribution) – also have close to a 100:1 read:write ratio if the network does.

So if I store a chunk today, and the read:write ratio is 10:1 (early network, everyone is uploading like mad), and a few years later it is 100:1… how is this reconciled?

What I have suggested, is that rewards should always be calculated based on current read:write ratio.

A section knows the current read:write ratio by [having elders] simply bookkeeping the PUT and GET requests and calculating the ratio. It’s a very simple operation, and the data size to store this information is practically nothing.

So, to go back to the example:

We have a stored chunk, and the winning bids of the N vaults summed to P. At some moment, the chunk is requested, and the elders at that time have a read:write ratio of 100:1 registered.

So, let’s say we did use the seemingly inefficient way of storing a map of chunk and P, then we would reward a total of R = P / 100 to the N vaults, and each vault would get R / N.
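
As a sketch, that arithmetic would look something like the following (names are placeholders and integer units are assumed for simplicity):

    // Sketch: at GET time, divide the chunk's stored price P by the section's current
    // read:write ratio and split the result equally among the N holders.
    fn reward_per_vault(p: u64, read_write_ratio: u64, n_holders: u64) -> u64 {
        let r = p / read_write_ratio; // total reward released for this GET
        r / n_holders                 // each holding vault's share
    }

    // With P = 100_000, a 100:1 ratio and N = 8 holders:
    // reward_per_vault(100_000, 100, 8) == 125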

But, if we 20 years later have 3000x increase in valuation, all the old chunks would generate absurdly large amounts of reward.

Maybe that isn’t a problem or a bad thing. After all, the old data is probably dust compared to all recent data, and probably never requested either. So maybe not a bad thing that the reward for holding it is actually increasing as the value of the currency increases. That is a good motivation for vaults to get in early, and make sure to hold the data forever.

And maybe it is a problem. What do you think?

Security

How can this system be gamed?

Summary

I actually think this is a very different idea than PAC.

We were just now talking about renaming PAC from perpetual auction currency to participant assessment currency, because it is not exactly an auction in the way many would think of one. Auction leads the thoughts to market economy and competition. While there is some sort of competition, the core of the idea is more of a contribution to the network, an assessment of the common anticipation of the value (a price discovery action), and those who do the best job of this assessment get rewarded.

This idea however, is a very typical auction, very typical market economy. I actually think the name perpetual auction currency better suits it, while the original PAC should be called participant assessment currency, at the risk of confusion… :slight_smile:

[The original] PAC doesn’t change anything in how data is handled. This idea is a bit more invasive in that way.

All in all, it provides a couple of unique properties that no proposals so far have. I see no obvious security flaws at a quick glance. It is presented as less complicated than PAC, which seems true for the security aspect, but considering the changes to the current storage logic, there seem to be some complications popping up there (maybe more, maybe less).

4 Likes

It doesn’t add the type of complexity you assume. We take M > N vaults the same way (closest XOR distance) and just don’t use all of them, only the lowest N bids. Sections already need to store which vaults store which chunks, because the network evolves and “closest XOR distance” is a moving target; they can’t just recalculate the location each time.

We already need to deal with this if the user doesn’t have sufficient funds. No additional complexity here.

I don’t want to water down the original argument, so let me mention this just as a comment. Specifying a maximum price has the additional benefit of removing the potential surprise for when a user would be charged a lot more than they expected. It’s important enough that we’d need to address it some way sooner or later.

It would have to be implemented in the Browser. Users could just specify a safecoin or dollar/euro/etc amount in the browser and be safe knowing their money will not run out due to some fluke, flash-crash, or other unforeseen event.

Again, we’re not talking about something that we don’t already have to deal with.

You have an important point here and later on about GETs. I’ve been mostly away for some time (and will probably continue to be, as I’m quite busy these days) so I lost some of my intuitions about the network and forgot payment is for GETs, not PUTs. :sweat_smile: It is an interesting aspect of the network, but it does make it really hard to build a free market around it…

It certainly invalidates most of my points and I’ll refrain from monkey-patching it (which means I can’t answer to many of your questions that assume rewards for GETs only). Instead, I’ll simply propose also rewarding storage directly. I know it’s a big one, so bring on the heat :smirk:

That’s the best thing about markets. We already assume everybody will be trying to “game” it (come out on top) and then we turn that around to be the very mechanism to find the correct price.

Compare that to a synthetic price-setting mechanism that isn’t based on a market: it’s harder to game something that already assumes aggressive assertion of self-interest than a rule that is followed only because otherwise we’ll get punished.


As I wrote above, I believe it would make a lot more sense economically if at least some of the payment went directly to the vaults when data is stored. I can’t imagine what else than a free market could reliably align the self-interest of vaults and the communal interest of the network, including features such as saving space by pushing the price higher in the expectation of larger profits in the future.

I don’t think much work has been done on something important related to payments for GETs. Realistically, most chunks will be stored once and requested a handful of times if ever, while a very few chunks will be requested billions of times and the vaults that end up storing them will win the jackpot. Small vaults will have a very low chance not only of winning but also of just faring well, as the sample mean (the average payment a vault of a certain size receives) of such a Paretian distribution approaches the true mean (the average payment of all vaults on the network) rather slowly, only when the number of samples (chunks stored) grows large.
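
As a quick toy simulation of that point (the Zipf-like popularity law, the chunk counts, and the vault sizes are arbitrary assumptions, just to show the spread), small vaults see much noisier per-chunk income than large ones:

    // Sketch: heavy-tailed chunk popularity makes small vaults' GET income very noisy.
    fn main() {
        // Simple deterministic LCG so the sketch needs no external crates.
        let mut state: u64 = 0x2545F4914F6CDD1D;
        let mut rand = move || {
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            (state >> 33) as f64 / (1u64 << 31) as f64
        };

        let total_chunks = 100_000usize;
        // Zipf-like popularity: chunk i expects roughly 1 / (i + 1) GETs (arbitrary scale).
        let popularity: Vec<f64> = (0..total_chunks).map(|i| 1.0 / (i as f64 + 1.0)).collect();

        for &vault_size in &[100usize, 10_000] {
            // Sample several vaults of this size and report mean expected GETs per chunk.
            let mut per_vault_means = Vec::new();
            for _ in 0..5 {
                let mut sum = 0.0;
                for _ in 0..vault_size {
                    let idx = (rand() * total_chunks as f64) as usize % total_chunks;
                    sum += popularity[idx];
                }
                per_vault_means.push(sum / vault_size as f64);
            }
            println!("vault size {:>6}: per-chunk means {:?}", vault_size, per_vault_means);
        }
    }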

Effectively, small vaults can do better by dedicating much or all of their storage to just caching popular chunks. This suggests maybe we should pay first for PUTs to encourage the storing of new data and then for GETs as well to encourage caching popular data. It would also introduce not only the possibility to build a market for new data but also flexibility for users to make the most out of their vaults. I think I’ll start a thread about this.

4 Likes