Perpetual Auction Currency

Just a comment on this.
Ultimately we would still be in control because we update the s/w and can change the algo through updates.

Actually, the network needs to be able to say “wait a sec” because too fast an influx could make the network unstable or easier to attack. I understand you were meaning something slightly different, but yes, the network needs a measure of control over rates of increase etc.

7 Likes

Growth curves are particularly well understood in biology population stats … and probably a lot of related fields. I wonder if it is possible for the network to use some sort of curve-fitting algorithm to determine approximately where on a standard growth curve the network is, and then use that information as part of this farming reward determination mechanism?

As opposed to just using a fixed curve assumption.

E.g. taking the data up to “now” and then fitting it to a standard growth curve.
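
To make the idea concrete, here is a minimal sketch of what such curve fitting could look like, assuming the network (or an observer) can tally a growth metric like cumulative PUTs over “network time”; the logistic form, the data, and scipy are just illustrative choices, not a proposal for the actual implementation:

```python
# Sketch: fit an observed growth metric to a logistic (S-shaped) curve and
# estimate where on the curve the network currently sits. The metric used
# (cumulative PUTs) and the logistic form are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, capacity, rate, midpoint):
    return capacity / (1.0 + np.exp(-rate * (t - midpoint)))

# Hypothetical observations: "network time" ticks vs. cumulative PUTs so far.
t_obs = np.arange(9, dtype=float)
puts_obs = np.array([10, 18, 35, 70, 130, 220, 330, 420, 480], dtype=float)

(capacity, rate, midpoint), _ = curve_fit(logistic, t_obs, puts_obs,
                                          p0=[600.0, 1.0, 5.0], maxfev=10_000)
position = logistic(t_obs[-1], capacity, rate, midpoint) / capacity
print(f"estimated carrying capacity: {capacity:.0f} PUTs, "
      f"position on the S-curve: {position:.0%}")
```

The fitted position could then feed into the reward mechanism instead of a fixed-curve assumption.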

5 Likes

That information is being piped through the chosen economic model. So if you want the network to know the details of exactly why a node leaves, then you need to have a detailed pricing structure for each characteristic (storage, bandwidth, compute, etc.)

Two types of information are available to the network, explicit and implicit. The explicit is that which is directly measurable, given a metric, or dictated in code. These are easy to work into a pricing structure. The second is implicit relationships or metadata inferred from the explicitly measured information. These are harder to quantify. I think it’s ok to say that if it can’t be numerically measured, then it doesn’t exist as far as the network is concerned. At least for starters…

Your analogy to Bitcoin was spot on here. The digital scarcity, the halvening, these are driving functions where Bitcoin has set the stage and created the environment within which market participants interact.

This is exactly what I was getting at, but you said it better. Picking a target growth rate is suboptimal, but it is transparent and easy for the layman to understand, like “the halvening”.

Weaker.

If you recall I said “pick your poison”. Fibonacci seems reasonable, but I’m not married to it. Continuous optimization for max growth is nice, but it needs to be properly constrained. It is harder to explain and not as transparent for a promise. It could be, with the right marketing. The problem is that it is completely unpredictable. Setting a target growth rate that the control algorithms continuously push and pull towards is more conservative and builds trust.

Again, pick your poison. (Linear, hyperbolic, geometric, Fibonacci, exponential etc.) An exponential growth rate has interesting properties as well, but could be a hard master to satisfy. I suspect (pure conjecture) the unconstrained optimization scenario I mentioned would probably end up giving exponential or double exponential growth… at least over short periods. The point of the argument is not what the target growth curve is, but that it might be good to design the economic model around one.

An analogy is asking the question, “Should we decide what speed to set the cruise control to when driving from point A to point B down the highway?” If your answer is yes, then you have a very good idea how long it will take to reach your destination. My hypothesis is that a predictable growth rate will ensure cheap abundant storage.

  1. collusion to increase cost of storage above optimal.
  2. farmer or client stupidity/inability to consider all of the network’s responsibilities and determine a GET or PUT pricing strategy that maintains the network.

The first might be overblown in this topic, the second not so much.

I disagree. The network has a responsibility to the data it contains first, and the whims of the world second. The network needs to be able to say ‘wait a sec’ if that is what it needs to do in order to keep all the data safe and secure.

If by users you mean clients, then a ‘fixed’ algorithm will offer the lowest PUT price.

That’s why you specify a target growth rate from the beginning to constrain the optimization.

No. I think you need a middle ground that is “more than fast enough”.

Yes, absolutely. Grow, fast enough, or die.

Promises.

Not really. It is a symbiotic relationship. Or more accurately described as a cybernetic symbiosis. Users and farmers are at different ends of the supply chain and the network is the middle man to end all middle men. The farmers feed the network with a flow of resources, the devs feed it capabilities, the producers feed it content. The network feeds those resources, capabilities, and content to the clients. Safecoin flows in reverse. The network is the market maker and taker and the only entity that observes both sides of all transactions, in addition to all other network conditions. For these reasons it needs to be the ultimate authority on PUT and GET prices to ensure its own survival in the market.

9 Likes

Sadly, I’m a bit busy these days and I don’t have time to read everything so I’m not sure if this was addressed yet, but my impression is that “bidding” as a category is a viable method when one side is buying something from another side.

In our case however, we’d have to trust a possibly incomparably stronger side with deciding how much “charity” to hand out, not even in exchange for something but as a nominal “thank you” for a GET that’s already fulfilled.

4 Likes

I’m a bit busy as well, have a few things to respond to in this topic :smiling_face:

But, I wanted to just say shortly, that “bidding” is not a term I consider 100% accurate for what we’re doing here. This is a new type of interaction, in a new type of environment. For that reason, I don’t think you can say it is more or less viable based on what it has been used for previously.
When repurposing something, or inventing, you just find the way that it could work in the new setting, and it becomes a new thing.

I would call this phenomenon more of an “estimation”. What is it that is being estimated? We are estimating what we believe to be everyone else’s belief about what everyone else believes. It is very similar to a Keynesian beauty contest.

And so, really, this is more of a contest, than an auction, and as you might well know, there are no limits to what games we can create.
I feel that kind of mindset is more powerful when we try to find new ways in a new system and concept.

4 Likes

With bitcoin there’s a property of it being ‘only money’. So people who buy and sell on exchanges and never touch the blockchain are still ‘doing bitcoin’ (for their purposes). Bitcoin would be turning away unbelievable amounts of people if it wasn’t for the possibility of offchain activity.

But SAFE is not ‘only money’, it’s data, and we can’t really expect that to move offchain like with bitcoin. That’s the whole point of the network, to put everything onchain, and to really suck it all in, not leave any reason to stay on clearnet.

If growth is predefined (even flexibly predefined) there’s a chance SAFE will not become a storage layer but just a coordination layer. It will be too time consuming or expensive to get data on the network so people will use it mainly for coordinating direct data transfer between each other.

Like bitcoin hash rate? There’s no problem there, so why for us?

This is the bitcoin mining growth curve (source):

I think this curve is a) incredibly difficult to predict if you’re in 2009, both the shape and the magnitude and b) indicative of possible growth in SAFE (ie uploads, downloads, storage, bandwidth).

Right… we can’t know or agree on the growth beforehand, so let’s design around predetermined growth. Sounds a bit paradoxical.

My hypothesis is that a floating growth rate will ensure cheap abundant storage. We are at an impasse…


A controversial way of framing the network-as-an-actor is, what if MaidSafe prefarmed all 2^32 coins and handed them out in some specific way to network participants. Why is replacing ‘MaidSafe’ with ‘The Network’ a better result?

8 Likes

Predictable may be a poor choice of wording, but it is somewhat accurate. A better term is “target” growth rate. Much like how bitcoin determines the difficulty, there is flexibility in looking at current conditions over a certain period (e.g. a number of PUTs to represent a duration of “network time”) and adjusting the target growth over the next period based on these or longer-term observations. In the next period (which could span weeks, months, or many years) you use these targets to drive the control algorithms that adjust prices/rewards. The growth becomes predictable to the extent these controls are effective. I agree with you that it would be extremely difficult, if not impossible, to predict that curve in 2009. However, this unpredictability is a feature of the BTC mining algorithm, and so the same unpredictability is not necessarily SAFE’s destiny.

There is a big difference between SAFE and Bitcoin with regard to mining/farming control algorithms. Bitcoin is analogous to a rudimentary “open loop” control system in that it adjusts the mining difficulty according to the hashrate to maintain a predictable/target rate of coin discovery, but ignores all other factors. SAFE could be designed to operate the same way with a predictable rate of coin transfer to the farmers. That option is pretty boring and low performing. With SAFE we have the opportunity to form a far more powerful “closed loop” self-exciting/self-inhibiting control system. There are a lot more levers to pull with regard to PtF, PtD, PtP, PtC, GET and PUT rates. And these offer serious potential for maximizing growth in a way BTC or any other project never could. When designing these systems, decisions need to be made as to what the objectives are, and what the constraints are. Otherwise you end up needing 10,000 monkeys with typewriters and a lot of time on your hands.

BTC was given a prime directive of predictable coin release rate far into the future. All other BTC network properties such as fiat price, hash rate, difficulty etc. were either intentionally designed to enforce this predictability or emerge as a result of it. In my opinion, maximum network growth of SAFE is the objective we need to be looking at, with control algorithms that adjust reward or cost rates accordingly. In this scenario SAFE becomes an intelligent agent that is capable of self-regulation.
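
To illustrate the kind of closed-loop adjustment being argued for, here is a minimal sketch of a single proportional control step, assuming one growth metric and one reward-rate lever; a real controller would of course juggle PtF, PtD, PtP, PtC, GET and PUT rates together, and all names, gains, and bounds below are hypothetical:

```python
# Sketch: a proportional controller nudging the farming reward rate so that
# observed growth tracks a target growth rate, clamped to a healthy range.
def adjust_reward_rate(reward_rate, observed_growth, target_growth,
                       gain=0.5, min_rate=0.0, max_rate=1.0):
    error = target_growth - observed_growth        # positive => growing too slowly
    new_rate = reward_rate * (1.0 + gain * error)  # stimulate or dampen proportionally
    return max(min_rate, min(max_rate, new_rate))  # stay within the constraints

# Example: growth is 2% per period against a 5% target, so the reward rises a little.
print(adjust_reward_rate(reward_rate=0.30, observed_growth=0.02, target_growth=0.05))
```

Run every “network time” period, such a loop continuously pushes and pulls prices/rewards toward the chosen target.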

Below is a modified version of the image you posted above. I used your description of BTC hashrate as an example growth metric. Rather than wall-clock time, the x-axis is BTC transaction count as a proxy for “network time”. An exponential curve (green) and a Fibonacci curve (orange) have been included as example target growth rates.
[image: BTC hashrate vs. transaction count, with exponential (green) and Fibonacci (orange) target growth curves]

The BTC growth curve in blue shows a network controller having difficulty maintaining its growth target. Under this scenario, a hypothetical control system for SAFE would be pushing the economics to follow the target curve. Below 3e8 transactions, the network would have benefited greatly had the faulty controller offered steady stimulus. From 3e8 to 3.5e8 transactions things are getting too hot and the network controller should have been limiting growth to build up its reserves. From about 3.5e8 transactions onward it should be pedal to the metal since growth is faltering.

In the interest of time this is a rather simple example. As @TylerAbeoJordan intuited above, it’s far better to fit targets and adjust controls incrementally over shorter periods of “network time” to improve adaptability. The chosen duration of the period can also be adaptive and determined by the network. “Optimal Control” is a well studied field in academia.

2 Likes

Or could an Elliott wave prediction scheme be used?
Bitcoin’s looks like this (full-screen picture):
https://pbs.twimg.com/media/DkfND3OW4AIUrHy?format=jpg&name=large

Medium topic:
https://medium.com/@Magnr/can-elliott-waves-really-predict-the-price-of-bitcoin-970ca430c7ff

1 Like

At the risk of sounding arrogant while simply stating the obvious: in this domain, if we include the need for prediction or forecast, even if in a loose sense, we are doomed to failure.

As @mav already noted, bitcoin’s adoption curve was completely unforeseeable. Nobody can predict if its price will double next week and nobody can reliably ascertain how much more mining that would attract and how quickly. It’s plain impossible because even if we can predict 99.9% of the price moves, the 0.1% biggest ones (that we can’t predict) will be more consequential than the rest.

So, we may as well not waste time on something that’s impossible and instead go in a direction that can at least theoretically work: reacting to changes in supply and demand in a way that would constrain the network to stay within a healthy range of parameters. I already mentioned something like this here:

Let’s also divine Saturn’s influence for good measure :face_with_hand_over_mouth:

Have you seen proof (a record of trades) from anybody that they made money using that method, reliably, time and time again, year in and year out? If not, you have no reason to believe there’s any merit to Elliott’s idea. Basically, he was just another guy who thought he could find more information in the signal than was really there.

5 Likes

This is exactly what an optimal control algorithm (as alluded to in my comments above) does. The healthy range of parameters you describe are constraints. I never mentioned anything about price prediction. Growth maximization consists of not one but a set of objectives. Optimal control consists of maximizing objectives, or minimizing the error between the current state of the system and your target state, while staying within the limits posed by one’s constraints. Above I tried to describe things in an easy-to-digest manner. Looks like I’ve failed miserably.

3 Likes

I keep seeing this mention of health parameters (from many places, myself included, so not directed at you specifically, jpell) but I don’t see any attempt to put actual numerical bounds on the parameters.

When is this health parameters algorithm idea going to be taken seriously enough to produce some numbers?

Even the most basic parameter, spare space, has not been addressed in any tangible way. Should it be 2x stored space? 8x? 20x? 0.1x? Why? What number?

Should we look at frequency and magnitude of outages?
Should we look at expected vault size and bandwidth availability?
Should we look into energy consumption and waste?
What should we be investigating to put meaningful numbers on healthy vs unhealthy spare storage?

Even if you don’t ‘know’, please suggest a number, a reason, and a confidence level.

My number for spare storage is 10x (10GB spare for every 1GB stored). I’m about 20% confident that’s suitable. I feel it might lead to more waste than necessary. But I think it must be fairly high so there’s confidence of redundant copies being retained perpetually even in extreme events such as 1/3 of the entire network going offline. Please tell me why I’m wrong.
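
For what it’s worth, here is a back-of-envelope sketch of one way to bound that number, under the simplifying assumptions that data is spread evenly and the only job of spare space is to absorb re-replication after a mass outage:

```python
# Sketch: how much spare space do surviving vaults need to re-create the copies
# lost in a mass outage? Pure arithmetic under simplifying assumptions (even
# data distribution, every lost copy re-replicated onto a survivor).
def required_spare_ratio(offline_fraction):
    """Spare space needed per unit of data a surviving vault already stores."""
    surviving = 1.0 - offline_fraction
    return offline_fraction / surviving

for f in (1 / 3, 1 / 2, 2 / 3):
    print(f"{f:.0%} offline -> each survivor needs ~{required_spare_ratio(f):.2f}x spare")
# 1/3 offline -> ~0.50x, 1/2 -> 1.00x, 2/3 -> 2.00x. On that naive view 10x is a
# large margin, buying headroom for repeated events, churn during the repair,
# and uneven chunk distribution; it doesn't by itself say whether 10x is too much.
```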

5 Likes

I suggest moving this discussion to a new topic, perhaps a “SAFE Network Health Metrics” thread. That way we don’t take the PAC thread off-topic any further. The discussion in “Safecoin Revised” could also transition there. I could create the new thread now, but I think it is more fitting if you do considering the great google doc you put together on the subject.

2 Likes

I conveniently forgot about this thread and wrote about a Great New Idea in another that, now that I looked, is really just a variation on this one. Here it is:

  1. Sections keep (or select upon receiving a request?) a pool of M vaults, more than the necessary N to store a chunk, and get their bids.

  2. Requests to store a chunk will specify the largest acceptable price, P.

  3. The section assigns the chunk to the N vaults with the lowest bids if their sum is not larger than P (a sketch of this rule follows just below).

BAM, FREE MARKET!
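
A minimal sketch of that assignment rule, ignoring section membership, consensus and payment details; the names are illustrative, not network API:

```python
# Sketch of the proposed rule: from a pool of M candidate vaults, take the N
# lowest bids; store the chunk only if their sum fits within the client's
# maximum acceptable price P, otherwise bounce the request.
def assign_chunk(bids, n, max_price):
    """bids: {vault_id: bid}. Returns the winning vault ids, or None if too expensive."""
    winners = sorted(bids, key=bids.get)[:n]        # the N cheapest of the M candidates
    if sum(bids[v] for v in winners) > max_price:   # user's P not high enough
        return None
    return winners

bids = {"vault_a": 3, "vault_b": 1, "vault_c": 5, "vault_d": 2, "vault_e": 4}
print(assign_chunk(bids, n=3, max_price=7))  # ['vault_b', 'vault_d', 'vault_a'] (sum 6)
print(assign_chunk(bids, n=3, max_price=5))  # None: even the cheapest three exceed P
```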

Notes:

  • Overly greedy vaults would remain underutilized and poor.
  • Stupid vaults would fill up and make little money.
  • Vaults with less free space would increase their price. (Explained right below.)
  • Users unwilling to pay up would not get their data stored.

Potential extensions:

  • Sections would keep and split the change.
  • Vaults may be notified about the per-vault acceptable price received, as potentially useful information for placing their bids.
  • I’m not sure if sections work with the same set of vaults. If so, they can just keep track of the bids continuously.

Clever vaults, if they couldn’t grow, would set their prices so as to always keep some free storage around for times when the rest of the vaults are also getting full, lest they miss out when the price goes higher. It’s similar to airlines that ask more and more as they have fewer and fewer free seats left.
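
As a purely hypothetical illustration of that airline-style behavior (a strategy an individual vault operator might choose, not anything prescribed by the proposal):

```python
# Sketch: a vault bidding strategy that raises its price as free space shrinks,
# like airline seat pricing. The base bid and the exponent are made-up values.
def airline_style_bid(base_bid, used_bytes, capacity_bytes, steepness=2.0):
    free_fraction = max(1e-6, 1.0 - used_bytes / capacity_bytes)
    return base_bid / free_fraction ** steepness    # climbs sharply as the vault fills

print(airline_style_bid(1.0, used_bytes=10, capacity_bytes=100))  # ~1.23 while mostly empty
print(airline_style_bid(1.0, used_bytes=90, capacity_bytes=100))  # ~100 when nearly full
```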

The strength of this model is that it isn’t sensitive to the exact method vaults use to set the price. The role sections play is also simple and minimal.

Provided vaults act in their self-interest, chunks will get stored at the lowest possible price and the network will be reasonably protected from getting full. As with any economic actor, the specifics may vary from vault to vault as each tries to game the system and come out on top, but the end result would still be a price set by the market somewhere around the “correct” value.

7 Likes

Aah, excellent! Will read more thoroughly later, but thanks for evolving the ideas!!

2 Likes

OK, let’s look at this.

  1. This breaks the current pattern of storing a chunk with the N vault addresses closest (XOR) to the chunk hash.
    While that is a simple and neat way to deterministically partition the data, the idea is interesting enough to try to find some good compromise.
    If we want to keep this pattern, we need some indexing. For example, the same pattern could be used to store the map to the actual holders of the data. We add one indirection, with the resulting added latency. So, to explain: what is sent to the N closest vaults would not be the actual chunk, but a map of which vaults hold the chunk.
    That’s a pretty simple compromise, and the idea of indirection has been floated before in other contexts. The downsides are the extra hop and that it adds a little bit of metadata overhead to each stored chunk.

  2. This is interesting, and a little bit complicated, but not necessarily much. Most apps want to be able to expect a write to come through when requested. Having the request bounce because P is not high enough would not be acceptable in most cases.
    So, every app would need an additional layer implementing some strategy; the user could for example set a baseline P (or derive it as a percentage of some variable reference value) and a max P (or a max percentage above some variable reference value), and the strategy would try the minimum possible P starting at the baseline, then on each bounce increase P and retry, until success or the upper limit has been reached (a minimal retry sketch follows after this list).
    So, this stuff is plumbing that would be very tedious to reimplement for every app. Not sure it would be suitable for the core either. As with any such reusable code, there will probably be some library for every language that apps can simply use.

  3. This is cool.
    It would require N bids with a sum lower than or equal to P. The winning bids get to hold the chunk, which is their ticket to make rewards, as every time they can serve the chunk on a request, they will be rewarded.
    However… what exactly should they be rewarded? Let’s come back to that question, because it is very interesting, both in the context of this idea and otherwise.
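
Here is a minimal sketch of the retry plumbing described in point 2, with hypothetical names (`try_store` stands in for whatever store call an app or library would expose) and a simple multiplicative step; a real library would pick the step strategy and the reference value more carefully:

```python
# Sketch of the client-side strategy from point 2: start at a baseline max
# price, and on each bounce raise the ceiling until success or a hard limit.
def store_with_retry(chunk, try_store, baseline_p, max_p, step=1.5):
    p = baseline_p
    while True:
        if try_store(chunk, max_price=p):   # section accepted: sum of winning bids <= p
            return p                        # the ceiling that finally succeeded
        if p >= max_p:
            return None                     # user's upper limit reached: give up
        p = min(max_p, p * step)            # raise the ceiling and retry
```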


Yes, if they go too high with their bids, they won’t have any chunks and won’t receive rewards.

This is a great property, I don’t see it breaking any principles or values of the network. It drives vaults to place fair bids.

Do you mean if they underbid heavily, and thereby always win the chunks, until they are full?

They would have the expected number of GETs for the chunks they stored, regardless of how they got the chunks (many at once by underbidding, or some at a time by trying to place some higher bids). So, the question is about how the rewards are calculated. We were already at this question once above, let’s postpone it once again.

Mm, maybe. There may never come a time when the rest of the vaults are also getting full, so there’s an alternative cost to keeping that slot open. It’s a bit of a gamble.

For this reason, I think it’s maybe not so sure that vaults with less free space would actually increase their price. Maybe by filling it up, and receiving the rewards they would for those extra GETs, they earn more than waiting for the others to get full (which might not happen).

Yep. And this doesn’t actually change anything. There’s always a store cost, and how this cost is set shouldn’t be a concern for the end user. If they are not willing to pay the current store cost, there’s no storing.


I’m not sure I fully get how you mean that it isn’t sensitive to the exact method vaults use to set the price (the bid). I believe PAC isn’t either, so I guess the strength you mean is not compared to PAC, but then to what?

About the role sections play, we’ll look closer at that; I’m not so sure it is simpler than PAC in the end, but we’ll see.

I think it is a very good way of getting a price of the storage nearly aligned to fundamentals (and price refers to both network currency and fiat).

A wonderful aspect of this is that home vault operators are able to compete with commercial large-scale operators, since they don’t need to offset investments (to the same degree at least), and can go lower in the bidding. That in itself is a very powerful contributor to the core principles of the network: decentralization, and (if I may make a somewhat personal interpretation) power to the people :slight_smile:

Reasonably protected from getting full… I think what actually constitutes this protection, is that the rewards and store cost are closely aligned to fundamentals. If rewards are too low, not enough vaults will be available => network gets full. If store costs are too high, not enough data comes in, not enough data is then requested (since most of the requests are for relatively fresh data), i.e. not enough rewards paid out for the uptime and not enough vaults will be available => network gets full.

So, yes, the network should be reasonably protected from getting full, since it seems to me this system would ensure rewards and store cost being well aligned with fundamentals (fiat price of storage, bandwidth, electricity, etc. etc.).

This is however given that we have a good system for the rewards. So, now I get to the question I was postponing a couple of times earlier in this post.

Rewards

How do you see the payout of the rewards?

If we have a request to store a chunk, with a price P that the user is prepared to pay, and we have N vaults with a sum of bids that, for simplicity, equals P. Then what? At every GET request for this chunk, a reward is paid to these N vaults.

  • How much is paid?
  • How much is paid for the 1st GET, the 100th GET, the n-th GET?
  • Are the elders now supposed to track P for this chunk forever?

We don’t know how many times this chunk will be requested. Maybe it is part of the most popular piece of content in human history. Maybe the value of the network currency was much lower when the chunk was stored, than 20 years later. So if the sum of the winning bids was P, and the value of the coin has gone up 3000 times in 20 years… Are they still paid P, even though currently chunks are stored at ~P / 3000?

I don’t think it is a good solution that elders keep a map of every single chunk and their P.

In PAC, elders only need to keep track of the current bids of vaults in the section, which would be at most 120 entries. The number of chunks in a section can be very large. It doesn’t seem very efficient. We’ll have to get back to this question.

Then there is the question of long-term overall network balance. If the sum of all store costs is supposed to eventually equal the sum of all rewards (there’s no other way to eventually reach an equilibrium of issued coins), then the concept of read:write ratio has to be used. How many GETs are there per PUT?

I have mentioned before one way to deal with this: always expect that the number of GETs a chunk will have during its lifetime corresponds to the current read:write ratio of the network.

If for every write there are 100 reads, that means an average chunk can expect 100 GETs. Now, the distribution of popularity will make some chunks have 1 trillion GETs (or more…, just as an example) and some 0. For a vault with millions of chunks, this would however even out, so they would – with something of a normal distribution probably (especially since data is now not placed by hash, see @mav’s findings in chunk distribution) – also have close to a 100:1 read:write ratio if the network does.

So if I store a chunk today, and the read:write ratio is 10:1 (early network, everyone is uploading like mad), and a few years later it is 100:1… how is this reconciled?

What I have suggested, is that rewards should always be calculated based on current read:write ratio.

A section knows the current read:write ratio by [having elders] simply bookkeeping the PUT and GET requests and calculating the ratio. It’s a very simple operation, and the data size to store this information is practically nothing.

So, to go back to the example:

We have a stored chunk, and the winning bids of the N vaults summed to P. At some moment, the chunk is requested, and the elders at that time have a read:write ratio of 100:1 registered.

So, let’s say we did use the seemingly inefficient way of storing a map of chunk and P, then we would reward a total of R = P / 100 to the N vaults, and each vault would get R / N.
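
A minimal sketch of that calculation, with the read:write ratio kept as two running counters per section (the counter names and the even split among holders are assumptions for illustration):

```python
# Sketch of the reward rule above: a GET on a chunk stored at total price P
# pays out R = P / (read:write ratio), split evenly among its N holders.
class SectionBooks:
    def __init__(self):
        self.puts = 0
        self.gets = 0

    def record_put(self): self.puts += 1
    def record_get(self): self.gets += 1

    def read_write_ratio(self):
        return self.gets / max(1, self.puts)

def reward_per_vault(chunk_price_p, n_holders, books):
    ratio = max(1.0, books.read_write_ratio())   # never pay out more than P per GET
    return chunk_price_p / ratio / n_holders

books = SectionBooks()
books.puts, books.gets = 1_000, 100_000          # i.e. a 100:1 read:write ratio
print(reward_per_vault(chunk_price_p=8.0, n_holders=8, books=books))  # 8 / 100 / 8 = 0.01
```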

But, if we 20 years later have 3000x increase in valuation, all the old chunks would generate absurdly large amounts of reward.

Maybe that isn’t a problem or a bad thing. After all, the old data is probably dust compared to all recent data, and probably never requested either. So maybe not a bad thing that the reward for holding it is actually increasing as the value of the currency increases. That is a good motivation for vaults to get in early, and make sure to hold the data forever.

And maybe it is a problem. What do you think?

Security

How can this system be gamed?

Summary

I actually think this is a very different idea than PAC.

We were talking just now of renaming PAC, from perpetual auction currency to participant assessment currency, because it is not exactly an auction as it would be thought of by many. Auction leads the thoughts to market economy and competition. While there is some sort of competition, the core of the idea is more of a contribution to the network, in assessment of the common anticipation of the value (a price discovery action), and those who do the best job of this assessment get rewarded.

This idea, however, is a very typical auction, very typical market economy. I actually think the name perpetual auction currency better suits it, while the original PAC should be called participant assessment currency, at the risk of confusion… :slight_smile:

[The original] PAC doesn’t change anything in how data is handled. This idea is a bit more invasive in that way.

All in all, it provides a couple of unique properties that no proposals so far have. I see no obvious security flaws at a quick glance. It is presented as less complicated than PAC, which seems true for the security aspect, but considering the invasion on current storage logic, there seem to be some complications popping up there (maybe more, maybe less).

4 Likes

It doesn’t add the type of complexity you assume. We take M > N vaults the same way (closest XOR distance) and just don’t use all of them, only the lowest N bids. Sections already need to store which vaults store which chunks, because the network evolves, so “closest XOR distance” is a moving target and they can’t just recalculate the location each time.

We already need to deal with this if the user doesn’t have sufficient funds. No additional complexity here.

I don’t want to water down the original argument, so let me mention this just as a comment. Specifying a maximum price has the additional benefit of removing the potential surprise for when a user would be charged a lot more than they expected. It’s important enough that we’d need to address it some way sooner or later.

It would have to be implemented in the Browser. Users could just specify a safecoin or dollar/euro/etc amount in the browser and be safe knowing their money will not run out due to some fluke, flash-crash, or other unforeseen event.

Again, we’re not talking about something that we don’t already have to deal with.

You have an important point here and later on about GETs. I’ve been mostly away for some time (and probably continue to be so as I’m quite busy these days) so I lost some of my intuitions about the network and forgot payment is for GETs not PUTs. :sweat_smile: It is an interesting aspect to the network but it does make it real hard to build a free market around it…

It certainly invalidates most of my points and I’ll refrain from monkey-patching it (which means I can’t answer many of your questions that assume rewards for GETs only). Instead, I’ll simply propose also rewarding storage directly. I know it’s a big one, so bring on the heat :smirk:

That’s the best thing about markets. We already assume everybody will be trying to “game” it (come out on top) and then we turn that around to be the very mechanism to find the correct price.

Compared to a synthetic price-setting mechanism that isn’t based on a market, because it’s harder to game something where we already assume aggressive assertion of self-interest vs. following a rule because otherwise we’ll get punished.


As I wrote above, I believe it would make a lot more sense economically if at least some of the payment went directly to the vaults when data is stored. I can’t imagine what else than a free market could reliably align the self-interest of vaults and the communal interest of the network, including features such as saving space by pushing the price higher in the expectation of larger profits in the future.

I don’t think much work has been done about something important that’s related to payments for GETs. Realistically, most chunks will be stored once and requested a handful of times if ever, while a very few chunks will be requested billions of times and the vaults that end up storing them will win the jackpot. Small vaults will have a very low chance not only of winning but also of just faring well, as the sample mean (the average payment a vault of a certain size receives) of such a Paretian distribution approaches the real mean (the average payment across all vaults on the network) rather slowly, only when the number of samples (chunks stored) grows big.
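
A quick simulation of that effect, under the assumption that per-chunk GET counts follow a heavy-tailed (Pareto) distribution; the shape parameter and vault sizes are arbitrary illustrations:

```python
# Sketch: heavy-tailed GET counts make small vaults' earnings much noisier
# (more luck-dependent) than large vaults'. Distribution and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def relative_spread(chunks_per_vault, n_vaults=1_000, shape=2.5):
    gets = rng.pareto(shape, size=(n_vaults, chunks_per_vault)).sum(axis=1)
    per_chunk = gets / chunks_per_vault          # average earnings per chunk stored
    return per_chunk.std() / per_chunk.mean()    # spread of outcomes between vaults

print("small vaults (100 chunks):   ", round(relative_spread(100), 3))
print("large vaults (10,000 chunks):", round(relative_spread(10_000), 3))
# The relative spread shrinks roughly with the square root of vault size, so a
# small vault's income is far more a matter of luck than a large vault's.
```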

Effectively, small vaults can do better by dedicating much or all of their storage to just caching popular chunks. This suggests maybe we should pay first for PUTs to encourage the storing of new data and then for GETs as well to encourage caching popular data. It would also introduce not only the possibility to build a market for new data but also flexibility for users to make the most out of their vaults. I think I’ll start a thread about this.

4 Likes

Well, it did as I had no idea :sweat_smile: Paying for PUTs would make so much more sense. I can imagine something like paying out some of the money to the vaults right away and burning the rest. That way, vaults could still farm coins as a reward for GETs, that is, for sticking around. It would also remain the mechanism for coins to be put into circulation.

2 Likes

I recommend the four pages on complex systems and how they fail (linked here by Arvind) to anyone interested in the area and in how to design the SAFE Network to be robust, including in discussions such as the Perpetual Auction one:

4 Likes

Following these threads another idea comes to mind (and I apologize in advance if this has been proposed and I missed it). We definitely need to reward both GETs and PUTs, otherwise someone could potentially open a vault then just delete once full and start over, i.e. we need to reward vaults adequately for staying. However, I’m not sure it will be a good idea to allow a total windfall for high-traffic blocks of data. What about having a decay on the reward for GETs? For example, the GET reward = MAX * factor^n, where n is the number of times the block has been accessed and factor is something like 0.99. This would require some thought, as e^(-n) or even 1/ln(n) may provide a better decay shape to manage the reward. I think something like this will be important, though, because we want to incentivize vaults to store data indefinitely even if accessed infrequently. On the other hand, some data may be accessed millions of times and that shouldn’t get a million times the reward of data accessed once, in my opinion. Also, maybe a time-based reward for data that is never accessed? I guess it depends on the data block size vs. the vault size. In the early days I could see a lot of PUTs and few GETs, and we want to make sure vaults stick around even if they aren’t lucky enough to have frequently accessed blocks in them.
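
A small sketch comparing those candidate decay shapes; MAX and the constants are placeholders, and e^(-n) is softened with a scale so it doesn’t vanish after a handful of GETs:

```python
# Sketch: compare the proposed GET-reward decay shapes as a chunk is accessed
# n times. MAX, the factor, and the scale are placeholder values.
import math

MAX = 1.0

def geometric(n, factor=0.99):  return MAX * factor ** n
def exponential(n, scale=1000): return MAX * math.exp(-n / scale)  # softened e^(-n)
def inverse_log(n):             return MAX / math.log(n) if n > 1 else MAX

for n in (1, 10, 100, 1_000, 10_000):
    print(f"n={n:>6}: geometric={geometric(n):.4f}  "
          f"exp={exponential(n):.4f}  1/ln(n)={inverse_log(n):.4f}")
# The geometric and exponential forms cap the total a single chunk can ever earn
# (the series converges), while 1/ln(n) decays far more slowly, so the choice
# sets how big the "jackpot" for a wildly popular chunk can get.
```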

2 Likes

Yes, because with pay on PUT only, you could just receive a bunch of data, then disconnect, dump the data and connect again.
I mean especially with this bidding system. You could under-bid slightly and fill up quickly.

@JohnM, no need to make it granular and complicated. It doesn’t need to be connected to the GET. If vaults can’t serve the GET they should be punished.
With that, all excess from the PUT (only pay receiving vaults a share proportional to the current read:write ratio on PUT) can be split among the others in the section, and that way, there is a reward for staying around as well.
As an example.
So, it doesn’t matter if you have a very popular or unpopular chunk. Receiving a chunk pays a small part of the PUT. Staying around lets you take part of all excess from others receiving chunks. Serving the chunks on GETs lets you stay around (since not serving is punished).
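
A minimal sketch of that split, with made-up names; the exact shares and the punishment for failing to serve GETs are out of scope here:

```python
# Sketch of the suggested split of a PUT payment: vaults receiving the chunk
# get a share scaled down by the current read:write ratio, and the excess is
# divided among the rest of the section as a reward for staying around.
def split_put_payment(put_price, n_receivers, n_others, read_write_ratio):
    to_receivers = put_price / max(1.0, read_write_ratio)   # small share per PUT
    per_receiver = to_receivers / n_receivers
    excess = put_price - to_receivers
    per_other = excess / n_others if n_others else 0.0
    return per_receiver, per_other

# Example: a PUT paid 100, 8 vaults receive the chunk, 50 others stay around,
# and the section currently sees 100 reads per write.
print(split_put_payment(100.0, n_receivers=8, n_others=50, read_write_ratio=100.0))
# -> (0.125, 1.98)
```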

The end result will be the same: there’s incentive to stay around and keep the chunks.

6 Likes