Perpetual Auction Currency

Is there currently a method by which vaults could serve cached chunks? I’m unsure how else such behavior could be encouraged other than rewarding GETs, not just PUTs.

1 Like

Absolutely. There has been some preliminary debate on whether chunks served via cache should be rewarded with a farming attempt, or whether the cache denying the vault that actually holds the chunk a farming attempt is enough, since that would result in lower inflation.

1 Like

Lowering inflation may sound like a noble cause, but it doesn’t answer why any single vault would go to the trouble of caching anything.

2 Likes

I recall David arguing that it would be less work for a vault to serve something from its cache than to forward the request and pass back the result.

Another, perhaps small, incentive is that by reducing the reward paid to another vault it very marginally increases the rewards for any requests for chunks it holds itself.

I don’t know of any analysis of those factors though.

3 Likes

Not only increasing the value of future rewards, but also preserving the value of any existing holdings. Another factor would be that it would improve the performance and utility of the network, which would also presumably increase the value of current holdings and future rewards.

3 Likes

These fall squarely under the tragedy of the commons.

That only matters if we keep rewarding GETs too. Zero for less effort is still some effort for nothing.

But I think I wasn’t clear that the reason I asked about serving cached chunks was this.

4 Likes

Perhaps the biggest benefit of caching to the economy and rewards of SAFE is not the direct change to rewards but the long term survival of the SAFE network.

Without caching, the response times for highly requested pages would make the network seem slow, very slow, I mean very very very slow. Imagine the landing page for Google search (yes, I know it may be another search engine), and with a billion users on SAFE there may be 10K to 1 million hits on that landing page at any one time.

The landing page is less than 1MB, so there are only 8 copies on the network. With 10K (and potentially far more) requests per second, every second, the vaults and the section holding it would be swamped and may even have trouble keeping up.
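
To put rough numbers on it (8 copies, the low end of 10K requests per second, roughly 1MB per page; purely back-of-envelope, nothing here is a network parameter):

```rust
// Rough numbers for the example above.
fn main() {
    let copies = 8.0_f64;          // primary copies of the chunk
    let requests_per_s = 10_000.0; // low end of the estimate above
    let page_bytes = 1_000_000.0;  // a bit under 1MB

    let req_per_copy = requests_per_s / copies;                // 1,250 req/s per vault
    let gbit_per_copy = req_per_copy * page_bytes * 8.0 / 1e9; // ≈ 10 Gbit/s upstream each
    println!("{req_per_copy} req/s ≈ {gbit_per_copy:.0} Gbit/s per vault, with no caching");
}
```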

With caching there is no issue, since that page would be effectively permanently cached around the network: faster responses and more satisfied users.

So, in conclusion, I suggest the greatest benefit of (free) caching is the long-term survival of the network and its increased adoption, and that this outweighs any concern about it being free, and its effect on rewards, by a huge margin.

6 Likes

Caching is important. If there is an incentive to provide cache, then it seems likely that every node would cache every chunk that comes its way (to the extent of the resources at its disposal). This should improve the end user’s experience due to the higher performance. What about a reward scheme where all the nodes that provide a chunk get rewarded? The reward could be scaled based on the order in which the chunk is returned to the requestor (1st place, 2nd place, etc.) and the XOR distance between the providing node and the chunk. That way, cache nodes that can provide the chunk quickly get a partial reward, and the vaults actually responsible for storing it (closest XOR distance) also get a partial reward even if they come in 3rd or 4th or 100th place in the provider race. The ratios could be weighted in a fuzzy way to favor the actual vaults which store the chunk, but at least the cache nodes could get some reward too.
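
A rough sketch of the kind of weighting I have in mind (all names, constants and the split are made up for illustration, not anything MaidSafe has specified):

```rust
// Split a GET reward between the responders, weighted by how early they
// answered and how close they are (in XOR terms) to the chunk name.

fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> u128 {
    // Use the top 16 bytes as a coarse distance metric; enough for ranking.
    let mut d = 0u128;
    for i in 0..16 {
        d = (d << 8) | u128::from(a[i] ^ b[i]);
    }
    d
}

/// Returns one weight per responder, normalized to sum to 1.0.
/// `responders` are (node_name, response_order) pairs; order 0 = first to answer.
fn reward_weights(chunk_name: &[u8; 32], responders: &[([u8; 32], usize)]) -> Vec<f64> {
    // Rank responders by XOR distance to the chunk (rank 0 = closest).
    let mut by_dist: Vec<usize> = (0..responders.len()).collect();
    by_dist.sort_by_key(|&i| xor_distance(&responders[i].0, chunk_name));
    let mut dist_rank = vec![0usize; responders.len()];
    for (rank, &i) in by_dist.iter().enumerate() {
        dist_rank[i] = rank;
    }

    let raw: Vec<f64> = responders
        .iter()
        .enumerate()
        .map(|(i, (_name, order))| {
            let speed_w = 1.0 / (1.0 + *order as f64);      // 1st place, 2nd place, ...
            let dist_w = 1.0 / (1.0 + dist_rank[i] as f64); // closest, 2nd closest, ...
            0.3 * speed_w + 0.7 * dist_w // fuzzy split favoring the storing vaults
        })
        .collect();
    let total: f64 = raw.iter().sum();
    raw.into_iter().map(|w| w / total).collect()
}
```

The 0.3/0.7 split is the “fuzzy weighting” knob: push it further towards distance and the cache nodes get only a token amount.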

9 Likes

Sorry for being daft but I don’t understand what you mean by “free” caching. What would “paid caching” look like? That is, who would pay for it, at what point, and so on?

1 Like

Just making sure people understand I was talking about the currently proposed system.

1 Like

In a previous iteration of the network, stored chunks were transient, moving around the network like dust in the wind.

Currently chunks remain in vaults indefinitely unless downtime occurs, in which case the vault in question is punished severely.

If data in a vault becomes stale then the operator has 3 choices:

  1. Continue hosting stale chunks for little to no profit.

  2. Gamble and restart the vault and acquire new chunks at the cost of node age.

  3. Buy more storage and hope the chunks they accrue become popular enough to net a profit.

The previously proposed system seemed more fair and profitable. As chunks occasionally moved from section to section, a vault’s chances of hosting popular chunks remained constant, meaning no profit stagnation could ever occur. This of course means more complexity.

Maybe @maidsafe could update the network to this end

2 Likes

Perhaps another solution could be caching helping to boost a node’s rank or age, instead of getting paid for it.

Just another reward mechanism to consider for edge cases like this. Safecoin payment isn’t the only thing the network can use as incentives.

3 Likes

Trickle payments for cache hits would help alleviate this, with big rewards for non-cache hits.

It’s hard to see a way around this one. From a farmer’s perspective, the ideal situation is one where they are able to announce that their data is stale and that they would like a reshuffle. I don’t see any real benefit for the network in accommodating this behavior, since it requires more bandwidth, computation, and complexity. Losing half one’s node age seems a bit harsh though. Perhaps just a loss of 1 age range?

The initial pay-on-PUT mechanism, in addition to pay-on-GET, also helps incentivize additional storage purchases. It also makes your points 1 and 2 less of a problem, because participating farmers are guaranteed a steady stream of rewards as long as they continue to accept data. The challenge is getting the right ratios such that:

PUT rewards < Cache Rewards < GET Rewards.

Each one of these could have a fuzzy distribution as mentioned in a previous post.
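
Something like the following, where the ratios are made up and only illustrate the ordering:

```rust
// Illustrative only: the point is the ordering PUT < cache < GET,
// not the particular numbers. A fuzzy/random component could be
// layered on top of each case.
enum RewardEvent {
    Put,      // paid when a vault accepts new data
    CacheHit, // paid when a chunk is served from cache
    Get,      // paid when the storing vault serves the chunk
}

fn reward(base: u64, event: RewardEvent) -> u64 {
    match event {
        RewardEvent::Put => base / 10,
        RewardEvent::CacheHit => base / 4,
        RewardEvent::Get => base,
    }
}
```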

IMO safecoin is the only good means for incentives (i.e. “the carrot”). All other “incentives” should be punishment (i.e. “the stick”).

1 Like

Doesn’t aging cause relocation of the vault into another section, and thus a flush of the vault contents?

2 Likes

I feel rather silly for forgetting about this option/feature. However, there are bandwidth-related issues. I suppose it just places more emphasis on the use of smaller vault sizes. Presuming a reasonable vault refresh time of ~60 minutes and a ~100Mbit connection gives us a max vault size of about 45GB. That’s a lot of vaults behind one IP address to fill up a 128TB HDD array… The situation will only get worse as storage cost reductions outpace connectivity/bandwidth.
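
The back-of-envelope behind those figures (assumed link speed and refresh window, not network parameters):

```rust
// A saturated ~100 Mbit/s link over a ~60 minute refresh window.
fn main() {
    let bytes_per_s = 100.0_f64 * 1_000_000.0 / 8.0; // 12.5 MB/s
    let refresh_secs = 60.0 * 60.0;
    let max_vault_bytes = bytes_per_s * refresh_secs; // ≈ 45 GB

    println!("max vault size ≈ {:.0} GB", max_vault_bytes / 1e9);
    println!("vaults per 128TB array ≈ {:.0}", 128e12 / max_vault_bytes);
}
```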

One way to alleviate the problem for well-behaved vaults would be for them to smoothly transition in and out of a section. Rather than a hard flush, move, and reload of the entire vault, it could be done chunk by chunk on a 1:1 swap. Consider a vault that has just increased its node age and is a candidate for @mav’s secure random relocation. The node takes each chunk currently held in its vault and passes it to the next nearest neighbor in the section. For each chunk or set of chunks transferred, it requests the same amount of chunks from neighbors in the section it is relocating to. What this means is that the vault would be operating in both camps for a period of time until the process completes. This process might let single vaults in the region of 7TB to 15TB become feasible on a 100Mbit connection, if a 1 to 2 week transition period is tolerable to the network.
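
A very rough sketch of the swap loop I’m imagining (the two transfer helpers are hypothetical stand-ins, nothing like them exists in the current code). A saturated 100 Mbit/s link can move roughly 7.5TB per week in each direction, which is where the 1 to 2 week figure comes from:

```rust
// Gradual 1:1 swap relocation, as a sketch.
struct Relocation {
    old_section_chunks: Vec<[u8; 32]>, // names of chunks still held for the old section
    new_section_chunks: Vec<[u8; 32]>, // names this node must take on in the new section
}

// Hypothetical transfer helpers, stubbed so the sketch compiles.
fn send_to_old_neighbor(_chunk: [u8; 32]) { /* hand chunk to the next nearest neighbor */ }
fn fetch_from_new_section(_chunk: [u8; 32]) { /* pull a chunk we are now responsible for */ }

/// One step of the relocation: shed `batch` old chunks and pull `batch` new ones,
/// so the node keeps serving both sections until both lists are empty.
fn relocate_step(reloc: &mut Relocation, batch: usize) {
    for _ in 0..batch {
        if let Some(chunk) = reloc.old_section_chunks.pop() {
            send_to_old_neighbor(chunk);
        }
        if let Some(chunk) = reloc.new_section_chunks.pop() {
            fetch_from_new_section(chunk);
        }
    }
}
```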

Going back to the idea of minor cache-hit rewards, it may be advantageous for network performance and farmer rewards if the vault keeps a copy of all chunks it has ever come across, regardless of the section it is in.

2 Likes

That’s great in the beginning, but as nodes reach higher age tiers it seldom happens, eventually reaching a point where relocation can take years, IIRC.

2 Likes

IIRC it is based on a doubling of network events. So as the network increases in popularity/use, maybe your real-world years become months again? Regardless, IMO it’s better for the network to keep unnecessary reshuffling to a minimum.

Another layer of the incentive mechanisms necessary to keep farmers happy would be weighting the reward algorithms to favor node age before beauty. Likewise, sections could consider “chunk age” when disbursing farming rewards. If a “section age” is stored with each chunk as metadata when the chunk is first PUT on the network, then a comparison could be made to the current section age when a GET request is made for it. The difference between those two section ages defines the age of the chunk. Old chunks could have a higher reward weight than newer chunks. A huge reward for cold chunks will make an old farmer happy.
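
A sketch of how that could look (the field names and the weighting curve are made up):

```rust
// Reward weight grows with "chunk age" = current section age minus the
// section age recorded when the chunk was first PUT. Illustrative only.
struct StoredChunk {
    put_section_age: u64, // metadata recorded at PUT time
}

fn chunk_age(chunk: &StoredChunk, current_section_age: u64) -> u64 {
    current_section_age.saturating_sub(chunk.put_section_age)
}

fn reward_weight(chunk: &StoredChunk, current_section_age: u64) -> f64 {
    // Older (colder) chunks earn more; the log keeps the growth gentle.
    1.0 + (1.0 + chunk_age(chunk, current_section_age) as f64).ln()
}
```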

2 Likes

IIRC it already does.

This sounds good.

2 Likes

But things that boost rank / age faster allow that node to earn more safecoin later, so it’s part of the same vein

2 Likes

Really cool idea @JoeSmithJr, and @oetyng has addressed a lot of my own thoughts.

How does an uploader know (approximately) how to set P? How is the storecost(s) info exposed?

May I request the word ‘price’ is replaced with ‘storecost’ in the future since it’s easier to comprehend. Price is ambiguous and context dependent. Not a big deal, I’m definitely not perfect with usage, but it’s a handy habit if we can get into it.

What happens if there aren’t enough nodes for that storecost? There are two interesting cases.

One where there are some nodes available at P but not the full number desired for redundancy; is the chunk accepted with low redundancy, or rejected? This is interesting because it allows uploaders to signal their storecost preference as a counterforce to nodes naturally wanting to increase storecost if possible.

Secondly, where there are no nodes available at P, what happens to the chunk, and is there a message to the client letting them know the store failed?

Yeah I’m pleased to see this becoming more the aim rather than a hand-of-god type algorithm. Both approaches are important but I’m concerned about having too many metrics built into the economy.

I’m not sure how simple and minimal it is to change to chunks no longer being in a close group. That feels like a very complex situation, or if not complex then at least high overhead. Keeping a routing table for all chunks rather than using an algorithmic destination seems risky (I’ve written more about the simplicity of the algorithmic destination below).

Maybe the exact nodes holding the chunk don’t need to be known. A GET request reaches the section and simply anyone can respond. But then accountability is lost, right? If nobody responds, then who should be punished? There are times where it’s correct for nobody to respond, since the chunk may never have existed. This seems hard to overcome other than keeping a list, which seems like a lot of extra work.

To me the loss of close group destinations for chunks adds too much extra work to the network. I know I should quantify the amount of extra work to justify if my gut feeling is correct or not, but I haven’t.

No, at any point in time any chunk location can be calculated exactly if the routing table is available; there is no need to store locations. Location is a function of the routing table.

`fn get_locations(chunk_name, routing_table)` is deterministic. If the routing table changes, chunks are automatically adjusted to be in the new locations returned by `get_locations`.

This calculation is very fast since it’s based on the XOR operation, and should be much lower overhead than storing an index. This specific feature (close group storage) is actually a pretty big part of what made me feel this can scale extremely well, more than others like Storj that keep an index.
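
For anyone trying to picture it, a minimal sketch of “location is a function of the routing table”, assuming the routing table is just a flat list of node names and a fixed group size (both simplifications of the real thing):

```rust
const GROUP_SIZE: usize = 8;

/// The nodes responsible for a chunk are simply the GROUP_SIZE names closest
/// to the chunk name by XOR distance; nothing per-chunk is ever stored.
fn get_locations(chunk_name: [u8; 32], routing_table: &[[u8; 32]]) -> Vec<[u8; 32]> {
    let mut nodes: Vec<[u8; 32]> = routing_table.to_vec();
    nodes.sort_by(|a, b| {
        // Compare XOR distances byte by byte, most significant byte first.
        let da: Vec<u8> = a.iter().zip(&chunk_name).map(|(x, y)| x ^ y).collect();
        let db: Vec<u8> = b.iter().zip(&chunk_name).map(|(x, y)| x ^ y).collect();
        da.cmp(&db)
    });
    nodes.into_iter().take(GROUP_SIZE).collect()
}
```

If the routing table changes, calling this again returns the new responsible nodes, which is exactly the “chunks automatically adjust” behaviour described above.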

I think we’re going to have to accept that low-frequency chunks are going to essentially subsidise the high-frequency chunks. That’s the eventual consequence. If I upload a chunk for my own private backup that never has another GET, I have to accept that in doing so I am paying for a different chunk to be fetched by someone else. I think that’s ok, but ‘unpopular pays for popular’ is a concept to keep in mind. I think nearly three-quarters of chunks will probably never be fetched twice.

It needs to be made clear this is not unfair, since it may come across or be spun that way.


In a general sense, it seems there are two forces pushing storecost cheaper: one is farmer competition, the other is client desire. Should the economy account for both these forces simultaneously, and if so, how? I like how this idea allows both voices to be heard.


The previous rfc-0012 style idea was to have storecost set at the section level, so same storecost for chunks in the same section but different between sections.

This takes storecost to the per-node level, so different nodes have different storecost. I wonder if this makes storecost more predictable / workable / useful / stable, or less?

Also from an uploader perspective is this a better result or a worse result, or the same?

I would prefer to see a single global network-wide storecost if possible (can still change over time but is globally consistent). Imagining storecost becoming more local rather than global is a bit frustrating to me. Just expressing my bias rather than saying it’s an objectively good or bad thing, maybe global storecost doesn’t improve anything.


How do relocations work? In a close group scenario it’s clear whether a node is or isn’t responsible for any chunks in its new section, based on the nodes near to it. But now, when should a chunk be given to this newly relocated node? Or should it relocate empty and gradually fill again? Where do the old chunks from this node get distributed? Maybe automatically assigned to the next cheapest node and ‘paid for’ by the network as newly minted coins rather than by client upload? Sort of like the client pays for the initial storage and the network pays for the perpetual ongoing storage?

3 Likes