Potential misconceptions with the safecoin economy

TLDR: StoreCost is a rate limit, not a fee for service. Rewards ensure permanence, not security. Membership rules ensure security.

Regarding “storecost is a rate limit, not a fee for service”:

In the past I’ve said things like “storecost infuses chunks with their future costs”, which implies some notion of unpopular-data-pays-for-the-needs-of-popular-data. Since then I’ve changed my mind.

Storecost is not a fee for service because:

  • We don’t know the cost of services into the future.
  • Costs for different operators will vary by location and labor and experience and equipment etc.
  • Costs for each chunk of data will vary by popularity and geolocation and age etc.

So storecost can and should only be a form of rate limiting or spam protection.

To illustrate why, imagine a hypothetical scenario with no storecost: upload is free and constrained only by the ability of vaults to process it.

In this case there would be a queue of data. Maybe it’s first-in-first-stored. Maybe it’s randomly selected. Maybe all new data is accepted and the most unpopular old data is dropped as new data comes in. Whatever the scheme, the missing element in all of these is a consistent way for vaults to decide whether or not to store new incoming data.

Storecost aims to make that decision-making consistent, ie it allows the client to attach a priority or value (via a fee amount) to new data (or, looking at the same idea from the other side, it allows the network to specify the threshold for spam / not spam).

So I guess what I’m saying is storecost should be as low as possible so that upload is economically constrained (ie by value) rather than technologically constrained (ie by bandwidth or disk space). We don’t want to allow absolutely everything, eg random data, but we do want to allow as much valuable data as we can. Storecost allows setting the incoming data flow as high as possible, but not so high that it damages the network.

A related tangent: the initial constraint on upload will probably be the difficulty/ease of obtaining safecoin. The additional cost/difficulty/friction of obtaining safecoin will prevent a lot of data from being uploaded compared to, say, a free-but-rate-limited situation. But as safecoin becomes more commonly available and its acquisition and spending are simplified, the storecost will become the only limit on uploads, rather than these other safecoin-related frictions. I think the current UX ideas for payment and onboarding are fantastic, which helps reduce the friction when a user takes the journey from initial discovery of SAFE to uploading data.

The current design of storecost does indeed aim at this, since it’s based on spare space, which is probably the main factor when deciding how much to rate limit.
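To make that concrete, here’s a minimal sketch of what a spare-space-driven storecost could look like. The curve shape and every constant are my own assumptions for illustration, not the actual network algorithm: cost sits near a small floor while spare space is plentiful and climbs steeply as the section fills.

```rust
// Hypothetical sketch: storecost rises as spare space shrinks.
// The curve shape and all constants are assumptions for illustration,
// not the actual network algorithm.

/// Price one chunk given a section's used and total storage,
/// returning a cost in the smallest safecoin unit.
fn store_cost(used: u64, total: u64) -> u64 {
    let base: u64 = 1; // floor: never free, so spam always costs something
    // fullness in [0, 1]
    let fullness = used as f64 / total as f64;
    // Cost grows steeply as spare space disappears: at 50% full the
    // multiplier is 2x, at 90% full it is 10x, and so on.
    let multiplier = 1.0 / (1.0 - fullness).max(0.01);
    (base as f64 * multiplier).ceil() as u64
}

fn main() {
    for used in [10u64, 50, 80, 95, 99] {
        println!("{used}% full -> cost {}", store_cost(used, 100));
    }
}
```

The exact curve matters less than the two properties: cost never reaches zero (spam always costs something), and it rises sharply as spare space runs out.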

Regarding “rewards ensure permanence”:

Since storecost is not actually funding the ongoing security and delivery of SAFE network services, this must be done by the reward.

It’s very important that the reward not be biased in favour of popular data. It should reward the retention of all data equally: data fungibility.

Pay-for-GET (in its simplest form) would mainly benefit popular data. Farmers would be doing a cost-benefit analysis of keeping unpopular data, and at some point that unpopular data would be worth dropping. The reward mechanism should ensure it never becomes viable to drop unpopular data (and this is why it’s important for storecost to set an initial value hurdle for uploaded data, so the network is justified in keeping it).
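To see why naive pay-for-GET leads to dropping, here’s a back-of-envelope sketch of the cost-benefit a rational farmer would run. All the numbers are invented; only the sign of the expected profit matters.

```rust
// Back-of-envelope sketch of the farmer's decision under naive
// pay-for-GET. All numbers are invented for illustration.

fn main() {
    let reward_per_get = 10.0;        // safecoin units paid per served GET
    let storage_cost_per_month = 1.0; // cost of keeping one chunk stored

    // Expected monthly GETs for a popular vs an unpopular chunk.
    for (label, gets_per_month) in [("popular", 5.0), ("unpopular", 0.01)] {
        let expected_income = gets_per_month * reward_per_get;
        let profit = expected_income - storage_cost_per_month;
        println!(
            "{label}: expected profit {profit:+.2}/month -> {}",
            if profit > 0.0 { "keep" } else { "drop" }
        );
    }
}
```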

Burstcoin uses storage in the form of computed farming data, where the key idea is that farming data is relatively costly to obtain but relatively cheap to utilise for farming, so farmers store it rather than continuously re-obtain it: hence proof of storage. Data chunks in SAFE are similar, being relatively costly to acquire but relatively cheap to utilise for farming. It’s important that a) all data is utilised approximately equally for farming, and b) only legitimate data can be used, ie you can’t use cheap home-made data for farming, only costly client-uploaded data.

This insight into reward mechanisms and their relation to storecost is important because it shows what not to do. We can’t treat storecost as a benefit to farmers, and we can’t treat rewards as a fee-for-direct-service; they are a fee-for-aggregate-and-future-potential-service.

Some ideas to illustrate this concept:

  • rather than reward a GET request for a specific chunk, reward when that (probably popular) GET request is used in combination with some existing (probably unpopular) data. This should make unpopular data as accountable, useful and valuable as popular data for the purpose of rewards.
  • the frequency and size of rewards should be matched to the rate of change of the network, which is subject to unpredictable changes in participation and technology. Upload is constrained by total available bandwidth, relocating and joining new nodes take time, and rewards should reflect these temporal constraints. GET and PUT volume will vary through the days and the years. Availability of new storage will go through famines and gluts. The reward should be responsive to these things while primarily ensuring the availability of all previously uploaded data. Note that ‘availability’ mainly means ‘can it be downloaded’.

Questions to ponder:

Are the rewards and storecost also related to security? I think a little, but the membership rules (join / disallow / relocate / age / punish / kill) are the biggest factor here; as a gut feeling, maybe 90% of security comes from the membership rules and 10% from the reward/storecost mechanisms.

Where should the storecost go? To the initial vault handling the upload? To the final ones storing it? To all vaults on the upload route? To the network as a whole? I think the existing recycling solution (ie storecost is given to the network as a whole to give as future rewards) is a pretty good approach.

How should the reward mechanism be designed to cover all data rather than only popular data?

Should it be possible for storecost to get to zero? If not, how close to zero could it get?

What tools can we give farmers to best understand how to run their operations? For example, if they have lots of spare disk space but their bandwidth is maxed out, how can this best be explained to them?

Can spare space and spare bandwidth and spare compute be measured in a useful way? Or is it only useful to measure stress / failure results and aim toward a certain degree of stress?

If storecost is a rate limit, how does it work for vaults with varying abilities? Not all vaults would want to rate limit the same way, but storecost is a kind of universal rate limit.


As per the topic title, these points aim to clarify potential misconceptions with the safecoin economy, but may themselves be misconceptions. What do you think?

17 Likes

What are your thoughts on, say, getting paid for GETs, but with the reward amount based on a factor of how much the vault is storing as well as the GET itself? A max is worked out for any GET, and then each vault that responds to that request in a valid way gets up to that max. The actual amount is worked out by some algorithm based on the amount of data being held. This is like a “value to the network” of the vault.

Maybe:

let x = max reward for a GET, worked out from the farming algo
let y = average vault size
let z = individual vault storage
let w = z/y, limited to a max of 1

reward = x/2 * ( 1 + w )

With the cost to store going back to the network.
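If I’ve read that right, it transcribes to something like the sketch below. x, y, z and w follow the definitions above; the concrete numbers in main are invented examples.

```rust
// Direct transcription of the proposed formula; names follow the post.

/// x: max reward for this GET (from the farming algorithm)
/// y: average vault size, z: this vault's stored amount
fn reward(x: f64, y: f64, z: f64) -> f64 {
    let w = (z / y).min(1.0); // "value to the network" factor, capped at 1
    x / 2.0 * (1.0 + w)       // ranges from x/2 (tiny vault) to x (>= average)
}

fn main() {
    let x = 100.0; // assumed max reward for one GET
    let y = 50.0;  // assumed average vault size (GB)
    for z in [5.0, 25.0, 50.0, 200.0] {
        println!("vault storing {z} GB earns {}", reward(x, y, z));
    }
}
```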

5 Likes

Yeah it’s possible this would work.

My main thought on rewards (in general, rather than specifically about your idea) is that they must ensure the security of chunks. So the main thing missing from the specific idea you propose is how it makes sure unpopular data is not dropped. That is, rewards must be based on specific data, not just stats about the data.

One example (maybe not workable in reality) would be to reward based on the result of sign(original GET + some other chunk data) being below a certain target threshold (ie a farming difficulty). So a vault would take each GET it receives and iterate through all its chunks, seeing if any combination earns a reward.

It may seem too much work to do this for every GET, and maybe it would be, but my point is that this method makes all data potentially useful for the reward, so all of it is worth keeping, and no individual piece of data is more valuable or useful than any other. The GET becomes merely an event trigger, and the specific data of the GET is isolated from the reward probability.

Also note the work uses the other chunk’s data, not its name, so the node must actually hold all chunks rather than just keeping a record of the names.
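As a toy sketch of that lottery idea: each GET event is combined with every chunk the vault holds, and a chunk “wins” when the combined digest falls below a difficulty target. A plain hash stands in for sign() here, and the target and chunk contents are invented.

```rust
// Illustrative sketch of the "GET as lottery trigger" idea: each GET
// event is combined with every chunk a vault holds, and a reward is won
// when the combined digest falls below a difficulty target. A simple
// hash stands in for sign(); the target and chunk data are invented.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in digest over the GET event plus the full chunk contents.
/// Using the data (not the chunk name) forces the vault to really hold it.
fn digest(get_event: &[u8], chunk_data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    get_event.hash(&mut h);
    chunk_data.hash(&mut h);
    h.finish()
}

fn main() {
    // Difficulty target: a lower target means rarer rewards.
    let target = u64::MAX / 4; // assumed: ~25% of chunks win per GET

    let get_event = b"GET:chunk-abc:req-42";
    let stored_chunks: Vec<Vec<u8>> = (0u8..8)
        .map(|i| vec![i; 32]) // toy chunk contents
        .collect();

    // Every stored chunk, popular or not, has the same chance to win,
    // so all data is equally worth keeping.
    for (i, chunk) in stored_chunks.iter().enumerate() {
        if digest(get_event, chunk) < target {
            println!("chunk {i} wins a reward for this GET");
        }
    }
}
```

Because the digest depends on the full chunk contents, a vault that dropped unpopular chunks would also be dropping its lottery tickets.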

Zooming out from the details, I feel the reward should aim to retain data, which requires operations that may include any chunk, rather than only popular chunks or mere stats about what the vault has stored.

9 Likes

The only guaranteed way to accomplish this is to audit all vault chunks and reward vaults for passing the audit. This would also protect the network from bit rot on vaults operating in good faith.
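A minimal sketch of what such an audit could look like, assuming a simple random-challenge scheme: the auditor sends a fresh nonce, the vault answers with a digest over the nonce plus the chunk data, and any node holding a replica can verify. The hash choice and message shapes are assumptions for illustration.

```rust
// Minimal sketch of a random-challenge audit: the network sends a nonce,
// the vault must answer with hash(nonce || chunk), and any node holding
// a replica can verify. Hash choice and message shapes are assumptions.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn proof(nonce: u64, chunk_data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    nonce.hash(&mut h);
    chunk_data.hash(&mut h);
    h.finish()
}

fn main() {
    let chunk = vec![7u8; 1024]; // toy contents of the audited chunk

    // The auditor picks a fresh random nonce each time so answers can't
    // be precomputed (a fixed value here only for the sketch).
    let nonce = 0xDEAD_BEEF_u64;

    // An honest vault computes the proof from the data it actually stores.
    let answer = proof(nonce, &chunk);

    // A verifier holding the same replica recomputes and compares.
    let expected = proof(nonce, &chunk);
    println!("audit {}", if answer == expected { "passed" } else { "failed" });
}
```

The fresh nonce is what stops a vault from precomputing answers and then deleting the chunk.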

4 Likes