How will the network determine over/under supply of storage?

We have discussed this question before, most recently here:

You are going to have to write without knowing the solution, I think. The community often debates these kinds of issues, then MaidSafe come in with their code (sometimes with community influence, but usually better :wink:), and that can evolve further during testing, until we get to find out how this network works!

I look forward to reading your article. Good luck.

6 Likes

Thanks for linking that @happybeing, great to see @mav asking the exact same question.

@Traktion, I believe the economics behind SAFE will be structured around one thing: a magic line in the sand that defines an optimal point (a target % oversupply of storage). @polpolrene mentioned earlier in this thread that this could be 30%.

The network dynamically adjusts the farming reward in order to hit this target. People have mentioned the farming rate will be ratcheted. (I think this approach is suboptimal and that in reality PUT cost needs to be the lever - I will address this when I write my blog piece).
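
For intuition, here's a minimal sketch of such a ratcheted feedback loop; the `FarmingController`, the 30% target, the step size, and the bounds are all illustrative assumptions, not anything MaidSafe has specified:

```rust
/// Toy model of a ratcheted farming-rate controller. Every number
/// here (target, step, bounds) is illustrative, not from any RFC.
struct FarmingController {
    rate: f64,   // current farming reward multiplier
    target: f64, // desired spare-capacity ratio, e.g. 0.30
    step: f64,   // how far the ratchet moves per adjustment
}

impl FarmingController {
    /// Nudge the rate toward the target: pay more when spare
    /// capacity is scarce, less when it is plentiful.
    fn adjust(&mut self, spare_ratio: f64) {
        if spare_ratio < self.target {
            self.rate += self.step; // undersupplied: attract farmers
        } else if spare_ratio > self.target {
            self.rate -= self.step; // oversupplied: cool rewards off
        }
        self.rate = self.rate.clamp(0.0, 1.0);
    }
}

fn main() {
    let mut c = FarmingController { rate: 0.5, target: 0.30, step: 0.01 };
    for spare in [0.10, 0.20, 0.35, 0.40] {
        c.adjust(spare);
        println!("spare {spare:.2} -> rate {:.2}", c.rate);
    }
}
```

Note the controller can only react to a measured spare-capacity ratio, which is exactly the quantity this thread is questioning how to obtain.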

Either way, the network needs to rely on the storage amount vaults are ‘pledging’ to the network as being correct. @Traktion I understand we don’t want to involve any element of trust here, and I’ve also come to the conclusion that this arrangement is gameable unless vaults are unable to alter the amount of storage they are ‘pledging’.

Vaults will have to state how much space they are willing to offer the network when they join and not be able to change this amount. (There will also need to be a maximum limit).

2 Likes

Agreed, but where this line in the sand sits is still not fully defined. A number of elements will likely contribute to it. The core desire is to ensure there is sufficient storage at the best price.

I think not allowing adjustments after the initial pledge would be way too restrictive and ultimately detrimental. Although I do see the potential problems you are bringing to light.

I would suggest freezing any rewards and not accepting any new chunks for some period of time (maybe something like 24 hours) after every change.
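
A minimal sketch of that idea, assuming the penalty is just a timer that blocks rewards and new chunks (the `Vault` struct and the 24-hour figure are hypothetical):

```rust
use std::time::{Duration, Instant};

/// Hypothetical cooldown after a vault changes its pledge: no rewards
/// and no new chunks until the freeze expires. Structure and the
/// 24-hour figure are illustrative only.
struct Vault {
    pledged_bytes: u64,
    frozen_until: Option<Instant>,
}

impl Vault {
    fn resize_pledge(&mut self, new_bytes: u64, penalty: Duration) {
        self.pledged_bytes = new_bytes;
        self.frozen_until = Some(Instant::now() + penalty);
    }

    /// Eligible for rewards and new chunks only once any freeze is over.
    fn eligible(&self) -> bool {
        self.frozen_until.map_or(true, |t| Instant::now() >= t)
    }
}

fn main() {
    let mut v = Vault { pledged_bytes: 100 << 30, frozen_until: None };
    v.resize_pledge(200 << 30, Duration::from_secs(24 * 60 * 60));
    println!("eligible after resize: {}", v.eligible()); // false for ~24h
}
```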

1 Like

The network cannot work based on intentions or declarations from the vaults. This is a basic design principle for a reliable autonomous network.

The use of sacrificial space has several advantages. It gives a good indicator of the storage supply, creates a buffer of space that is available immediately in critical situations, and makes it difficult for someone with huge resources to influence the farming rate for their benefit.

4 Likes

@digipl where can I find out more about sacrificial storage? I saw it mentioned above as well. How does it solve the issue I outlined?

There is some explanation in this topic and in David’s answers on this RFC proposal.

5 Likes

Thanks @digipl, that was useful reading. @neo appears to think that sacrificial data is no longer intended to be used (it doesn’t seem like a clean, efficient solution to the problem anyway).

So it appears that the economics of the network are far from being set in stone. Looking forward to seeing how this plays out.

Either way, I’ll throw my 2c in on how I think the show should be run.

2 Likes

For the moment they have moved away from sacrificial chunks as a method, but I am sure they will have to implement something similar or come back to it.

I seem to remember reading that a new vault will be told to fill all its space with specific data and then, through a crypto challenge, retrieve some/all of it and create a hash in a certain way to prove it has that amount of space. This at least shows the vault can initially store that amount and isn’t overstating things, which would be important for determining available space. Then the vault is penalised as normal if it cannot store that amount. Remember, a vault is not paid according to reported size but only when it retrieves chunks.

Of course you can adjust, but it probably requires a vault reset and losing some of its node “age”. In effect this gives what you suggest (maybe longer), and works very much against quick changes.

Then again, you could simply spin up a second vault when you want to increase space significantly, instead of increasing the first vault. Potentially you could earn more too.

But if your vault is nowhere near full, then increasing space is not going to benefit you until the original space is used up.

EDIT:

I agree that sacrificial chunks were an easy way. I guess they are trying to avoid the network traffic/work of storing extra chunks by moving to a vault challenge that determines the vault’s maximum space. A simple command has the vault fill itself with data calculated from the command, then run a crypto algo over that data and return a result. This way the total usable space of the vault can be proved, and if it is later found to be false, that vault is deemed bad and not used.
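
A toy sketch of what such a fill-and-hash challenge could look like; the seed derivation and digest scheme here are invented for illustration, not MaidSafe's actual design:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy space challenge: the vault derives pseudo-random blocks from a
/// network-supplied seed, stores them (kept in memory here), then
/// returns a digest over all of them. The verifier can recompute or
/// spot-check blocks because everything derives from the public seed.
/// The derivation and hashing scheme are invented for illustration.
fn derive_block(seed: u64, index: u64, block_len: usize) -> Vec<u8> {
    let mut hasher = DefaultHasher::new();
    (seed, index).hash(&mut hasher);
    let mut state = hasher.finish();
    let mut block = Vec::with_capacity(block_len);
    while block.len() < block_len {
        block.extend_from_slice(&state.to_le_bytes());
        let mut h = DefaultHasher::new();
        state.hash(&mut h);
        state = h.finish();
    }
    block.truncate(block_len);
    block
}

/// The vault's response: generate every block it pledged, hash them all.
fn respond_to_challenge(seed: u64, pledged_blocks: u64, block_len: usize) -> u64 {
    let mut digest = DefaultHasher::new();
    for i in 0..pledged_blocks {
        derive_block(seed, i, block_len).hash(&mut digest);
    }
    digest.finish()
}

fn main() {
    let response = respond_to_challenge(42, 4, 32);
    println!("challenge response: {response:x}");
}
```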

NOW the question is “What will be used in the end?” I’d say sacrificial chunks have a very good chance of “winning” out.

EDIT 2: I cannot find anything about what I said above, so feel free to take it as fantasy. The RFC still says sacrificial chunks, and I did find an old comment by David from last year about using data chains for this, but I cannot see how that would actually solve the free-space determination.

5 Likes

How about having the network not try to determine the amount of free space at all, and instead attempt to fill the entire available storage with duplicated data. The network's farming incentives would then target a specific amount of duplication. When new data is put to the network, vaults assigned this new data would discard some other data that is further from that vault in XOR distance, thereby making a small reduction in the amount of duplication.
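
A minimal sketch of that eviction rule, assuming the vault simply drops the chunk whose name is XOR-furthest from its own (names shortened to u64 for illustration):

```rust
use std::collections::HashMap;

/// Toy vault that is always "full": storing a new chunk evicts the
/// chunk whose name is furthest from the vault's own name in XOR
/// distance. Names are u64 here purely for illustration.
struct Vault {
    name: u64,
    capacity: usize,
    chunks: HashMap<u64, Vec<u8>>,
}

impl Vault {
    fn store(&mut self, chunk_name: u64, data: Vec<u8>) {
        if self.chunks.len() >= self.capacity {
            // Drop the XOR-furthest chunk to make room, slightly
            // reducing that chunk's duplication network-wide.
            let furthest = self.chunks.keys().copied().max_by_key(|&k| k ^ self.name);
            if let Some(f) = furthest {
                self.chunks.remove(&f);
            }
        }
        self.chunks.insert(chunk_name, data);
    }
}

fn main() {
    let mut v = Vault { name: 0b0001, capacity: 2, chunks: HashMap::new() };
    v.store(0b0011, vec![1]); // XOR distance 2
    v.store(0b1000, vec![2]); // XOR distance 9
    v.store(0b0000, vec![3]); // distance 1: evicts 0b1000, the furthest
    assert!(v.chunks.contains_key(&0b0011) && v.chunks.contains_key(&0b0000));
}
```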

3 Likes

This is the principle of sacrificial chunks. The difference is that you don’t have to completely fill vaults; just a couple of extra copies above the maximum copy count of each chunk is enough.

3 Likes

I’m very interested in reading this. Please post a link when it’s online!

4 Likes

I’ll second this - will be great to read.

The economics of Safe could greatly impact the degree of its success.

As the technical hurdles get ever smaller, more focus should shift to making sure the economic model is good, and also sufficiently flexible to be tweaked post-launch if needed.

1 Like

So, to earn the most, I’d profit from having multiple accounts/VMs with relatively small vaults, instead of one larger account with tons of storage that would take ages to hit that 30% trigger?

2 Likes

Yes, SAFE isn’t about “the more storage you provide, the more you make”. It’s more like: the more Vaults you run, the more you make. And the longer they are online, the more you make (node ageing). But things could change: if more and more people start Vaults because of good prices, the Farming Reward becomes lower. What would you do in that situation? Stay online without making a profit? Or go offline and start all over again with a node age of 1 after a full restart? Lots of dynamics involved here.

6 Likes

I’d stay online without making a profit, knowing a node age of 1 would earn me less than a node age of 100+. Unless there are other forces at play.

It sure sounds like virtualised instances are going to be key to earnings, so that sharding one operating system into 10, 100, or 1,000 instances with smallish vaults each will be the way to maximise earnings, which will be the primary goal of those trying to monetise their involvement here.

Huge bandwidth, uptime and storage, but with the storage split into many instances of good uptime and moderate size.

I suspect we will only really know when vaults with safe coin are live. I am sure the algorithm will need refining over time to balance out incentives.

1 Like

That 30%, @polpolrene, is only an approximation from the RFC algo.

That 30% is network-wide, not per vault.

You are only paid on the chunks you deliver, using what some refer to as a “lottery”, but it’s a deterministic mathematical algo.

True, but there are limits of course. Bandwidth is one; CPU usage is another when you have a lot of vaults, especially when many nodes have to do a lot of work momentarily.

Also remember that a vault is also a node, so each one will be caching content and routing chunks as they travel through the network, so the bandwidth from the vault itself is a small part of the total bandwidth that a vault === node will be using.

So there will be a sweet spot for the right number of nodes.

Also, a large vault will eventually be serving up approximately as many chunks as several small vaults of equivalent combined size.

5 Likes

This is a really interesting and difficult question which has been on my mind a lot.

While I don’t think we can trust a document from 2015, it’s the best we have just now; RFC-0012 Safecoin Implementation says the allocation of new safecoin depends on the ratio of Sacrificial Chunks to Primary Chunks.

“we want the [farm] rate to increase as we lose sacrificial chunks”

Sacrificial Chunks were used as a measure of spare space (i.e. supply), but since sacrificial chunks are no longer in use, RFC-0012 is not currently usable. The intention still applies, just not the exact design.
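
To make the quoted intent concrete, here is one possible reading of it as code; this exact curve is my own illustration, not the RFC's formula:

```rust
/// Illustrative farming-rate curve matching the quoted intent of
/// RFC-0012: the rate rises as sacrificial chunks are lost. This exact
/// formula is invented for illustration, not taken from the RFC.
fn farming_rate(primary: u64, sacrificial: u64) -> f64 {
    // ratio is 1.0 when sacrificial copies fully match primary copies
    // (plenty of spare space) and falls toward 0.0 as they are lost.
    let ratio = (sacrificial as f64 / primary as f64).min(1.0);
    1.0 - ratio // scarcer spare space => higher reward
}

fn main() {
    for sacrificial in [1000u64, 500, 100, 0] {
        println!(
            "sacrificial {sacrificial:4} -> rate {:.2}",
            farming_rate(1000, sacrificial)
        );
    }
}
```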

A way to measure spare space (supply) is required for the network to ‘balance’ supply and demand. Currently there is no way to measure supply.

The Chia Network uses Proof of Space, which is described in this very technical paper: Beyond Hellman’s Time-Memory Trade-Offs with Applications to Proofs of Space.

We construct functions that provably require more time and/or space to invert.

The idea is to use disk space rather than computation as the main resource for mining.

a cheating prover needs Θ(N) space or time after the challenge is known to make a verifier accept.

V challenges P to prove it stored F. The security requirement states that a cheating prover P* who only stores a file F* of size significantly smaller than N either fails to make V accept, or must invest a significant amount of computation, ideally close to P’s cost during initialization

This is a way for a vault to prove it has a certain amount of spare space, in a form the network can verify cheaply. It’s like proving a certain amount of hashing has been done by matching a specified prefix (i.e. PoW).
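
Here's a heavily simplified toy in the spirit of that construction (the function `f` and the table size are illustrative only): the prover dedicates space to an inversion table and can then answer challenges instantly, while a cheater who skipped the storage must grind through the domain.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Toy proof of space: the prover commits ~N storage to an inversion
/// table for a function f, then answers challenges "find x with
/// f(x) = y" by cheap lookup. A prover without the table must grind
/// through the whole domain. A drastic simplification for illustration.
fn f(x: u64) -> u64 {
    let mut h = DefaultHasher::new();
    x.hash(&mut h);
    h.finish() % (1 << 20) // small range so inversions exist
}

fn main() {
    let n: u64 = 1 << 20;

    // Initialization: the prover fills its pledged space with the table.
    let table: HashMap<u64, u64> = (0..n).map(|x| (f(x), x)).collect();

    // Challenge: the verifier picks some y; the prover returns a preimage.
    let y = f(123_456);  // chosen so a preimage certainly exists
    let x = table[&y];   // honest prover: O(1) lookup into stored space
    assert_eq!(f(x), y); // verifier check: a single evaluation of f

    println!("preimage of {y} is {x}; without the table, ~{n} work");
}
```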

This is really fascinating since it lets the network measure the supply of resources, which is critical in balancing supply with demand. This allows the economic model of safecoin to function as intended…

I’m still working through the details but wanted to put it out there so others can investigate and comment on the potential as a measure of supply.

8 Likes

Perhaps gauging free space is a fool’s errand.

Presumably people will say they have more space than they do in order to throttle farming rewards, potentially causing existing safecoins to be worth relatively more. This plays into the hands of those with lots of coins already.

For those without many coins, presumably they will want to increase the reward output. They have little coinage to inflate, so diluting the supply wouldn’t hurt them.

So, do we have opposing factors here already?

Thinking about the incentives: the more supply is constrained, the more the price should increase, and the more coin-poor farmers will want increased rewards. Moreover, coin-poor farmers may have just offloaded safecoin specifically before switching strategy.

Of course, there will be many in the middle who are just honest and not trying to game the system. Presumably they will be reporting space correctly, which may give an anchor to the supply algorithm.

Maybe something more robust would be ideal, but if there is some sort of damping in place, wild swings would seem less likely.
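
As one illustration of damping, a simple exponential moving average over reported supply blunts sudden swings; the smoothing factor below is an arbitrary choice:

```rust
/// Toy damping of reported supply using an exponential moving average,
/// so sudden (or dishonest) swings in reported space move the network's
/// estimate only gradually. The smoothing factor is an arbitrary choice.
fn main() {
    let alpha = 0.1; // smaller alpha = heavier damping
    let reports = [100.0_f64, 100.0, 500.0, 500.0, 100.0, 100.0]; // TB
    let mut estimate = reports[0];
    for r in reports {
        estimate = alpha * r + (1.0 - alpha) * estimate;
        println!("reported {r:6.1} TB -> damped estimate {estimate:6.1} TB");
    }
}
```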

3 Likes