Tragedy of the commons


#1

Has there been much discussion on this? I couldn’t find anything specific when searching, so my apologies if I missed it! Given that the Safe Network storage fee would be one-time and storage permanent, how does this prevent a tragedy of the commons from occurring?

If I, as a farmer today, store some data and receive one safecoin, then tomorrow stop being a farmer, the overall load on the farmers of tomorrow will have increased (i.e. existing data needs to be stored immediately in any new vaults coming online). Existing data will have already been paid for, so new farmers would face a greater upfront cost in order to be able to accept new PUTs and receive safecoin.

Now, I know we will have deduplication, and resources are predicted to keep getting cheaper over time, but I don’t think that completely removes the problem if there are no persistence renewal costs.


#2

I understand the point you’re making and there are other discussions about this elsewhere, but the quoted sentence indicates something you don’t seem to have quite right.

Safecoin is awarded to farming vaults on the basis of GETs rather than PUTs. So a new vault coming online and acquiring data to store for the network is immediately able to fulfill GET requests, and thus has a chance to receive safecoin. I’m not sure it will be immediate for brand-new vaults, what with node aging factoring in, but the point is that the more data a node stores, the more chance it has to earn, even if that data was acquired by relocation from other vaults.
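To make that concrete, here is a minimal sketch of the incentive, assuming a toy model where GETs land on vaults in proportion to the chunks they hold and each served GET has a fixed reward chance (the function names and the probability are illustrative assumptions, not the actual farming algorithm):

```python
import random

# Illustrative model, not the real farming algorithm: a GET lands on a vault
# in proportion to its share of stored chunks, and each served GET has a
# fixed chance of earning one safecoin.
def simulate_gets(chunks_per_vault, num_gets, reward_probability=0.1):
    vaults = list(chunks_per_vault)
    weights = list(chunks_per_vault.values())
    earnings = {v: 0 for v in vaults}
    for _ in range(num_gets):
        vault = random.choices(vaults, weights=weights)[0]
        if random.random() < reward_probability:
            earnings[vault] += 1
    return earnings

# A new vault holding relocated chunks earns alongside the established one.
print(simulate_gets({"old_vault": 800, "new_vault": 200}, num_gets=10_000))
```

Under this toy model, the new vault earns roughly in proportion to the relocated data it took on, which is the point being made above.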


#3

Everything is profit-driven. It is more profitable to play by the rules than to “attack” (quit). So the whole thing will organically find its balance, failing only if there is no benefit to be realized from using SAFE.


#4

Wouldn’t this lead to higher churn rates, though? If I’m a farmer, I don’t want to be stuck with a bunch of ancient “junk” chunks that nobody ever accesses, as I’d be paying to store them with no financial return. Naturally, over time the amount of junk chunks on the network would keep growing, and these would only ever be paid for once. How can we quantify a cost which lasts forever without truly knowing when, or if, we will hit a point where resources getting cheaper can no longer outpace it?

Couldn’t this be solved easily by mandating a GET request for data every couple of decades, with the data expiring otherwise?
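For what it’s worth, the rule itself is trivial to state; a toy sketch assuming a hypothetical per-chunk last-GET timestamp (nothing like this exists in the current design):

```python
from datetime import datetime, timedelta

EXPIRY_WINDOW = timedelta(days=20 * 365)  # "every couple of decades"

def is_expired(last_get: datetime, now: datetime) -> bool:
    # A chunk survives as long as someone has requested it recently enough.
    return now - last_get > EXPIRY_WINDOW
```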


#5

Also, I think the network adjusts the payment for GETs based on how much reserve space is mandated … it also adjusts the cost of PUTs to keep the safecoin recycling going.

So if farmers go into short supply, the rates for GETs and PUTs go up.
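A minimal sketch of that feedback loop, assuming a made-up spare-space target and a simple proportional adjustment (the real farming-rate algorithm is more involved; the names and constants here are illustrative):

```python
def adjust_rates(get_reward, put_cost, spare_fraction, target_spare=0.5, k=0.1):
    """If spare space falls below the target (farmers in short supply),
    raise the GET reward to attract farmers and the PUT cost to slow uploads."""
    shortage = target_spare - spare_fraction  # positive when space is scarce
    return get_reward * (1 + k * shortage), put_cost * (1 + k * shortage)

# Farmers scarce: only 20% spare space, so both rates rise.
print(adjust_rates(get_reward=1.0, put_cost=1.0, spare_fraction=0.2))
```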


#6

But doesn’t this just mean that over time the cost of network resources would go up and up? Assuming we hit a point where resource advances can no longer outpace it, at least.


#7

The cost of PUTs may go up … it depends on many things … including the value of safecoin itself, but also the costs of storage space and bandwidth … and then there is deduplication …

It’s complicated, and it is just another experiment in the end.


#8

There are good arguments for allowing data to die after a period of time. I don’t know that anyone has definitive answers here about what will or won’t work, though. Please feel free to run some numbers. People have looked at this many times on the forum; do a thorough search and you will find a fair amount of discussion.


#9

Persistent data has never been offered before, so no one has any clue what its value will be. It could be that in a decade no one is stupid enough to pay for non-persistent data.


#10

True, but it would be nice to have that as an option. There is some data I wouldn’t care about losing within a couple of days, and I wouldn’t want to pay whatever the permanent cost economically comes out to be if I could pay less to have it accessed a few times and then forgotten.


#11

There are many types of data on the Safe Network. Only one is persistent, AFAIK.


#12

Sooner or later there will be an equilibrium where the price of a PUT equals the all-time cost of that operation. Thanks to Moore’s law, storage costs are halving every 2 years, so the price is approximately 1 + 1/2 + 1/4 + 1/8 + … = 2 per-period costs. Each period being 2 years, the rough estimate is 2 × 2 = 4 years of costs at today’s prices = all-time storage costs at future prices. If there are 8 copies, then 8 × 4 = 32: all-time storage with 8 copies should be about 32× more expensive than 1 year of storage with 1 copy.

This calculation is very simplified. The price calculation should really be an integral, not a sum, since the price-drop function is continuous, and there are factors like users’ spare HDDs, which are cheaper. But the idea of this calculation is that the infinite future cost of storage should be very cheap (about 4× one year’s cost), and the model where people are paid for GETs rather than PUTs allows people to earn money even on very old content. Churning in search of more attractive content to get more GETs is not worth it, since popular content is evenly distributed and mixed in with old and forgotten content.
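The arithmetic above can be checked directly; a quick sketch using the post’s own assumptions (2-year halving, 8 copies), including the continuous-integral variant it mentions:

```python
import math

PERIOD_YEARS = 2   # assumed: storage cost halves every 2 years
COPIES = 8

# Discrete sum: each 2-year period costs 2 years at the start-of-period price,
# and prices halve each period: 2 * (1 + 1/2 + 1/4 + ...) = 4 year-equivalents.
discrete = PERIOD_YEARS * sum(0.5 ** k for k in range(100))

# Continuous version: cost rate c(t) = 2^(-t / PERIOD_YEARS), integrated over
# all time, gives PERIOD_YEARS / ln(2) ≈ 2.89 year-equivalents.
continuous = PERIOD_YEARS / math.log(2)

print(f"discrete:   {discrete:.2f} years of today's cost per copy")
print(f"continuous: {continuous:.2f} years of today's cost per copy")
print(f"with {COPIES} copies: {COPIES * discrete:.0f}x one year of one copy")
```

As the post notes, the continuous version comes out a bit cheaper than the discrete one (about 2.9 rather than 4 year-equivalents per copy), so the 32× figure is on the conservative side of this simplified model.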


#13

The price-drop function is continuous only so long as Moore’s law applies. Eventually Moore’s law will hit limits bound by the laws of physics. We have some promised advancements like graphene-based solutions or even quantum computing, but those aren’t feasible yet, and what comes after that? Nobody is going to pay 32× more; even businesses that need to maintain records forever are not going to be thinking that far ahead, as it wouldn’t be economical.

I’m not attacking the Safe Network; it’s by far my favourite decentralised/cryptocurrency project out there and I think it’ll be HUGE. I’d just like to see no stone left unturned: why risk it when we can protect the network from these unknowns? Would you mind paying again if your data could be stored for a century? Probably not, but at least it sets a quantifiable boundary.


#14

Who cares? If it slows, it will be not 4× but 8×. It is just a small constant multiplier. Also, Moore’s law is about efficiency; even if technology can’t achieve more efficient storage, it can still produce more disks more cheaply. Automation and AI will boost production more than you can imagine. A 32× price for all-time storage with 8 backups is very cheap. Do not forget that datacenters already create backups with a few copies, so the real costs are probably 3–4× lower than 32×. And do not forget to count maintenance, bureaucracy, taxes, salaries, and company costs: when you buy storage online, you are paying for all of those.

AWS S3 charges $0.023 per GB per month. I can buy a 4 TB HDD for $150 at a local store. 4 TB of storage for a month on Amazon S3 costs $92, so if you store your data on S3 you will pay the price of a whole 4 TB HDD in less than 2 months. 90% of HDDs survive more than 3 years. So the point is: HDDs are cheap, cloud storage is expensive, and over a drive’s lifetime it already comes very close to the 32× of HDD price estimated for the Safe Network.
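Running the quoted numbers, with the stated S3 and HDD prices taken as given:

```python
S3_PER_GB_MONTH = 0.023   # USD, quoted AWS S3 standard price per GB-month
HDD_PRICE = 150.0         # USD for a 4 TB drive at a local store (quoted)
HDD_GB = 4000

s3_monthly = S3_PER_GB_MONTH * HDD_GB
breakeven_months = HDD_PRICE / s3_monthly

print(f"4 TB on S3: ${s3_monthly:.0f}/month")                        # ≈ $92
print(f"S3 matches the HDD price in {breakeven_months:.1f} months")  # < 2
print(f"3 years of S3 ≈ {36 * s3_monthly / HDD_PRICE:.0f}x the HDD price")
```

Over a 3-year drive lifetime that is roughly 22× the raw HDD price, which is indeed in the neighbourhood of the 32× all-time estimate from the earlier post.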


#15

Interesting: https://en.wikipedia.org/wiki/Mark_Kryder#Kryder's_law_projection

I don’t know if rates are holding at 40% per year - I am skeptical … but for the next few years I suspect we will be good. … After that we may slow down a fair bit unless the global economy turns around.

edit - It looks like cloud storage prices aren’t keeping up with Moore’s law … although that involves more than just storage and processing speed. But considering that the Safe Network will use most aspects of cloud storage servers, it’s possibly a good metric to look at.
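For comparison, a quick check of how a 40%-per-year decline compounds against the 2-year halving assumed earlier (illustrative rates, not a forecast):

```python
# Kryder-style decline: prices fall 40% per year, so after 2 years a GB
# costs 0.36 of today's price; a Moore-style 2-year halving leaves 0.50.
kryder_2yr = (1 - 0.40) ** 2
moore_2yr = 0.5
print(f"price after 2 years: Kryder {kryder_2yr:.2f}x vs halving {moore_2yr:.2f}x")

# At a steady 40%/yr decline, the all-time cost per copy is the geometric
# sum of yearly costs: 1 / 0.40 = 2.5 year-equivalents at today's prices.
print(f"all-time cost at 40%/yr decline: {1 / 0.40:.1f} years of today's cost")
```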


#16

There has been talk on other threads about mechanisms to archive data which is infrequently accessed. There have also been discussions about temporary data storage.

These things have been considered and will no doubt be considered again. However, until the network launches, we can’t quantify how big an issue it is. It is best to try it and see, I think.


#17

It seems to me you can make a self-adjusting model that balances the limited space of the network concurrently with the incentives people have.

Do we allow people to pay a lot to store tons of new data? Do we ever discard old data to make space for the new data?

I think the fundamental problem is that the people who paid to store the old data are not around anymore to engage in a bidding war with those who want to store new data. The price has been paid and that’s it.

Look at Reddit’s and Hacker News’ weighting algorithms. Stories have an exponentially decreasing weight so new stories can get voted to the top. I am not saying we should use that particular model. What I want to say is that they achieve this by taking the log of everything, so the log-weight of old stories is CONSTANT. We need something like that in the SAFE network too. The log-price paid upon data submission during the PUT is constant; that’s the constraint. The question is what the evolving formula going into the future should be.

The other thing is that, when you take the log or another function of the weight and price, safecoins actually decrease in value over time as more and more are minted. This would make sense, given there is more storage and so on.

Basically you need to:

  1. Model the price someone would pay to store new data as a function of the network’s current capacity and how saturated it already is

  2. Make a reverse function f (analogous to log being the inverse of the exponential) and then have the constant prices already paid be points on the graph of f

Then you get a proper pricing scheme that reflects what you want going forward in time, with no need to guess about future hardware capacity.
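A minimal sketch of such a scheme, assuming a hypothetical saturation-based price curve and an exponential decay for the effective weight of old data (every function shape and constant here is an illustrative assumption, not something from the thread):

```python
import math

def put_price(base_price: float, utilization: float) -> float:
    """Price a PUT as a function of current network saturation:
    cheap while the network is empty, diverging as it fills up."""
    return base_price / (1.0 - utilization)

def effective_weight(age_years: float, half_life_years: float = 4.0) -> float:
    """Exponentially decaying weight in the Reddit/HN spirit, so the
    log-weight of old data declines by a fixed, known amount per year."""
    return math.exp(-math.log(2) * age_years / half_life_years)

# A PUT on a half-full network vs a 90%-full one:
print(put_price(1.0, 0.5), put_price(1.0, 0.9))   # 2.0 vs 10.0
# Relative weight of 8-year-old data under a 4-year half-life:
print(effective_weight(8.0))                       # 0.25
```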

Disclaimer: I have a math master’s from NYU


#18

Yes

No

Yes, they have paid the price before. The price they paid was much, much higher than the later price for the same amount of data. Simply put, 1 GB of data was much more expensive a few years ago than it is now, so the uploader had to pay a large premium at that time. He paid for it upfront. Today’s uploaders will likewise pay a higher price to prepay future storage. So basically, every PUT purchase price already includes all-time storage. So do not claim that old users have not paid yet; they paid for it. The network destroyed those coins, and they are available to be farmed again in the future. The later those destroyed coins are farmed, the more storage they can purchase. (This requires the assumption that the price of the coin does not drop, or at least does not drop faster than the price of storage.)

No, that does not make any sense. Coins are destroyed. Over time the number of coins in circulation can even decrease while the available storage grows orders of magnitude higher.

Throwing titles into a discussion does not make arguments stronger, but shows weakness ;)


#19

This is a great summary. Data storage always tends to get cheaper and data requirements always tend to get larger. If this relationship starts to break down, then maybe the network needs to be more discerning. Until that day, this is unlikely to be a problem, IMO.

At worst, the network will be too expensive for storing temporary data. However, we have mutable data types for that purpose.


#20

And where in the world do you come up with a guarantee of this assumption for all time?

I didn’t say that people didn’t pay. I said they only paid once. This simply doesn’t take into account the future trade-off between NEW people storing data and OLD people storing data. You are not properly considering the economics, and are simply “hand-waving” through the cost-over-time analysis.

There is a cost to KEEP storing something that needs to be analyzed and modeled if the system is going to work. You need to reach an equilibrium that satisfies the vast majority of people or risk the system being blown up.

My disclaimer that I have a math background is information that lets you know where I am coming from, what I care about, and how I think. It’s not a weakness. Avoiding economic and mathematical analysis and hoping it will all just work out is the real weakness.