Tragedy of the commons


#21

The cost to KEEP storing something is pretty negligible, though. Capacity will keep increasing, driving down cost per byte forever.

For example, someone paid a premium for a TB of storage, which is a decent amount of space right now. The standard storage capacity will likely be an exabyte in 30 years, which makes storing a TB not really a big deal. After all, people are walking around with 100x-1000x that amount of free space in their pocket, on their phone.
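As a rough sanity check (a toy calculation assuming the 10x-every-5-years drive growth rate quoted later in this thread, which is of course not guaranteed to continue):

```python
# Toy check of the exabyte claim, assuming drive capacity keeps growing ~10x
# every 5 years (the historical rate quoted later in this thread); this is an
# extrapolation, not a guarantee.
years = 30
factor = 10 ** (years / 5)                                    # 10^6 over 30 years
print(f"growth factor over {years} years: {factor:,.0f}x")    # 1,000,000x
print(f"1 TB today -> {factor / 1e6:.0f} EB equivalent in {years} years")
```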


#22

That’s a very handwavy analysis. Suppose the cost goes down by a smaller and smaller factor each period. Suppose it even goes down by a constant factor forever (which is physically impossible), say N = 0.9 per period.

Then guess what: the INTEGRAL / SUM of the cost function still adds up to many times the original per-period price (dozens or hundreds of times if the ratio is close to 1). You can use the geometric progression formula as a lower bound.
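As a quick illustration (a toy calculation, assuming the per-period cost shrinks by a constant ratio N each period, not a model of actual prices):

```python
# Toy calculation: total cost of storing forever when the per-period cost
# shrinks by a constant ratio N each period (a geometric series).
def total_relative_cost(N, periods=10_000):
    """Sum of N^k for k = 0..periods, relative to the first period's cost."""
    return sum(N**k for k in range(periods))

for N in (0.9, 0.99):
    # Closed form of the infinite sum is 1 / (1 - N)
    print(f"N = {N}: total ≈ {total_relative_cost(N):.1f}x the first-period cost "
          f"(closed form: {1 / (1 - N):.0f}x)")
```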


#23

That totally depends on what the original price is, relative to the storage utilized. Since you don’t know what the relative price of space is, you can’t make that assumption. If the average free space on a hard drive today is 500 GB, and the average storage people want/need is 50 GB, we essentially have infinite space, which will only keep growing. That’s not even counting those who will choose to farm with dedicated hardware and/or datacenters.


#24

Did I come with any such guarantee? Nope. So why do you claim I did? I just wrote that this problem is not a problem during the times when this assumption is valid. However, I do think this assumption will hold, and I already explained why. Yes, we can discuss how long Moore’s law will apply, and whether civilization will still be able to sustain this rate in 50 years, or in 1000 years. Well, I do not believe safenetwork will still be here in 100 years. Technology changes so fast that discussing any prediction longer than 20-30 years is a waste of time.

Actually, you are not properly considering the economics. When someone purchases storage, they are not paying the farmers directly. They first have to farm or buy coin, and then that coin is destroyed. The coin is not farmed again immediately, so the purchase creates a pool for future farming. Farming is done on all data, new or old. A farmer earns coins from holding data, not from new data being written. This is the main misunderstanding in your assumptions: new data, old data, it does not matter, the farmer farms from all of it. The more data a farmer has stored (relative to others), the more they can earn. The network will try to keep an equilibrium by lowering or raising farming rewards based on the ratio of free to used space. So this has already been analyzed. Storage cost will be independent of the absolute SafeCoin price; it will depend only on the speed of price change in the short periods after price swings. The price of PUTs is automatically adjusted by the network and the farmers, and in the long term it depends mostly on the price of storage. And the price of storage is the easiest of all those unknowns to predict. Sure, model whatever you want. In my opinion you want to model something worse than what has been modeled already.
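To make that flow concrete, here is a minimal sketch of the recycling idea as described above (toy numbers and a toy `Network` class, not the actual safe_vault implementation):

```python
# Toy model of the coin recycling described above (not the real safe_vault code):
# a PUT destroys coins back into the unissued pool; farming rewards mint coins
# out of that pool, so spenders fund future farming rather than paying farmers directly.
MAX_COINS = 2**32  # the thread's "around 4 billion coins max"

class Network:
    def __init__(self, circulating):
        self.circulating = circulating          # coins currently issued
        self.pool = MAX_COINS - circulating     # unissued coins available for farming

    def put(self, cost):
        """A client pays for a PUT: the coins are destroyed (returned to the pool)."""
        self.circulating -= cost
        self.pool += cost

    def farm_reward(self, reward):
        """A farmer is rewarded for serving data: coins are minted from the pool."""
        minted = min(reward, self.pool)
        self.pool -= minted
        self.circulating += minted
        return minted

net = Network(circulating=1_000_000_000)
net.put(cost=100)                                # a spender burns 100 coins
print(net.pool - (MAX_COINS - 1_000_000_000))    # 100 extra coins now farmable
net.farm_reward(reward=10)                       # a farmer later mints some of them back
```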

Well again, you are assuming I claimed something that I did not. I did not say your math degree is a weakness. I said that the fact you are using it in a discussion is a weakness. If you want to say you have a math background, then say exactly that.

I did not say that either. Actually, I am the only one who engaged in this discussion with you with any analysis.


#25

You just did it right there! You think this assumption will be valid, and I am asking how you can guarantee that. I didn’t say you have guaranteed it. I am asking how you can.

And if not, what about the times when this assumption is NOT valid? That’s the whole point!


#26

Lol, why would I guarantee anything? If I think something and I bet on it, then the future will tell if I am right or wrong. Yes, anything can happen. This is an open-source project. Bitcoin was forked many times to fix bugs. So was Ethereum. What prevents people from fixing any hypothetical problem in the future? If such a problem appears X years from now, then it is just a question of releasing an updated network with an updated algorithm, if people agree on it. Right now, with the current state of technology and the network design, I don’t see any reason to be afraid. If our children run out of cheap storage solutions, they can change it. I expect the network to evolve and introduce many new layers and modifications in the years after release. Nobody knows how this network will evolve. So trying to solve hypothetical future problems with additional complexity now does not make sense to me. The whole point of the network is to store data forever. This was the main idea. So once you change that and require repeated payments, you prevent the network from fulfilling one of the main purposes it was created for. There would have to be a really serious reason to do that.


#27

But it’s more than just about you. I can bet on some amazing coincidence too, but if others come to rely on a tool, they might want a bit more assurance than a random person betting :slight_smile:

Bitcoin and Ethereum forks were painful. We have tons of time to get it right. Having more information via mathematical analysis can only help. Why avoid it?

I repeat: simply assuming that “the price of coin does not drop, or at least does not drop faster than the price of storage” is irresponsible. Of course coin prices drop. They go up and down. That’s what markets do. Someone can pump and dump a coin. There are many things that can happen.

Saying “this problem is not a problem in those times when the assumption is valid”, well what about all the times when it’s not valid? Maybe that is the majority of the time. Will that make the network completely uneconomical to use and therefore people will use something else?

The cost of storing something for a certain amount of time is also a function of time, not just of hard drive space. The longer you store it, the more opportunity cost you incur (you could have been storing something else instead). This needs to be modeled. Ignoring the time factor can only work if you assume the cost is a geometric progression going to zero so fast that the long tail is negligible. That’s the assumption you make, and it’s a real stretch.
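To put that in symbols (just one way to write it down, assuming an exponential decay rate λ for the per-unit-time holding cost; this is not the network’s actual pricing):

$$
C_{\text{total}} = \int_0^{\infty} c(t)\,dt, \qquad c(t) = c_0 e^{-\lambda t} \;\Rightarrow\; C_{\text{total}} = \frac{c_0}{\lambda}
$$

A single up-front PUT payment only covers this comfortably when λ is large, i.e. when the cost of holding a chunk really does fall off quickly; if the decay is slow, the tail dominates the total.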


#28

I definitely support your reasoning about modeling and not making assumptions.
Go ahead everyone and think, model, turn stones find new angles! That is what we need.
When anything substantial can be demonstrated, conclusions drawn, MaidSafe will surely consider it.
When we believe there is a need for something, we should find all the necessary arguments/facts to support it. Better to head straight into that than to get stuck too early seeking support from others, as there will always be naysayers who have more time and energy than you do to argue against you. They might be right of course, but if you believe in it, just go for it.


#29

Well, you said you have a good math background, so you should know something about logical expressions. The whole discussion started with my claim that if the price of the coin does not drop faster than the price of storage, then there is no problem to discuss. That sentence says nothing about the situation where the condition does not hold. I did not say it could not happen, I did not say the condition will hold, and I did not do any analysis of what happens if it does not hold. What I said was that I believe it will hold long term. That’s it. But instead of discussing the validity of that sentence, you immediately turned the discussion to the point where I supposedly claim something that I did not.

OK, if you agree that this sentence is valid, and that there is no problem if the coin price does not drop faster than the storage price, then we can shift the discussion to the other case. First of all, the sentence I wrote says nothing about this case. If the price of the coin drops faster than the price of storage, then safenetwork already has a mechanism for that.

I propose to analyze two cases:

  1. short period - days/weeks/months
  2. long period - years

Short period - this can happen and will happen often, but it is not where my original sentence applied; we were discussing the long-term price problem, in years. Short-term fluctuations are solved by the coin buffer. Simply put, there are around 4 billion coins maximum (MAX_INT) to be mined. Let’s say at some point there are X coins in circulation. If the price of the coin drops fast, then people are able to buy cheap coins, convert them to PUT operations and fill the free disk space. The network detects that there is less free space and increases farming rewards, so as a result miners mine more coins. More coins = more $$ to buy additional free space. So the network is borrowing coins from the pool and increasing the total circulation of coins. Miners are then selling more coins at a cheaper price to purchase additional storage. Buyers are not able to buy cheaper storage now, because the network increased the cost of PUTs when the farming rate was increased. Equilibrium is reached again and the previous market price fluctuation is accounted for in the network’s PUT pricing. At this point there are more coins in circulation than before, and it becomes harder for miners to mine new coins, because the closer the actual circulation is to the maximum, the harder it is to mine new coins. So pressure for an increasing coin price starts, since miners are mining coins less often. As a result, coin circulation should slowly drop.

I am not sure I can explain this whole mechanism in a single paragraph; it is too much information. But basically, after any sharp price movement the network will adjust its behavior and within some short period an equilibrium will be reached again. The absolute price of the coin is irrelevant; what is important is the speed of change. The faster the price changes, the more disruptive it is and the more has to be borrowed from or added to the free buffer. But it will always balance again within some short period.
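A toy sketch of the two feedback loops described above (invented constants and functions, not the actual network algorithm):

```python
# Toy sketch of the two feedbacks described above (invented constants, not the
# real algorithm): the farming rate rises as free space shrinks, while the
# chance that a reward attempt actually mints a coin falls as circulation
# approaches the maximum.
MAX_COINS = 2**32

def farming_rate(free_space, total_space, base_rate=1.0):
    """Higher reward per GET when the network is short on free space."""
    used_fraction = 1 - free_space / total_space
    return base_rate * (1 + used_fraction)       # assumed linear response

def mint_probability(circulating):
    """Rewards succeed less often the closer circulation is to MAX_COINS."""
    return 1 - circulating / MAX_COINS

# Example: space gets scarce -> rate goes up; circulation grows -> minting slows.
print(farming_rate(free_space=100, total_space=1000))   # 1.9 (scarce space)
print(farming_rate(free_space=900, total_space=1000))   # 1.1 (plenty of space)
print(mint_probability(circulating=4_000_000_000))      # ~0.07, minting is rare
```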

Long period - this is where my original sentence applied. Well, you can analyze here whatever you want. If the coin cannot hold its value long term, then the network is dead: it means there are no services using it, there is no future plan for growth, and the network has nothing to offer anymore. If the price of the coin drops faster than the price of storage, it means it loses value faster than halving every 2 years. I can’t imagine the network being functional in that scenario.

To be crystal clear, I am not saying you should not model whatever you want. You are welcome to build as many models as you can. But in your first post you detected a problem and created a complicated solution. First, I do not agree there is such a problem, but that was not the reason I started to argue with you. The reason is your proposed solution. If you want to model something, you first have to use correct inputs.


#30

I don’t think any of us want a fragile network - so we want it to have the ability to adapt to whichever future comes to pass. There is a fair bit of defense now of how the existing model of a future SafeNet might work … and some offense arguing how it might not …

So IMO, the question is how can we improve the design to cover all bases … let’s not accept a false dichotomy … there may be a nice creative solution out there that could move this discussion forward.


#31

Actually Moore’s law applies to transistor density, not drive storage.

Drive storage for the last 30+ years has grown about 10x every 5 years.

The next 12 months will see a massive jump in storage, as SSD storage has a 40x increase being implemented, with some of it already hitting the market. Moore’s law applies more to SSD storage, BUT it does not apply directly, since the technology of charge wells and the flash implementation of them does not follow the pattern of what Moore’s law was modeling.

SSDs now use 3D technology, so there is a 3D component, while Moore’s law was for 2D, which until recently was what all transistor-based chip technology was based on.

So the SSD industry has come out of its infancy and is now seriously investigating ways to implement denser storage, which is why the 40x increase is coming to market over the next 12 months. What happens after that for SSDs is going to be very interesting; expect SSDs to outstrip magnetic drives in capacity and dominate the storage market.

There are a lot of other storage media being investigated at this time, and they will outstrip SSD and magnetic storage when they become viable in the next decade. For example DNA storage, or optical cube technology, where 360 TB write-once storage has already been demonstrated.

As a whole we are still in the infancy of the technology boom, and in 50 years we will wonder how anyone could survive with a few TBytes of memory, let alone storage, while using processors that have discrete cores. They won’t measure storage in bytes, as the number will be just too large for simple description, and computing won’t be measured in GHz or ‘cores’; newer terms will be coined. It will be something like GCores (computing neurons) sharing any computational load, with no such thing as discrete instructions, etc.

Bandwidth has been increasing at an even faster rate than transistor density or storage density. It’s a 10^11 increase in 40 years and not slowing. Then on top of that, the density of channels in a given ‘cable’ is increasing, and on top of that the repeater distance is increasing. It is exploding compared to computing power or storage increases.

All good news for SAFE


#32

So you understand now that you do not receive rewards for storing the data, but for retrieving data when requested?

The issue of being lumbered with the old data is not as bad as it may seem at first.

  • Yes, if your vault is never turned off and never churns, then it will have a fair share of older chunks that are accessed less than the newer chunks. But your vault is also receiving new chunks, and these will be accessed at a faster rate, so it’s not a total loss.
  • When your vault is restarted (power cycle/churn) from scratch, it will be getting new chunks and will earn when they are requested successfully.
  • There are plans to have archive nodes that will accumulate the older chunks that are not being accessed much, allowing normal nodes to store more frequently accessed chunks (on average).
  • The old chunks will end up being relatively evenly spread across all the vaults as the nodes restart at various times. This means that practically all farmers will bear an even load of the older, less requested chunks (see the toy simulation after this list).
  • As has been said before, the rate at which storage sizes increase will mean that over the years the earlier stored, less requested chunks will take up only a small portion compared to the newer chunks, year on year.
  • As others have said, there will be a natural tendency for the PUT cost to move towards paying for persistent data storage, because of the algorithm that adjusts the price according to some measure of spare storage. In other words, farmers will only farm if they are being adequately compensated and will stop if they are not. So this tends to mean that, in general, farmers will only farm if they can be compensated for persistently storing the data.
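On the fourth point, a quick toy simulation (random reassignment only, ignoring XOR addressing and section membership entirely) shows why random restarts tend to even out the old chunks:

```python
# Toy simulation for the "old chunks spread out" point above: old chunks are
# repeatedly reassigned at random as vaults restart, so the per-vault counts
# even out over time. This ignores XOR addressing and sections completely.
import random
import statistics

random.seed(1)
NUM_VAULTS, OLD_CHUNKS, RESTARTS_PER_ROUND = 100, 10_000, 5

# Start with all old chunks concentrated on the first 10 vaults.
holder = [random.randrange(10) for _ in range(OLD_CHUNKS)]

for _ in range(200):  # rounds of churn
    restarted = random.sample(range(NUM_VAULTS), RESTARTS_PER_ROUND)
    for chunk, vault in enumerate(holder):
        if vault in restarted:                            # its holder left...
            holder[chunk] = random.randrange(NUM_VAULTS)  # ...a random vault takes it

counts = [holder.count(v) for v in range(NUM_VAULTS)]
print(min(counts), round(statistics.mean(counts)), max(counts))  # roughly even load
```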

#33

Unless things have changed, a vault also receives the old chunks. There is a preliminary phase where it is updated with all the chunks corresponding to its new XOR address. These chunks are provided by its new neighbors.


#34

This sort of mentality could be really problematic for archivists and historians. Just because something is old doesn’t mean it isn’t of value. And since no one knows what any piece of data is, no one can judge whether anything should be deleted or not. What happens if you archive data and then someone 20 or 40 or 60 years later finds it in a database somewhere and wants to access it? I’ve made use of old websites plenty of times.


#35

Yes, correct. But if your vault had been full and not receiving newer chunks for a month or months, then a restart (wiped) would mean that you get some of the old chunks and some of the new chunks. Better than no new chunks. And you will also likely have allocated more space to your vault before restarting. (Obviously, if you had previously allocated the maximum of what you have, then you cannot increase it.)

So very true. And this will start becoming essential as the first generations with digital data start passing away and some of their data becomes important or even essential for their children to access. There are some things parents do not disclose to their children until those children are older, or in some cases until the parents are in the later stages of their life. And when this data is only digital (increasingly the case), you cannot just delete data that hasn’t been accessed for a couple of decades. Hopefully archive nodes will be developed and these chunks will eventually live in them. And while archive nodes will be more expensive because of the required storage space, the people running them will earn plenty because of the scale of their farm. They could be using extremely large write-once storage which isn’t suitable for normal vaults.


#36

It seems the purpose of humanity will unexpectedly turn out to have been preserving cat pics forever.

:frowning_face: :dog2:


#37

Sorry, I don’t understand what you mean. All I know is that the set of chunks a node is responsible for is only related to its XOR address and doesn’t depend on the history of the node (start, restarts and relocations).

It might possess more chunks because it has been relocated, or because, as the network grows, its density has increased and so the area it manages has shrunk. But it will never be asked to provide the chunks for which it is no longer among the 8 nearest nodes, and so it won’t have any opportunity to earn rewards for them.


#38

If your node has filled up, then it cannot receive any more chunks, can it? So the chunks in it are just getting older and older and not being asked for as much, because of the principle that as data gets older it is accessed less and less.

So if you restart it, then it is randomly assigned to a section and starts filling up again, according to what you said. So it will end up getting both old chunks and newer chunks and have a greater chance of serving up data. If that section has 24 adults+elders and each chunk has a minimum of 4 copies (maybe it will be put back to 8), then it stands to reason that not every vault in a section will hold the same chunks. Thus a new vault in that section will be getting its fair share of new chunks as they come in to be stored.
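Roughly, assuming copies are spread uniformly across the section (a simplification of the real chunk placement):

$$
\Pr[\text{a given vault holds a given chunk}] \approx \frac{\text{copies per chunk}}{\text{vaults in section}} = \frac{4}{24} = \frac{1}{6}
$$

so a freshly restarted vault can expect to pick up roughly a sixth of the new chunks arriving in its section.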

Don’t forget the cat vids either. :cat:


#39

I still don’t understand the point of making the user decide whether they want to restart their node or not. Doesn’t it make more sense to just have the data continuously swirl around? If a user has been good to the network, and aged to the point of becoming an elder, why allow for the possibility that they will get bored with stale data? Just my opinion, but it seems like giving elders or adults of a certain age the ability to “swirl” rather than “churn” would be beneficial for the network.

So what is the definition of “swirl” you ask? I would say you could define it as an “orderly churn”. An elder makes a swirl request, then their data is checked for proper replication. If the elder stays online until the network confirms that all the data they were storing is super safe, then the network essentially “churns” them behind the scenes, but they don’t lose a drastic amount of nodal age. Maybe just N=N-1 rather than N=N/2?
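Something like this, as a sketch of the proposed penalty (function names invented purely for illustration, not anything in the actual codebase):

```python
# Sketch of the proposed age penalties (invented names, just to illustrate the idea):
# a normal churn halves node age, while the proposed orderly "swirl" would only
# subtract one, on the condition that replication was confirmed first.
def age_after_churn(age: int) -> int:
    return age // 2                       # behaviour described above: N = N / 2

def age_after_swirl(age: int, replication_confirmed: bool) -> int:
    if not replication_confirmed:
        return age_after_churn(age)       # no confirmation -> treated as a normal churn
    return max(age - 1, 0)                # proposed gentler penalty: N = N - 1

print(age_after_churn(16), age_after_swirl(16, replication_confirmed=True))  # 8 15
```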


#40

The ‘tragedy of the commons’ is idiocy. What about the tragedy of enclosure by useless, unnecessary toll-road baron rent-seeker wealthy tax leeches seeking to privately and directly tax everyone else?

It’s an argument made by the rich or their unwitting prostitutes in a world where the pie slicer of “work” evaporated with the solution of the economic problem 50 years ago. Post-work, nothing, absolutely nothing supports the status of the rich, so they seek James Buchanan’s “world of slaves,” through inducing artificial scarcity, for instance through enclosure with petrol and media, and through peddling a phoney economy based on artificially manufactured scarcity. And of course people use the vote to vote themselves a share of the wealth they produced and which belongs to them (that is the point of voting as a commons), because to do otherwise is to become property and suffer a fate worse than murder, or as a group, mass murder or genocide.