Tragedy of the commons

You just did it right there! You think this assumption will be valid, and I am asking how you can guarantee that. I didn't say you have guaranteed it; I am asking how you can.

And if not, what about the times when this assumption is NOT valid? That's the whole point!

1 Like

But it’s more than just about you. I can bet on some amazing coincidence too, but if others come to rely on a tool, they might want a bit more assurance than a random person betting :slight_smile:

Bitcoin and Ethereum forks were painful. We have tons of time to get it right. Having more information via mathematical analysis can only help. Why avoid it?

I repeat: simply assuming that “the price of coin does not drop, or at least does not drop faster than the price of storage” is irresponsible. Of course coin prices drop. They go up and down. That’s what markets do. Someone can pump and dump a coin. There are many things that can happen.

Saying “this problem is not a problem in those times when the assumption is valid”, well what about all the times when it’s not valid? Maybe that is the majority of the time. Will that make the network completely uneconomical to use and therefore people will use something else?

The cost of storing something for a certain amount of time is also a function of time, not just of hard drive space. The longer you store it, the greater the opportunity cost (you could have been storing something else instead). This needs to be modeled. Ignoring the time factor can only work if you assume a geometric progression that goes to zero so fast that the long tail is negligible. That's the assumption you make, and it's a real stretch.
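
To put that assumption in concrete terms (my own illustration with a made-up rate, not anyone's official model): if storing a chunk costs $c_0$ this year and storage cost falls by a constant factor $r < 1$ each year, then the total cost of keeping it forever is a convergent geometric series:

$$\sum_{t=0}^{\infty} c_0 r^t = \frac{c_0}{1 - r}$$

With, say, $r = 0.65$ (cost falling ~35% a year), storing forever costs only about $1/(1 - 0.65) \approx 2.9$ times the first year's cost. The whole argument hinges on the effective $r$ (in coin terms) actually staying below 1; if the coin price drops faster than storage gets cheaper, the series no longer converges and the long tail is anything but negligible.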

1 Like

I definitely support your reasoning about modeling and not making assumptions.
Go ahead everyone and think, model, turn stones find new angles! That is what we need.
When anything substantial can be demonstrated, conclusions drawn, MaidSafe will surely consider it.
When we believe there is a need for something, we should find all necessary arguments/facts to support it. Better to head straight into that than to get stuck too early seeking support from others, as there will always be naysayers who have more time and energy than you do to argue against you. They might be right of course, but if you believe, just go for it.

2 Likes

I don't think any of us want a fragile network - so we want it to have the ability to adapt to whichever future comes to pass. There is a fair bit of defense now of how the existing model of a future SafeNet might work … and some offense about how it might not …

So IMO, the question is how can we improve the design to cover all bases … let’s not accept a false dichotomy … there may be a nice creative solution out there that could move this discussion forward.

1 Like

Actually Moore’s law applies to transistor density, not drive storage.

Drive storage for the last 30+ years has increased roughly 10x every 5 years.
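
Just to spell out the arithmetic behind that figure: 10x every 5 years is a compound rate of $10^{1/5} \approx 1.58$ per year (about 58% annual growth), which over 30 years compounds to $10^{30/5} = 10^6$, i.e. a million-fold increase.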

The next 12 months are seeing a massive jump in storage, as SSD storage has a 40x increase being implemented, with some of it already hitting the markets. Moore's law applies more to SSD storage, BUT it doesn't apply directly, since the technology of charge wells and the flash implementation of them does not follow the pattern of what Moore's law was modeling.

SSD is using 3D technology, so there is a 3D component, whereas Moore's law was for 2D, which until recently was what all transistor-based chip technology was based on.

So the SSD industry has come out of its infancy and is now seriously investigating ways to implement denser storage, which is why the 40x increase is coming to market over the next 12 months. What happens after that for SSD is going to be very interesting; expect SSD drives to outstrip magnetic drives in capacity and dominate the storage market.

There are a lot of other storage media being investigated at this time, and they will outstrip SSD and magnetic storage when they become viable in the next decade - for example DNA storage, or optical cube technology, with 360TB write-once storage already having been demonstrated.

As a whole we are still in the infancy of the technology boom, and in 50 years we will wonder how anyone could survive with a few TBytes of memory, let alone storage, using processors with discrete cores. They won't measure storage in bytes, as the number will be just too large for simple description, and computing won't be measured in GHz or 'cores'; newer terms will be coined. It will be more like GCores (computing neurons) sharing any computational load - no such thing as discrete instructions, etc.

Bandwidth has been increasing at an even faster rate than transistor density or storage density. It's roughly a 10^11 increase in 40 years and not slowing. On top of that, the density of channels in a given 'cable' is increasing, and on top of that the repeater distance is increasing. It is exploding compared to computing power or storage increases.

All good news for SAFE

16 Likes

So you understand now that you do not receive rewards for storing the data, but for retrieving data when requested?

The issue of being lumbered with the old data is not as bad as it may seem at first.

  • Yes, if your vault is never turned off and never churns, then it will hold a fair share of older chunks that are accessed less than the newer chunks. But your vault is also receiving new chunks, and these will be accessed at a faster rate, so it's not a total loss.
  • When your vault is restarted (power cycle/churn) from scratch, it will be getting new chunks and will earn when they are requested successfully.
  • There are plans to have archive nodes that will accumulate the older chunks that are not being accessed much, allowing normal nodes to store more-requested chunks (on average).
  • The old chunks will end up being spread relatively evenly across all the vaults as nodes restart at various times. This means that practically all farmers will bear an even load of older, less-requested chunks.
  • As has been said before - the rate at which storage sizes increase means that, year on year, the earlier-stored, less-requested chunks will take up only a small portion of a vault compared to the newer chunks.
  • As others have said - there will be a natural tendency for the PUT cost to move towards paying for persistent data storage, because of the algo that adjusts the price according to some measure of spare storage (see the toy sketch after this list). In other words, farmers will only farm if they are being adequately compensated and will stop if they are not. So this tends to mean that, in general, farmers will only farm if they can be compensated for persistently storing the data.
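
To illustrate that last bullet, here is a toy sketch of a spare-capacity-driven store cost. The curve and its parameters (`target_fill`, `sensitivity`) are invented for illustration - this is not the actual safecoin algorithm.

```python
def put_price(base_price, used_fraction, target_fill=0.5, sensitivity=4.0):
    """Toy store-cost curve: the price of a PUT rises as spare space shrinks.

    used_fraction: fraction of the section's offered storage currently in use.
    Below target_fill the price falls (farming is less rewarding, so less new
    space joins); above it the price climbs steeply, discouraging PUTs while
    making farming more rewarding, which attracts more space.
    """
    scarcity = used_fraction / target_fill
    return base_price * scarcity ** sensitivity

# The feedback the bullet describes: farmers keep supplying space only while
# the reward covers the cost of holding data persistently.
for used in (0.25, 0.5, 0.75, 0.9):
    print(f"used={used:.2f} -> price={put_price(1.0, used):.2f}")
```
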
3 Likes

Unless things have changed, a vault also receives the old chunks. There is a preliminary phase where it is updated with all the chunks corresponding to its new XOR address. These chunks are provided by its new neighbors.

This sort of mentality could be really problematic for archivists and historians. Just because something is old doesn't mean it isn't of value. And since no one knows what any piece of data is, no one can judge whether anything should be deleted or not. What happens if you archive data and then someone 20 or 40 or 60 years later finds it in a database somewhere and wants to access it? I've made use of old websites plenty of times.

4 Likes

Yes, correct. But if your vault had been full and not receiving newer chunks for a month or months, then a restart (wiped) would mean that you get some of the old chunks and some of the new chunks. Better than no new chunks. And you will also likely have allocated more space to your vault before the restart. (Obviously, if you had previously allocated the maximum you have, then you cannot increase it.)

So very true. And this will start becoming essential as the first generations with digital data start passing away and some of their data becomes important, or even essential, for their children to access. There are some things parents do not disclose to their children until those children are older, or in some cases until the parents are in the later stages of their life. And when this data is only digital (increasingly the case), you cannot just delete data that hasn't been accessed for a couple of decades. Hopefully archive nodes are developed and these chunks will eventually live in the archive nodes. And while archive nodes are more expensive because of the required storage space, the people running them will earn plenty because of the scale of their farm. They could be using extremely large write-once storage which isn't suitable for normal vaults.

1 Like

It seems the purpose of humanity will unexpectedly turn out to have been preserving cat pics forever.

:frowning_face: :dog2:

5 Likes

Sorry, I don’t understand what you mean. All I know is that the set of chunks a node is responsible for is only related to its XOR address and doesn’t depend on the history of the node (start, restarts and relocations).

It might possess more chunks because it has been relocated, or because as the network grows its density has increased and so the area it manages has shrunk. But it will never be asked to provide the chunks for which it is no longer among the 8 nearest nodes, and so it won't have any opportunity to earn rewards for them.
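
For anyone following along, "nearest" here means XOR distance. A minimal sketch of how responsibility follows purely from addresses (illustrative only - real vaults use 256-bit, section-managed addressing, and `k=8` is just the figure mentioned above):

```python
import hashlib

def xor_distance(a: bytes, b: bytes) -> int:
    """XOR distance between two equal-length addresses, taken as an integer."""
    return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")

def responsible_nodes(chunk_name: bytes, node_addresses, k=8):
    """The k nodes whose addresses are XOR-closest to the chunk name.

    A node's history (starts, restarts, relocations) doesn't matter here:
    responsibility follows purely from its current address.
    """
    return sorted(node_addresses, key=lambda n: xor_distance(n, chunk_name))[:k]

# Hashed labels standing in for vault and chunk addresses
nodes = [hashlib.sha256(f"vault-{i}".encode()).digest() for i in range(20)]
chunk = hashlib.sha256(b"some chunk content").digest()
print([n.hex()[:8] for n in responsible_nodes(chunk, nodes)])
```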

If your node has filled up then it cannot receive any more chunks, can it? So the chunks in it are just getting older and older and not being asked for as much, because of the principle that as data gets older it is accessed less and less.

So if you restart it, then it is randomly assigned to a section and starts filling up again, according to what you said. So it will end up getting both old chunks and newer chunks and have a greater chance of serving up data. If that section has 24 adults+elders and each chunk has a minimum of 4 copies (maybe it will be put back to 8), then it follows that not every vault in a section will hold the same chunks. Thus a new vault in that section will be getting its fair share of new chunks as they come in to be stored.

Don’t forget the cat vids either. :cat:

1 Like

I still don't understand the point of making the user decide if they want to restart their node or not. Doesn't it make more sense to just have the data continuously swirl around? If a user has been good to the network, and aged to the point of becoming an elder, why allow for the possibility that they will get bored with stale data? Just my opinion, but it seems like giving elders, or adults of a certain age, the ability to "swirl" rather than "churn" would be beneficial for the network.

So what is the definition of “swirl” you ask? I would say you could define it as an “orderly churn”. An elder makes a swirl request, then their data is checked for proper replication. If the elder stays online until the network confirms that all the data they were storing is super safe, then the network essentially “churns” them behind the scenes, but they don’t lose a drastic amount of nodal age. Maybe just N=N-1 rather than N=N/2?
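
Purely to put numbers on the proposed difference ("swirl" is only an idea here, and the halving on a normal restart is as described above):

```python
def churn_age(age: int) -> int:
    """Age after an unplanned restart, halved as discussed above (N = N/2)."""
    return age // 2

def swirl_age(age: int) -> int:
    """Age after the proposed orderly 'swirl' exit (N = N - 1)."""
    return max(age - 1, 0)

for age in (4, 8, 16):
    print(f"age {age}: churn -> {churn_age(age)}, swirl -> {swirl_age(age)}")
```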

1 Like

The 'tragedy of the commons' is idiocy. What about the tragedy of enclosure by useless, unnecessary toll-road baron rent-seeking wealthy tax leeches seeking to privately and directly tax everyone else?

It's an argument made by the rich or their unwitting prostitutes, in a world where the pie slicer of "work" evaporated with the solution of the economic problem 50 years ago. Post-work, nothing, absolutely nothing, supports the status of the rich, so they seek James Buchanan's "world of slaves" by inducing artificial scarcity, for instance through enclosure with petrol and media, and by peddling a phoney economy based on artificially manufactured scarcity. And of course people use the vote to vote themselves a share of the wealth they produced and which belongs to them (that is the point of voting as a commons), because to do otherwise is to become property and suffer a fate worse than murder - or, as a group, mass murder or genocide.

2 Likes

OK, I understand now and I understand why I didn’t understand. In my mind a vault that cannot store all the chunks it is responsible for should be expelled from the network because it cannot respond to some requests.

I remember that initially some messages were planned to be periodically exchanged between nodes to check that chunks are really stored in a vault (short messages, since the data itself doesn't need to be transferred - only the signature of a random small portion of a random chunk). Something like a Proof of Storage that doesn't cost too much in bandwidth or CPU.
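
Something in that spirit could be as light as the sketch below - a generic challenge-response illustration only, not MaidSafe's design; a plain hash stands in for the signature mentioned above, and all names and sizes are made up.

```python
import hashlib, os, random

def issue_challenge(chunk_names, chunk_size=1024 * 1024, sample_len=256):
    """Verifier picks a random chunk, a random byte range and a fresh nonce."""
    name = random.choice(chunk_names)
    offset = random.randrange(0, max(chunk_size - sample_len, 1))
    nonce = os.urandom(16)
    return name, offset, sample_len, nonce

def prove(chunk_bytes, offset, sample_len, nonce):
    """Prover reads a few hundred bytes and returns one hash - negligible
    bandwidth and CPU, but only possible if it really holds the chunk."""
    offset = min(offset, max(len(chunk_bytes) - sample_len, 0))
    return hashlib.sha256(nonce + chunk_bytes[offset:offset + sample_len]).hexdigest()

def verify(my_copy_of_chunk, offset, sample_len, nonce, response):
    """A checker holding its own copy (or a precomputed answer) verifies it."""
    return response == prove(my_copy_of_chunk, offset, sample_len, nonce)
```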

This hasn’t been implemented yet, but I cannot imagine that the network will allow vaults that don’t store all the chunks they must store. The network would be too unreliable.

1 Like

It isn’t a binary option. You can take ownership of something without depriving others of it, assuming you aren’t monopolizing it.

For example, taking ownership of some data doesn’t prevent others creating and storing more of it. It just means that you are more likely to look after it, should it have value to yourself or others.

However, in the context of this thread, people must pay to store data forever. This will ultimately be priced into storage costs. So, it is less like polluting a common and more like polluting your own property (that you have paid for). If people want to do this, then they probably have rocks in their heads.

1 Like

I suspect we will end up storing more than a Library of Congress in a single electron and then proving that the storage capacity of even a single electron is infinite in the digital-physics sense, as if the electron itself were just an interface to an endless repository, such that the cost of storage becomes truly nominal.

Regardless, storage looks like a basis for kickstarting the network, and if there is any enclosure of a commons here, it so far appears (according to those able to evaluate it, and the apparent intent of its creator David Irvine) gentle and self-attenuating, such that it would facilitate access rather than restrict it - inclusive like an open house rather than exclusive like a closed cell.

I have a feeling that what David and crew are doing here with the math is giving form to something that already exists, such that even if we move on to something like paradoxical non-local communication, a lot of what they are doing here will hold up. Way out there we may find not 4 copies or 8 copies etc. of data, but find that every expressed particle in the universe contains an accessible explicit copy of the network's holographically implicit data, such that SAFE and efforts like it were early attempts to build bridges to an implied pre-existing network. If the universe itself is, as Seth Lloyd says, a quantum computer (in some sense), then all of its pieces are already networked together in an underlying whole in some sense. Which means that boobs and single malt whiskey and xor space are somehow all related. Having already seen the bird, this airplane is inevitable.

1 Like

I don’t know what you’re smoking, but could I please have some? Peace out brother.

2 Likes

This is my understanding too.

A few interesting deductions from this:

  • new vaults (due to join or relocate) will not instantly have all chunks, so there's some inherent lag in network churn. This means the responsiveness of the network to sudden change depends a lot on the number of chunks per vault. Too many chunks and the network may not be able to manage the churn rate. The network probably needs to actively encourage a target for vault sizes for the sake of healthy churn rates (see the rough numbers after this list).

  • "the network will not allow" is music to the ears of large operators. The network is handing them control of a 'stop sign'. Large operators will be pushing for larger vaults via any means possible so that new players are literally 'not allowed' to take part or cut into their share of safecoin rewards. The not-allow-incapable-vaults mechanism is essential, but it creates a worrying incentive for big operators. I think someone will end up developing software for 'pooling' lots of small vaults to appear as a single large vault to the network. There could be an interesting out-of-band dynamic behind that which could ultimately end up undermining the network rules.
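
Rough numbers for the relocation lag in the first bullet (back-of-the-envelope only; the vault sizes and upload speeds are made-up figures):

```python
def relocation_hours(vault_gb: float, upload_mbps: float) -> float:
    """Hours for neighbours to hand a new or relocated vault its chunk set,
    assuming the transfer is limited by a single node's upload bandwidth."""
    megabits = vault_gb * 8 * 1000          # GB -> megabits
    return megabits / upload_mbps / 3600    # seconds -> hours

for vault_gb in (100, 500, 2000):
    for mbps in (10, 100):
        print(f"{vault_gb} GB at {mbps} Mbps ~ "
              f"{relocation_hours(vault_gb, mbps):.1f} hours")
```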

2 Likes

Who was making that a point? It was more a technical concept being thought through, with a fictional user as an agent to enable the discussion.

Normally natural influences would cause these things to happen for the user who doesn't know the whats and hows - like turning off their computer every so often, their Windows machine installing updates and forcing a restart without their permission (non-tech people, remember), kids restarting the PC, or any of 100 other things that would cause a computer not to run 24/7/365.

Ummm, why would the section ask a vault which is full to store more chunks? A full vault will not be asked to store more chunks, and this was shown in that test where the variously sized home vaults all became full due to spamming, remember.

You cannot expect every vault to be at the maximum size, or all vaults to become full at the same time. So the network knows not to ask a full vault to store more chunks and then penalise it. The network is more inclusive than that, and vault sizing will vary a lot, from say vaults with less than 100GB to vaults with more than 2TB (yes, I know it's better to have multiple vaults than ultra-huge vaults).

That concept for SAFE is to ensure that a vault which reports having, say, 200GB actually has 200GB, not to ensure all vaults are of consistent size, which is what your line of thought boils down to in the end. The reason is that if half the vaults had 100GB and the other half had 400GB, then the 100GB vaults would be rejected for not storing enough chunks once every vault had 50GB in use (remember, vaults are considered full for new chunks at 1/2 their size, to leave enough space to accommodate the extra chunks due to churning).

But previous tests showed that full vaults are not rejected or expected to store chunks, and thus are not penalised for it. Remember the spammed test that showed vaults can fill up and the network will not collapse.

3 Likes