Tragedy of the commons


#41

OK, I understand now and I understand why I didn’t understand. In my mind a vault that cannot store all the chunks it is responsible for should be expelled from the network because it cannot respond to some requests.

I remember that initially some messages were planned to be periodically exchanged between nodes to check that chunks are really stored in a vault (short messages, since the data itself doesn’t need to be transferred; only the signature of a random small portion of a random chunk does). Something like a Proof of Storage that doesn’t cost too much in bandwidth or CPU.

This hasn’t been implemented yet, but I cannot imagine that the network will allow vaults that don’t store all the chunks they must store. The network would be too unreliable.
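Something like this toy sketch of a challenge/response, purely illustrative (the names, the std-library hash, and the wire format are my own assumptions, not the actual SAFE protocol):

```rust
// Toy Proof-of-Storage check: a peer asks for a salted digest of a
// random small slice of a random chunk, so only a few bytes cross the
// wire. Illustrative only; not the real protocol.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// "Hash bytes [start, start+len) of chunk `chunk_id`, salted with `nonce`."
struct Challenge {
    chunk_id: u64,
    start: usize,
    len: usize,
    nonce: u64,
}

fn respond(chunk: &[u8], c: &Challenge) -> Option<u64> {
    let slice = chunk.get(c.start..c.start + c.len)?; // None if data is missing
    let mut h = DefaultHasher::new();
    c.chunk_id.hash(&mut h);
    c.nonce.hash(&mut h);
    slice.hash(&mut h);
    Some(h.finish())
}

fn main() {
    let chunk = vec![7u8; 1024 * 1024]; // a stored 1 MB chunk
    let c = Challenge { chunk_id: 42, start: 1000, len: 32, nonce: 0xDEAD_BEEF };
    // A peer holding its own replica computes the same digest and compares;
    // a fresh nonce each round prevents replaying old answers.
    let expected = respond(&chunk, &c).unwrap();
    assert_eq!(respond(&chunk, &c), Some(expected));
    println!("storage check passed: digest {expected:x}");
}
```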


#42

It isn’t a binary option. You can take ownership of something without depriving others of it, assuming you aren’t monopolizing it.

For example, taking ownership of some data doesn’t prevent others from creating and storing more of it. It just means that you are more likely to look after it, should it have value to yourself or others.

However, in the context of this thread, people must pay to store data forever. This will ultimately be priced into storage costs. So it is less like polluting a commons and more like polluting your own property (that you have paid for). If people want to do this, then they probably have rocks in their heads.


#43

Suspect we will end up storing more than a Library of Congress in a single electron, and then prove that the storage capacity of even a single electron is infinite in the digital-physics sense, as if the electron itself were just an interface to an endless repository, such that the cost of storage becomes truly nominal.

Regardless, storage looks like a basis for kickstarting the network, and if there is any enclosure of a commons here, it so far appears (according to those able to evaluate it, and the apparent intent of its creator David Irvine) gentle and self-attenuating, such that it would facilitate access rather than restrict it: inclusive like an open house rather than exclusive like a closed cell.

I have a feeling that what David and crew are doing here with the math is giving form to something that already exists, such that even if we move on to something like paradoxical non-local communication, a lot of what they are doing here will hold up. Way out there we may find not 4 copies or 8 copies etc. of data, but find that every expressed particle in the universe contains an accessible explicit copy of the network’s holographically implicit data, such that SAFE and efforts like it were early attempts to build bridges to an implied pre-existing network. If the universe itself is, as Seth Lloyd says, a quantum computer (in some sense), then all of its pieces are already networked together in an underlying whole. Which means that boobs and single malt whiskey and XOR space are somehow all related. Having already seen the bird, this airplane is inevitable.


#44

I don’t know what you’re smoking, but could I please have some? Peace out brother.


#45

This is my understanding too.

A few interesting deductions from this:

  • new vaults (due to joins or relocations) will not instantly have all their chunks, so there’s some inherent lag in network churn. This means the responsiveness of the network to sudden change depends a lot on the number of chunks per vault (a rough sketch of this arithmetic follows this list). Too many chunks and the network may not be able to manage the churn rate. The network probably needs to actively encourage a target vault size for the sake of healthy churn rates.

  • “the network will not allow” is music to the ears of large operators. The network is handing them control of a ‘stop sign’. Large operators will push for larger vaults by any means possible, so that new players are literally ‘not allowed’ to take part or cut into their share of safecoin rewards. The not-allow-incapable-vaults mechanism is essential, but it creates a worrying incentive for big operators. I think someone will end up developing software for ‘pooling’ lots of small vaults to appear to the network as a single large vault. There could be an interesting out-of-band dynamic behind that, which could ultimately end up undermining the network rules.
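To put rough numbers on the first bullet: a quick back-of-envelope sketch of how chunks-per-vault drives the recovery window after churn. Every figure here is an illustrative assumption, not a network parameter.

```rust
// Back-of-envelope estimate of churn lag: how long it takes to
// re-replicate one departed vault's chunks. All numbers are assumptions.
fn main() {
    let chunk_size_mb = 1.0;         // fixed 1 MB chunk size
    let chunks_per_vault = 50_000.0; // hypothetical vault holding ~50 GB
    let upload_mbps = 20.0;          // bandwidth available to re-replicate

    // Data that must be re-homed when one vault disappears:
    let gb_to_move = chunk_size_mb * chunks_per_vault / 1000.0;
    // Crude serial-transfer estimate (real replication is parallel):
    let minutes = gb_to_move * 8000.0 / upload_mbps / 60.0;
    println!("~{gb_to_move} GB to re-replicate, ~{minutes:.0} min at {upload_mbps} Mbit/s");
    // Doubling chunks-per-vault doubles this recovery window, which is
    // why a target vault size matters for healthy churn rates.
}
```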


#46

Who was making that as a point? It was more a technical concept being thought through, with the fictional user acting as an agent to enable the discussion.

Normally natural influences would cause these things to happen for the user who doesn’t know the whats and hows. Like turning off their computer every so often, or their Windows machine installing updates and forcing a restart without their permission (non-tech people, remember), kids restarting the PC, or any of 100 other things that would cause a computer not to run 24/7/365.

Ummm, why would the section ask a vault which is full to store more chunks? A full vault will not be asked to store more chunks, and this was shown in that test where the varying-sized home vaults all became full due to spamming, remember?

You cannot expect every vault to be at the maximum size and all vaults to become full at the same time. So the network knows not to ask a full vault to store more chunks and then penalise it. The network is more inclusive than that, and vault sizing will vary a lot, from say vaults with less than 100GB to vaults with more than 2TB (yes, I know it’s better to have multiple vaults than one ultra-huge vault).

That concept for SAFE is to ensure the vault that reports having say 200GB actually has 200GB, not to ensure all vaults are of consistent size, which is what your line of thought boils down to in the end. The reason is that if half the vaults had 100GB and the other half had 400GB, then the 100GB vaults would be rejected for not storing enough chunks once every vault had 50GB of its space used (remember, vaults are considered full for new chunks at 1/2 their size, to leave enough space to accommodate the extra chunks due to churning).
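A minimal sketch of that half-capacity rule with the 100GB/400GB example (the function name and units are mine):

```rust
// "Full for new chunks at half capacity": headroom is reserved for
// chunks that arrive due to churn. Sketch only; names are assumptions.
fn accepts_new_chunks(capacity_gb: u64, used_gb: u64) -> bool {
    used_gb < capacity_gb / 2
}

fn main() {
    // When average usage reaches 50GB per vault:
    assert!(!accepts_new_chunks(100, 50)); // 100GB vault: full for new data
    assert!(accepts_new_chunks(400, 50));  // 400GB vault: still has headroom
    // So the right check is "does the vault hold what it is responsible
    // for", not "is every vault the same size".
}
```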

But previous tests showed that full vaults are not rejected or expected to store more chunks, and thus not penalised for it. Remember the spammed test that showed that vaults can fill up and the network will not collapse.


#47

My question was more aimed at the general topic that has spanned multiple threads related to the concept of ‘churn penalty’, and the hypothetical that at some point users who own vaults with lots of stale data might try their luck at forcing a churn in order to get new data that could bring with it more GET rewards. (If I’m not mistaken, a churn event will cut a vault’s age in half.) This seems like a fine demotion for people trying to mess with the network, but a bit too harsh for benevolent elders, and not optimal for network health.

I do see the value in having a harsh penalty for a churn event in order to protect the network; I’m just trying to think of a good balance to promote vault participation long term. From a different perspective, I think this issue could also be addressed by keeping a harsh churn penalty while also having farming reward rates gain a bonus with age. That way, users who manage to keep a vault full of stale data running 24/7 for 4 years get a nice payday to make it all worthwhile, in the slim chance that one of those stale/cold chunks is requested.
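Roughly what I have in mind, as a toy model only; both formulas are hypothetical, not SAFE’s actual node-ageing rules:

```rust
// Hypothetical trade-off: a churn penalty that halves vault age,
// offset by an age-weighted farming bonus for patient vaults.
struct Vault { age: u32 }

impl Vault {
    fn on_forced_churn(&mut self) {
        self.age /= 2; // the halving penalty mentioned above
    }
    fn reward_multiplier(&self) -> f64 {
        // e.g. +10% per age unit, capped at 2x, so a vault run 24/7
        // for years still sees a payday when a cold chunk is requested.
        (1.0 + 0.1 * self.age as f64).min(2.0)
    }
}

fn main() {
    let mut v = Vault { age: 16 };
    println!("multiplier before churn: {}", v.reward_multiplier()); // 2 (capped)
    v.on_forced_churn();
    println!("age {}, multiplier {}", v.age, v.reward_multiplier()); // 8, 1.8
    // Forcing a churn to chase fresh chunks costs the attacker the age
    // bonus, while benevolent elders keep compounding it.
}
```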


#48

This makes complete sense. Civilization really cannot survive otherwise. Cat pictures are the hallmark of civilization. :rofl:


#49

This has been like that for thousands of years, or at least as far back as Egypt… you’re surely going to think that civilization really cannot survive otherwise. Cat pics were mainstream for years and years.


#50

Though I said it jokingly, there is a lot of truth to the concept of using cat pix as an earmark of civilization. Aside from the cuteness (which correlates to a lighter side, necessary to living), there are also the aspects of adventurousness, insouciance and humor. Most of all, cats embody an attitude of individualism, without which civilizations become tyrannical.

Having cats, in all their manifestations, as a popular meme is a mark of advancing civilization. Look how far we’ve come since Egypt.

To tie this back to the OP: Cats honor the commons, not because they always bow to an external authority, but because they recognize their own authority and honor it in others, whether friend or foe.


#51

Even before Egypt, cats had a place in human existence. There is a theory that cavepeople would not have thrived as they did without cats. Wish I could find that theory; it is presented as tongue-in-cheek stuff, but has a lot of potential truth in it.

And if nothing else cats teach young children the art of negotiation with entities that believe humans are their slaves.


#52

Sorry for the delay, but I wanted to run tests in a local network to confirm what I said, and I didn’t have time until now.

This won’t happen: vaults are all approximately the same size, because a vault manages a chunk of data if and only if it belongs to the group of the 8 vaults nearest to the address of the chunk.

Of course, there are variations because data density in XOR space is not uniform, but bigger vaults cannot manage more data than they are responsible for, and vaults that are too small will be expelled if they cannot manage all the data they are responsible for.

This is what the current implementation actually does, with target size = total size * 8 / number of vaults. And this is more than an encouragement, because a vault with a smaller size will not stay active for long, and a vault with a bigger size doesn’t earn more rewards (it only gains the ability to remain active if the total size grows and/or the number of vaults decreases).
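Here is a small sketch of these two mechanics, with toy 64-bit addresses standing in for real XOR-space names (everything else is an assumption):

```rust
// Responsibility by XOR distance, plus the resulting per-vault target
// size. Toy addresses; illustrative only.
fn closest_k(vaults: &[u64], chunk_addr: u64, k: usize) -> Vec<u64> {
    let mut v = vaults.to_vec();
    v.sort_by_key(|&id| id ^ chunk_addr); // XOR distance to the chunk
    v.truncate(k);
    v
}

fn main() {
    let vaults = [0x1111, 0x2222, 0x9999, 0xAAAA, 0xBEEF, 0xCAFE,
                  0xD00D, 0xF00D, 0x0042, 0x7777u64];
    // Only the 8 vaults nearest to the chunk's address manage it:
    let group = closest_k(&vaults, 0xBEE0, 8);
    println!("responsible group: {group:x?}");

    // Hence each vault's expected share of the data:
    let total_gb = 10_000.0;
    let target_gb = total_gb * 8.0 / vaults.len() as f64;
    println!("target size per vault: {target_gb} GB"); // 8000 GB here
}
```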

No risk of that, for the same reason (bigger vaults don’t earn more). Large operators can just create many small vaults if they want, but they won’t prevent small operators from creating their own small vaults. It is even expected that there will be a majority of small operators, because home users’ vaults will cost almost nothing with hardware they already own and bandwidth they already pay for.

Note: the tests I have done confirm only one half of the story: vaults don’t store more data than they need to. I don’t know if the other half has been implemented yet (penalties for not providing data they are responsible for), but I am sure it will be; otherwise the network would be unreliable.


#53

As you say, vault size should be fairly equal across the network at any point in time. But it should vary over time depending on the rate at which vaults are joining or leaving the network, as well as on upload rates.

It’ll be really interesting to see where the balance is for vault size in the real world. There are a lot of factors that affect it.

Just to clarify, by target size I mean more like a fixed, hardcoded target size enforced by the network itself (like the block-time target of 10 minutes in Bitcoin). So the network may set a fixed target size of 10 GB; supplying more or less than that results in less reward for those vaults. Operators are inclined to stay close to that size by bringing on more capacity or removing capacity as needed, depending on upload rate. This would probably make the question of ‘should I start a vault’ much more predictable than with a floating capacity. But it has some obvious downsides too. I’m not advocating fixed vault sizes, just thinking of it as a possible way to increase participation by eliminating bandwidth-heavy start conditions if large vaults become the norm.
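For instance, as a sketch only (the 10 GB target and the linear falloff are made up):

```rust
// Hypothetical fixed-target reward curve: full reward at the target
// size, tapering with relative deviation, like a soft version of
// Bitcoin's fixed block-time target.
fn reward_factor(vault_gb: f64, target_gb: f64) -> f64 {
    let deviation = (vault_gb - target_gb).abs() / target_gb;
    (1.0 - deviation).max(0.0)
}

fn main() {
    let target = 10.0;
    for size in [5.0, 10.0, 15.0, 30.0] {
        println!("{size:>4} GB vault -> reward x{:.2}", reward_factor(size, target));
    }
    // Operators are nudged toward the target: run two 10 GB vaults
    // rather than one 20 GB vault.
}
```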


#54

Just like the fixed 1MB chunk size (not sure if this is megabytes or mebibytes), wouldn’t it be most straightforward/logical to have a fixed vault size? Intuitively it seems that this would allow for some performance optimization.

For example, consider 1 chunk as 1000000 bytes; by analogy, 1 vault would be 1000000 chunks, i.e. 10^12 bytes, or about 1 TB per vault.

But what about mobile, the masses scream!? Nothing says there couldn’t be farming pools for mobile users; it might even be preferable if the farming pools are set up in a way that benefits mobile users who would otherwise churn. Mobile will eventually have a few TB of SSD anyway…

Sorry to slide more off topic…


#55

Dogs treat humans as their masters, cats treat people as their slaves, pigs treat humans as their equals - we shouldn’t be eating them…


#56

The tragedy of the commons is a term used to describe a situation in a shared-resource system where individual users acting independently according to their own self-interest behave contrary to the common good of all users by depleting or spoiling that resource through their collective action.
Wikipedia

There are a few fallacies in this term when using it as an argument/problem for SafeNet and its reliability. @Warren touches on this further up in this thread. The term has a very narrow scope, taking only very destructive intentions into consideration and assuming all factors are known. Let’s entertain the discussion anyway. It could possibly apply to blockchain technology, as all data is shared (a shared resource) and propagation delay will affect scalability and may even halt the network at some point. But let’s focus on Maidsafe.

Resources in Safenet are not shared with everyone; everyone has equal, limited access, only to what’s needed. I guess it will be clearer when the paper on disjoint groups is released (sharding). I personally look forward to understanding it better myself. Anywho, a piece of data that Alice has stored with Bob doesn’t need to be shared with Eve; it will be stored at enough places to be retrievable, with privacy and randomness so it cannot be targeted in any way. This also removes the attack vector of shutting down/overloading disjoint groups. I like David’s comparison to ants, where no single entity needs to know everything, yet they work in complete unison for the greater good of the network. Many Alices need to store a great amount of data to even have an effect, but I will get to that further down.

As @Antifragile touches upon further up this thread, every time data is stored someone is rewarded, and this reward will have a certain value. It’s important to keep in mind that the network is autonomous, not a human-controlled system (hence the requirement of a certain level of completion before Safenet can launch) where individual users act independently according to their own self-interest and contrary to the common good. In Safenet, every individual’s self-interest is ALIGNED with the common (network) good. If you think you can bloat the system, all you do is reward other participants, which increases the value of the network and attracts more participants. The network will (automatically) adjust the rewards, and if resources are scarce there will be an arbitrage opportunity which the free market exploits (the free-market assumption is based on fungibility). The only way this logic would fail is if people suddenly lost interest in wealth, which is beyond the philosophical scope of this discussion.

Another point that @fred mentions further up is the redundancy of Moore’s law; it does not apply to storage whatsoever. For somebody to attempt such an attack, where they cause an immense number of PUTs and then immediately go offline never to return, will cost the attacker much more than the network. If the attack is targeted at a “popular” piece of data (read: DDoS), the network only gets stronger and faster as every participant reinforces the data. The supply of storage, and accessibility due to privacy, will make farmers very wealthy as the demand for both the coins and securing the network (farming) increases under such an attack (coins are needed to create PUTs, and storage providers will be needed, thus higher rewards if scarce). If the network were not autonomous this could be exploited through forced inflation with a sudden cut of demand; such an instance is taken care of by the network.

I personally don’t see this as a valid issue/threat.


#57

Common misconception, but not quite right. Every time a chunk is requested someone has a chance to be rewarded. There is a cost paid to the network when a chunk is stored, not to a farmer. They get paid for serving data.
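A toy sketch of that distinction (names and amounts are mine, not the actual accounting):

```rust
// PUT pays the network itself; a GET only gives the serving vault a
// *chance* at a reward. Illustrative assumptions throughout.
struct Network { recycled_balance: u64 }

impl Network {
    fn on_put(&mut self, put_cost: u64) {
        self.recycled_balance += put_cost; // uploader pays the network
    }
    fn on_get(&mut self, farm_attempt_wins: bool, reward: u64) -> u64 {
        if farm_attempt_wins && self.recycled_balance >= reward {
            self.recycled_balance -= reward; // paid for serving, not storing
            reward
        } else {
            0
        }
    }
}

fn main() {
    let mut net = Network { recycled_balance: 0 };
    net.on_put(10);                 // Alice stores a chunk
    let paid = net.on_get(true, 3); // Bob's vault serves it and wins
    println!("farmer earned {paid}, network holds {}", net.recycled_balance);
}
```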


#58

Thx for the correction. Still, it’s the same idea: the network makes sure the balance is there to incentivize the sharing of resources (e.g. serving data).


#59

Not specifically related to the OP… but I am curious if anyone can explain 1) how the network knows if/when the total amount of vault space is running low, and then 2) how it goes about increasing that space by incrementing upwards the price paid to farmers, to encourage the creation of more vaults.

I’m guessing these questions haven’t been fully fleshed out in code yet, so I’m expecting speculation based on the white papers.


#60

Bingo.

When test safecoin is mapped out or implemented, we will know what method they are going to use.
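In the meantime, purely as speculation in the spirit of the question above: a section could estimate fullness from its members’ reported used/total space and scale rewards accordingly. Every name and constant below is an assumption.

```rust
// Speculative farming-rate curve: pay more as the section fills up,
// drawing in new vaults; pay less when space is plentiful.
fn farming_rate(used_gb: f64, capacity_gb: f64, base_rate: f64) -> f64 {
    let fullness = used_gb / capacity_gb; // 0.0 .. 1.0
    base_rate * (1.0 + 4.0 * fullness * fullness)
}

fn main() {
    for fullness in [0.1, 0.5, 0.9] {
        let rate = farming_rate(fullness * 100.0, 100.0, 1.0);
        println!("{:.0}% full -> relative reward {rate:.2}", fullness * 100.0);
    }
}
```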