Non-persistent vaults

This is a future thing and doable. Not for now. The rest of your assumptions are indeed correct. XOR space is where the vault ID lives.

3 Likes

As David already mentioned, these are future considerations and not for now. I also want to repeat that your reasoning is spot on. A new vault ID would push the vault out of the XOR range for the chunks it had stored in a previous lifetime.

A first disclaimer: I am not convinced this is an idea worth pursuing; the value of old stored chunks is debatable, as the network will have maintained the redundancy. Life goes on :smile:

How I thought it could work is: you would function as an 'archive node' (what the PMID node is becoming). Currently PMID nodes / archive nodes are kept close to (inside) the DataManager group, but it is likely that we will lose that constraint. The DataManager group is by definition close to the data name, but the archive nodes (already in the upcoming sprint) will not (all) be close to that original location.
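To make that XOR-range point concrete, here is a minimal sketch, purely an illustration and not the routing code: the name length is shortened and the helper functions are invented for this example. The idea is that a vault's closeness to a chunk is just the XOR distance between their names, so a fresh random vault ID almost certainly sits at a different distance from the chunks the old ID was responsible for.

```rust
// Minimal illustration of XOR distance between a vault ID and a chunk name.
// NAME_LEN, `xor_distance` and `is_closer` are hypothetical helpers for this
// sketch; the real routing layer works on full-size names with group logic.

const NAME_LEN: usize = 8; // shortened from the real name size to keep the example small

type Name = [u8; NAME_LEN];

/// XOR distance, byte by byte; comparing these arrays lexicographically
/// is the same as comparing the distances as big-endian integers.
fn xor_distance(a: &Name, b: &Name) -> Name {
    let mut d = [0u8; NAME_LEN];
    for i in 0..NAME_LEN {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// Is `candidate` closer to `target` than `other` is?
fn is_closer(target: &Name, candidate: &Name, other: &Name) -> bool {
    xor_distance(target, candidate) < xor_distance(target, other)
}

fn main() {
    let chunk: Name = [0xA5, 0x01, 0x33, 0x7F, 0x10, 0x42, 0x90, 0x0C];
    let old_vault_id: Name = [0xA5, 0x01, 0x33, 0x7F, 0x10, 0x42, 0x90, 0xFF];
    let new_vault_id: Name = [0x12, 0xE0, 0x55, 0x08, 0xBB, 0x03, 0x47, 0x21];

    // The old ID happened to be near the chunk; a random new ID almost
    // certainly is not, so the chunk falls outside the new vault's range.
    assert!(is_closer(&chunk, &old_vault_id, &new_vault_id));
    println!(
        "old ID closer to chunk than new ID: {}",
        is_closer(&chunk, &old_vault_id, &new_vault_id)
    );
}
```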

There's something that annoys me about this new model; I'm not sure how to put it.

Under this model, I'll try very hard to never get disconnected. If I share one terabyte with the network, filling it is an investment in bandwidth, and losing it would cost me 2-3 months of my current data plan. But if I want the protection this model offers, I should embrace getting disconnected. If I don't, reading my private key from my RAM or ROM doesn't change much, since I'm trying hard to keep my connection alive anyway.

It sounds counterproductive to have these two concepts tied together. It also makes occasional farming an impossibility. You wouldn't dedicate 50 GB to the network if you need to refill it every time you connect your laptop. Honestly, I think this feature will be a big barrier to entry for most people. It's a lot harder to sell.

Thoughts?

1 Like

I think you are probably right that this puts some pressure on giant vaults. As you say, they can only refill at a certain rate. But bear in mind that from the network's perspective that is exactly what we want:

we want to encourage many average-sized vaults from users, as they are the ones using the network

if we don't do this, the opposite happens: fewer and fewer, bigger and bigger vaults store all the data, and this is a centralising force. For the network it is far more secure and efficient to regularly refresh normal-sized vaults than to occasionally have to cater for a flood when vaults-too-big-to-fail do go offline.

And very importantly: we want to explicitly push for those normal-sized vaults to earn the normal amount of safecoin that offsets their usage. Otherwise the majority of the safecoin flows to a minority of dedicated farmers, and we end up with people all over the world having to buy safecoin at inflated market prices.

6 Likes

I think the opposite. An occasional farmer in the old way would get a very bad rank by being off all the time. Think of Skype, where when you're online you are (or used to be) valuable. BitTorrent works with highly dynamic data.

With this scheme even a mobile phone that is on for a short time may get an opportunity to farm, and not have stuff left behind when it is not. It's potentially much more inclusive, and if it actually prevented terabyte farmers, would it not be good if instead we had millions of MB or GB farmers?

Apart from the security implications, this scheme is likely much fairer to everyone. Previously going offline cost rank, which was recorded in network state. This way we now reduce network state by 25% and so increase performance by the same.

2 Likes

This way we now reduce network state by 25% and so increase performance by the same.

Does this mean rank is no more? Or just not persistent, i.e. rank is a session thing?

If rank is still used, can you explain how it differs from before in terms of how it is earned and how it is used? Thanks.

1 Like

Yes, rank is a session thing now; it may actually work out just fine with less work as well. Rank will basically be: have you been on long enough to get a chunk to give? The longer the better, and of course still following the same sigmoid curve. Even if a vault gets huge, farming attempts stop increasing at 20% over the average.

Less work and less complexity than persistent rank. Very likely fairer as well, as at every start you have a chance to behave, and bad behaviour is forgotten between sessions.
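Roughly, as a sketch of the idea (nothing here is the real vault code; the sigmoid shape, the constants and all the names are assumptions for illustration only): rank grows with session age along a sigmoid, and the space-based weight on farming attempts stops growing once a vault is 20% over the network average.

```rust
// Hypothetical sketch of session rank as described above: rank depends only
// on the current session, follows a sigmoid in session age, and the benefit
// of extra space is capped at 20% over the network average. None of these
// names or constants come from the real vault implementation.

/// Sigmoid in [0, 1]: low for very young sessions, saturating for old ones.
/// `midpoint` and `steepness` are arbitrary tuning values for this sketch.
fn session_rank(session_age_hours: f64) -> f64 {
    let midpoint = 12.0; // hours at which rank reaches 0.5
    let steepness = 0.5;
    1.0 / (1.0 + (-steepness * (session_age_hours - midpoint)).exp())
}

/// Relative farming-attempt weight from stored space: grows with the ratio of
/// vault size to network average, but is clamped at 1.2 (20% over average).
fn space_weight(vault_gb: f64, network_avg_gb: f64) -> f64 {
    (vault_gb / network_avg_gb).min(1.2)
}

fn main() {
    for &hours in &[1.0, 6.0, 12.0, 24.0, 72.0] {
        println!("{:5.0} h online -> rank {:.2}", hours, session_rank(hours));
    }
    // A 500 GB vault on a network averaging 50 GB gets no more attempts
    // than a 60 GB vault would, under this sketch.
    println!("weight(500, 50) = {:.2}", space_weight(500.0, 50.0));
    println!("weight(60, 50)  = {:.2}", space_weight(60.0, 50.0));
}
```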

1 Like

Do you have a plan for archive vaults? Will they be special vaults, maybe gaining a persistent rank after meeting certain criteria?

Yes, but these days we are just motoring like mad to get the testnet, safecoin etc. up and running, so focus is very tight; no mind time for anything else. I am trying to get everyone up to speed with the current network to move forward. Getting there, but lots of talks, meetings and reviews etc. It is happening though.

1 Like

Thanks for explaining thus far. It's useful for me, and for explaining to others of course.

Personally I don't consider a 50 GB vault giant, yet that is way too much data for home farmers to re-download every day.

One risk that I see is many early adopters setting up dedicated farms, setting a very high network average from the early days of the network. Joining as a casual vault that gets emptied daily wouldn't even be worth considering in such a situation, and if no casual vaults join for this reason, the network average will stay high.

If we want small vaults to be viable, shouldn't the left side of the sigmoid curve actually be linear?

1 Like

If early adopters start off offering too much supply, the safecoin farming rate will stagnate.

The push for non-persistent vaults is part of a bigger effort to make every device capable of contributing to the network, big or small. Whether it's a phone charging in Cape Town, my fridge or your dedicated farm.

Personally I don't consider a 50 GB vault giant, yet that is way too much data for home farmers to re-download every day.

So wouldn't it be better to make it possible to earn safecoin with less than 50 GB? Remember that the driving motivation for safecoin is to reward honest contributions: if you use as much as you contribute, all information services should approach zero cost, enabling everyone to take part.

It has to be clear that non-persistent vaults do not mean your vault is forced to restart every day. If you have a desktop computer with a 1 TB drive attached, then because of non-persistent vaults you can already start earning on day one with a fraction of that storage used. And if your computer stays connected, sure, over a few days or a week your 1 TB might fill up and you will earn more, but not amazingly more; it should flatten out rather quickly, because average daily usage is significantly less than that 1 TB.

The value a new vault adds by oversizing compared to average usage is much less than the equivalent space offered by average-sized vaults combining to that total. This is because bigger vaults introduce exactly a scarcity of CPU power and bandwidth to the network, in exchange for the cheapest part: disk space.
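A back-of-the-envelope sketch of that flattening, with made-up numbers (the daily chunk traffic, the network-average size and the 20% figure are assumptions, not measurements):

```rust
// Made-up numbers to illustrate why an oversized vault flattens out quickly:
// if a home connection only receives a limited amount of chunk traffic per
// day, a 1 TB vault takes a long time to fill, and (under the 20%-over-average
// cap sketched earlier in the thread) most of that space never earns more.

fn main() {
    let vault_gb = 1000.0;       // the 1 TB drive from the post above
    let daily_inbound_gb = 30.0; // assumed average chunk traffic per day
    let network_avg_gb = 50.0;   // assumed network-average vault size

    let days_to_fill = vault_gb / daily_inbound_gb;
    let earning_ceiling_gb = network_avg_gb * 1.2;

    println!("days to fill 1 TB at {daily_inbound_gb} GB/day: {days_to_fill:.0}");
    println!("space that still increases farming attempts: {earning_ceiling_gb} GB");
    // Roughly a month to fill, while only the first ~60 GB raise the
    // farming-attempt weight at all under these assumptions.
}
```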

1 Like

Alright, thx for the answers. It's gonna be interesting to see how it all pans out.

Side question. Is it possible to set the amount of bandwidth your vault is allowed to use?

1 Like

Not at the moment. Is this something you think is required? It seems some people have bandwidth caps, which is insane to us over here on this side of the pond. Seems we will have to have some extra rules for bandwidth-capped nodes by the looks of it. We can add that to the CRUST lib, but man, you fellas need to get your ISPs told; caps are just beyond belief. It is how it is though, so I suppose we will have to add some cap.

Do you watch YouTube / Netflix etc.? That alone would take way more bandwidth than a node, I would hope. We need to measure though :wink:

3 Likes

Caps are more common than they should be. I know quite a few people with 250 GB monthly caps on their internet. For a 50 GB vault that gets restarted two or three times a month, that will eat away at it really quickly (three refills alone is 150 GB, well over half the cap, before any other traffic).

2 Likes

Is there some notion of cap versus speed?

No download cap… I'm shocked!

So yeah, we have monthly caps, but while that is a concern, what I was talking about is a way to control the transfer rate of my vault so it doesn't clog up my network when it's trying to fill. If the fill rate of a vault is expected to be very slow it's not really a concern, but if a vault starts downloading chunks like crazy I'd like a way to slow it down so I can still use the internet for other things. Just like a torrent client where you set the target download speed.
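For reference, the kind of throttle I mean is usually done with a token bucket; here is a minimal sketch (this has nothing to do with CRUST's actual API; the type and method names are invented for illustration) of a configurable download-rate cap sitting in front of chunk transfers:

```rust
// Minimal token-bucket throttle, as a sketch of the kind of per-vault rate
// cap being asked for. Not CRUST's API; everything here is invented.

use std::thread;
use std::time::{Duration, Instant};

struct TokenBucket {
    rate_bytes_per_sec: f64, // configured cap, e.g. 1 MB/s
    capacity: f64,           // maximum burst size in bytes
    tokens: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(rate_bytes_per_sec: f64) -> Self {
        TokenBucket {
            rate_bytes_per_sec,
            capacity: rate_bytes_per_sec, // allow roughly one second of burst
            tokens: rate_bytes_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Block until `bytes` worth of tokens are available, then consume them.
    fn consume(&mut self, bytes: f64) {
        loop {
            let now = Instant::now();
            let elapsed = now.duration_since(self.last_refill).as_secs_f64();
            self.tokens = (self.tokens + elapsed * self.rate_bytes_per_sec)
                .min(self.capacity);
            self.last_refill = now;
            if self.tokens >= bytes {
                self.tokens -= bytes;
                return;
            }
            // Sleep roughly until enough tokens have accumulated.
            let wait = (bytes - self.tokens) / self.rate_bytes_per_sec;
            thread::sleep(Duration::from_secs_f64(wait));
        }
    }
}

fn main() {
    // Cap incoming chunk traffic at 1 MB/s; each chunk is 1 MB in this sketch.
    let mut bucket = TokenBucket::new(1_000_000.0);
    let start = Instant::now();
    for i in 0..5 {
        bucket.consume(1_000_000.0); // "receive" one chunk's worth of data
        println!("chunk {} after {:.1}s", i, start.elapsed().as_secs_f64());
    }
}
```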

3 Likes

I really do not foresee vaults being pummelled, unless the farming rate is set very badly. This is certainly a measurement we will make for the capped connections. It's as if Gutenberg invented the press and you were only allowed to read X amount per day; very weird for sure. You should not accept a cap as normal, it's very wrong.

We will check the testnet specifically for bandwidth usage for sure and take it from there, I think. I would be surprised if it's excessive at all, unless you run many vaults behind your ISP connection. I would hope much less than watching a few Netflix movies in HD per week. We will know very soon though.

1 Like

Yes, this will be possible.

1 Like

This is a fact throughout Australia… I have a 150 GB monthly cap, which is at the high end.

With a maximum metered 1 Mb/s uplink (ADSL2+), it will take years to upload my data to SAFE.
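Rough numbers to show why (the archive size below is an assumed example; the uplink speed and cap are my real figures):

```rust
// Back-of-the-envelope upload time for the ADSL2+ case above. The 4 TB
// archive size is an assumed example; the 1 Mb/s uplink and 150 GB cap
// come from the post.

fn main() {
    let uplink_mbps = 1.0; // megabits per second
    let cap_gb_per_month = 150.0;
    let data_gb = 4_000.0; // assumed: 4 TB to upload

    // 1 Mb/s ~= 0.125 MB/s ~= 10.8 GB/day ~= 324 GB/month if saturated 24/7.
    let gb_per_month_by_speed = uplink_mbps / 8.0 * 3600.0 * 24.0 * 30.0 / 1000.0;
    let effective_gb_per_month = gb_per_month_by_speed.min(cap_gb_per_month);

    println!("uplink alone allows ~{gb_per_month_by_speed:.0} GB/month");
    println!("with the cap, at most ~{effective_gb_per_month:.0} GB/month");
    println!(
        "uploading {data_gb} GB takes ~{:.1} years",
        data_gb / effective_gb_per_month / 12.0
    );
    // And in practice the cap is shared with all other household traffic,
    // so the real figure is worse.
}
```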

I suspect this restriction alone would make farming an unequal contest, and it greatly diminishes the potential of SAFE on this continent.

This is why I have previously questioned the peering relationships that certain ISPs have, and whether those relationships could yield uncounted bandwidth when traffic is between physical SAFE nodes within these relationships.

1 Like