Storage proceeding

  • That doesn’t destroy your data
  • According to @Seneca’s solution, all data integrity checks would pass just fine (since the data hasn’t been destroyed, and he said the checking is done network-side, i.e. close to vaults) :slight_smile:
  • They need to coordinate world-wide (next to impossible - you can’t even tell if someone actually is a farmer!)
  • Assuming co-conspirators aren’t lying and are indeed SAFE farmers, they need to be in this for some sort of gain. Maybe they’re short SAFE coin, but it is equally likely that some of them may go long to create a short squeeze on those who really want to go ahead with their plan
  • Long bet, too risky. I did a quick calculation - you’d need about US$1 million to pull this off, and then another question is what happens if it works better than expected and everyone just leaves and doesn’t come back (you’d lose earned SAFE, your newly acquired SAFE would lose all value, etc.).

Yes, there are always risks, but as I’ve said many times here: it’s long-tail stuff that we can easily tell is extremely unlikely, yet people like to discuss it (it’s exciting, etc.). On the other hand, discussions about critical issues (like the daily economics of the network) appear once a month (boring stuff).

1 Like

Ok, some figures from the horse’s mouth (sorry David!):

“So when we say 4 copies it can be 2-6 and 16 off line. It’s just easier to say 4”
ref

and

…based on older Kademlia networks like Gnutella/eMule where 8/20
replicas was enough, but when all connections were very light, i.e. not
checked for many hours/weeks/months between churn events. As we are
milliseconds between churn events then the chance of 4 nodes going down
in the average churn event seems unrealistic. This is good, but
potentially too good, we may not need 4 copies (Kademlia republish is 24
hours, refresh == 60 mins). 4 copies may be way too much IMHO.

The bottom line for us is that we lose no data; beyond that it’s just more caching really and not necessary.
ref
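The churn argument above can be made concrete with a back-of-envelope sketch: if churn events are milliseconds apart, the chance that a given holder drops within one event is tiny, and losing a chunk requires *all* replicas to drop in the same window. The per-event failure probability below is a made-up illustrative number, not a MaidSafe figure, and the independence assumption is mine.

```python
# Illustrative back-of-envelope: probability that every replica of a chunk
# vanishes in a single churn event, assuming node failures are independent.
# The value of p is an assumption for illustration, not a measured figure.

def p_all_lost(replicas: int, p_node_fail: float) -> float:
    """Probability that all `replicas` copies fail in the same churn window."""
    return p_node_fail ** replicas

# With millisecond churn intervals, per-window failure odds are very small.
p = 0.001  # assumed chance one holder drops inside a single churn window
for r in (2, 4, 6):
    print(f"{r} replicas -> {p_all_lost(r, p):.1e}")
```

Even with a generous 1-in-1000 per-node failure chance per window, four replicas put simultaneous loss around 10^-12 per event, which is why the quote suggests 4 copies may already be overkill.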

2 Likes

Were those quotes from before or after the decision to go with non-persistent vaults?

Doesn’t non-persistent vaults eliminate “offline copies” ?

I’m not sure but you can check the dates of the posts.

No.

2 Likes

Both quotes were from before the decision to go with non-persistent vaults. There are no offline copies anymore and there are 4-6 online copies, if I understand the following document correctly:
SafeCoin Farming Rate:

A DataManager is a specialisation of a NaeManager. It has the responsibility of storing data and ensuring its integrity. Each DM group will monitor 2 copies of each ImmutableData type. There is a primary DM group, a backup DM group and a sacrificial DM group for the three types created for every ImmutableData packet.

That makes 3 groups monitoring 2 copies each, but we can’t be sure that the sacrificial copies are always stored:

The third data type ImmutableDataSacrificial which is the network measuring stick. These types are only attempted to be stored, whereas other types MUST be stored. In the case where other types cannot be stored then copies of Sacrificial data will be deleted from the PMID nodes
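The copy-count arithmetic in those two quotes can be sketched as follows. The primary and backup copies must be stored, while the sacrificial copies may be deleted under storage pressure, which is where the 4-6 range comes from. The function and its names are just an aid to the reasoning, not MaidSafe code, and the per-group count of 2 follows the quoted document.

```python
# Illustrative sketch of the copy counting described in the quoted document:
# primary and backup DM groups each monitor 2 mandatory copies, while the
# 2 sacrificial copies are only "attempted" and may be deleted. These names
# and the helper itself are mine, for illustration only.

def online_copies(sacrificial_stored: int) -> int:
    """Total online copies, given how many sacrificial copies survive (0-2)."""
    if not 0 <= sacrificial_stored <= 2:
        raise ValueError("a sacrificial DM group holds at most 2 copies")
    primary = 2      # MUST be stored
    backup = 2       # MUST be stored
    return primary + backup + sacrificial_stored

# Ranges from 4 (all sacrificial copies deleted) to 6 (all stored).
print(online_copies(0), online_copies(2))
```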

1 Like

I don’t pretend to know this, but I think you are reading this incorrectly. I don’t think this means there are - or might be - no offline copies. To resolve it we need someone from MaidSafe though.

Backup copies are not yet implemented, so it is possible that these copies will be saved offline. This needs confirmation from MaidSafe.

As I understand it, there will be offline copies (whenever a vault goes offline), but under normal circumstances those aren’t acknowledged and accepted by the network again when the vault comes back online. David has been talking about network recovery after a huge outage, which can be detected by the average density of addresses being much lower than before. Under such conditions vaults reconnecting to the network shouldn’t be wiped immediately, but first checked for any “lost” data.

1 Like

Used that quote to revive the discussion on storing error-correction data.

If the reference (see below) is still valid, then there is a measure of this in place while a chunk is being transmitted to a vault, but nothing is stored; it only ensures the chunk reaches the vault quickly and intact. This is not storing extra error-correcting info in the vault, just error detection/correction per chunk transmission.

Rabin’s Information Dispersal Algorithm is not implemented and possibly never will be.
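To illustrate the distinction being drawn: because SAFE chunks are content-addressed (a chunk’s name is derived from a hash of its content), a receiving vault can detect a corrupted transfer simply by re-hashing, with no stored error-correcting data. The sketch below is mine, not MaidSafe’s API; the choice of SHA-512 and the helper names are assumptions for illustration.

```python
# Illustrative sketch of per-transfer integrity checking via content
# addressing: re-hash the received bytes and compare with the chunk's name.
# This is error *detection* on the wire, not stored error correction.
# Hash choice and function names are assumptions, not MaidSafe's API.
import hashlib

def chunk_name(content: bytes) -> str:
    """Derive a chunk's name from a hash of its content (content addressing)."""
    return hashlib.sha512(content).hexdigest()

def verify_received(expected_name: str, received: bytes) -> bool:
    """Re-hash on receipt; a mismatch means the transfer must be retried."""
    return chunk_name(received) == expected_name

chunk = b"some immutable chunk"
name = chunk_name(chunk)
print(verify_received(name, chunk))          # intact transfer
print(verify_received(name, chunk + b"!"))   # corrupted in transit
```

A mismatch only triggers a retransmission; nothing beyond the chunk itself ever needs to be stored in the vault, which matches the post above.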

1 Like