Corrupted Data Map?

If the data map is locally hosted in the client, what would happen if the map gets corrupted? Is there any safeguard against this?

It is not locally hosted; it is stored on the network after being checked. There is also a backup copy, so if the primary is corrupted the login process will select the backup login packet.
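A minimal sketch of that fallback, assuming the packet is stored under a primary and a backup identifier with a checksum alongside it (the function and parameter names here are illustrative, not the network's actual API):

```python
import hashlib

def fetch_login_packet(network, primary_id, backup_id):
    """Try the primary login packet first; fall back to the backup copy.

    `network` is any mapping of id -> (payload, checksum); the ids and the
    SHA-256 checksum scheme are assumptions made for this sketch.
    """
    for packet_id in (primary_id, backup_id):
        record = network.get(packet_id)
        if record is None:
            continue
        payload, checksum = record
        # A corrupted packet fails the check, so the loop moves on to the backup.
        if hashlib.sha256(payload).hexdigest() == checksum:
            return payload
    raise RuntimeError("both login packets are missing or corrupted")
```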

Nowadays a version of every old session is maintained, so you can ‘go back in time’ to view old sessions and see the history of your file system (like Apple's Time Machine).
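As a rough illustration of keeping old sessions reachable (the `VersionedDataMap` class below is invented for this sketch and is not the network's real data structure):

```python
from datetime import datetime, timezone

class VersionedDataMap:
    """Keep every stored version of a data map so earlier sessions stay reachable."""

    def __init__(self):
        self._versions = []  # (timestamp, data_map) tuples, oldest first

    def store(self, data_map):
        # Each save appends a new snapshot instead of overwriting the old one.
        self._versions.append((datetime.now(timezone.utc), data_map))

    def latest(self):
        return self._versions[-1][1]

    def history(self):
        # 'Go back in time': every earlier snapshot is still available.
        return list(self._versions)
```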


Awesome!
These are questions that popped up while reading the SystemDocs.
I have one more question:
What determines the number of chunks? The Docs mention that it will be a minimum of 3, so what determines the division into 4, 5 or n chunks?

Then it mentions that, after XORing the chunks with each other's hashes, every chunk is further broken down into 32 pieces.
So if there were originally 3 encrypted chunks, will there be 32 parts of chunk1, 32 parts of chunk2 and 32 parts of chunk3?
Why 32 microchunks?

I am sorry if this was already discussed.
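For intuition, here is a toy sketch of the split-and-XOR idea as I read it: the file is cut into at least 3 chunks and each chunk is masked with material derived from the other chunks' hashes (the chunk sizing, SHA-256 and the keystream construction are all assumptions, not the real self-encryption algorithm):

```python
import hashlib

MIN_CHUNKS = 3  # the documented minimum

def split_into_chunks(data: bytes, max_chunk_size: int = 1024 * 1024) -> list[bytes]:
    """Split data into at least MIN_CHUNKS pieces; beyond that the count simply
    follows from the file size and a maximum chunk size (assumed here)."""
    n = max(MIN_CHUNKS, -(-len(data) // max_chunk_size))  # ceiling division
    size = max(1, -(-len(data) // n))
    return [data[i:i + size] for i in range(0, len(data), size)]

def xor_obfuscate(chunks: list[bytes]) -> list[bytes]:
    """Toy stand-in for the XOR step: mask each chunk with a keystream derived
    from the *other* chunks' hashes, so no single chunk is readable on its own."""
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    masked = []
    for i, chunk in enumerate(chunks):
        seed = b"".join(h for j, h in enumerate(hashes) if j != i)
        pad = b""
        counter = 0
        while len(pad) < len(chunk):
            pad += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
            counter += 1
        masked.append(bytes(a ^ b for a, b in zip(chunk, pad)))
    return masked

chunks = split_into_chunks(b"some small file contents")
print(len(chunks), [len(c) for c in xor_obfuscate(chunks)])
```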

The number of chunks is based on 100% availability: there will likely be a minimum of 6 copies, with many more in caches and also on offline nodes. The 6 copies will be spread over 3 DataManager groups and by default geographically dispersed (evenly). A change from 6 copies to 4 will increase the network Farming Rate, while a change from 4 copies back to 6 will decrease it.
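To make the "evenly dispersed" part concrete, here is a small sketch that spreads 6 copies round-robin over 3 groups (group names and the round-robin rule are illustrative; in the network itself the DataManager groups are determined by XOR closeness to the chunk's name):

```python
import itertools

def place_copies(chunk_id: str, groups: list[str], copies: int = 6) -> list[tuple[str, str]]:
    """Assign `copies` replicas of one chunk evenly across DataManager groups."""
    placement = []
    for i, group in zip(range(copies), itertools.cycle(groups)):
        placement.append((group, f"{chunk_id}-copy{i}"))
    return placement

# 6 copies over 3 groups -> 2 copies per group, dispersed by group.
print(place_copies("chunk-abc", ["group-eu", "group-us", "group-asia"]))
```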

The microchunks are a scatter-gather mechanism, and we are not even sure it is still required; testing will show for sure. It is a transport efficiency for sure, though. As chunks pass through groups of 32 nodes, it is better to pass 1/32 of a chunk to each node rather than the whole chunk 32 times.
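A toy sketch of that scatter-gather step: each of the 32 nodes in the group relays only a 1/32 slice of the chunk, and the slices are joined again afterwards (the function names and fixed group size here are just for illustration):

```python
def scatter(chunk: bytes, nodes: int = 32) -> list[bytes]:
    """Cut one chunk into `nodes` slices so each node forwards 1/32 of the data
    instead of the whole chunk."""
    size = -(-len(chunk) // nodes)  # ceiling division
    return [chunk[i * size:(i + 1) * size] for i in range(nodes)]

def gather(slices: list[bytes]) -> bytes:
    """Reassemble the original chunk from the ordered slices."""
    return b"".join(slices)

chunk = bytes(range(256)) * 4          # a 1 KiB example chunk
assert gather(scatter(chunk)) == chunk  # round-trips back to the original
```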
