I watched the video (interview by Maria) and read the [wiki].
I don’t understand how other nodes take over the ‘server’ role of providing stored data.
Let’s say I have a ten-yottabyte file named X and upload it to the SAFE Network.
As I understand it, the file gets chunked into at least 3 pieces, and 4 copies of each chunk (or more, if extra redundancy is chosen) are encrypted and sent to 8 nodes.
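To make those numbers concrete, here is a toy sketch of that layout in Python. This is not the real self-encryption or routing code; the chunk count, replica count, and XOR-closest placement are just the assumptions from my example above.

```python
import hashlib

CHUNK_COUNT_MIN = 3  # a file splits into at least 3 chunks
REPLICAS = 4         # assumed copies per chunk; more if extra redundancy is chosen

def chunk(data: bytes, n: int = CHUNK_COUNT_MIN) -> list[bytes]:
    """Split data into n roughly equal pieces (stand-in for self-encryption)."""
    size = -(-len(data) // n)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def place(chunks: list[bytes], node_ids: list[str]) -> dict[str, list[str]]:
    """Pick REPLICAS nodes per chunk: the ids XOR-closest to the chunk's hash."""
    placement = {}
    for c in chunks:
        addr = hashlib.sha3_256(c).hexdigest()
        ranked = sorted(node_ids, key=lambda nid: int(nid, 16) ^ int(addr, 16))
        placement[addr] = ranked[:REPLICAS]
    return placement

# 8 hypothetical nodes with hash-like ids, as in the example above
nodes = [hashlib.sha3_256(f"node{i}".encode()).hexdigest() for i in range(8)]
print(place(chunk(bytes(range(30))), nodes))
```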
What if some nodes go offline? The network then issues a replacement copy of the chunk to another node.
I do understand that when a node goes offline, its copy of the chunk is treated as deleted. But isn’t this a waste of energy and bandwidth? Intuitively, it seems better not to discard chunks held by nodes with a historical 99.9% uptime. If all 4 replica holders have 99.9% uptime and fail independently, all copies are offline at once with probability 0.001^4 = 10^-12, which works out to far less than 1 ms of downtime per day for the client (even though any single such node is down about 86 s per day).
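That back-of-the-envelope claim is easy to check, assuming outages of the replica holders are independent (a strong assumption):

```python
UPTIME = 0.999
REPLICAS = 4
SECONDS_PER_DAY = 86_400

p_all_down = (1 - UPTIME) ** REPLICAS            # every copy offline at once
print(f"P(all copies down) = {p_all_down:.0e}")  # 1e-12
print(f"Expected unavailability ~ {p_all_down * SECONDS_PER_DAY * 1e9:.0f} ns/day")
```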
Has any math been done on the economics of rewards, node uptime, and availability? For instance, how much time needs to pass before replacement copies are issued to other nodes? Is there an algorithm that decides where and when which chunk (by size) is stored?
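To make the timing question concrete, here is one hypothetical policy (not, as far as I know, what the network actually does): tolerate a missing replica for a grace period before paying for a new copy, since most offline nodes come back quickly.

```python
from dataclasses import dataclass

GRACE_SECONDS = 600  # assumed: how long a missing replica is tolerated

@dataclass
class Replica:
    node_id: str
    last_seen: float  # unix timestamp of the node's last heartbeat

def needs_new_copy(replicas: list[Replica], now: float, target: int = 4) -> bool:
    """Re-replicate only if live copies stay below target beyond the grace period."""
    live = sum(1 for r in replicas if now - r.last_seen < GRACE_SECONDS)
    return live < target
```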
I can imagine that nodes whose uptimes are negatively correlated across epoch time windows could secure the network more economically, without needlessly copying yottabytes of data across it.
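A toy comparison of what I mean, with made-up offline windows: two replica sets with identical per-node uptime, where one set’s outages overlap and the other’s are staggered.

```python
HOURS = 24

def availability(offline_windows: list[set[int]]) -> float:
    """Fraction of the day in which at least one replica is online."""
    all_down = set.intersection(*offline_windows)  # hours when every copy is offline
    return 1 - len(all_down) / HOURS

correlated     = [{2, 3}, {2, 3}, {2, 3}, {2, 3}]      # all replicas sleep 02:00-04:00
anticorrelated = [{2, 3}, {8, 9}, {14, 15}, {20, 21}]  # same uptime, staggered outages

print(availability(correlated))     # 0.917 -- the chunk vanishes 2 h/day
print(availability(anticorrelated)) # 1.0   -- some copy is always reachable
```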