Qty of files on the network

I read in another post that there will be 8 copies of a file on the network. I just have a couple of questions:

  1. Does the network know if a fragment of one of those copies goes offline, and does it then create another copy to ensure there are always 8 full copies?
  2. If that farm comes back online, is a copy of the fragment deleted to maintain the standard quantity of 8 full copies?
  3. If a file is popular, are the fragment locations periodically changed to prevent excessive bandwidth usage on certain farms?

Yes, quite quickly.
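
For the curious, here's a rough sketch of the kind of bookkeeping that implies: track who holds each chunk, top the count back up to 8 when a holder drops off, and treat surplus copies from returning holders as redundant. This is illustrative only; the names, types and structure are invented for the example and are not taken from the actual node code.

```rust
use std::collections::HashSet;

/// How many full copies of each chunk the network tries to keep (per the thread).
const TARGET_COPIES: usize = 8;

/// Bookkeeping for one chunk: which nodes currently hold a copy.
struct ChunkRecord {
    holders: HashSet<String>, // node ids, purely illustrative
}

impl ChunkRecord {
    /// A holder went offline: drop it and replicate to other candidates
    /// until we are back at the target number of copies.
    fn on_holder_lost(&mut self, lost: &str, candidates: &[String]) {
        self.holders.remove(lost);
        for candidate in candidates {
            if self.holders.len() >= TARGET_COPIES {
                break;
            }
            self.holders.insert(candidate.clone());
        }
    }

    /// An old holder came back: only keep its copy if we are below target,
    /// otherwise the returning copy is redundant and can be dropped.
    fn on_holder_returned(&mut self, returned: &str) -> bool {
        if self.holders.len() >= TARGET_COPIES {
            false
        } else {
            self.holders.insert(returned.to_string())
        }
    }
}

fn main() {
    let mut record = ChunkRecord {
        holders: (1..=8).map(|i| format!("node_{i}")).collect(),
    };
    let spares = vec!["node_9".to_string(), "node_10".to_string()];

    record.on_holder_lost("node_3", &spares); // topped back up to 8 copies
    assert_eq!(record.holders.len(), 8);

    let kept = record.on_holder_returned("node_3"); // already at 8, so not kept
    assert!(!kept);
}
```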

With node age, when a node comes back online it is relocated to another part of the network at random, for security. It also loses half its age.
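
Just to illustrate that rule (a made-up sketch, not the real node code): the returning node gets a new, network-chosen section, and its age is cut in half, so churning is never free.

```rust
/// Purely illustrative types; the field names and the example section
/// prefixes are invented for this sketch.
#[derive(Debug)]
struct NodeInfo {
    age: u32,
    section_prefix: String, // which part of the XOR address space it serves
}

/// A node that drops off and rejoins is relocated to a section chosen by the
/// network (shown here as a parameter) and loses half its age.
fn rejoin(mut node: NodeInfo, new_section: &str) -> NodeInfo {
    node.age /= 2;
    node.section_prefix = new_section.to_string();
    node
}

fn main() {
    let node = NodeInfo { age: 16, section_prefix: "0110".into() };
    let relocated = rejoin(node, "1011"); // destination picked at random by the network
    assert_eq!(relocated.age, 8);
    println!("{relocated:?}");
}
```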

The network caches chunks along the route they were requested on, using what's called an LRU (Least Recently Used) cache, so a chunk gets forgotten if it isn't read for a while but stays very current if it's popular. So popular chunks come from along the path and out of RAM (not disk).
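
In case a concrete picture helps, here's a minimal LRU cache sketch of the idea: popular chunks keep getting bumped to the front and stay in memory, while chunks that aren't requested for a while fall off the back. The capacity, key/value types and names are all placeholders for the example, not the real caching code.

```rust
use std::collections::{HashMap, VecDeque};

/// Minimal Least Recently Used (LRU) cache. Keys stand in for chunk
/// addresses and values for chunk bytes; everything here is illustrative.
struct LruCache<K, V> {
    capacity: usize,
    map: HashMap<K, V>,
    order: VecDeque<K>, // front = most recently used, back = least recently used
}

impl<K: std::hash::Hash + Eq + Clone, V> LruCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    /// A hit bumps the chunk to the "most recent" end, keeping it hot.
    fn get(&mut self, key: &K) -> Option<&V> {
        if self.map.contains_key(key) {
            self.touch(key);
            self.map.get(key)
        } else {
            None
        }
    }

    /// Insert a chunk, evicting the least recently used one if the cache is full.
    fn put(&mut self, key: K, value: V) {
        if self.map.contains_key(&key) {
            self.touch(&key);
        } else {
            if self.map.len() == self.capacity {
                if let Some(lru) = self.order.pop_back() {
                    self.map.remove(&lru);
                }
            }
            self.order.push_front(key.clone());
        }
        self.map.insert(key, value);
    }

    // Move an existing key to the front (most recently used) position.
    fn touch(&mut self, key: &K) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).expect("position is valid");
            self.order.push_front(k);
        }
    }
}

fn main() {
    // Popular chunks stay in RAM; rarely requested ones age out.
    let mut cache: LruCache<String, Vec<u8>> = LruCache::new(2);
    cache.put("chunk_a".into(), vec![1, 2, 3]);
    cache.put("chunk_b".into(), vec![4, 5, 6]);
    cache.get(&"chunk_a".to_string());          // chunk_a is now most recent
    cache.put("chunk_c".into(), vec![7, 8, 9]); // evicts chunk_b, the LRU entry
    assert!(cache.get(&"chunk_b".to_string()).is_none());
    assert!(cache.get(&"chunk_a".to_string()).is_some());
}
```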


Many thanks both, that settled my curiosity!


I wonder if there’s an easy route to providing assurance to users with some validation of that reality. Seeing is believing.

Yes, it's not simple; a running, efficient network is the best we can show. It's like asking Tesla/Edison to show electricity, really. Best to show what it does, I think. Trying to show it in some graph etc. can be very bad for security. OK in test mode perhaps, but …

Not sure how to show that apart from showing the outcome? Logs on the testnets do show this happening, so there is “evidence”, but nothing clear for users to grasp easily AFAIK.


What are you referring to precisely? Does your concern include visualization of the SAFE network with its sections and nodes, or displaying estimated statistics like size stored, number of MDs, …?

And why is it bad? Remember that security through obscurity is in reality bad practice.


I 100% agree, but providing information that isn't required is also a security vulnerability in the making. If it went as far as IPs then that's obvious. Logging to visualisers means it's centralised, etc. So there are lots of holes to fall down that we don't need to. However, in testnets etc. it's probably OK. The work you have done trying to find stuff out is good though, as we need to see what we can figure out. However, the logging will have to go through security audits as well. All good though.
