MaidSafe chops files into blocks and stores each block on a minimum of 4 nodes.
Since a file can contain thousands of blocks, depending on the file and block sizes, does that mean that in theory thousands of nodes can store blocks of just one particular (large) file?
I can imagine that while the network starts out with a limited number of nodes, a lot of blocks will be stored on the same nodes, meaning that, for instance, 10% of a file could end up on one machine, and that percentage decreases as the number of nodes increases. Correct?
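To sanity-check that intuition, here is a quick simulation I put together (my own sketch, with random placement standing in for whatever routing algorithm MaidSafe actually uses):

```python
import random

def max_fraction_on_one_node(num_blocks, num_nodes, replicas=4, seed=0):
    """Place each of a block's replicas on a distinct random node and
    return the largest fraction of the file's blocks held by any
    single node."""
    rng = random.Random(seed)
    held = [set() for _ in range(num_nodes)]
    for block in range(num_blocks):
        for node in rng.sample(range(num_nodes), min(replicas, num_nodes)):
            held[node].add(block)
    return max(len(s) for s in held) / num_blocks

# Small network: one node ends up holding a large share of the file
# (roughly replicas / num_nodes, so ~20% here).
small = max_fraction_on_one_node(1000, 20)

# Larger network: the per-node share shrinks towards zero.
large = max_fraction_on_one_node(1000, 2000)
```

With 20 nodes one machine holds on the order of a fifth of the file; with 2000 nodes no machine holds more than a tiny sliver, which matches the intuition above.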
So if the network DOES grow to hundreds or thousands of nodes, that means that unique blocks are stored on an increasing number of unique nodes.
My question is: will there not be a problem with the number of TCP connections that need to be opened to fetch all these unique blocks from all these machines? How does this work? Will my machine keep the connections open? Does the software open and close a session rapidly for each block?
None of the above scenarios make sense to me, as they will either hit the OS limit on open connections (`ulimit -n`) or be very slow and/or expensive in terms of RAM and CPU.
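The only workable approach I can think of is some kind of bounded connection pooling, which I'd sketch like this (pure illustration in Python with a dummy fetch, not MaidSafe's actual code):

```python
import asyncio

MAX_CONNECTIONS = 64  # cap well below typical OS file-descriptor limits

async def fetch_block(node, block_id):
    # Hypothetical stand-in for a real network fetch; just simulates
    # a short round-trip and returns what was "fetched".
    await asyncio.sleep(0.001)
    return (node, block_id)

async def fetch_all(blocks):
    # A semaphore bounds how many fetches run at once, so thousands
    # of blocks never translate into thousands of open sockets.
    sem = asyncio.Semaphore(MAX_CONNECTIONS)

    async def bounded(node, block_id):
        async with sem:
            return await fetch_block(node, block_id)

    return await asyncio.gather(*(bounded(n, b) for n, b in blocks))

# 2000 blocks spread over 500 (hypothetical) nodes, fetched with at
# most 64 in flight at a time.
blocks = [(f"node-{i % 500}", i) for i in range(2000)]
results = asyncio.run(fetch_all(blocks))
```

Is something along these lines what the client actually does, or does it work completely differently?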
I am probably making a wrong assumption somewhere, but maybe someone can explain.