This is not what I meant.
The node will have to catch up by receiving data already present in the group, which could take a while if there's a lot of data. So, would it prioritise catching up, or would it slice in some new chunks as well, and if so, at what rate?
This is not what I meant.
The way a new node gets the chunks it is responsible for doesn't matter, because it must get all of them anyway (the old ones, plus the new ones added while it was fetching the old ones). I would say the most practical method should be chosen.
I think the current implementation is good: a new node receives from its neighbours the ids of the existing chunks it must store. The node then asks them for the complete data, id by id rather than all at once. This process is slightly parallelised (one request per data holder), and the resulting data is returned asynchronously. This means the node can receive new data in parallel as soon as it arrives, which is what we want.
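The scheme above can be sketched roughly like this. All names here are hypothetical stand-ins, not the actual implementation: one in-flight request per data holder, ids fetched one by one, results stored as they arrive asynchronously.

```python
import asyncio

async def fetch_chunk(holder: str, chunk_id: str) -> bytes:
    # Stand-in for a network request to one data holder.
    await asyncio.sleep(0)  # yield to the event loop, as a real request would
    return f"data:{chunk_id}".encode()

async def catch_up(chunks_by_holder: dict[str, list[str]]) -> dict[str, bytes]:
    """Fetch the existing chunks a new node must store, id by id."""
    store: dict[str, bytes] = {}

    async def drain(holder: str, ids: list[str]) -> None:
        # One request at a time per holder, so no holder is flooded and
        # bandwidth stays free for chunks added while catching up.
        for chunk_id in ids:
            store[chunk_id] = await fetch_chunk(holder, chunk_id)

    # One concurrent task per holder: the "slight parallelisation".
    await asyncio.gather(*(drain(h, ids) for h, ids in chunks_by_holder.items()))
    return store

store = asyncio.run(catch_up({"holder-a": ["c1", "c2"], "holder-b": ["c3"]}))
print(sorted(store))  # → ['c1', 'c2', 'c3']
```

Because each holder is drained in its own task, newly added chunks can be handled by the same event loop without waiting for the whole backlog to finish.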
It's the Pareto principle: 80% don't do very much, while the other 20% floor the accelerator. Nobody understands why this is the case.
I think you make an important point. In the underlying structure of the network, averages matter very much, but user participation will (probably) not be average.
The Pareto law is also an assumption that may or may not hold true, but it seems (to me, anyhow) more likely to be a close model for participation than the average law.
Gotta be one or the other. If it's valuable to someone, then it's valuable, right?!
If you mean it for individual vaults, which hold a set fraction of the network data, then it’s a tentative “yes.”
Tentative, because it matters whether two vaults belong to the same farmer. Vaults owned by the same farmer are more likely than otherwise to have a very fast network connection between them. If there are extremely large sets of vaults with LAN-speed connections, then the average bandwidth between vaults is meaningless: most traffic will occur at LAN speed but some at a much lower speed, so there may be a bottleneck well below the average speed.
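A toy calculation shows why the average misleads here. The split and the speeds below are made-up numbers purely for illustration: when most links are fast and a few are slow, the arithmetic mean of the bandwidths is high, but the effective throughput (a harmonic-mean-style average, since slow links dominate transfer time) is far lower.

```python
# Hypothetical: 95% of vault-to-vault links run at LAN speed, 5% over slow WAN.
lan_mbps, wan_mbps = 1000.0, 10.0
lan_frac = 0.95

# Arithmetic mean bandwidth: looks healthy.
mean_bw = lan_frac * lan_mbps + (1 - lan_frac) * wan_mbps

# Effective throughput if traffic crosses links in these proportions:
# transfer time per MB averages as 1/bandwidth, so the slow links dominate.
effective_bw = 1.0 / (lan_frac / lan_mbps + (1 - lan_frac) / wan_mbps)

print(mean_bw, round(effective_bw, 1))  # → 950.5 168.1
```

So the "average" link is ~950 Mbps, yet the network behaves more like ~168 Mbps: the bottleneck sits well below the average, exactly as argued above.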
Yes, it's a likely assumption, because everything similar follows the Pareto law. However:
- It's a model with an infinite upper limit on speed and storage, which is unrealistic.
- It's probably incorrect for small values; however, errors there have no significance.
- It uses a specific parameter for the tail exponent; the real value is probably different.
Still, it's a good first approximation for moving the discussion onto more realistic ground.
I love the idea! I hope it gets better browser and standard support. Wake up, FF and Google!!!
Could someone tell me: if even half of the above is true, what does that mean for the Safecoin price? I have nothing to hide; I am here for a profit.
That's a discussion for the topic we limit price and speculation to; please ask specific questions there. I think yours was a little too general for an answer.