Network speed as the network grows

Awesome to see the first version of the test network live :D. One thing I wondered about is speed: how is it possible that the speed of the network grows as the network grows? If you have n vaults, you need log(n) hops for the data to travel from vaults that are randomly distributed over the world to the client; with an average ping time of 500 ms, that would be 500*log(n) ms. With a billion vaults, wouldn't that amount to about 10 s?
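
For reference, a minimal sketch of that back-of-envelope calculation, assuming hops ≈ log2(n) and a full 500 ms ping per hop (the exact figure depends on which log base and per-hop latency you assume):

```python
import math

# If every one of ~log2(n) hops costs a full round trip,
# total latency is roughly ping * log2(n).
def naive_fetch_latency_ms(num_vaults, avg_ping_ms=500):
    hops = math.ceil(math.log2(num_vaults))     # ~30 hops for a billion vaults
    return hops * avg_ping_ms

print(naive_fetch_latency_ms(1_000_000_000))    # 15000 ms, i.e. ~15 s worst case
```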

4 Likes

30 HOPS at max with 1 billion users, as far as I remember from this topic. But if 75% of all data is like what we have on BitTorrent and Popcorn Time now, caching would kick in and maybe that data is only 4 HOPS away on average. Let’s assume we want to request 2 GB from the network. That’s 2048 chunks. So the client requests data from the first 40 chunks; some chunks will come in after 2 seconds, others will take maybe 5 or 6 seconds, but a client can’t download all that stuff at once. So the slower chunks will be cached by our close nodes while they wait for our node to finish downloading the fast chunks. That’s how I see it.
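
A quick sketch of the chunk arithmetic in that example, assuming 1 MB chunks (which is how 2 GB works out to 2048 chunks); the 40-chunk window is just the figure used above, not a protocol constant:

```python
# Rough arithmetic behind the 2 GB example.
file_size_mb = 2 * 1024                        # 2 GB
chunk_size_mb = 1                              # assumed chunk size
total_chunks = file_size_mb // chunk_size_mb   # 2048 chunks
window = 40                                    # chunks requested in parallel
batches = -(-total_chunks // window)           # ceiling division -> ~52 request rounds
print(total_chunks, batches)
```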

4 Likes

Was just thinking, the network uses:

tag_type (64 bits)
identifier (512 bits)
owner_keys (list of owners)
data (up to 100 kilobytes)

So let’s say we get up to 1 million users (20 HOPS at max). Data becomes slower and the Devs really don’t like it. What could they do? Data on SAFE uses a tag_type and its own address space of 512 bits. So instead of having 1 tag_type for data, we create another one, or maybe even 2 new ones, and use 3 tag_types to store data at random. So now we have 3 different types of data with 3 times 512-bit addresses. This would bring the number of HOPS down quite a bit, I think actually to only 5 if I get this right…???
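
Purely as an illustration of that idea, a sketch of spreading data over several tag_types (the tag_type values are invented for this example, and deriving the choice from the chunk hash rather than picking truly at random keeps lookups deterministic):

```python
# Hypothetical sketch: spread immutable data over several data tag_types.
DATA_TAG_TYPES = [11, 12, 13]        # made-up tag_type values

def tag_type_for(chunk_hash: bytes) -> int:
    # Derive the choice from the chunk hash so every client picks the same
    # tag_type for the same chunk.
    return DATA_TAG_TYPES[chunk_hash[0] % len(DATA_TAG_TYPES)]
```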

3 Likes

I keep thinking about this one, and especially about this calculation:

Some number
10.000 Users → 14 Hops Maximum
100.000 Users → 17 Hops Maximum
1.000.000 Users → 20 Hops Maximum
10.000.000 Users → 24 Hops Maximum
100.000.000 Users → 27 Hops Maximum
1.000.000.000 Users → 30 Hops Maximum
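
As a quick check, the "Hops Maximum" figures above line up with ceil(log2(users)):

```python
import math

# The maximum-hops column above is ceil(log2(users)).
for users in (10_000, 100_000, 1_000_000, 10_000_000, 100_000_000, 1_000_000_000):
    print(f"{users:>13,} users -> {math.ceil(math.log2(users))} hops max")
```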

What about the average number of Vaults per user? Isn’t that a variable that needs to be in these calculations as well? Let’s say the average node has 3 Vaults as a standard setting for a PC. The maximum number of HOPS would drop to 7.5 for 1 billion users, wouldn’t it?

I doubt it. If you were extremely lucky, some of the hops would be between vaults on the same PC, but with it being XOR space that is doubtful. So the hops would have the same lag for the most part; the number of hops would not change.
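
A quick way to see this: vault addresses are effectively random points in a 512-bit XOR space, so two vaults on the same PC are no more likely to be neighbours than any other random pair (illustrative sketch):

```python
import secrets

# Three vaults on one PC still get independent, effectively random addresses,
# so the XOR distance between any two of them is typically astronomically large.
vaults_on_one_pc = [secrets.randbits(512) for _ in range(3)]
distances = [a ^ b for i, a in enumerate(vaults_on_one_pc)
                   for b in vaults_on_one_pc[i + 1:]]
print([d.bit_length() for d in distances])   # almost always close to 512
```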

In my opinion, this will end up being one of the most important factors which determines the success or failure of the SAFE network. Latency is very important from a quality of user experience perspective. Aggressive caching and the immutable nature of the data will go a long way, as will properly designed applications. However, low latency of the underlying protocol will be critical to wide adoption.

The primary contributor to the problem is the random nature of the XOR address space in the underlying DHT network. With the existing design, choosing recipients based purely on XOR location, the next hop is just as likely to be on the other side of the world as it is to be next door. The nature and distribution of global networks means that creating a bias towards communication between nodes with low latency can significantly reduce overall latency and bandwidth consumption, by a factor of 5x to 10x. This could be as simple as reserving a portion of the routing table for nodes nearby in latency space but widely distributed in XOR space.
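
A minimal sketch of what that could look like, assuming a 32-entry routing table with an arbitrarily chosen 8 slots reserved for low-latency peers (none of these numbers or names come from the actual routing code):

```python
ROUTING_TABLE_SIZE = 32   # assumed table size
LATENCY_SLOTS = 8         # assumed slots reserved for nearby-in-latency peers

def build_routing_table(xor_sorted_peers, latency_sorted_peers):
    """xor_sorted_peers: peers ordered by XOR closeness to our address.
       latency_sorted_peers: peers ordered by measured round-trip time."""
    xor_part = xor_sorted_peers[:ROUTING_TABLE_SIZE - LATENCY_SLOTS]
    # Fill the reserved slots with the lowest-latency peers not already included.
    latency_part = [p for p in latency_sorted_peers if p not in xor_part][:LATENCY_SLOTS]
    return xor_part + latency_part
```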

I should note that Safecoin adds an additional layer of difficulty in finding solutions to this problem. The security and integrity of the network is heavily dependent on the guarantees provided by the node address assignment and the underlying routing layer. Any additional complexity added to the routing layer risks opening exploitable security holes in the upper layers of the network. Any routing changes will need to be closely vetted.

Having said that, I expect that as the SAFE network matures, the underlying routing infrastructure will naturally evolve and improve. If Bitcoin and cryptocurrencies have proven anything, it is that we still have a lot to learn about distributed networks and trust systems. There is constant innovation in the space and it is common to see clever solutions surface for seemingly impossible problems. I have hope that innovation will counterbalance the increased latency as the network itself grows.

One idea along this vein I have been exploring is Quarter-nets. In this design, the node address space is split into four equal subnets. The last two bits of your address determine your subnet. When addresses are assigned, the subnet is chosen based on the node’s latency as measured by other nodes within the network. Nodes within a subnet will be chosen to have lower latency compared to nodes in other subnets and will naturally be geographically nearby.
When DataManager groups are formed, they would naturally favor geographically diverse groups from multiple subnets. It may even be desirable to require a DataManager group be made up of one node from each subnet.
A node’s routing table would contain XOR nearby nodes in the wider network, the same as the current design. Additionally, it would reserve space for XOR nearby and XOR distant nodes in the node’s subnet. Nodes would prefer communicating within their own subnet when possible, while still maintaining the integrity of the wider network.
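
Roughly how the subnet assignment and group formation could look, as a sketch (names and structure are illustrative only, not an actual SAFE API):

```python
def subnet_of(address: int) -> int:
    # The last two bits of a node's address select one of four subnets.
    return address & 0b11

def form_data_manager_group(nodes_by_subnet):
    """nodes_by_subnet: {0..3: [addresses sorted by XOR closeness to the chunk]}.
       Taking the closest node from each subnet yields a geographically diverse group."""
    return [nodes_by_subnet[s][0] for s in range(4)]
```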

Such a design would provide the following advantages:

  • Improved geographic distribution of data replicas.
  • Increased probability that a copy of a given chunk is geographically close to the requesting node.
  • Drastically reduced average latency and latency variation on GET operations.
  • Reduced consumption of inter-continental bandwidth for the entire network.

There are a number of questions I still need to answer though, including:

  • How to ensure roughly equally sized subnets
  • What will the natural geographic distribution of the subnets look like on a global network and will they remain stable
  • Are there attack vectors a malicious actor could use to distort the geographic distribution of the subnets and to what end
  • An attacker would be able to choose the subnet of their nodes using ping latency spoofing. How would this affect Safecoin security
3 Likes

Yes, as network size grows it creates opportunities for efficiencies, because the larger number of nodes ensures security and allows tweaks such as the subnets you suggest. I’ll bet David and team have some neat ideas about this too.

1 Like

This is a bit what I was thinking: using tag_types like “Data 1”, “Data 2”, “Data 3” and “Data 4”. When the hash of a chunk is created and you upload it to your close nodes, they determine, like you said, based on the last 2 bits in which part of the network the data is stored. Let’s say the bits are 00; this means you would upload this chunk to “Data 1”. If the whole network is 1 billion users (30 HOPS at max.), it is now split into 4 sub-fields when it comes to data. Your close nodes and all the DataManagers should use this system as well. So now you request a chunk, and when it’s in the first quarter there are only 250 million addresses in that space. So you would get the chunk in a maximum of 21 steps instead of 30. But correct me if I’m wrong with this idea, because I find it quite hard to come up with a solution and be sure what the outcome will be.
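
A sketch of that lookup rule under the assumptions above (hypothetical tag names, with the last two bits of the chunk’s hash choosing the partition):

```python
import hashlib

DATA_TAGS = ["Data 1", "Data 2", "Data 3", "Data 4"]   # hypothetical names

def data_tag_for(chunk: bytes) -> str:
    digest = hashlib.sha512(chunk).digest()
    return DATA_TAGS[digest[-1] & 0b11]    # bits 00 -> "Data 1", 01 -> "Data 2", ...
```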

2 Likes

I don’t think a sharding technique like you describe will provide much improvement on a DHT based network. The way nodes are chosen for your routing table effectively splits up the network already. You should be able to get the same hop reduction as you describe by simply increasing your routing table by 4x.

The goal of what I describe is not to reduce the number of hops, but to bias the hops toward connections with lower latency.

2 Likes

If you increase your routing table, you’ll add addresses that are close to you, so I doubt this would be very helpful. It would be clearer if all addresses in your table were random. But this is where this project bugs my head: it’s XOR and closeness in XOR. So yes, the number of HOPS could be a maximum of 30 with 1 billion users, but knowing how this works in practice is quite hard IMO. We have caching, closeness and who knows what more.

1 Like

You have X nodes in the routing table, and for every bucket (i.e. part of the network) where a node exists you will have an entry in your routing table. If a bucket has fewer than K entries, then it likely contains the actual node you want to speak to in that area of the network, so you transmit the message to it directly.

However, approximately 50% of nodes will be in the furthest bucket, so you may have 8 nodes there through which to speak to 50% of the network. Ah, in fact this paper goes into a good bit of depth (consider that for us B is 4): http://arxiv.org/pdf/1307.7000.pdf It also shows some interesting effects when thought experiments lead folks to “improving” the algorithm in some ways (TL;DR: it’s not easy and mostly wrong). Hope it gives more background though; although the paper is iterative where we are recursive, it translates nicely.
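
For anyone following along, a minimal sketch of the bucket structure being described (Kademlia-style, with the bucket index given by the shared address prefix):

```python
ADDRESS_BITS = 512

def bucket_index(our_address: int, peer_address: int) -> int:
    distance = our_address ^ peer_address
    return ADDRESS_BITS - distance.bit_length()   # length of the shared prefix

# Bucket 0 covers the half of the network whose first bit differs from ours,
# bucket 1 the next quarter, and so on -- which is why roughly 50% of nodes
# fall into the furthest bucket.
```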

5 Likes

What I get is that the number of 30 HOPS is quite unlikely with 1 billion users. That would be true if you knew only 1 other node and that node also knew only 1 other node. In the case of SAFE, with 32 contacts in the routing table, we should be good at somewhere between 4 and 7 HOPS? Any data from the testnet so far on this subject?
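
A rough check of that estimate, assuming each hop can pick the best of ~32 routing-table entries (so hops scale with log base 32 instead of log base 2):

```python
import math

n = 1_000_000_000
print(math.log2(n))       # ~29.9: one useful bit of progress per hop
print(math.log(n, 32))    # ~6.0: five bits of progress per hop with 32 contacts
```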