Launch of a community safe network

No problem, I understand this is a low-priority issue.

Any comments on what seems to be a sequential upload test? The nodes will normally be making concurrent connections in the upload direction, so in my estimation there should be at least 2 or 3 concurrent uploads in the resource test.

Hi, @neo ,

We are currently investigating this.
In addition to the full logs from you, which can help us pin down a possible issue (@DGeddes may contact you for these),
We are also checking code on the routing/alpha-2 branch.
Looking at the code block at sn_routing/node.rs at alpha-2 · maidsafe/sn_routing · GitHub,
the total size for resource proofing is 250 MB, with the targeted size changing according to the section size.
The total resource-proofing procedure timeout is 300 s (defined at sn_routing/resource_prover.rs at alpha-2 · maidsafe/sn_routing · GitHub).
So, in theory, the required upload bandwidth should be less than 1 MB/s?
I don’t understand where your 30 Mbit/s requirement comes from. Could you give a more detailed explanation of your calculation? Thanks.
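
For what it's worth, the sub-1 MB/s figure follows directly from the two numbers quoted above (250 MB total, 300 s timeout). A quick sketch of the arithmetic; the constant names here are mine, not taken from the routing code:

```rust
// Back-of-the-envelope check: 250 MiB of proof data uploaded
// within the 300 s resource-proof timeout.
const RESOURCE_PROOF_TARGET_SIZE: u64 = 250 * 1024 * 1024; // bytes
const RESOURCE_PROOF_TIMEOUT_SECS: u64 = 300;

/// Minimum sustained upload rate implied by the two constants, in bytes/s.
fn min_upload_rate() -> u64 {
    RESOURCE_PROOF_TARGET_SIZE / RESOURCE_PROOF_TIMEOUT_SECS
}

fn main() {
    let rate = min_upload_rate();
    // 873813 bytes/s: just under 1 MiB/s, or roughly 7 Mbit/s
    println!("{} bytes/s (~{:.1} Mbit/s)", rate, rate as f64 * 8.0 / 1e6);
}
```

That works out to roughly 7 Mbit/s, in the same ballpark as the ~6 Mbps minimum mentioned elsewhere in the thread.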

By the way, the 30 s is just an interval for printing out info.

2 Likes

But the behavior of the resource test seems erratic. In my case, despite a symmetrical 600/600 Mbps connection dedicated exclusively to the node, the test sometimes gets stuck at a certain percentage and the node is rejected. And that happens several times before it is accepted.
It is even common for a high percentage to be completed in the first 20 seconds and then the test to seem stalled.
My feeling is that, in these cases, the problem is not in the test node but in the other side.

That is not so much a measured requirement but comes from my tested upload capability: more than 30 Mbit/s to a server in another country (I am in Australia and the server is in New Zealand). Does the upload test involve one connection, or does it have multiple upload connections for the data? Maybe it's done as a sequential series of upload blocks?

That explains why the upload percentage stops at 110 seconds remaining.

For me it's either 14% or 15% reported at the 30-second mark, then 6 to 8% more for each subsequent 30 seconds.

I'll see what logs are on the Odroid I am using for this. How many do you want?

1 Like

There is no such direct parameter. When David said that upload bandwidth needs to be at least ~6 Mbps, that doesn't mean much on its own, because it also depends on CPU power. If the test was done on a powerful CPU, then a vault with a less powerful CPU would need to compensate with higher bandwidth.

Resource proof is controlled by 2 parameters that haven't changed since Jan 2017, i.e. before test 16 (launched in April 2017):

/// The number of required leading zero bits for the resource proof
const RESOURCE_PROOF_DIFFICULTY: u8 = 0;
/// The total size of the resource proof data.
const RESOURCE_PROOF_TARGET_SIZE: usize = 250 * 1024 * 1024;

The difficulty parameter is used as-is, but the target size is modified by the current section size:

RESOURCE_PROOF_TARGET_SIZE / (self.routing_table().our_section().len() + 1)
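
Plugging a few section sizes into that expression shows how the per-peer target shrinks as the section grows. This is a sketch reproducing the quoted formula, not the actual sn_routing code:

```rust
/// Per-peer resource-proof target size, per the formula quoted above:
/// RESOURCE_PROOF_TARGET_SIZE / (section_len + 1)
const RESOURCE_PROOF_TARGET_SIZE: usize = 250 * 1024 * 1024;

fn target_size(section_len: usize) -> usize {
    RESOURCE_PROOF_TARGET_SIZE / (section_len + 1)
}

fn main() {
    // Smaller sections mean a LARGER per-peer target size.
    for &n in &[4usize, 9, 19] {
        println!("section of {:2} nodes -> {} bytes per peer", n, target_size(n));
    }
}
```

With 9 vaults this gives 26,214,400 bytes per peer, and with 19 vaults 13,107,200 — matching the `-s` values used in the timing commands further down the thread.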

This formula is very strange because it means that it is harder to join the network when the section is smaller. For example, the requirements are currently stricter than 2 or 3 months ago, when the network had one section with about 20 nodes. IMO there shouldn't be such a dependency on the section size.

It will be multiple connections.

There will be a series of upload blocks.

Yes. However, the 6 Mbps mentioned is effectively a minimum requirement (i.e. assuming no delay from the difficulty calculation).

I don't think it is the formula that makes it so.
With more nodes, there is more tolerance if an existing node is slow and fails the test, which may give you the wrong impression that a smaller section is harder to join. It may just be that the nodes left over in the smaller section include more slow ones.

I don't understand at all. Why would the smaller section have more slow nodes? And what is the link with the formula?

It is basic math: if a section has fewer nodes, then the denominator in the formula is smaller and so the target size is larger, which means that the joining node must use more CPU cycles and more bandwidth to compute and upload the larger proof data.

I only meant the case where 3 slow nodes among a section of 10 all remain in a smaller section of 8: the number stays the same, but the percentage increases.

The CPU cycles are mainly determined by the difficulty, not the size.

The total size of data to be uploaded remains at 250 MB.
It's just whether you give 25 MB to each of 10 peers or 250 MB to 1 peer.
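
Taking the formula at face value, the per-peer and total figures can be checked; because of the +1 in the denominator, the total actually comes out slightly under 250 MB. A sketch with my own constant and function names, not the routing code:

```rust
// Each joining node sends its proof data to every peer in the section,
// so the total upload is the per-peer size multiplied by the section size.
const RESOURCE_PROOF_TARGET_SIZE: u64 = 250 * 1024 * 1024;

fn per_peer(section_len: u64) -> u64 {
    RESOURCE_PROOF_TARGET_SIZE / (section_len + 1)
}

fn total_uploaded(section_len: u64) -> u64 {
    per_peer(section_len) * section_len
}

fn main() {
    for &n in &[1u64, 10] {
        println!("{:2} peers: {} bytes each, {} bytes total",
                 n, per_peer(n), total_uploaded(n));
    }
}
```

For a section of 10, the per-peer size works out to 23,831,272 bytes, which is exactly the "Target size" reported in the Odroid logs further down the thread.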

1 Like

This is speculation. I could say the same for fast nodes, and in fact I might be right because slow nodes are more prone to crash and then take a longer time to restart (I am talking about hours instead of minutes), so their owners are more likely to give up.

The number of CPU cycles is proportional to the size (except for the 1 offset). I checked this with the test program provided in the resource_proof crate. The problem is that I have a relatively fast computer (Intel(R) Core™ i7-8650U CPU @ 1.90GHz) and I get meaningless values below 1 s for configurations with MaidSafe's default values.

But I am sure that for some nodes taking hours to connect, these values might not be negligible. I won't mention names, but I know someone in a data center in this case, and I suppose this is due to an insufficiently powerful CPU.

Can anyone with a low-power device (like an Odroid, Raspberry Pi, etc.) run the resource_proof program and give their results?

The parameters for a section with 9 and then 19 vaults are:

time target/release/resource_proof -d 0 -s 26214400
time target/release/resource_proof -d 0 -s 13107200
1 Like

Odroid: it takes 3 or maybe 4 seconds of elapsed time to create the proof data for 9 nodes. How much CPU time, I am not sure. From the logs it's obvious that it's done in parallel, as indicated by the elapsed times.

T 19-04-12 10:58:59.833171 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 1 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [181, 51, 203, 51, 12, 145, 213, 79, 213, 182]
T 19-04-12 10:59:00.328483 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 1 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [106, 106, 75, 173, 219, 218, 36, 11, 173, 143]
T 19-04-12 10:59:00.393758 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 1 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [111, 26, 94, 169, 159, 151, 147, 249, 200, 126]
T 19-04-12 10:59:00.634563 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 2 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [240, 67, 84, 2, 122, 158, 202, 185, 220, 172]
T 19-04-12 10:59:01.294751 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 3 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [49, 240, 64, 28, 153, 150, 108, 227, 95, 18]
T 19-04-12 10:59:01.404607 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 3 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [240, 181, 230, 140, 224, 185, 9, 240, 156, 155]
T 19-04-12 10:59:01.557527 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 3 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [148, 203, 49, 47, 135, 52, 199, 134, 151, 152]
T 19-04-12 10:59:01.617330 [<unknown> <unknown>:158] Node(a6e8fd..()) created proof data in 3 seconds seconds. Target size: 23831272, Difficulty: 0, Seed: [122, 23, 112, 153, 19, 222, 230, 0, 172, 229]

@bart, was the version of the node software you compiled built in debug mode or release mode?

1 Like

Just wanted to break into this genius dialogue that's way over my head to say you guys are legends, and I love how much progress, testing, and learning seems to be coming from your work here with the community test network.

Even if you get stuck or discouraged, don't worry: this still looks to be millions of miles beyond what we accomplished last time we set up a community-run net with @bluebird etc. years ago (no fault of his; it just shows that the tech has come a long way). This thread seems so much longer, more in-depth, and more fruitful.

You were mentioning cost earlier, @tfa, which got me thinking: could we make a list of a few "decentralized" people here who grasp the process of setting up cloud nodes for this? They could post how much it costs to expand the community network with nodes from different cloud services, and then the community could donate to each of them to expand and keep it going.

Also good to see you again @qi_ma :slight_smile: so many unspoken, quiet heroes grinding away in the darkness for our security, at MaidSafe

6 Likes

Ok, my hypothesis about CPU time is wrong.

Another one could be not enough contact nodes (we went from 9 nodes to 4 nodes). Maybe that's not enough pipes to upload the results.

2 Likes

Well the number of peers the vault establishes connections to is 10 just now

10 is the total number of nodes. But the number of contact nodes is only 4, and I don't know if the connections are direct or pass through these contact nodes.

2 Likes

I was just letting you know the number of peers. I knew they were not the contact nodes.

I'm pretty sure it was in release mode, but I can build it again if you want to be certain. Also, I was running it myself on my Pi and it had no trouble connecting to the network from there.

4 Likes

I am trying to make this work on an XU4 and I have an HC2 ready to go. Could you list the OS you are using, anything different from the normal running of the vault, and any firewall changes you made, please?

I've used a Debian Stretch image from this topic:
https://forum.odroid.com/viewtopic.php?f=96&t=30552
I didn't have to do any extra (firewall) configuration to get the vault running.

1 Like

I temporarily added 5 nodes, and @neo made some tests with his Odroid. At first it was getting to 98% complete, but then it succeeded in joining the network (and it is still running right now).

The conclusion is that the needed bandwidth is sensitive to the total number of nodes, but not to the number of contact nodes. Also, there might be a problem related to a timeout message from the peers during the resource testing.

2 Likes

What was missing at that time was the invitation system to prevent users with duplicate accounts overloading the network with too much data. Also, launching a network needs preparation with a lot of tests:

  • Figuring out how the invitation system works
  • Making tests with various values for min section size parameter (only to discover that the smallest value that works is 8)
  • Testing SB and WHM

Personally, I don't like this idea because users are not in control, and also, I don't want to be involved with user funds. What I propose instead is that I provide my SSH key and users add it to servers they create and pay for themselves. Then they just PM me their IP addresses and I do the rest to create the vaults on their servers.

With this system users are more in control:

  • They can check what I have done by entering the history command in the console
  • They can delete their servers whenever they want without asking me.

If anyone is interested here is my key:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDoVQOeGD/GiWCRxSCQ8dHFR3AiKr5r5wWD8U6aLJksa98tGKdp5Pv0CuLImOTJdc7zdUnTJ69O2Npac/B1otZaYBNi8qw9p85V4LEofSuiOd0YA6NUFvfhjLICpPO5g1Phl3j+oV+33ERUHMvR9COvwrwAioVqIUepAB37w194mG/qAegEUY05fnUq37ryiahuJNq1eBd2cd877bdxYefLarDjXUQRrXDKAC7X4eK36ldOcntk03EhRasLhjexTJdUGyadMX/mDu1ZQuFNwlkiZ0XryyfRL/Z/QpybtWMYRLwRBNBaCMJlTBYanulGSj/K2sDeWqjZijZrMVm/FpCoDEcTapG0BI7/1hW/D8Ttn8xeAAwY/AnjxGQkRr81rj4b6YxwqqIswvdJGuw18vJINPx4lM+QZt4vosgGk7no87rQIBmRoglTbAOcQ5yAXqDfPY8KSOSmbUdUe/W28mTQigXTeI1Lbvhsc0nXv0+/d/YHY2v7Ggs6OU+Ek397kuKPSaTbHg0gYnwXDJPBiKzJUorkaFK1a2oJyPHv4Z7toStc9p2DbtrOVeHze2P03Ojif3RoGwgvAvw2L4T9opcjMMkeAgs/Qr8VpsIOZ6J3cA3Dpu0i2Uc4E3pGV4jSoSXdUdqxM5G+PSbG8RK7BpDQyDIsHD07vgM3yKTUQcZzgw==

For Hetzner provider the procedure is the following:

Add key to account:

  • Go to the "Default" project
  • Click "Access" in the left menu
  • Click "ADD SSH KEY" button on the right
  • Paste the key value in "SSH key" field
  • Enter any name in "Name" field
  • Click on "ADD SSH KEY" button

Create a server:

  • Click "Servers" in the left menu
  • Click "ADD SERVER" button on the right
  • Select any location (Nuremberg, Falkenstein or Helsinki)
  • Select Ubuntu 18.04
  • Select CX21 type
  • No "Volume"
  • No "Additional features"
  • Select the name of the SSH key created above
  • Give a name to the server. If you enter a name containing a pair of dashes (--), the prefix before that pair will be your name in the Honor Roll of my web app. For example, with BIGCORP--00 your name will be BIGCORP.
  • Click "CREATE & BUY NOW" button
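
The double-dash naming rule above could be parsed like this (a hypothetical sketch of the convention; the web app's actual parsing code is not shown in the thread):

```rust
// Extract the Honor Roll name from a server name like "BIGCORP--00":
// everything before the first pair of dashes is the display name.
fn honor_roll_name(server_name: &str) -> Option<&str> {
    server_name.split_once("--").map(|(prefix, _)| prefix)
}

fn main() {
    assert_eq!(honor_roll_name("BIGCORP--00"), Some("BIGCORP"));
    assert_eq!(honor_roll_name("my-server"), None); // single dashes don't count
    println!("ok");
}
```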

For other providers it should be similar (first add the key to the account and then associate it to the server you create).

After that just PM me the IP address of your server and I will launch a vault on it.

7 Likes