Community-run Testnet Info!

Yes, that was because I had another on port 5483, too – it didn’t work either. According to the crust example configs, the ports shouldn’t really matter as long as they match (which they do).

@bluebird: you’ve seen the config file (I posted it before) and I described what I was doing (with the help of @polpolrene) and all works fine until step 6 (which I usually skip) but then we are stuck at step “7. If that test works” – but it doesn’t! That’s exactly what we have tried out. Do you have any idea how to do some troubleshooting if it doesn’t? Because that’s where we are stuck right now (with configs pretty much like you described them).


I am also running it on bare metal on my server now (IP as before), but connecting from home doesn’t work either. I was thinking maybe the Docker container is blocking some ports that are needed to establish the initial connection (though it worked fine before from within), but I can’t get them to connect – DEBUG logs don’t help.


Not sure what complexity there is. It is just a little tricky right now to figure out how to get it to run.
Personally, I think the reasons for getting one up and running outweigh the reasons against:

And why wait for/rely on the MaidSafe team to do everything? They clearly have plenty of work on their hands and are busy moving all of that forward. There is no harm in the community picking up tasks like this. Always waiting for them to do everything just increases the pressure on them and doesn’t get us anywhere faster.


Perhaps they would have explained how to do it, if they hoped people would do it. I am sure the devs could easily explain what people here are trying to derive.

If they are trying to test the software, adding a network of mixed version/state is going to add complexity. This will need dealing with in due course, but now may not be the time.

But hey, go ahead, fork the network and do as you will. If it doesn’t impede progress, then there is no issue.

It won’t add complexity, and they will be completely separate networks, if any one of the following conditions holds:

  1. Disjoint sets of IPs are used

  2. NETWORK_VERSION is set at compile time.

  3. An older version of safe_vault is used.

For my own testing I’ll maintain at least two of those three conditions.
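Condition 1 can be checked mechanically: the two networks stay separate as long as no seed IP appears in both hard-coded contact lists. A minimal sketch in Python – the “official” addresses below are placeholders, not MaidSafe’s real droplet IPs:

```python
# Condition 1 above: two networks are fully separate if their seed
# (hard-coded contact) IPs form disjoint sets. The "official" addresses
# here are placeholders, not MaidSafe's real droplet IPs.
official_seeds = {"198.51.100.10", "198.51.100.11"}
community_seeds = {"91.121.173.204", "203.0.113.7"}

# set.isdisjoint is True when no IP appears in both lists.
separate = official_seeds.isdisjoint(community_seeds)
print("networks separate:", separate)
```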

@dirvine has been helpful and has not voiced any misgivings other than asking that we not touch the IP addresses of their droplets between official tests.

He also said they are busy and are short of human resources to help with community testing at present.

So there is no issue.


Yes… and conditions 1 and 3 will hold once the core devs’ seeds fall away at the end of their interest in each test. That leaves the remainder of the network either to collapse or to continue, in the case that enough nodes are aware of each other and networked together.

Strictly speaking, I wonder whether any seed nodes are needed at all, once the network is up, for some of it to live on for a while; it’s only reviving it from the dead, and new nodes joining, that require seeds.

I suspect the issue above is a port not open to external prompts. Hole punching, I expect, works like this: outgoing data heading to a seed opens a port that then allows the incoming response – but a seed needs to receive first before knowing where to send.

So nodes that lose sight of live nodes, and then new nodes, will need to be pointed at some persistent nodes in place of the old seeds… and for that, a static port that forwards to a seed vault. Once the cache is available, fewer nodes will lose sight of the network at the end of a test.

Was port forwarding done for @lightyear’s example? If it was, I’m out of ideas but for trying again next round.

I HAVE IT WORKING!

I way overthought it, sorry about that. :blush:

To join the very first ever SAFE Network Community Network (TM, Tadaa!), please get the redistributable for your system, unzip it, replace the contents of safe_vault.crust.config with the following, and post your results here.

{
  "hard_coded_contacts": [
    {
      "tcp_acceptors": [
        "91.121.173.204:5483"
      ],
      "tcp_mapper_servers": []
    }
  ],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "tcp_mapper_servers": []
}
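A stray comma or bracket in the config can stop the vault from bootstrapping, so it’s worth checking that the edited file parses as valid JSON before starting. A self-contained sketch (the config above is inlined as a string so the snippet runs as-is; in practice you would `json.load` the file itself):

```python
import json

# The config posted above, inlined so this check is self-contained.
RAW = """
{
  "hard_coded_contacts": [
    {
      "tcp_acceptors": ["91.121.173.204:5483"],
      "tcp_mapper_servers": []
    }
  ],
  "tcp_acceptor_port": 5483,
  "service_discovery_port": null,
  "bootstrap_cache_name": null,
  "tcp_mapper_servers": []
}
"""

config = json.loads(RAW)  # raises ValueError on a syntax slip
contacts = config["hard_coded_contacts"]
print("seed:", contacts[0]["tcp_acceptors"][0])
print("acceptor port:", config["tcp_acceptor_port"])
```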

Here is what I get:

INFO 18:28:26.604188000 [routing::core core.rs:1198] Running listener.
INFO 18:28:26.708137000 [routing::core core.rs:1768] Sending GetNetworkName request with: PublicId(name: a236..). This can take a while.
INFO 18:28:26.809474000 [routing::core core.rs:1553] Client(f19d..) Added a60b.. to routing table.
INFO 18:28:26.810443000 [routing::core core.rs:407]  -------------------------------------------------------
INFO 18:28:26.810451000 [routing::core core.rs:409] | Node(f19d..) PeerId(0817..) - Routing Table size:   1 |
INFO 18:28:26.810455000 [routing::core core.rs:410]  -------------------------------------------------------

Yep I see the one new entry in my routing table. You’re the first after I removed my home nodes. What you see is my dedicated server.


Looks like someone else joined!

INFO 18:31:25.240272000 [routing::core core.rs:392] Node(f19d..) - Indirect connections: 1, tunneling for: 0
INFO 18:31:25.645936000 [routing::core core.rs:1553] Node(f19d..) Added 21ad.. to routing table.
INFO 18:31:25.645988000 [routing::core core.rs:407]  -------------------------------------------------------
INFO 18:31:25.645996000 [routing::core core.rs:409] | Node(f19d..) PeerId(0817..) - Routing Table size:   2 |
INFO 18:31:25.646000000 [routing::core core.rs:410]  -------------------------------------------------------

Yes, I see two now.

The secret (sounds dumb to say it) was to open port 5483 on my firewall (iptables), and keep it simple, no tcp_mappers or anything, and then just listen while I ran my home nodes with the same config. But the connecting nodes don’t need to change anything on their firewalls.

But it might speed up the growth of the network to open port 5483.
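To verify the port is actually reachable after opening it, a quick TCP connect test helps. A minimal sketch in Python – whether the real check succeeds depends on the seed being up and on your own network, so no particular result is promised:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (result depends on the seed node and your network):
#   port_open("91.121.173.204", 5483)
```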

Six nodes now!

Someone running a launcher.

For example, try adding your own IP to the hard-coded contacts, and open port 5483. Then my node could go down without collapsing the network. I don’t know whether adding IPs matters – probably not, as long as a few nodes have 5483 open and their IPs are cached by other nodes.

I’ll leave it running while I go out for a while.

It suddenly dropped to two nodes. I’ll leave it and check back in half an hour.


This is great, I’m in as well. Perhaps we should update the original post to add the config information?

Again to highlight, the directions are quoted below.


:tada: finally thanks @bluebird


On my way. All mid!!


I just restarted the vault. Still only one so I’ll restart the computer.

Hmm, no incoming connections. Anyone trying?

Three in my table now. Did someone restart?

Just to note again that your node can become a seed, and thus make the network more robust, by:

  1. opening port 5483. On a *nix box that is done with
       sudo iptables -I INPUT -p tcp --dport 5483 -j ACCEPT

…but that only lasts until your next reboot. Making it permanent is beyond the scope of this note.

  2. Edit the config that I posted above to add an entry for your internet IP, paying attention to the syntax.

  3. If you’re on NAT, add port redirection for port 5483 to your LAN IP in your router.

  4. Give that edited config to someone else to run with.
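The config-editing step above can be sketched mechanically. The address 203.0.113.7 is a placeholder for your own internet IP, not a real node:

```python
import json

# Start from the posted config and append your own public IP to the
# hard-coded contacts. "203.0.113.7" is a placeholder address.
config = {
    "hard_coded_contacts": [
        {"tcp_acceptors": ["91.121.173.204:5483"], "tcp_mapper_servers": []}
    ],
    "tcp_acceptor_port": 5483,
    "service_discovery_port": None,
    "bootstrap_cache_name": None,
    "tcp_mapper_servers": [],
}

config["hard_coded_contacts"].append(
    {"tcp_acceptors": ["203.0.113.7:5483"], "tcp_mapper_servers": []}
)

# Print the result to paste back into safe_vault.crust.config.
print(json.dumps(config, indent=2))
```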

I’ll restart it since my terminal session has frozen. I’ll start two or three instances.


When you have it at a point where people can connect very simply – 1, 2, 3, go – can you start a new topic (maybe “Join the community network”) with the required info in the opening post? I’d prefer that people not need to open ports on their routers.

It’s kind of lost in here, and if someone isn’t actively looking they might miss this great effort by everyone here.


I’ll do that.

It’s optional for them to open the port. Opening the port means the network is less likely to collapse when my node is unavailable.
