User-run network based on test 12b binaries


It wasn’t working for most people at first yesterday. If you try again with the config file above you might have more luck. Don’t use the -f option though.


Check you have the latest IPs from posts above

No need for this - the network is running.

[quote="4M8B, post:300, topic:12487"]
• same network name in config file for every vault
Make sure you are using community_network_feb_2017
[/quote]
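A quick way to double-check which network your vault is pointed at is to grep the crust config (the filename safe_vault.crust.config is my assumption from context; adjust to wherever yours lives):

```shell
# Print the network name this vault will join; every vault on the
# community network should show community_network_feb_2017.
# (Config filename is an assumption, not confirmed in this thread.)
grep '"network_name"' safe_vault.crust.config 2>/dev/null \
  || echo "no network_name found - check the config path"
```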

EDIT @JPL beat me to it :slight_smile:


What was in your log? This one?


I’ll take a look this evening to see if it was this error.


This defo works as I type this:

"hard_coded_contacts": [
    ...
],
"bootstrap_whitelisted_ips": [],
"tcp_acceptor_port": null,
"service_discovery_port": null,
"bootstrap_cache_name": null,
"network_name": "community_network_feb_2017"

INFO 11:41:09.814440873 [routing::states::node] ---------------------------------------------------------------
INFO 11:41:09.814461559 [routing::states::node] | Node(8a5cb9…()) PeerId(ecca0561…) - Routing Table size: 15 |
INFO 11:41:09.814466850 [routing::states::node] ---------------------------------------------------------------
INFO 11:41:09.814517185 [safe_vault::personas::maid_manager] Managing 4 client accounts.

everything seems very stable and happy…



You have a double dot in "185.16..37.149:5483" :frowning:
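A stray double dot like that will slip past a plain JSON check, since the contact is just a string, so here is a rough shell sketch that pattern-matches each IPv4:port contact instead (purely illustrative, not an official tool):

```shell
# Flag hard-coded contacts that aren't well-formed IPv4:port strings;
# "185.16..37.149:5483" fails the pattern, "185.16.37.149:5483" passes.
check_contact() {
  if echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}:[0-9]+$'; then
    echo "OK: $1"
  else
    echo "malformed: $1"
  fi
}
```

Run it over every entry in hard_coded_contacts before launching the vault.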


I’m doing some tests with @polpolrene and @Viv

I’m using my AWS node so best that it gets removed from the seed nodes for now.

are still OK though


Except that screen retains output from the process, whereas & perhaps loses content. I'd say screen is well worth learning for remote servers: it's as simple as knowing whether you're inside the screen session or not, and Ctrl-A + D to detach and leave it running.

Incidentally, the other ‘trick’ I learned is for programs that might fall over: there's the option of putting them in a while loop that sees them rise again… though I doubt we need that for SAFE, unless the way droplets reboot does something I'm not familiar with and needs that.

e.g. (perhaps Linux only)
while true; do ./SOMETHING; done;


Screen is worth learning, of course.
A plain & could lose output, but not:
nohup ./safe_vault &

Over the last few days I was also thinking of an infinite loop, in case safe_vault stops.
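The two ideas can be combined in one sketch: restart the vault whenever it exits, keep its output, and detach with nohup so it survives logout. (The log filename and the 5-second pause are illustrative, not a convention from this thread.)

```shell
# keep_running: relaunch CMD whenever it exits, appending its output
# to LOG, and detach the loop with nohup so it survives an SSH logout.
keep_running() {
  cmd=$1
  log=$2
  nohup sh -c "while true; do $cmd >> $log 2>&1; sleep 5; done" \
    > /dev/null 2>&1 &
}

# e.g. keep_running ./safe_vault vault.log
```

screen gives you the same persistence interactively; this is just the non-interactive variant.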


OK, I will take my vaults down one at a time so I can try:

nohup ./safe_vault


I’m also in, coming from DigitalOcean.

First I got this error:

Running safe_vault v0.13.0
ERROR 19:26:24.403888901 [crust::main::bootstrap] Failed to Bootstrap: (FailedExternalReachability) Bootstrapee node could not establish connection to us.
INFO 19:26:24.404576934 [routing::states::bootstrapping] Bootstrapping(11a7cd..) Failed to bootstrap. Terminating

Just by disabling the firewall, everything went smoothly.
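For what it's worth, that FailedExternalReachability error says the seed node couldn't connect back in, so the targeted fix is allowing inbound connections on your vault's listening port rather than necessarily dropping the whole firewall. Before touching rules, you can at least confirm basic outbound connectivity to a seed node with a bash-only probe (the IP and port below are placeholders taken from this thread; substitute a contact from your own config, and remember this does not test the inbound direction):

```shell
# Probe a TCP port using bash's /dev/tcp device (bash-only feature).
# Prints "reachable" on connect, "unreachable" otherwise.
probe() {
  if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# e.g. probe 185.16.37.149 5483
```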

Running safe_vault v0.13.0
INFO 19:30:48.691298844 [routing::states::node] Node(54b034..()) Requesting a relocated name from the network. This can take a while.
INFO 19:30:49.114250743 [routing::states::node] Node(51f10a..()) Received relocated name. Establishing connections to 18 peers.
INFO 19:30:49.275644816 [routing::states::node] Node(51f10a..()) Starting approval process to test this node's resources. This will take at least 300 seconds.
INFO 19:30:49.600789256 [routing::states::common::base] Node(51f10a..()) Connection to PeerId(9171505e..) failed: PeerNotFound(PeerId(9171505e..))
INFO 19:30:49.601850244 [routing::states::common::base] Node(51f10a..()) Connection to PeerId(9171505e..) failed: PeerNotFound(PeerId(9171505e..))
INFO 19:30:49.602464063 [routing::states::node] Node(51f10a..()) Dropped a210fd.. from the routing table.
INFO 19:31:19.114452179 [routing::states::node] Node(51f10a..()) 0/17 resource proof response(s) complete, 35% of data sent. 380/410 seconds remaining.
INFO 19:31:49.114614680 [routing::states::node] Node(51f10a..()) 2/17 resource proof response(s) complete, 79% of data sent. 350/410 seconds remaining.
INFO 19:32:19.115229545 [routing::states::node] Node(51f10a..()) 15/17 resource proof response(s) complete, 93% of data sent. 320/410 seconds remaining.
INFO 19:32:49.115605249 [routing::states::node] Node(51f10a..()) 16/17 resource proof response(s) complete, 94% of data sent. 290/410 seconds remaining.
INFO 19:33:19.116584675 [routing::states::node] Node(51f10a..()) 16/17 resource proof response(s) complete, 94% of data sent. 260/410 seconds remaining.
INFO 19:33:49.116731598 [routing::states::node] Node(51f10a..()) 16/17 resource proof response(s) complete, 94% of data sent. 230/410 seconds remaining.
INFO 19:34:19.117090665 [routing::states::node] Node(51f10a..()) 16/17 resource proof response(s) complete, 94% of data sent. 200/410 seconds remaining.
INFO 19:34:49.117418745 [routing::states::node] Node(51f10a..()) 16/17 resource proof response(s) complete, 94% of data sent. 170/410 seconds remaining.
INFO 19:35:19.117804766 [routing::states::node] Node(51f10a..()) 16/17 resource proof response(s) complete, 94% of data sent. 140/410 seconds remaining.
INFO 19:35:49.121802528 [routing::states::node] Node(51f10a..()) 16/17 resource proof response(s) complete, 94% of data sent. 110/410 seconds remaining.
INFO 19:35:49.205122001 [routing::states::node] Node(51f10a..()) Resource proof challenges completed. This node has been approved to join the network!
INFO 19:35:49.205405731 [routing::states::node]  --------------------------------------------------------------- 
INFO 19:35:49.205538604 [routing::states::node] | Node(51f10a..()) PeerId(a22ae0bb..) - Routing Table size:  17 |
INFO 19:35:49.205689333 [routing::states::node]  --------------------------------------------------------------- 
INFO 19:35:49.237404605 [safe_vault::personas::maid_manager] Managing 1 client accounts.
INFO 19:35:49.282497172 [safe_vault::personas::maid_manager] Managing 2 client accounts.
INFO 19:35:49.470632217 [safe_vault::personas::maid_manager] Managing 3 client accounts.
INFO 19:35:49.483434856 [safe_vault::personas::maid_manager] Managing 4 client accounts.
INFO 19:35:49.617828049 [safe_vault::personas::data_manager] This vault has received 0 Client Get requests. Chunks stored: Immutable: 1, Structured: 0, Appendable: 0. Total stored: 411 bytes.


The routing table size at 23:15 GMT is 16.
How many more vaults do we need to see a section split?


At least two, but it might need even more if the existing nodes aren’t evenly balanced across the address space.


Thanks @Fraser

OK troops, any more volunteers to run a few vaults?
I’d like to see if we can manage a section split, and whether the AWS nodes can handle the inevitable load.

Check the top post for instructions from @neo
The only thing I would add is that you probably want to reduce the max vault size in safe_vault.vault.config if you are using the AWS free-tier service, so you don’t overflow the free 8GB disk allocation.
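For example, something along these lines in safe_vault.vault.config — note that the field name max_capacity and the idea that it takes a byte count are my assumption about this build, so check it against your own file before relying on it:

```json
{
  "max_capacity": 5368709120
}
```

5368709120 bytes is 5 GiB, which leaves headroom on an 8 GB free-tier disk for the OS and logs.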

I will be taking one vault offline briefly to snapshot it, then launch another clone or two if some of the rest of you will join in.


Outstanding effort @neo and @maidsafe community. Just awesome watching the “boot” get put (pardon the pun) in “bootstrapping”


I get this error when I run my vault:

thread 'main' panicked at 'Unable to start crust::Service ConfigFileHandler(JsonDecoderError(ApplicationError("Failed to decode SocketAddr: invalid IP address syntax")))', /media/psf/Home/Dev/Rust/routing/src/ note: Run with `RUST_BACKTRACE=1` for a backtrace.


@lostfile Check you have quotes & commas & other characters correctly placed.
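A quick way to catch misplaced quotes and commas before the vault panics is to run the file through Python's bundled JSON validator (the filename is assumed from context; any JSON linter works the same way). Note this won't catch a bad IP inside an otherwise valid string — that shows up as the SocketAddr decode error instead:

```shell
# Exit status of json.tool is non-zero if the file isn't valid JSON.
python3 -m json.tool safe_vault.crust.config > /dev/null \
  && echo "JSON OK" \
  || echo "JSON syntax error"
```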


cor that’s weird…

I copied and pasted it from my working config, where there is no double dot…

sorry if it caused confusion…

looking good so far!

INFO 10:40:07.258945079 [routing::states::node] ---------------------------------------------------------------
INFO 10:40:07.259071280 [routing::states::node] | Node(8a5cb9…()) PeerId(ecca0561…) - Routing Table size: 18 |
INFO 10:40:07.259158595 [routing::states::node] ---------------------------------------------------------------

rup :wink:


Not a problem :slight_smile:
The double dot was easy to spot if you’re used to looking for that sort of thing. For folks who aren’t dealing with these kinds of comma-separated lists every day, it’s a different story.

The routing table is at 18 now. It would be good if we could get a few more and trigger a section split.


I’ll be back late this evening. For now it’s offline.