[Offline] Another day another testnet

It’s because if you use sudo, the binary gets installed into /usr/local/bin, which is a location that is always on the PATH on any Linux or macOS system, and is a very common location for binaries. In fact, it’s fairly rare to install binaries in your home directory. This means that no PATH modifications need to be made, as home directory locations aren’t on PATH by default.
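The PATH point can be sketched quickly (in Python for brevity; `on_path` and the example directories are hypothetical, not part of the safe tooling):

```python
def on_path(directory, path):
    """Return True if `directory` is one of the entries in a POSIX-style PATH string."""
    return directory in path.split(":")

default_path = "/usr/local/bin:/usr/bin:/bin"   # a typical default PATH
# sudo installs to /usr/local/bin, which the default PATH already covers:
print(on_path("/usr/local/bin", default_path))        # True
# a home-directory install location is not covered, so it would need a
# manual `export PATH=...` line added to your shell profile:
print(on_path("/home/user/.safe/cli", default_path))  # False
```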

1 Like

Yes… but nice to contain beta software in case of errors… and sudo is really only for sandwiches and important other actions.

1 Like

Not really, it’s very common for installing software.

If you don’t want to use it, that’s fine, you can continue to use the non-sudo option, but that won’t be our recommended way from now on. Having to make no PATH modifications is simpler.

1 Like

Tested with powershell on windows. The downloaded jpg files are corrupted. Even my own uploaded jpg file looks different when opening in notepad. Any ideas?

1 Like

Yes, I see similar here:

[2022-12-16T18:45:13.914958Z ERROR sn_node::comm] Sending message (msg_id: MsgId(caa3..c9c4)) to 161.35.42.143:44107 (name b62694(10110110)..) failed, as we've reached maximum retries
[2022-12-16T18:45:13.914961Z INFO sn_node::comm::peer_session] Terminating connection to Peer { name: 56e8c3(01010110).., addr: 143.110.168.239:12000 }
[2022-12-16T18:45:13.914964Z INFO sn_node::comm::peer_session] Finished peer session shutdown
[2022-12-16T18:45:13.914967Z TRACE sn_node::comm::peer_session] Processing session Peer { name: b62694(10110110).., addr: 161.35.42.143:44107 } cmd: Terminate
[2022-12-16T18:45:13.914970Z INFO sn_node::comm::peer_session] Terminating connection to Peer { name: b62694(10110110).., addr: 161.35.42.143:44107 }
[2022-12-16T18:45:13.914972Z INFO sn_node::comm::peer_session] Finished peer session shutdown

Still getting this in the console though?
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/

Is it OK for me to safe node killall and try again?

1 Like

Someone experienced this issue on one of the previous testnets, but then concluded it was a setup issue I think. Did we determine that it wasn’t an encoding bug?

3 Likes

Yeah, I never quite understood why we did not install to /usr/local/bin from Day 1. I’m sure there were good reasons and it’s on me for not asking why before.

1 Like

It may be actually that we were just copying the way that the Rust toolchain gets installed. They also use the home directory, and you then need to put their custom location on your PATH. However, they are an exception rather than the rule. Most programs don’t get installed that way.

2 Likes

Some music…


FilesContainer created at: "safe://hyryyryuq63jjb3qp45th5q7oqtnnbu9b3ssbs4czc9bfmusm8n85s9ju5wnra?v=hbbnmkcmmz7gmk1xgb1spzmsgs7tjiqtkur3ui8iq9j891117ikey"
1 Like

Here was the post from the other thread on this encoding issue.

@stout77 You seem to have concluded here that it was something to do with your setup?

1 Like

This seems to be working for me.

So what is the difference from previous ones? Is all the port forwarding stuff unnecessary now? Are routers expected to just work?

My node is not in though, but it seems to be trying exactly as it should.

3 Likes

If you look at the first part, it says it has one open connection, We have 1 open connections to node "143.110.168.239:1200093825025644688", then it starts erroring about Cannot connect to the endpoint: Address in use.

I’m a bit confused, as I know the port is open; it’s the same setup I used in all previous testnets and comnets.

[2022-12-16T19:10:06.934603Z ERROR sn_node::node::bootstrap::join] Network is set to not taking any new joining node, try join later.
[2022-12-16T19:10:06.934760Z ERROR sn_node::comm::link] Error sending out from link... We have 1 open connections to node "142.93.38.111:3557293825025976496".
[2022-12-16T19:10:06.934771Z ERROR sn_node::comm::link] Error sending out from link... We have 1 open connections to node "143.110.168.239:1200093825025644688".
[2022-12-16T19:10:06.934787Z ERROR sn_node::comm::peer_session] the error on send :Send(ConnectionLost(Closed(Local)))
[2022-12-16T19:10:06.934790Z ERROR sn_node::comm::peer_session] the error on send :Send(ConnectionLost(Closed(Local)))
[2022-12-16T19:10:06.934908Z ERROR sn_node::comm::link] Error sending out from link... We have 1 open connections to node "142.93.44.126:3967093825026254192".
[2022-12-16T19:10:06.934946Z ERROR sn_node::comm::link] Error sending out from link... We have 1 open connections to node "161.35.42.143:4410793825026541712".
[2022-12-16T19:10:06.934958Z ERROR sn_node::comm::peer_session] the error on send :Send(ConnectionLost(Closed(Local)))
[2022-12-16T19:10:06.934993Z ERROR sn_node::comm::peer_session] the error on send :Send(ConnectionLost(Closed(Local)))
[2022-12-16T19:10:06.987137Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:06.987638Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:06.988253Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:06.988797Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.038980Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.039670Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.040296Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.041020Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.091522Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.092219Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.093024Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.093751Z ERROR sn_node::comm::peer_session] Error when attempting to send to peer. Job will be reenqueued for another attempt after a small timeout
[2022-12-16T19:10:07.191288Z ERROR sn_node::comm] Sending message (msg_id: MsgId(6597..6761)) to 142.93.44.126:39670 (name 758294(01110101)..) failed, as we've reached maximum retries
[2022-12-16T19:10:07.191352Z ERROR sn_node::comm] Sending message (msg_id: MsgId(6597..6761)) to 161.35.42.143:44107 (name b62694(10110110)..) failed, as we've reached maximum retries
[2022-12-16T19:10:07.191374Z ERROR sn_node::comm] Sending message (msg_id: MsgId(6597..6761)) to 143.110.168.239:12000 (name 56e8c3(01010110)..) failed, as we've reached maximum retries
[2022-12-16T19:10:07.191428Z ERROR sn_node::comm] Sending message (msg_id: MsgId(6597..6761)) to 142.93.38.111:35572 (name 38608b(00111000)..) failed, as we've reached maximum retries
[2022-12-16T19:10:36.937119Z ERROR sn_node] Err(
   0: Cannot connect to the endpoint: Address in use (os error 98)
   1: Address in use (os error 98)

Location:
   /rustc/69f9c33d71c871fc16ac445211281c6e7a340943/library/core/src/convert/mod.rs:726

Suggestion: If this is the first node on the network pass the local address to be used using --first
   Cannot start node. Node log path: /home/ubuntu/.safe/node/local_node

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.)
[2022-12-16T19:10:37.940095Z ERROR sn_node] Err(
   0: Cannot connect to the endpoint: Address in use (os error 98)
   1: Address in use (os error 98)

1 Like

Actually tried to do this, but Terraform or Digital Ocean balked at the convention I tried, and it was getting late. So aye, the next one should be named something more interesting! (as @chriso already said, apparently!)


@neik I’m surprised you could join at all yet… there may be a bug in the join process and it’s ended up trying to restart the node, but failed to close the prior endpoint, perhaps. Try restarting the node fully. I wonder if it’s related to the no-igd world issues we might have seen yesterday :thinking:
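For what it’s worth, “Address in use (os error 98)” is EADDRINUSE, and it’s easy to reproduce when an earlier socket still holds the port. A small Python sketch (my assumption about the failure mode, not taken from the sn_node source) shows the same error a restarted node would hit if its previous endpoint was never closed:

```python
import errno
import socket

def rebind_errno():
    """Bind a UDP port twice; return the errno the second bind fails with."""
    first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    first.bind(("127.0.0.1", 0))            # let the OS pick a free port
    port = first.getsockname()[1]
    second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        second.bind(("127.0.0.1", port))    # same port, still held by `first`
        return None
    except OSError as exc:
        return exc.errno                    # EADDRINUSE: "os error 98" on Linux
    finally:
        second.close()
        first.close()

print(rebind_errno() == errno.EADDRINUSE)   # True
```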


So far node CPU/mem is looking healthy across the board. Nodes are reporting stored data as:

node-17: 24K    total
node-10: 24K    total
node-4: 24K     total
node-1: 32K     total
node-7: 24K     total
node-12: 24K    total
node-16: 1.1G   total
node-13: 735M   total
node-19: 766M   total
node-2: 1.4G    total
node-6: 1.2G    total
node-8: 1.3G    total
node-3: 1.4G    total
node-9: 1.3G    total
node-15: 1.4G   total
node-20: 1.4G   total
node-18: 2.9G   total
node-5: 2.9G    total
node-21: 2.9G   total
8 Likes

To add to the weirdness…

the console says …

willie@gagarin:~$ RUST_LOG=sn_node=trace safe node join  --network-name public  --skip-auto-port-forwarding 
Creating '/home/willie/.safe/node/local-node' folder
Storing nodes' generated data at /home/willie/.safe/node/local-node
Starting a node to join a Safe network...
Starting logging to directory: "/home/willie/.safe/node/local-node/"
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
willie@gagarin:~$ The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/
The network is not accepting nodes right now. Retrying after 30 seconds. Node log path: /home/willie/.safe/node/local-node/

The log seems to suggest it actually did join but is now failing…

send_stream: None })
[2022-12-16T19:15:23.605380Z TRACE sn_node::comm::peer_session] Sending to peer over connection: MsgId(dc8c..a865)
[2022-12-16T19:15:23.605381Z DEBUG sn_node::comm::peer_session] max retries reached... MsgId(dc8c..a865)
[2022-12-16T19:15:23.606445Z TRACE sn_node::comm::peer_session] Processing session Peer { name: a1dcf8(10100001).., addr: 104.248.167.4:43678 } cmd: Send(SendJob { msg_id: MsgId(dc8c..a865), connection_retries: 4, reporter: StatusReporting { sender: Sender { shared: Shared { value: RwLock { data: TransientError("Connection(Closed(Local))"), poisoned: false, .. }, state: AtomicState(8), ref_count_rx: 1, notify_rx: Notify { state: 16, waiters: Mutex(Mutex { data: LinkedList { head: None, tail: None }, poisoned: false, .. }) }, notify_tx: Notify { state: 0, waiters: Mutex(Mutex { data: LinkedList { head: None, tail: None }, poisoned: false, .. }) } } } }, send_stream: None })
[2022-12-16T19:15:23.606455Z TRACE sn_node::comm::peer_session] Sending to peer over connection: MsgId(dc8c..a865)
[2022-12-16T19:15:23.606456Z DEBUG sn_node::comm::peer_session] max retries reached... MsgId(dc8c..a865)
[2022-12-16T19:15:23.655822Z ERROR sn_node::comm] Sending message (msg_id: MsgId(dc8c..a865)) to 104.248.167.4:43678 (name a1dcf8(10100001)..) failed, as we've reached maximum retries
[2022-12-16T19:15:23.655835Z TRACE sn_node::comm::peer_session] Processing session Peer { name: a1dcf8(10100001).., addr: 104.248.167.4:43678 } cmd: Terminate
[2022-12-16T19:15:23.655832Z ERROR sn_node::comm] Sending message (msg_id: MsgId(dc8c..a865)) to 161.35.42.143:44107 (name b62694(10110110)..) failed, as we've reached maximum retries
[2022-12-16T19:15:23.655837Z INFO sn_node::comm::peer_session] Terminating connection to Peer { name: a1dcf8(10100001).., addr: 104.248.167.4:43678 }
[2022-12-16T19:15:23.655839Z INFO sn_node::comm::peer_session] Finished peer session shutdown
[2022-12-16T19:15:23.655849Z TRACE sn_node::comm::peer_session] Processing session Peer { name: b62694(10110110).., addr: 161.35.42.143:44107 } cmd: Terminate
[2022-12-16T19:15:23.655854Z INFO sn_node::comm::peer_session] Terminating connection to Peer { name: b62694(10110110).., addr: 161.35.42.143:44107 }
[2022-12-16T19:15:23.655859Z INFO sn_node::comm::peer_session] Finished peer session shutdown

Full log here:

3 Likes

I’ve been through several retries, so I’m giving up on nodes at the moment.

a wee put went up fine

safe cat safe://hygoygyerg1azika5pwdwik5zu9eifku8dkstbfhxto3pjq7bj45et19ydy > tuk.jpg

and have started uploading 40 GB of files that are each less than 10 MB here

safe files ls safe://hyryyryip96f9ojodzifniqn1zcu3wbqnhsiordj7o51jh5do86c596yhqhnra
3 Likes

So it’s safe to leave joining for tomorrow, I guess.

Only 6 nodes with kilobytes of data… Is one of the elders storing more?

1 Like

Tried a full cold restart of the node PC and got the same results. I think it tries the first connection then leaves the port in use, as you suggested.

1 Like

cearc!!! beautifully onomatopoeic

3 Likes

When I try running the node without specifying an address (which never worked for me), I get the same result as Southside.

When I try running the node with --local-addr and --public-addr (which always worked for me), I get the same “address in use” result as Neik.

3 Likes

Just wondering out loud here…

What would happen if you tried again with an explicit public-addr with a port that is NOT 12000?

Edit: scratch that idea - same as before


Running sn_node v0.72.2
=======================
[2022-12-16T19:42:32.066868Z ERROR sn_node] Err(
   0: Cannot connect to the endpoint: Address in use (os error 98)
   1: Address in use (os error 98)
3 Likes