Woke up with just enough time to get to the airport; this test is a test-my-priorities-net!!
yep worked for me…
Wonder why we don’t see download times…
It’s running really fast again. I love that "new network smell". Quite the opposite of my actual car smell once my kids have been in there. It’s like a drivable wheelie bin.
No peer supplied in OP for safenode-manager
It looks like I’m connected using a local node with port forwarding set up!
I’m having trouble uploading anything atm, mind.
EDIT: I deleted my safe client and re-installed it, hit faucet again, then the uploads started working. I could have had a stale version/config.
My uploads are going smoothly so far. I use batch-size 5 currently.
Watch out.
I uploaded a folder with eight files and then, without thinking it through, tried to download the files into the same folder.
The download failed on six of them, so most of the files in that directory are now corrupted.
Attestation of priorities
3.2GB
Among 6259 chunks, found 0 already existed in network, uploaded the leftover 6259 chunks in 50 minutes 29 seconds
-
* Payment Details *
Made payment of 0.000464053 for 6259 chunks
Made payment of 0.000078918 for royalties fees
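Out of curiosity, those figures imply a rough per-chunk price. A quick back-of-the-envelope check using only the numbers in the receipt above (illustration only, no claim about the network's actual pricing formula):

```shell
# Numbers copied from the upload receipt above; this is illustration only.
awk 'BEGIN {
  chunks  = 6259
  payment = 0.000464053   # paid for the chunks
  royalty = 0.000078918   # royalties fee
  printf "per-chunk price: %.9f\n", payment / chunks
  printf "royalty share of total: %.1f%%\n", 100 * royalty / (payment + royalty)
}'
```

That works out to roughly 7.4e-8 per chunk, with royalties around 14-15% of the total paid on this particular upload.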
The Safe CLI should ask the user whether files should be replaced.
But overall I agree with such a change.
No need to search all over the HDD for where the files landed.
Quick thoughts: nodes are joining nicely with an 11-second gap, and records and earnings are coming in.
Also, nodes don’t appear to be so chatty; my bandwidth usage is a lot lower than before.
AWESOME proxy you wrote there!!!
If you tail safenode.log you should see lots of messages. They will look something like:
[2024-03-21T10:25:18.738720Z TRACE sn_networking::event] ConnectionEstablished (ConnectionId(3708)): incoming (/ip4/165.232.41.214/udp/37079/quic-v1) peer_id=12D3KooWDCoRHL1MpETF5hF4VabiKcdmkHQXYtxMg97Wz8BSh8Uq num_established=1
[2024-03-21T10:25:18.900701Z TRACE sn_networking::event] identify: received info peer_id=12D3KooWDCoRHL1MpETF5hF4VabiKcdmkHQXYtxMg97Wz8BSh8Uq info=Info { public_key: PublicKey { publickey: Ed25519(PublicKey(compressed): 325263bbb912f0447398228f295c758b11bbc1e2dba79479895d4bbf67aa9a) }, protocol_version: "safe/0.13", agent_version: "safe/node/0.13", listen_addrs: ["/ip4/165.232.41.214/udp/37079/quic-v1", "/ip4/10.16.0.31/udp/37079/quic-v1", "/ip4/10.131.0.28/udp/37079/quic-v1", "/ip4/127.0.0.1/udp/37079/quic-v1", "/ip4/165.232.41.214/udp/37079/quic-v1/p2p/12D3KooWDCoRHL1MpETF5hF4VabiKcdmkHQXYtxMg97Wz8BSh8Uq"], protocols: ["/ipfs/kad/1.0.0", "/meshsub/1.0.0", "/safe/node/0.13", "/meshsub/1.1.0", "/ipfs/id/1.0.0", "/ipfs/id/push/1.0.0"], observed_addr: "/ip4/44.214.100.193/udp/12001/quic-v1/p2p/12D3KooWE4gNLSEHjKrWfhNb4adk8U3cNJmPHDrqoShQ3xfyrgwH" }
[2024-03-21T10:25:19.292072Z TRACE sn_networking::event] Received request InboundRequestId(1995) from peer PeerId("12D3KooWDCoRHL1MpETF5hF4VabiKcdmkHQXYtxMg97Wz8BSh8Uq"), req: Query(CheckNodeInProblem(NetworkAddress::PeerId(12D3KooWHdskb71DgaRmHenuPkBCp6y389TpgoBXYaymcwNAr4aS - dc86890bbcd3ba26b0ce481864b394d93ce166776c03c76e0d888346ae0b5183)))
[2024-03-21T10:25:19.292412Z TRACE sn_node::node] Network handling statistics, Event "QueryRequestReceived" handled in 33.116µs : "NetworkEvent::QueryRequestReceived(CheckNodeInProblem(NetworkAddress::PeerId(12D3KooWHdskb71DgaRmHenuPkBCp6y389TpgoBXYaymcwNAr4aS - dc86890bbcd3ba26b0ce481864b394d93ce166776c03c76e0d888346ae0b5183)))"
If your node has been stopped there won’t be messages like this, and you may find a later one saying something like ‘stopped’. Can’t remember exactly what.
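If you’d rather not eyeball the log, the same check can be sketched as a one-liner helper: count recent ConnectionEstablished events in a given safenode.log (the grep pattern is taken from the sample lines above; `count_conns` is a hypothetical helper name, not part of any tooling):

```shell
# count_conns LOGFILE: count ConnectionEstablished events in the last
# 200 lines of a safenode log -- a rough "is my node still alive" check.
count_conns() {
  tail -n 200 "$1" 2>/dev/null | grep -c 'ConnectionEstablished'
}

# Example (adjust the path to wherever your safenode writes its logs):
# count_conns ~/.local/share/safe/node/safenode1/logs/safenode.log
```

A zero count over a recent window is a decent hint the node has stopped, though a quiet node isn’t proof by itself.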
Or rename to file1-1, file2-1, etc.
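A minimal sketch of that rename-on-collision idea, appending a numeric suffix until a free name is found instead of overwriting (`free_name` is a hypothetical helper, not an existing CLI feature):

```shell
# free_name NAME: print NAME if unused, otherwise NAME-1, NAME-2, ...
free_name() {
  name="$1"; n=1; candidate="$name"
  while [ -e "$candidate" ]; do
    candidate="$name-$n"   # e.g. file1 -> file1-1 -> file1-2 ...
    n=$((n + 1))
  done
  printf '%s\n' "$candidate"
}
```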
I took a very cautious approach just in case and started only a few. I’m not going to be home, and the household solution to slow internet is to unplug everything, so I’m trying to avoid that.
You going old school I presume, not node-manager?
Yeah, I’ve gone old school. I was waiting for the all-clear on node port ranges; is that already in node manager?
I remember you were building node manager, so I wasn’t sure where we were at with that.
81 minutes to upload 4.2 GB, 13 of which were spent chunking (expected, I guess, in a container capped at 1.5 CPUs and 800 MB of memory, inside a Raspberry Pi 4).
Around 68 minutes of pure upload for that size gives a speed of around 8 or 9 Mbps; with a 1 Gbps symmetric fibre connection at home, that doesn’t seem to be the fastest…
Edit: that container is also running 10 nodes.
root@6654ad5899fd:~# time safe files upload -p openSUSE-Tumbleweed-DVD-x86_64-Current.iso
Logging to directory: "/root/.local/share/safe/client/logs/log_2024-03-21_10-10-26"
Built with git version: 01c2e57 / main / 01c2e57
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 49 peers
🔗 Connected to the Network Chunking 1 files...
⠁ [00:00:00] [----------------------------------------] 0/8532 "openSUSE-Tumbleweed-DVD-x86_64-Current.iso" will be made public and linkable
Splitting and uploading "openSUSE-Tumbleweed-DVD-x86_64-Current.iso" into 8532 chunks
⠠ [01:08:26] [#######################################>] 8528/8532 Retrying failed chunks 2 ...
⠄ [01:08:35] [#######################################>] 8531/8532 Retrying failed chunks 0 ...
**************************************
* Uploaded Files *
**************************************
"openSUSE-Tumbleweed-DVD-x86_64-Current.iso" e2c0cc42c3f5a8c0e5f2db5bd95435433481920a748bc13fb98901caff4018c9
Among 8532 chunks, found 0 already existed in network, uploaded the leftover 8532 chunks in 68 minutes 35 seconds
**************************************
* Payment Details *
**************************************
Made payment of 0.000732165 for 8532 chunks
Made payment of 0.000125187 for royalties fees
New wallet balance: 199.999133685
real 81m53.439s
user 54m4.055s
sys 25m5.577s
root@6654ad5899fd:~# ls -lh openSUSE-Tumbleweed-DVD-x86_64-Current.iso
-rw-r--r-- 1 root root 4.2G Mar 20 06:24 openSUSE-Tumbleweed-DVD-x86_64-Current.iso
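The ~8-9 Mbps estimate matches the timings in the log. A quick check using decimal units (4.2 GB from `ls -lh`, 68 m 35 s of pure upload reported by the client):

```shell
awk 'BEGIN {
  gb   = 4.2            # file size from ls -lh (decimal GB assumed)
  secs = 68 * 60 + 35   # pure upload time reported by the client
  printf "%.1f Mbps\n", gb * 8 * 1000 / secs
}'
```

That lands at about 8.2 Mbps, squarely in the "8 or 9" range quoted.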
Yes, I’m not sure if it’s in main yet, but Chris’s branch works for port ranges.
We need a peer in the OP for the manager though, so I left one machine open for the manager once I can pull a peer from my other nodes.