I had already enabled it on the safenode process via the CLI, but without the proper safenode_rpc_client binary being readily downloadable, I couldn’t communicate with the safenode process over RPC properly (I believe).
For some reason, I only saw the following binaries available to build: faucet, safe, safenode, and testnet. I am not seeing safenode_rpc_client show up in the examples directory when building safenode locally from the joshnet.zip source code.
I wanted to at the very least use safenode_rpc_client to get info about the node (its peer ID and the peers it’s currently connected to; I was curious how big this list of connected peers becomes over time, or whether there is an upper limit here).
I might be missing something obvious, or this specific binary is still a work in progress.
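For what it’s worth, this is how I was checking for it. Cargo builds every example it can find in the source tree, so if safenode_rpc_client doesn’t land in target/release/examples/ it presumably just isn’t in the joshnet.zip snapshot. The RPC address and the "info" subcommand in the second command are guesses on my part, not something I’ve confirmed in the code:

```sh
# Build every example target cargo can find in this source tree; built
# binaries land under target/release/examples/. If safenode_rpc_client
# doesn't appear there, the example file isn't in the joshnet.zip snapshot.
cargo build --release --examples

# If it is present, I'd expect to run it with something like this --
# the address and "info" subcommand are assumptions on my part:
cargo run --release --example safenode_rpc_client -- 127.0.0.1:12001 info
```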
Thanks to everyone digging in so far! Great to see where we should put some focus (reviving @chriso’s safeup seems like a good starting point).
I’m curious how many folks got nodes connected, and how many tried but failed for some reason. If folks could reply with that, it would be good to get a read.
Nodes are likely failing as the maidsafe ones are full. I can, however, still get data I put up yesterday. I suspect we need more relevant errors to clarify whether the storage cap was hit or something else is going on.
It looks like no one answered the question about how to properly check whether a node is running correctly.
My node is doing something, but I’m not sure if it is fine.
A 36,7kB jpg was successful, with a Successfully stored file message.
A 1.9MB jpg was unsuccessful, with Did not store file "kuvaaa.jpg" to all nodes in the close group! Network Error Outbound Error.
A 226,7kB jpg didn’t give a message one way or the other, just: Starting logging to directory: "/tmp/safe-client.log" Instantiating a SAFE client... Writing 8 bytes to "/home/topi/.safe/client/uploaded_files/file_names_2023-05-05_09-51-14"
EDIT:
Repeated the above 3 times, same results every time. With the 226,7kB file the CLI does not even print the Storing file... part that it does print with the smaller and larger files.
I have three on a Linode (I got an email overnight saying CPU and traffic were exceeding limits) and two running on a machine behind a home router.
Quick question - it seems to be possible to run multiple nodes from the same binary, changing the log folder name. Is that the best way to do it, or is it better to rename the binary each time or move it to another directory?
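For reference, this is roughly what I’m doing at the moment; the --log-dir flag name is from memory, so treat it as an assumption and check safenode --help for what the binary actually accepts:

```sh
# Same binary started twice, just pointing each instance at its own log
# folder -- no renaming or copying of the binary. The --log-dir flag is
# an assumption; each node may also need its own data dir and port.
./safenode --log-dir "$HOME/safenode-logs-1" &
./safenode --log-dir "$HOME/safenode-logs-2" &
```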
By the way, my Activity Monitor says that safenode’s total disk writes are 326,4MB at the moment, but the log file folder is only 78MB. If all the chunks are in RAM, what does it write to disk, and where? There is about 250MB unaccounted for.
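I’ll try to narrow it down by listing what the process actually has open, something like:

```sh
# List the files the safenode process currently has open, to see where
# it is actually writing (pick one pid if you run several nodes).
lsof -p "$(pgrep -x safenode | head -n 1)"
```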
I’m uploading 1000 1KB files and I’m still getting this error:-
Did not store file "1KB_550" to all nodes in the close group! Network Error Outbound Error.
about 1 in 10 times.
I’ve not seen anyone else mention this, and maybe it’s something to do with my network, but I doubt it: I’m not at home and am VPNing to home to run this, and that part seems to be working fine.
I’ve just uploaded 100 1MB files with the same kind of thing - about 1 in 10 say:-
Did not store file "1MB_29" to all nodes in the close group! Network Error Outbound Error.
It took 20 mins.
So there seems to be nothing wrong with uploading when it does happen, and the rate was good: my router was showing peaks of 20Mb/s. But some of the uploads just fail.
Interestingly, the uploads of the individual 1MB files didn’t take much longer than the uploads of the individual 1KB files, so quite a bit of the time is taken up with establishing the connection. Maybe not surprising!
In case there is something wrong with my network, I’ll try both these tests from an AWS instance as well.
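For reference, this is roughly the loop I’m using to generate and upload the test files; the safe files upload subcommand is from memory, so double-check it against safe --help:

```sh
# Generate 1000 random 1KB files and upload them one by one.
# The `safe files upload` subcommand is an assumption on my part.
mkdir -p 1kb_files
for i in $(seq 1 1000); do
  head -c 1024 /dev/urandom > "1kb_files/1KB_$i"
done
for f in 1kb_files/*; do
  safe files upload "$f"
done
```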
It normally happens when the target file doesn’t get read in properly.
For example, if the suffix was forgotten (writing test.jpeg as test), the file won’t be read in, and you will see that kind of output (Writing 8 bytes to).
The client log file should give more error info regarding this.
As explained at
The current client check of upload success is a bit strict. And given that nodes now have more and more data loaded, the chance of hitting a failure when uploading a small file could be increasing.
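If it helps, a quick way to pull the error lines out of the client logs (using the log directory from the output quoted above; adjust the path if yours differs):

```sh
# Search the client log directory mentioned in the upload output above
# for error lines, recursively and case-insensitively.
grep -rin "error" /tmp/safe-client.log/
```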