I’ll check my node tomorrow. If memory serves, I should have had 100GB available for the node.
remember the logs
The safe cat command for retrieving sample.txt above finally threw the following error:
Error:
0: NetDataError: Failed to GET Blob: NotEnoughChunks(512, 465)
Same for my cat:
d:\SN>safe cat safe://hygoyeyybrbpzs1oda7fk9io88z48w657nte39bqnh59cxthkxz3kx9xuwd1y > 1.png
Error:
0: NetDataError: Failed to GET Blob: NotEnoughChunks(3, 2)
Location:
/rustc/c8dfcfe046a7680554bf4eb612bad840e7631c4b\library\core\src\result.rs:1897
Backtrace omitted.
Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
I expect so, yeh. What we’d normally do for a testnet is put the initial nodes to 1GB or so (or perhaps less). I recall 100MB got filled up crazy fast (but also our storage db was not compacting, so duplicate chunks were taking up space for a long while).
They shouldn’t. Just the elders at first. If you were going beyond a split then you’d need more always-joinable nodes.
You could try 180 if your PUTs are having a bad time. YMMV though, we haven’t tested PUTs via CLI thoroughly recently. We’re looking at stabilising join/splits atm, though I imagine we’ll get more into CLI once the release process for api/cli has been nailed down.
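For anyone else wanting to try that, the timeout is the SN_CLI_QUERY_TIMEOUT environment variable mentioned further down the thread — a rough sketch, assuming the value is in seconds:

```shell
# Raise the CLI query timeout before retrying a slow PUT/GET
# (assumption: the value is in seconds; 180 matches the suggestion above).
export SN_CLI_QUERY_TIMEOUT=180
echo "query timeout set to ${SN_CLI_QUERY_TIMEOUT}s"
```

Then re-run the same `safe put` / `safe cat` command in that shell.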
so far this morning
my node has received 12GB of traffic and sent 11.5GB of traffic
test image is hanging
safe cat safe://hygoygyyba71wfma7r16hmbm618hz7e9fg8awa65yxyotpjbn711sxwfh6hmy > test.jpg
BegBlagandSteal is still downloading
safe cat safe://hygyy1yybp9edeorgb1wyu9yyoofd9nuz3czc9339kx7gdcumq9fx8r6sj9ey > BegBlagandSteal.mp3
Can any of you get this?
FilesContainer created at: "safe://hyryyryynyiymbwqrjd4y39gn1kmdtatbi8ksc3p8x8em677doatfce4w4b5oeuy?v=hgpaj549c3535ad1kjafsjkkw1kyfka7thrqdmescif4o17ynroxo"
I can cat BegBlagandSteal fast… but the rest is hanging.
I put a new file, but it also won’t cat.
How’s it going over there?
I’m busy with stuff that should have been done for today. Luckily I have managed to blame someone else for not providing the correct hosting info :->
Meantime I am uploading stuff as fast as poss
Try this
FilesContainer created at: "safe://hyryyryyn8m1guaiwx51q3fps18bff37ynticiha4w51qpmkthky8mfkfdmzoeuy?v=hewfya63t1fy6aqhmtdpnn77b4xfatzpk6eju6odebmbmfdrmsu1y"
@happybeing will recoil in horror but it’s the files for a new WordPress install
I need to go do real-world stuff too, by that I mean drive an hour to go see why the mother-in-law’s printer won’t work
trying to cat that but nothing happening.
This is getting to be a common theme - the inability to cat, not the M-i-L printer not working. Thankfully my M-i-L does very little printing and it normally just works. Bet I get a call soon to sort out dried-out ink carts…
When you get time please grep the latest logs for a couple of IPs I will DM you. One is my fixed IP at the house and the other is the AWS instance.
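Something like this is what I had in mind — shown here against a made-up sample log line, since the real log path, log format and IPs will differ (the IP is a placeholder and the real ones will come via DM):

```shell
# Placeholder IP (203.0.113.5) and a hypothetical log line; real node logs
# normally live under the node's root dir (e.g. ~/.safe/node/local-node/),
# where you'd run: grep -rn "<the IP>" <log dir>
sample='2021-11-15T19:24:02Z TRACE connection opened from peer 203.0.113.5:12000'
echo "$sample" | grep -o '203\.0\.113\.5'
```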
Just had another try and uploaded the first cock pic to the network and it worked
anyone who wants a look
safe cat safe://hygoygyyb1aq95ugj4kyp98gz87px3d9becszpo1hbxmxwrgjfsg778rz88go > testcock.jpg
all family-friendly and suitable for minors!!
Despite network being glitchy, I left my node running for a while.
And now I see that sn_node.exe uses 1.2GB of RAM while doing nothing:
2021-11-15T19:24:02.240365Z TRACE tokio-runtime-worker log{system=System { global CPU usage: 46.329334, load average: LoadAvg { one: 0.9501753658575695, five: 1.1243881205131427, fifteen: 0.9735765438839084 }, total memory: 8461066, free memory: 1776041, total swap: 8587988, free swap: 7312322, nb CPUs: 4, nb network interfaces: 4, nb processes: 96, nb disks: 8, nb components: 1 } print_resources_usage=false}: safe_network::node::logging: Node resource usage: Process { memory: 1192308, virtual_memory: 1768583, cpu_usage: 0.0, disk_usage: DiskUsage { total_written_bytes: 3566385615, written_bytes: 671, total_read_bytes: 2185788, read_bytes: 0 } } prefix="000000.."
I think it is too much.
Why does it need so much RAM?
My node is not even an Elder now.
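For what it’s worth, the `memory` figure in that log line looks to be in KB (it lines up with the 1.2GB you mention, and `total memory: 8461066` for an 8GB box), so you can pull it out and convert it — a rough sketch against a trimmed copy of the line above:

```shell
# Extract the resident memory figure (assumed KB, per the sysinfo-style
# output above) from a node resource-usage log line and convert to MB.
line='Node resource usage: Process { memory: 1192308, virtual_memory: 1768583 }'
kb=$(echo "$line" | grep -o 'memory: [0-9]\+' | head -n1 | grep -o '[0-9]\+')
echo "$((kb / 1024)) MB"   # 1192308 KB is roughly 1164 MB, i.e. ~1.2GB
```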
The old foe is running scared but can still be found, taking way longer to find the pest though!
Error: 0: NetDataError: Failed to GET Blob: NoResponse
@neik I can still cat your cock though.
Priceless, maybe that could be the network mascot: the “safe cock”
For anyone who may be interested, I made several graphs for resource consumption for my node:
RAM (blue), VRAM (yellow):
CPU:
DISK:
Update from my end.
Couldn’t join with a node on my infamous Oracle cloud instance. I was getting a node unreachable error message. I learned that Oracle’s Machiavellian security rules operate on 3 levels. I won’t bother everybody with the solution to this, but if someone needs it, shout out (hint: instance firewall).
So, managed to join with a node, yay! To me this testnet is invaluable just for that, so I’m ready for the real thing; thanks again @Josh !
On the client side, I’ve successfully executed put and get of small files as reported above, but attempts to put a ~500MB file consistently failed with a Killed message, with various SN_CLI_QUERY_TIMEOUT values up to 180.
Currently my node is reporting a lot of “An incoming connection failed because of: TimedOut” errors
Edit: other good news, I managed to join with my mac at home (I had IGD issues in the past), and it’s storing chunks!
Note though, OP instructions currently install node 0.40. Not sure what mixing node versions will cause… we’ll see, I guess