Are we no longer able to safe node join
and walk away until it joins?
I get trying again in 3 minutes
but then after a short while try again later
Similar to what some of us are seeing @Josh - sometimes you can walk away and leave it, other times it stops. Other times you get an IGD error.
That works, though that is for key, not wallet - perhaps something has changed. I'll wonder about it later.
Currently stuck with a Virgin Media Hub 3.0 bug that sets the local IP as 0.0.0.# (uneditable) and then errors when trying port forwarding… waiting on their forum to answer!
The G in IGD is glue?
If I try safe files put somefile it just hangs, even with small files
Try:
safe cat safe://hy8ayqyp5z55pu9a7dur9mntgznm7jtihjk4se1pxqy3qg5gknc8fbgiior > Waterfall_slo_mo.mp4
Dedicated to @Sascha
I confirm hang on second parallel command.
First one (large file upload) is still doing something.
same for me, first upload still going, second hangs. I have the debug log for the second upload here: Debug Log
Was just about to ask for such things. How big is this file, @JBildstein?
If anyone has a repro case of the hang with a small file please grab logs by prefixing your command with log levels, eg: RUST_LOG=sn_client=trace,qp2p=debug safe files put <file>
edit: I can see Cmd Error was received for Message w/ID: 3c866ae5.., sending on error channel
(which should go out to the CLI, but that's not handled there yet). Could you log w/ trace level if you can repro, please?
If we're getting a CmdError here it may well be that the payment is still insufficient (and things are changing faster on the network than I've anticipated). That's the first thing that comes to mind here, at least.
it's about 384 MiB. btw I can see some network activity in the task manager going from a couple hundred B/s to single digit KiB/s
sure, will try with trace enabled
x@x:~$ time safe cat safe://hyryyyyjhxs65mpjyt9ihttd54b67fap7uq6s3cmbrjb5gswb3anwurkxyy > 1.flac
real 0m59.456s
user 0m23.470s
sys 0m12.877s
x@x:~$ time safe cat safe://hy8ayqyp5z55pu9a7dur9mntgznm7jtihjk4se1pxqy3qg5gknc8fbgiior > Waterfall_slo_mo.mp4
real 0m56.420s
user 0m22.831s
sys 0m12.325s
Performance seems pretty solid and consistent from a download perspective. Sustained about 8Mbps.
Late to this party - I am away from home and my cherished fixed IP, so about to dive into my mum's BT router config.
What ports should I forward?
Not sure if 3 minutes is enough for hang:
https://wdfiles.ru/61b33a
Last line is
[sn_client::connections::listeners] TRACE 2021-06-18T17:13:01.834353200 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\sn_client-0.61.1\src\connections\listeners.rs:298] Error received is: Transfer(TransferValidation(InvalidOperation("Failed to perform operation: Transfer(OperationOutOfOrder(323, 324))")))
Perhaps this is a known issue, but putting large files on systems with limited memory available fails with no error message. If the system runs out of memory, it simply kills the put.
In the same vein, there needs to be some way to manage memory so that it isn't consuming 2x the put file size in memory. Otherwise it would be untenable for a lot of people to put any files of a couple of gigs or more.
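To illustrate the point about memory, here is a minimal, hypothetical Rust sketch (not sn_client's actual upload path) of processing a file in fixed-size chunks, so peak memory stays at the chunk size rather than a multiple of the whole file; `CHUNK_SIZE` and `process_in_chunks` are illustrative names, not real API:

```rust
use std::fs::File;
use std::io::{BufReader, Read};

// Assumption for illustration: 1 MiB working buffer per chunk.
const CHUNK_SIZE: usize = 1024 * 1024;

// Read the file incrementally and hand each chunk to a callback
// (e.g. self-encrypt and upload it), instead of loading the whole
// file into memory first.
fn process_in_chunks(path: &str, mut handle_chunk: impl FnMut(&[u8])) -> std::io::Result<()> {
    let mut reader = BufReader::new(File::open(path)?);
    let mut buf = vec![0u8; CHUNK_SIZE];
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        handle_chunk(&buf[..n]); // only `n` bytes are valid this round
    }
    Ok(())
}
```

With a scheme like this, a multi-gig put would hold at most one chunk (plus any in-flight network buffers) in memory at a time.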
Since upload of 512 MB file takes 6 hours (source), memory limitation is not the biggest issue.
V6 running solidly for me. No errors, just put time on larger files seems slower. Also still waiting for my node to join:
[sn_node] INFO 2021-06-18T08:25:47.628559617-06:00 [src/bin/sn_node.rs:124] The network is not accepting nodes right now. Retrying after 3 minutes
put speeds seem stable:
10 MB: 1:50
20 MB: 4:34
50 MB: slower at 25:15
System:
3.7 GiB
Intel® Celeron(R) CPU J3455 @ 1.50GHz × 4
Ubuntu 20.04.2 LTS
64-bit
2 hours later and both are downloading nicely.
How long did it take in previous tests for issues to surface? Not going to say "resolved" to avoid jinxing.
Has anyone managed to join? My node is trying but no luck yet.
Aha, I see. The transfer is hitting an (only one…?) elder out of order and stalling things. I guess this is more likely as we do more transfers for larger files (and it's one transfer per chunk atm).
I was looking at this a couple of weeks ago, before we had some other fixes in. Hadn't seen it since, so didn't get it in amongst the other work. But I have some code to update the elders in a branch, so will get that in next week; then we need to resend the original and we should be good (hopefully).
atm, the client is still rather dumb w/r/t Cmd failures. We'll get there though!
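For anyone puzzling over the OperationOutOfOrder(323, 324) error above, a minimal sketch of that failure mode (an assumption for illustration, not sn_client's actual code; the type names, and which number is "received" vs "expected", are guesses from the log line):

```rust
// An elder tracking a client's transfer history: each operation must
// arrive with the next expected sequence number, otherwise it is
// rejected with an out-of-order error rather than applied.
#[derive(Debug, PartialEq)]
enum TransferError {
    // (received, expected) -- this pairing is an assumption
    OperationOutOfOrder(u64, u64),
}

struct TransferHistory {
    next_expected: u64,
}

impl TransferHistory {
    fn apply(&mut self, seq: u64) -> Result<(), TransferError> {
        if seq != self.next_expected {
            // The operation is dropped, not queued, so the client
            // stalls unless it resends in the right order.
            return Err(TransferError::OperationOutOfOrder(seq, self.next_expected));
        }
        self.next_expected += 1;
        Ok(())
    }
}
```

Under a model like this, one chunk payment arriving ahead of its predecessor is enough to stall an upload until the missing operation is resent, which matches the "resend the original" fix mentioned above.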
Thanks @Vort! @JBildstein (same issue for both).
Also worth noting just how much of the logs (and so elder work) is writing our transfer history. I imagine this slows down nodes a fair bit (a good whack of the memory issue came down to serialisation). We should be able to avoid this relatively easily, I think.
Good thing here is that these are client-side issues rather than node stuff.