Fleming Testnet v6 Release - *OFFLINE*

Are we no longer able to run safe node join and walk away until it joins?

I get "trying again in 3 minutes", but then after a short while it says "try again later".

3 Likes

:smiley:

6 Likes

Similar to what some of us are seeing @Josh - sometimes you can walk away and leave it, other times it stops. Other times you get an IGD error.

3 Likes

Seems like this might be related to a bug @yogesh is working on

7 Likes

That works, though that is for key not wallet - perhaps something has changed; I'll wonder about it later.

Currently stuck with a Virgin Media Hub 3.0 bug that sets the local IP to 0.0.0.# (uneditable) and then errors when trying port forwarding… waiting on their forum to answer!
The G in IGD is glue?

2 Likes

If I try safe files put somefile it just hangs, even with small files

1 Like

Try:

safe cat safe://hy8ayqyp5z55pu9a7dur9mntgznm7jtihjk4se1pxqy3qg5gknc8fbgiior > Waterfall_slo_mo.mp4

Dedicated to @Sascha

2 Likes

I can confirm the hang on the second parallel command.
First one (large file upload) is still doing something.

1 Like

same for me, first upload still going, second hangs. I have the debug log for the second upload here: Debug Log

3 Likes

Was just about to ask for such things :bowing_man: How big is this @JBildstein?

If anyone has a repro case of the hang with a small file, please grab logs by prefixing your command with log levels, e.g.: RUST_LOG=sn_client=trace,qp2p=debug safe files put <file>


edit: I can see Cmd Error was received for Message w/ID: 3c866ae5.., sending on error channel (which should go out to the CLI, but that's not handled there yet). Could you log with trace level if you can repro, please?

If we're getting a CmdError here, it may well be that the payment is still insufficient (and things are changing faster on the network than I've anticipated). That's the first thing that comes to mind here, at least.
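For anyone wanting to reproduce this, a minimal sketch of the trace-level run being asked for above (the file name is just a placeholder; point it at whatever small file hangs for you):

# same as the earlier suggestion, but with qp2p bumped from debug to trace
RUST_LOG=sn_client=trace,qp2p=trace safe files put ./somefile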

3 Likes

It's about 384 MiB. BTW, I can see some network activity in the task manager going from a couple hundred B/s to single-digit KiB/s.

sure, will try with trace enabled

2 Likes
x@x:~$ time safe cat safe://hyryyyyjhxs65mpjyt9ihttd54b67fap7uq6s3cmbrjb5gswb3anwurkxyy > 1.flac

real    0m59.456s
user    0m23.470s
sys     0m12.877s
x@x:~$ time safe cat safe://hy8ayqyp5z55pu9a7dur9mntgznm7jtihjk4se1pxqy3qg5gknc8fbgiior > Waterfall_slo_mo.mp4

real    0m56.420s
user    0m22.831s
sys     0m12.325s

Performance seems pretty solid and consistent from a download perspective. Sustained about 8Mbps.

7 Likes

Late to this party - I am away from home and my cherished fixed IP, so about to dive into my mum's BT router config.
What ports should I forward?

1 Like

Not sure if 3 minutes is long enough to count as a hang:
https://wdfiles.ru/61b33a

Last line is
[sn_client::connections::listeners] TRACE 2021-06-18T17:13:01.834353200 [C:\Users\runneradmin\.cargo\registry\src\github.com-1ecc6299db9ec823\sn_client-0.61.1\src\connections\listeners.rs:298] Error received is: Transfer(TransferValidation(InvalidOperation("Failed to perform operation: Transfer(OperationOutOfOrder(323, 324))")))

1 Like

Same file now with RUST_LOG=sn_client=trace,qp2p=debug enabled: Trace Log

2 Likes

Perhaps this is a known issue, but putting large files on systems with limited memory available fails with no error message. If the system runs out of memory, it simply kills the put.

In the same vein, there needs to be some way to manage memory so that a put isn't consuming 2x the file size in memory. Otherwise it would be untenable for a lot of people to put files of a couple of gigs or more.
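One rough way to put a number on that memory use, assuming a Linux box with GNU time installed (the file name here is just an example), is to wrap the put and read the peak resident set size it reports:

# GNU time (/usr/bin/time -v, not the shell builtin) prints "Maximum resident set size" when the process exits
/usr/bin/time -v safe files put ./bigfile.bin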

2 Likes

Since uploading a 512 MB file takes 6 hours (source), the memory limitation is not the biggest issue.
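Back-of-envelope, if those numbers hold: 512 MB × 8 ≈ 4,096 Mbit over 6 × 3,600 = 21,600 s is roughly 0.19 Mbit/s sustained on upload, versus the ~8 Mbps people are reporting for downloads above.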

2 Likes

V6 running solidly for me. No errors, just put times on larger files seem slower. Also still waiting for my node to join:

[sn_node] INFO 2021-06-18T08:25:47.628559617-06:00 [src/bin/sn_node.rs:124] The network is not accepting nodes right now. Retrying after 3 minutes

put speeds seem stable:

10 MB: 1:50
20 MB: 4:34
50 MB: slower, at 25:15
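(Rough rates, assuming those are megabytes: 10 MB in 110 s ≈ 0.09 MB/s, 20 MB in 274 s ≈ 0.07 MB/s, 50 MB in 1,515 s ≈ 0.03 MB/s, so the per-MB rate does fall off on the bigger file.)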

System:

3.7 GiB RAM
Intel® Celeron® CPU J3455 @ 1.50GHz × 4
Ubuntu 20.04.2 LTS
64-bit

8 Likes

2 hours later and both are downloading nicely.
How long did it take in previous tests for issues to surface? Not going to say "resolved" to avoid jinxing.

Has anyone managed to join? My node is trying, but no luck yet.

1 Like

Aha, I see. The transfer is hitting an elder (only one…?) out of order and stalling things. I guess this is more likely as we do more transfers for larger files (and it's one transfer per chunk atm).

I was looking at this a couple of weeks ago, before we had some other fixes in. Hadn't seen it since, so didn't get it in amongst the other work. But I have some code to update the elders in a branch, so will get that in next week, and then we need to resend the original and we should be good (hopefully).

Atm the client is still rather dumb w/r/t Cmd failures. We'll get there though!

Thanks @Vort! @JBildstein (same issue for both).


Also worth noting just how much of the logs (and so elder work) is writing our transfer history. I imagine this slows down nodes a fair bit (a good whack of the memory issue came down to serialisation). We should be able to avoid this relatively easily, I think.

Good thing here is that these are client-side issues, not really node stuff.

12 Likes