[Offline] Another day another testnet

I made a small script that downloads the container, waits 10 minutes, and repeats. So far there has been no data loss and no degradation of performance. It got interrupted when my PC decided to go to sleep; I am restarting it now.
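For reference, the loop is roughly this (a sketch, not the script itself; the container address and output directory are placeholders):

    use std::process::Command;
    use std::thread;
    use std::time::{Duration, Instant};

    // Sketch of the test loop: download the container, wait 10 minutes, repeat.
    // "safe://..." and "downloaded/" are placeholders, not real values.
    fn main() {
        loop {
            let clock = Instant::now();
            let status = Command::new("safe")
                .args(["files", "get", "safe://...", "downloaded/"])
                .status()
                .expect("failed to run safe");
            // Log how long each download took, like the timings below.
            println!("{:?} ({status})", clock.elapsed());
            thread::sleep(Duration::from_secs(600));
        }
    }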
Here are the results:
2022-12-16 18:24:17.967133 0:04:47.260544
2022-12-16 18:30:48.541040 0:04:51.758912
2022-12-16 18:45:40.442179 0:04:49.365158
2022-12-16 18:58:55.411146 0:04:55.004277
2022-12-16 19:13:50.598674 0:04:54.213705
2022-12-16 19:28:44.991916 0:04:46.943849
2022-12-16 19:43:32.031828 0:04:48.349665
2022-12-16 19:58:20.565070 0:04:54.313960
2022-12-16 20:13:15.054979 0:05:08.204285
2022-12-16 20:28:23.445997 0:04:47.368553
2022-12-16 20:43:10.991461 0:04:53.386797
2022-12-16 20:58:04.548596 0:04:52.084023
2022-12-16 21:12:56.768380 0:04:46.754340
2022-12-16 21:27:43.711197 0:04:53.455529
2022-12-16 21:42:37.253956 0:05:03.098579
2022-12-16 21:57:40.531017 0:05:09.879333
2022-12-16 22:12:50.556620 0:05:02.206364
2022-12-16 22:27:52.947500 0:04:55.788716
2022-12-16 22:42:48.929014 0:04:52.466976
2022-12-16 22:57:41.567349 0:04:55.696832
2022-12-16 23:12:37.453904 0:04:56.939348
2022-12-16 23:27:34.563813 0:04:50.724739
2022-12-16 23:42:25.465003 0:04:49.351181
2022-12-16 23:57:14.987185 0:04:52.271963
2022-12-17 00:12:07.393762 0:04:57.607407
2022-12-17 00:27:05.137537 0:04:52.753572
2022-12-17 00:41:58.076699 0:04:46.360034
2022-12-17 00:56:44.553062 0:04:48.970486
2022-12-17 01:11:33.706547 0:04:50.023704
2022-12-17 01:26:23.909784 0:04:50.154460
2022-12-17 01:41:14.183010 0:04:49.851482
2022-12-17 01:56:04.214396 0:04:49.977555
2022-12-17 02:10:54.369738 0:09:08.639871
2022-12-17 02:30:03.187307 0:04:52.429108
2022-12-17 02:44:55.811234 0:04:55.155940
2022-12-17 02:59:51.145203 0:04:53.740297
2022-12-17 03:14:45.075803 0:04:48.456453
2022-12-17 03:29:33.633761 0:04:48.344280
2022-12-17 03:44:22.078974 0:04:59.495281
2022-12-17 03:59:21.676963 0:04:49.576492
2022-12-17 04:14:11.438760 0:04:51.775852
2022-12-17 04:29:03.363788 0:04:54.193764

6 Likes

That string is the banner of ProxyChains-3.1 (http://proxychains.sf.net).

This is the start of the actual JPEG.

Looks like ProxyChains adds extra output of its own, which gets mixed with the output from cat.
+1 more reason not to use the console for binary transfers.
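If the banner is the only contamination, a capture like that can still be salvaged. A rough sketch (file names are illustrative):

    use std::fs;

    // Remove the first occurrence of the ProxyChains banner from a capture,
    // assuming the banner string is the only foreign data in the file.
    // "capture.jpg" and "clean.jpg" are illustrative names.
    fn main() -> std::io::Result<()> {
        const BANNER: &[u8] = b"ProxyChains-3.1 (http://proxychains.sf.net)\n";
        let data = fs::read("capture.jpg")?;
        let cleaned = match data.windows(BANNER.len()).position(|w| w == BANNER) {
            Some(pos) => [&data[..pos], &data[pos + BANNER.len()..]].concat(),
            None => data,
        };
        fs::write("clean.jpg", cleaned)?;
        Ok(())
    }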

5 Likes

Thanks, I’ll look at that later!

For now I can say that in 90 minutes my memory consumption rose from 2.4 GB to 3.4 GB. The only processes running were the node waiting to join and System Monitor itself.

It will be interesting to see what happens when someone manages to join.

4 Likes

Yea, there seems to be a bug there. This is one we never knew about, so it's been a good test, even for that one.

10 Likes

Yes, this bug may either be specific to the "waiting" state or appear during normal operation too.

2 Likes

I suspect we create an endpoint, try, and then wait, but then create a new endpoint to try again, i.e. we are not clearing the first endpoint but holding onto it. That would be the memory hog. I will dig in later on. Heading to the sheepdog trials for an hour or so now, though.
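In other words, something of this hypothetical shape (illustrative names, not qp2p's actual API):

    // Hypothetical shape of the suspected leak: each retry creates a fresh
    // endpoint while the old ones stay held, so memory grows every cycle.
    struct Endpoint; // stands in for sockets, buffers, and background tasks

    struct JoiningNode {
        endpoints: Vec<Endpoint>, // nothing is ever removed from here
    }

    impl JoiningNode {
        fn retry_join(&mut self) {
            let fresh = Endpoint;
            self.endpoints.push(fresh); // earlier endpoints are never cleared
            // ... attempt to join, time out, caller retries ...
        }
    }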

11 Likes

Feel free to completely disregard the below, but I just had a problem with an http2 crate which might also be occurring in the qp2p crate.

Basically, the problem is that read_exact(2 bytes) combined with a timeout can drop bytes. Let's say you time out after 20 milliseconds and, 19.9 milliseconds into the wait, you receive 1 byte. Instead of carrying that byte over to the next cycle so both bytes can be read, the first byte is dropped, causing all subsequent frames to be offset by 1 byte. Which is exactly the opposite of the behaviour a function named read_exact should have (in my opinion).
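In tokio terms the hazard looks roughly like this (a sketch, assuming tokio-style APIs; tokio's read_exact is documented as not cancellation-safe):

    use std::time::Duration;
    use tokio::io::{AsyncRead, AsyncReadExt};
    use tokio::time::timeout;

    // Sketch of the hazard: cancelling read_exact via timeout() discards any
    // byte it had already consumed from the stream.
    async fn read_frame_len<R: AsyncRead + Unpin>(stream: &mut R) -> Option<u16> {
        let mut len_buf = [0u8; 2];
        match timeout(Duration::from_millis(20), stream.read_exact(&mut len_buf)).await {
            Ok(Ok(_)) => Some(u16::from_be_bytes(len_buf)),
            // On timeout the read_exact future is dropped here. If it had read
            // 1 of the 2 bytes, that byte is lost and every later frame is
            // offset by one.
            _ => None,
        }
    }

A common workaround is to keep the partially filled buffer and a progress counter outside the timed future, so a timeout doesn't throw away bytes already read.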

Because frame fragments are usually written to the pipe together, the above problem happens only very infrequently (it's almost impossible to replicate on a LAN or locally). I eventually found it when writing a test to simulate lag (a small random wait between each byte).

The qp2p code looks very similar to what I had written for my http2 crate, so I thought I would mention it.

10 Likes

375 files, 3.57 GB successfully uploaded in 365 minutes.
Unfortunately the Windows command didn't return the address…

 Measure-Command {safe files put .\test\ -r}


Days              : 0
Hours             : 6
Minutes           : 5
Seconds           : 4
Milliseconds      : 168
Ticks             : 219041682449
TotalDays         : 0.253520465797454
TotalHours        : 6.08449117913889
TotalMinutes      : 365.069470748333
TotalSeconds      : 21904.1682449
TotalMilliseconds : 21904168.2449
3 Likes

Nooooooooooo :confounded:

3 Likes

Yeah… I think I'll let my node try to join for a couple of hours, then kill it to reset the memory, then start again.

2 Likes

Very stable too, only a few seconds' difference in general, except for this one outlier. What might it be?

1 Like

I am not sure where the current nodes run, but I suppose it is somewhere in Europe, and at that time it could have been backup/maintenance tasks running on the datacenter servers. It could be a lot of things. Unless somebody experienced a problem at that time, I don't think it is worth investigating. I will keep my script running; we will see if it appears again.

1 Like

My mega upload started failing with the following error; the total uploaded was 1024 files.

Anyone can check here:

safe files ls safe://hyryyryip96f9ojodzifniqn1zcu3wbqnhsiordj7o51jh5do86c596yhqhnra

The error it started giving after 1024 files is:

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

Error:
   0: NetDataError: Failed to write data on Register: NetworkDataError(TooManyEntries(1024))

Location:
   sn_cli/src/subcommands/files.rs:317

Is it just me, or does that number look suspicious: 1024???

2 Likes

I think that's the limit on how many files you can add to a container, because of the newly imposed limit on the number of updates to a register.
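Presumably a cap along these lines (an illustrative sketch, not the actual sn code):

    // Illustrative sketch of a register entry cap that would produce
    // TooManyEntries(1024); not the actual sn_registers implementation.
    const MAX_REG_ENTRIES: usize = 1024;

    fn append_entry(entries: &mut Vec<Vec<u8>>, entry: Vec<u8>) -> Result<(), String> {
        if entries.len() >= MAX_REG_ENTRIES {
            // Each file added to a container writes one register entry,
            // so the 1025th file in one container trips this cap.
            return Err(format!("TooManyEntries({})", entries.len()));
        }
        entries.push(entry);
        Ok(())
    }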

7 Likes

1-0
Maidsafe vs Neik :laughing:

8 Likes

Starting again from the top, but without the safe files add; that will just give them each a separate container.

And there was me thinking I could split my files between all my nodes, copy the same keys to each one, and try uploading to the same container from 4 different locations to see what happened :frowning:

this ain't over yet :slight_smile:

8 Likes

I guess it is actually 1-1; you won the last round.

10 Likes

89 randomly generated 9 MB files put successfully

safe files get safe://hyryyryt6n7j7nqbp1zhdoipudfhqn191fzz17bo5odozw13isfrikpts3hnra?v=hfprnsq3tbzkr4b5brm1muwtk3cdxq7mp61utwidcnfg8xwq6we8y 9MBRandomFilesSmoothieGR

Measured with time:

real    5m12.633s
user    1m26.152s
sys     0m59.135s

I did it and will upload more random files just so the nodes fill up.
Will upload 899 more 9 MB random files, generated along the lines of the sketch below!
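A dependency-free way to produce them (file names and count here are illustrative):

    use std::fs::File;
    use std::io::{BufWriter, Write};

    // Generate 899 files of 9 MB pseudo-random bytes (xorshift64) so the
    // chunks don't deduplicate. File names are illustrative.
    fn main() -> std::io::Result<()> {
        const FILE_SIZE: usize = 9 * 1024 * 1024;
        let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // arbitrary nonzero seed
        for i in 0..899 {
            let mut out = BufWriter::new(File::create(format!("random_{i:03}.bin"))?);
            let mut remaining = FILE_SIZE;
            while remaining > 0 {
                // One xorshift64 step yields 8 cheap pseudo-random bytes.
                state ^= state << 13;
                state ^= state >> 7;
                state ^= state << 17;
                let bytes = state.to_le_bytes();
                let n = bytes.len().min(remaining);
                out.write_all(&bytes[..n])?;
                remaining -= n;
            }
        }
        Ok(())
    }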

get your nodes ready for joining!

4 Likes

I just screenshot System Monitor, as I am generally looking at trends rather than absolute values.

AskUbuntu suggests:

You can use top and a while loop to comma-separate the fields.

$ while read -r a b c d e f g h i j k l; do \
   echo $a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l; done < <(top -b)
4 Likes