NoEncryptionNet [07/02/24 Testnet] [Offline]

I don’t know how long Josh would intend to keep it on for, and he’s signed out for the day. Sorry about that.

3 Likes

Hi Chris, could you please take a stab at answering the question about @peca and me only having ~2 log files per day?

If that is correct then the 50-log request is 25 days' worth, so something is amiss.

2 Likes

Hey, sure, I’ll come back shortly and see if I can understand the issue.

3 Likes

Right, so you think there should have been more log output than there is? I notice these logs are at DEBUG level rather than TRACE. I'm not sure, but perhaps that's the reason there aren't as many? Do we have some kind of baseline to compare against? Did the previous testnet produce much more output at DEBUG level?

I saw Qi had made a post on Slack this morning referring to the logs, and he didn’t note any issues with lack of information.

3 Likes

Perhaps it is log level as you suggest.

I think Qi previously said 10 logs would roughly cover a day and the request for keeping 50 seemed in line with that number.

Based on that, I assumed that something is not right.

Thanks for looking Chris!

3 Likes

No worries! Perhaps @qi_ma could follow up on this when you are next around, just to confirm please? Is the number of logs generated what you would have expected?

2 Likes

the 10 logs per day estimation is a bit outdated :grinning:

The launch post of NoEncryptionNet says the nodes are on version 0.103.45,
which unfortunately doesn't contain the commit that makes node logging trace level by default.
Meanwhile, our droplet nodes have been re-launched to use that commit and are logging at trace.

With info-level logging, not much log output is generated, and 2 log files per day is normal
(we have carried out some clean-up work to reduce logging at that level, so that our droplets can sustain more nodes over a long run;
hence the big drop from 10 log files per day).

The request to retain 50 logs was based on logging at trace level, so it unfortunately became a mis-request due to the missing commit.
Sorry about that.
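
For anyone wondering what the missing commit changes in practice, here is a minimal sketch using the tracing and tracing-appender crates. This is not the actual safenode code; the file naming, hourly rotation, and the retention comment are assumptions. The point is just that the max level set on the subscriber decides whether DEBUG or TRACE output reaches the rolling log files, and hence how many files per day get written.

// Minimal sketch, not safenode's real logging setup.
use tracing::Level;
use tracing_appender::non_blocking::WorkerGuard;
use tracing_appender::rolling;
use tracing_subscriber::fmt;

fn init_node_logging(log_dir: &str, trace_by_default: bool) -> WorkerGuard {
    // Hourly-rolling file appender; the real node's rotation and the number of
    // retained files (the "50 logs" request) are assumptions here.
    let appender = rolling::hourly(log_dir, "safenode.log");
    let (writer, guard) = tracing_appender::non_blocking(appender);

    // This is the switch the missing commit would flip: TRACE instead of DEBUG.
    let max_level = if trace_by_default { Level::TRACE } else { Level::DEBUG };

    fmt()
        .with_max_level(max_level)
        .with_writer(writer)
        .init();

    // Keep this guard alive, or buffered log lines are dropped on exit.
    guard
}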

8 Likes

As Joshuef mentioned, we did spot some issues with the max_records cap and potential memory over-usage.
Joshuef planned to use an upgrade to replace the droplet nodes with fixes, but encountered some issues, as Chris mentioned.

So, yeah this testnet may not last long and will be torn down and restarted soon.
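
To illustrate what the max_records cap is about, here is a toy sketch (the real node's record store is more involved, so take this only as a picture of the idea): each node keeps a bounded number of records so memory stays roughly cap times the maximum record size, and if the cap or the check around it misbehaves, memory can grow well past what was budgeted.

// Toy illustration only, with assumed names; not the node's real record store.
use std::collections::HashMap;

struct BoundedRecordStore {
    max_records: usize,
    records: HashMap<Vec<u8>, Vec<u8>>, // record key -> serialized record
}

impl BoundedRecordStore {
    fn new(max_records: usize) -> Self {
        Self { max_records, records: HashMap::new() }
    }

    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) -> Result<(), &'static str> {
        // Reject new records once the cap is reached; if this check (or the cap
        // itself) is wrong, memory use can balloon.
        if !self.records.contains_key(&key) && self.records.len() >= self.max_records {
            return Err("record store full");
        }
        self.records.insert(key, value);
        Ok(())
    }
}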

7 Likes

If the testnet is looking dead in the water and unupgradeable, we can bring that down @chriso. It's served its purpose just now.

6 Likes

OK cool, yeah, we can’t upgrade this one I’m afraid. I’ll bring it down then.

Edit: this testnet is now down.

6 Likes

Just a final report on my download woes. With the default batch size my ability to download was poor, with even small files failing. When set to batch-size 8 the small files were all recovered, but the medium-sized ones failed half the time. With batch-size 4 I pulled down everything but the large 2GB file (although I did get some of it).

So all up, at least for my slow connection, I believe that a lower batch-size while downloading makes a really significant difference.
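
For what it's worth, my mental model of why this helps is sketched below (assumed names and a placeholder fetch_chunk, not the actual sn_cli code): the batch size is essentially how many chunk fetches are kept in flight at once, so on a slow link a smaller batch means each fetch gets more of the bandwidth and is less likely to time out.

// Sketch with assumed names, not sn_cli internals.
use futures::stream::{self, StreamExt};

// Placeholder for a real network fetch of one chunk by its address.
async fn fetch_chunk(addr: [u8; 32]) -> Result<Vec<u8>, std::io::Error> {
    let _ = addr;
    Ok(Vec::new())
}

// Keep at most `batch_size` chunk fetches in flight at a time.
async fn download_chunks(
    addrs: Vec<[u8; 32]>,
    batch_size: usize,
) -> Vec<Result<Vec<u8>, std::io::Error>> {
    stream::iter(addrs)
        .map(fetch_chunk)
        .buffer_unordered(batch_size)
        .collect()
        .await
}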

5 Likes

$ safe files upload '/media/testnet/TEST FILES/Uploadfiles'
Logging to directory: "/home/testnet/.local/share/safe/client/logs/log_2024-02-08_12-18-34"
Built with git version: db930fd / main / db930fd
Instantiating a SAFE client…
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 97 peers
:link: Connected to the Network
Starting to chunk "/media/testnet/TEST FILES/Uploadfiles" now.
Chunking 244 files…
⠁ [00:00:00] [----------------------------------------] 0/24892 Uploading 24892 chunks
⠠ [08:59:45] [#####################################>--] 23361/24892 Retrying failed chunks 1531 …
⠠ [12:06:32] [######################################>-] 24024/24892 Retrying failed chunks 868 …
⠐ [14:29:04] [######################################>-] 24184/24892 Retrying failed chunks 708 …
⠤ [16:27:43] [#######################################>] 24298/24892 Retrying failed chunks 594 …
⠒ [18:09:34] [#######################################>] 24375/24892 Retrying failed chunks 517 …
⠚ [19:29:33] [#######################################>] 24406/24892 Upload terminated due to un-recoverable error Err(SequentialUploadPaymentError)
Error:
0: Failed to upload chunk batch: Too many sequential upload payment failures

Location:
sn_cli/src/subcommands/files/mod.rs:415

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

@Knosis the network is no longer up. See Chris's post above. Probably some community nodes are still up, hence you can connect, but they look jam-packed full.

3 Likes

This upload was running when I went to bed. Thanks for the heads up. I got the same error when I uploaded while it was still up. I even tried deleting all the safenet software and reinstalling. Still the same error.

I’ll try again on the next one.

5 Likes

Yes, I found uploads were great on this testnet with a batch size of 40 or 80, while downloads were slow at the default batch size and unable to function at higher batch sizes.

Hope that can be sorted in the future, as previously I seem to remember uploads being slow and downloads being able to max out my connection.

Great work team and testers… progress is rapid!

6 Likes

@qi_ma

just in case this is of any interest, I had one node that was spiking up and down to 7GB mem usage.
It looks like shark's teeth in the pic; it was the only one displaying this behaviour.

http://safe-logs.ddns.net/12D3KooWG2m6f3exegAbYX8fCVJDNBkVr9pbxF8k7ev4JFfoCTe7/

7 Likes

Thanks Qi, all good, my default assumption is always that I screwed up so just eliminating that. :+1:

4 Likes

I think this impression is more likely because our previous uploads were really poor :sweat_smile:

WOW, that curve is really impressive, spiking up so high and dropping so sharply.
Thanks for the share; we will have a double check on what's happening.

9 Likes

I'm not so sure about that. Downloads for me in the past few testnets, at least for the small files, were less error-prone. I'm not sure that they were faster in the past, but I didn't have to specify a small batch-size to get them to download. That's my impression anyway.

3 Likes

Similar for me. In the past, downloads generally worked better than in this test; it was uploads that gave problems.

4 Likes