DataPaymentNet [26/7/23 Testnet] [Offline]

4 Likes

I updated the client to 0.80.12 and reran my experiment of uploading the same folder of 11 PDF files, and it seems to be stuck in a loop or something. No error, but it is still uploading and downloading data after an hour, on the same file in the folder.

3 Likes

Unless we get told specifically otherwise, we should only ever use the versions listed in the OP for the testnet.
It's just another variable for the team to rule out when working out failures, and there is ZERO guarantee that no breaking changes have been introduced in releases since the testnet versions were announced.

I know, I know, I'm dead keen myself to see what the latest will and will not do, but now that there is a certain stability about testnets and we are getting greater participation, it just seems a bad idea IMHO to use a different version of any of the binaries.

Play with the cutting edge in your own local testnet.
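
If in doubt, pin the exact versions from the OP with safeup rather than taking the latest. Something like this, using this testnet's stated versions (and I'm assuming the node subcommand takes --version the same way the client one does – double-check the OP before copying):

safeup client --version 0.80.1
safeup node --version 0.87.1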

7 Likes

I don't know if you are right or wrong, but it doesn't say not to, and it expressly says to do it on install … so anyone who comes in later may end up with a different version.

4 Likes

Sorry, it DOES say to stick with the stated versions – from the OP

To be fair, I think the wording in the OP [sh|c]ould be improved to make this more explicit. We don't know if you have found a bug or if it is a version incompatibility problem.
Can you repeat the error after running
safeup client --version 0.80.1 ?

5 Likes

There is still a bug in the command: the quotes are missing. They should be there:

$env:SN_LOG = "all"; safenode

Nota bene - I started the node this morning with no problem; now when I try to start it again I get an error:

PS C:\Users\gggg> $env:SN_LOG  =  "all"; safenode
Logging to stdout
Using SN_LOG=all
[2023-07-28T21:35:01.892430Z ERROR safenode] Can't get env var SAFE_PEERS with error NotPresent
[2023-07-28T21:35:01.892536Z WARN safenode] No peers given. As `local-discovery` feature is disabled, we will not be able to connect to the network.
[2023-07-28T21:35:01.892626Z INFO safenode]
Running safenode v0.87.1
========================

2 Likes

You need to set the SAFE_PEERS env var explicitly for each terminal/PowerShell session.


$env:SAFE_PEERS = "/ip4/167.99.195.200/tcp/41805/p2p/12D3KooWP7SR4kWdN2SdeBdnpkxTbk1Buomb5PEjZVaZU3fGkkpK"
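
If you want it to survive new terminals, you can also persist it as a user environment variable (plain PowerShell/.NET, nothing safe-specific):

[Environment]::SetEnvironmentVariable("SAFE_PEERS", "/ip4/167.99.195.200/tcp/41805/p2p/12D3KooWP7SR4kWdN2SdeBdnpkxTbk1Buomb5PEjZVaZU3fGkkpK", "User")

New sessions will then pick it up automatically; the current session still needs the $env: assignment above.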
2 Likes

I did already repeat this with earlier versions. My report above was with 0.80.12, so the bug is specific to that version.

2 Likes

Ok so yesterday night I had two Windows machines; this time I could easily send a 5 MB mp3 to the network with my new-generation magic internet money :money_mouth_face: on one machine,
and then 10 minutes later retrieve it almost instantly on the other Windows machine :fire:
How beautiful.

If anyone wants to try to download it, it's an unnamed song (.mp3) my brother made: ae40069787b8737393ae01e35dab1debc2bab902799c8639f5c5b6ced86035eb

I tried again just now to download it, this time back on my Mac.
BTW thank you @happybeing & @Southside, you're right, I should have tried that path right away. I did ulimit -n 30000 as suggested here on SO, and even restarted the computer. It seemed to help a little, or maybe it's just a coincidence, but I could almost download the file; the error happened anyway:

Downloading file "6.mp3" with address ae40069787b8737393ae01e35dab1debc2bab902799c8639f5c5b6ced86035eb
Client download progress 9/9
Successfully got file 6.mp3!
Writing 4607999 bytes to "/Users/x/Library/Application Support/safe/client/6.mp3"
Failed to create file "6.mp3" with error Os { code: 24, kind: Uncategorized, message: "Too many open files" }

Then

Downloading file "6.mp3" with address ae40069787b8737393ae01e35dab1debc2bab902799c8639f5c5b6ced86035eb
Client download progress 8/9
Did not get file "6.mp3" from the network! Chunks error Not all chunks were retrieved, expected 9, retrieved 8, missing [cad4be(11001010)…]…

So yeah, I can't get rid of this "Too many open files (os error 24)" issue. Running out of time to push more for now.
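
For reference, this is roughly what I've been poking at on macOS (note the ulimit change only affects the current shell, which may be why it only half-worked for me):

ulimit -n                    # show the current soft limit for this shell
ulimit -n 30000              # raise it, for this shell only
launchctl limit maxfiles     # show the system-wide soft/hard limits on macOS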

And for upload on the Mac, it just gets stuck forever at the "Making payment for 6 Chunks that belong to 1 file/s." step. No idea whether it's related to my too-many-open-files error or not. For the team, here is the log message I see over and over, related to that:

[2023-07-29T09:22:48.091174Z TRACE sn_networking::event] Query task QueryId(105) returned with record Key(b"\xe6\xe0\xf3\xc4q\x8b\xab[A\x92Bb@(\xdd\x1c\xb2\xd6\x8c<\xcf\xd6m\xfb]\xed}\x19T\xdf\x91f") from peer Some(PeerId("12D3KooWJ1nQnssSnPSJSvewtdE264dn29JHVDVLxPortFhTcjQU")), QueryStats { requests: 7, success: 2, failure: 0, start: Some(Instant { t: 1449329327000 }), end: None } - ProgressStep { count: 1, last: false }
[2023-07-29T09:22:48.091408Z TRACE sn_networking::record_store] GET request for Record key: Key(b"\xe6\xe0\xf3\xc4q\x8b\xab[A\x92Bb@(\xdd\x1c\xb2\xd6\x8c<\xcf\xd6m\xfb]\xed}\x19T\xdf\x91f")
[2023-07-29T09:22:48.091418Z TRACE sn_networking::record_store] Record not found locally
[2023-07-29T09:22:48.109053Z TRACE sn_networking::event] KademliaEvent ignored: OutboundQueryProgressed { id: QueryId(103), result: GetRecord(Ok(FinishedWithNoAdditionalRecord { cache_candidates: {Distance(377603634603690964699810860465431394209475658872846765709416450819272540264): PeerId("12D3KooWFkFnoxt8xNFP8wBCUtgUSYxw88zuSbwztN4P7KviNvpc")} })), stats: QueryStats { requests: 50, success: 33, failure: 0, start: Some(Instant { t: 1448956248333 }), end: Some(Instant { t: 1449534093083 }) }, step: ProgressStep { count: 19, last: true } }


Last note: yesterday on the second Windows machine I had a faucet request that got stuck indefinitely, but apart from that it was perfect :clap: :clap:

12 Likes

I think we actually do make this quite clear. However, do you have any other suggestions?

5 Likes

Got your brother’s mellow groove.

root@localhost:~# safe files download ~/atom.mp3 ae40069787b8737393ae01e35dab1debc2bab902799c8639f5c5b6ced86035eb 
Built with git version: 1262368 / main / 1262368
Instantiating a SAFE client...
⠐ 0/20 initial peers found. The client still does not know enough network nodes.
🔗 Connected to the Network
Downloading file "/root/atom.mp3" with address ae40069787b8737393ae01e35dab1debc2bab902799c8639f5c5b6ced86035eb
Client download progress 9/9
Successfully got file /root/atom.mp3!
Writing 4607999 bytes to "/root/atom.mp3"
8 Likes

I see. I thought the network I set up in the morning was still working the whole time. The matter is clear now.

3 Likes

The wording indicates to me that after you install, you then run the update. But if you came in later and followed those instructions, you'd be using different versions from those who started the network, and so we'd end up with a mixed network. If that's desired, IDK, then no worries.

4 Likes

It's nice to see overall memory pretty low compared to prior testnets.

On my node, if you graph the memory used by the safenode process from the metrics JSON that safenode provides, it seems to be increasing linearly at a pretty constant, slow rate, right from the start of the process:
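
For anyone wanting to reproduce a rough version of that graph without a dashboard, a minimal sketch that just samples the resident memory of the safenode process once a minute (assumes a single safenode process is running):

while true; do
  echo "$(date -u +%H:%M:%S) $(ps -o rss= -p "$(pgrep -x safenode)") KiB"
  sleep 60
done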

I know there have been tons of improvements in the past few weeks. Out of curiosity, what is the main or current theory for the slow memory rise, given the following observations:

  • kBucketTable - # of buckets and # peers are relatively stable for hours on end
  • Number of Unique Connected Peers oscillates within a given range
  • Memory increase doesn’t seem to correlate directly with when chunks are stored
    • See ‘Record Store - File Count’ vs. ‘PID - Memory Used’ panels at the top right of the dashboard

Is the slow memory rise due to any of the following:

  • Kad components?
  • libp2p components?
  • Number of Unique Peer IDs discovered in general over time?
  • Number of Chunks stored?
  • Something else?

I was expecting the memory to stabilize or oscillate within a certain range during hours of inactivity (no PUTs & GETs), but it continues to increase linearly at a constant (very slow) rate. Then again, the idea that it shouldn't be increasing might be a false assumption, depending on which components in the current code base do or do not have an upper cap in code (size / entry limits).

If this testnet were run for days on end, where would this node's memory end up?

My apologies if this was already answered in earlier posts here :slight_smile:

14 Likes

It's not clear, but worth letting this run and seeing where it gets to, I reckon.

14 Likes

Perhaps an explicit check against the git build version?

willie@gagarin:~$ safe files upload ~/trecem
Built with git version: 1262368 / main / 1262368
Instantiating a SAFE client...

A note in the OP for each testnet to check that this number is consistent, perhaps?
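
Even a simple grep against that banner line would do it, e.g. (1262368 being whatever hash the OP states for the given testnet):

safe files upload ~/trecem 2>&1 | tee upload.log
grep -q "Built with git version: 1262368" upload.log || echo "WARNING: build does not match the OP"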

2 Likes

I’ve sent 10 SNT to my wallet, and they’re gone, they’re gone! I feel so punished.

1 Like

Thanks.

The intention of putting that update command there was just to show it's a capability that safeup has, not that it's something you're required to do. That's why it's in the ‘Further Information’ section rather than ‘Quickstart’, and it says "you can install binaries like so", not "you should" or "you have to".

I think we might just take it out of the template though.

9 Likes

…is the correct answer. :grin:

4 Likes

I note that when doing a safe update it looks to retrieve node, client & testnet … I'm guessing the latter, "testnet", is a config file? Perhaps this could be updated first and could specify the versions of node and client that the update process itself then needs to pull from the repo. That would allow updating for each testnet without worrying about using the wrong code, perhaps.
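
For example, the config could just be a little version manifest that the update process reads first. Entirely hypothetical, with the file name, path and format invented here just to illustrate:

$ cat ~/.safe/testnet-versions.toml   # hypothetical file
safe = "0.80.1"
safenode = "0.87.1"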

1 Like