that’s 11 nodes _per machine_, not just 11 left in total, if that’s what you thought?
11/20*100 = 55 percent of nodes left (from maid)
Ahh, thanks for the clarification. Impressive nonetheless!
I read it as 11 droplets left of the original 100.
And that’s what you call a result.
Really great work gentlemen! Hats off to the team and the testers!

What’s happening here? Very quiet; I have been out of poking distance.
All data still holding tight?
Going to check my node shortly.
I’m on the worst Internet connection possible and can’t get involved.
Can you share any results of how it’s going if you have a chance to poke around?
Seems like it’s still going - from a node operator perspective at least. I haven’t tried uploading or downloading anything for a while
I killed my DO node after it stopped receiving chunks. (bad move?)
My node from home is full!!
ls -1 record_store | wc -l
2048
du -sh record_store
476M	record_store
It seems to be chugging along nicely.
Will see what’s up data-wise in a bit.
Getting all the data I tried successfully, starting with the first upload, begblag.
This is AWESOME progress! The SAFE network is so near I can taste it and it is delicious!
josh@pc1:~$ safe files download begblag.mp3 3eb0873bd425e5599d72c2873ca6e691d5de5c75bbc89f2e088fa95e3390927a
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Built with git version: 6ed7662 / main / 6ed7662
Instantiating a SAFE client...
🔗 Connected to the Network
Downloading file "begblag.mp3" with address 3eb0873bd425e5599d72c2873ca6e691d5de5c75bbc89f2e088fa95e3390927a
Successfully got file begblag.mp3!
Writing 2499 bytes to "/home/josh/.safe/client/begblag.mp3"
josh@pc1:~$ safe files download scotcoin.bom 2666dd20e19f197bc42c697aef7c525fd7caf535427c506a80d5b6188ccbe1a3
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Built with git version: 6ed7662 / main / 6ed7662
Instantiating a SAFE client...
🔗 Connected to the Network
Downloading file "scotcoin.bom" with address 2666dd20e19f197bc42c697aef7c525fd7caf535427c506a80d5b6188ccbe1a3
Successfully got file scotcoin.bom!
Writing 2233 bytes to "/home/josh/.safe/client/scotcoin.bom"
josh@pc1:~$ safe files download gitignore 8a794264ebcd148e2b08c5091916fd86e2797ab22c9206c646fde011e049745c
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Built with git version: 6ed7662 / main / 6ed7662
Instantiating a SAFE client...
🔗 Connected to the Network
Downloading file "gitignore" with address 8a794264ebcd148e2b08c5091916fd86e2797ab22c9206c646fde011e049745c
Successfully got file gitignore!
Writing 28 bytes to "/home/josh/.safe/client/gitignore"
josh@pc1:~$ safe files download slow-blow-debug.txt 80d6abd59b44971972899968ab52337602a3f61babf65fdae2c0115bfb289267
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Built with git version: 6ed7662 / main / 6ed7662
Instantiating a SAFE client...
🔗 Connected to the Network
Downloading file "slow-blow-debug.txt" with address 80d6abd59b44971972899968ab52337602a3f61babf65fdae2c0115bfb289267
Successfully got file slow-blow-debug.txt!
Writing 1539 bytes to "/home/josh/.safe/client/slow-blow-debug.txt"
I am not having immediate success with uploading data; could all nodes be full, @joshuef?
Did not store file "15MB.jpg" to all nodes in the close group! Network Error Outbound Error.
Writing 8 bytes to "/home/josh/.safe/client/uploaded_files/file_names_2023-07-17_11-16-55"
If so I will add a few more and see if it remedies the situation.
I managed to upload and download a 215K text file an hour ago. Took a while though.
I’m not seeing nodes full, but we are now down to 8/droplet!
Thanks for checking!
I am not understanding this: if it is memory related, shouldn’t the entire droplet fail, i.e. all nodes on the droplet, once the droplet ran out of memory?
Or does a node dying release memory for all the remaining ones, and so on until there is only one… then none?
Very likely, and I am so keen to get to QUIC here; it will do so much for us. Mem is one issue for sure.
Aye, exactly that: the OS kills a high-mem process and the others are free to continue.
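A toy model of that cascade, to illustrate the mechanism being described: when a droplet runs over its memory budget, the OOM killer reaps the single hungriest process and the rest carry on. The function, the 1024 MB budget, and all the numbers here are made up for illustration; this is not how the actual testnet droplets were sized.

```python
def oom_cascade(node_mem_mb, budget_mb):
    """Toy OOM-killer: while total memory exceeds the machine's budget,
    kill the single highest-memory node; survivors keep running.
    Returns the surviving nodes' memory use (MB), ascending."""
    nodes = sorted(node_mem_mb)
    while nodes and sum(nodes) > budget_mb:
        nodes.pop()  # reap the biggest consumer; the others continue
    return nodes

# Five nodes, one of them spiking, on a hypothetical 1024 MB droplet:
survivors = oom_cascade([120, 150, 130, 700, 140], budget_mb=1024)
print(survivors)  # → [120, 130, 140, 150] — only the spiking node died
```

So one spike doesn’t take the whole droplet down at once; it just thins out the node count, which matches the gradual drop from 20 to 11 to 8 per droplet seen in the thread.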
It’s spikes we see that might be assuaged by more mem on the machines, but it looks like we’re misusing a kad event here and triggering a lot of traffic. There’s a PR that I think gets us in a good place for another wee testnet to see about stability there.
We’re still waiting on some libp2p fixes to further improve things, but we’ve made some dents in there ourselves too.
FYI, I’m thinking I’ll retire our nodes in this network soon enough. It’s done very well, and we’ve tackled a bunch of things this has raised w/r/t stability. So thanks, everyone, for participating!
I understand this is not a priority, but am I right in thinking that at some point we’ll have a system background process connecting the client to nodes, so that this doesn’t have to happen every time we upload or download files?
What does “this” refer to here?
The fact that the client needs to connect to 20 peers before uploading/downloading a file. Couldn’t that be done once (and maintained/updated) in the background by a system process?
Ah right. Yeh, the knowledge could be kept somewhere and reused. There’s no guarantee that they’d all still work, so topping up to a total of 20 would be needed, but it should speed things up a good bit.
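That cache-and-top-up idea could be sketched roughly like this. Everything here is hypothetical — the cache file name, the liveness probe, the discovery function, and the target of 20 are stand-ins, not the real client’s API:

```python
import json
import random
from pathlib import Path

CACHE = Path("peer_cache.json")  # hypothetical on-disk peer cache
TARGET = 20                      # working contacts the client wants

def load_cached_peers():
    """Read previously known peers, if any were saved."""
    return json.loads(CACHE.read_text()) if CACHE.exists() else []

def connect(peers, is_alive, discover):
    """Reuse cached peers that still answer, then top up with freshly
    discovered ones until TARGET working contacts are held."""
    alive = [p for p in peers if is_alive(p)]
    while len(alive) < TARGET:
        alive.append(discover())
    CACHE.write_text(json.dumps(alive))  # refresh the cache for next run
    return alive

# Toy run: half the cached peers have gone away since last time.
cached = [f"peer-{i}" for i in range(TARGET)]
alive = connect(cached,
                is_alive=lambda p: int(p.split("-")[1]) % 2 == 0,
                discover=lambda: f"fresh-{random.randrange(1000)}")
print(len(alive))  # → 20
```

The point of the sketch: a cold start pays the full discovery cost, but a warm start only pays for the peers that went stale, which is the speed-up described above.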
What about a daemon/service running in the background, keeping up all those connections?