Pre-Dev-Update Thread! Yay! :D

@StephenC Maybe if people want the weekly Thursday (Friday for me) update, the mini updates could just be combined into the progress made through the week, without each person/section doing a separate update.

In essence, a summary of the mini updates in a way that flows. The mini updates are great, and even just inserting the week's mini updates into one weekly update would prob suit a number of people. But combining them to flow better would work well. That way just one person has an hour's task at hand. Not suggesting anyone in particular, but I know one person who did the mini updates :slight_smile: and it's prob easier for them.


Thanks all.

Question - are you guys expecting a testnet next week, provided no major bugs appear? As of now it looks very likely?



Yes! Stephen said that Thursday or Friday we would see a testnet, but it was postponed to next week, so it is very likely we'll have a testnet next week!


I vote no to this proposal; I'd much rather have the team doing more productive things.


Me too. We’re so far beyond weekly updates. Now any one of us can get our hands dirty in the actual network and submit our own updates, so to speak.

It’s show and tell, in real time.

For those who have no inclination to do some testing, let me break it down for you. Update: wait for the next testnet, then read the testnet thread. It’s eye-opening, and everything you need to know will be revealed within.

This train is on the right track, and moving fast, cheers!


I think what we have is fine and we just need to get used to checking Updates from MaidSafe HQ rather than a traditional update. Perhaps those could be announced on Twitter to make them more accessible.


Ok troops - it looks like we will not see a testnet before next Tuesday at the earliest.

So until then, those of us who want to continue testing and working on our own projects could help by reporting exactly which combos of safe CLI, Node and auth they have any success with.

Cos I’m struggling even with safe 0.26.0 and node 0.36.0. Also I am unsure for which versions we still need auth. I get a lot of “unable to bootstrap to the network” messages, which I should report in a more structured manner.


No idea who you mean! Lol, no, good points from everyone above. Maybe I can make the HQ updates a little more substantial - it all depends on the update really, as these last 2 weeks have really been simply 3 things - AE, OOM and DBCs. There’s only so many times I can reword roughly the same updates for them, so I feel like I’m being a bit repetitive sometimes. That said, there was the update yesterday with the good progress with OOM in particular, but for me to summarise the week’s progress for an update today would basically just be a re-hash of that, plus the fact that we’ve had to slow down because of a few people being off.

As @dreamerchris says, yes that’s certainly our target. Usual disclaimers apply as always, but we think we’re on course to release next week. We’re not going to be able to start internal testing today now, so it all depends on what that shows up. Hopefully anything that does come up we can tackle in the course of next week and push out v6.


The --for-cli flag was added here, so that would have been released in v0.23.3. You shouldn’t need to install or use auth for anything from that release onwards.
There have been quite a few updates to sn_node and sn_client (sn_messaging version changes) over the last couple of weeks. The sn_client ones have not been reflected in sn_api / sn_cli, so I would go back to CLI v0.26.0, which I think should be compatible with node v0.42.7 or v0.42.6. I think those are the v5 releases, so that was the last time we checked them all to make sure they were in sync and working (we’ll bring cli/api in sync again once client changes have settled down).


OK I will load up these versions and report back later - thanks @StephenC


OK so I did this…

willie@gagarin:~/projects$ safe -V
sn_cli 0.26.0
willie@gagarin:~/projects$ safe node bin-version
sn_node 0.42.6
willie@gagarin:~/projects$ safe node killall
Error: Failed to stop nodes (sn_node) processes: sn_node: no process found

willie@gagarin:~/projects$ safe node run-baby-fleming
Storing nodes' generated data at /home/willie/.safe/node/baby-fleming-nodes
Launching local Safe network...
Launching with node executable from: /home/willie/.safe/node/sn_node
Version: sn_node 0.42.6
Network size: 11 nodes
Using RUST_LOG 'sn_node=debug'
Launching genesis node (#1)...
Connection info directory: /home/willie/.safe/node/node_connection_info.config
Genesis node contact info: ["","","","","","","","","","",""]
Common node args for launching the network: ["-vv", "--idle-timeout-msec", "5500", "--keep-alive-interval-msec", "4000"]
Launching node #2...
Launching node #3...
Launching node #4...
Launching node #5...
Launching node #6...
Launching node #7...
Launching node #8...
Launching node #9...
Launching node #10...
Launching node #11...

and I saw this in another terminal

willie@gagarin:~$ tail -f ~/.safe/node/baby-fleming-nodes/sn-node-genesis/sn_node.log 
[sn_node] INFO 2021-05-27T15:08:32.325082943+01:00 [src/bin/] 

Running sn_node v0.42.6
[sn_node] ERROR 2021-05-27T15:08:32.385751840+01:00 [src/bin/] Cannot start node due to error: Logic("Config error: Invalid Ed25519 public key bytes"). If this is the first node on the network pass the local address to be used using --first. Exiting

You may want to make sure to clean up the ~/.safe/node/baby-fleming-nodes folder, as the rewards key format was changed between that node version and later ones.


Thank you @bochaco
Once I deleted ~/.safe/node/baby-fleming-nodes, I can create a new Safekey and put files to baby-fleming with sn_node v0.42.6 and CLI 0.26.0.


Accidentally posted in the update thread before I realized that’s not allowed, so I deleted it there, but I have to say…

This is why I love this project despite the long wait and the complete refactorings that threw out years of work. From the start it wasn’t even about crypto payments, yet they could end up doing that better than anyone else, and without the environmentally unfriendly electricity cost. For something that was only recently announced as an addition to the main functionality, this is really amazing.


Good work team. Much to be excited about!


:blush: :smiley: :grinning: I’m smiling a lot these days.


Fantastic update, thank you all. One Q about the 19,000 tps - does this figure need to cover PUT payments, i.e. does 19,000 = PUT payments per sec + other p2p payments per sec? Or is the 19,000 tps independent of the number of PUTs per sec?


Yes, specifically this test is 1 input paying 100 outputs, and it takes: test tests::bench_reissue_1_to_100 ... bench: 5,289,357 ns/iter. So this is independent of anything. These can also happen concurrently, so I would expect we can push this figure significantly higher.
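For anyone wanting to check where the ~19,000 tps headline figure comes from, here is a back-of-the-envelope calculation from the benchmark number quoted above (my own arithmetic, assuming a single sequential minter; concurrency would push it higher):

```python
# Derive outputs/sec from the quoted bench_reissue_1_to_100 result:
# one reissue (1 input -> 100 outputs) takes 5,289,357 ns.
NS_PER_ITER = 5_289_357
OUTPUTS_PER_ITER = 100

reissues_per_sec = 1_000_000_000 / NS_PER_ITER          # ~189 reissues/s
outputs_per_sec = reissues_per_sec * OUTPUTS_PER_ITER   # ~18,906 outputs/s

print(f"{reissues_per_sec:.0f} reissues/s, {outputs_per_sec:.0f} outputs/s")
```

Rounding ~18,906 outputs/s gives the "19,000 tps" figure.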

The trade-off is that we need to check an XOR address per input (to see it’s not spent). That’s an async call, so we can probably have a large number of these per second. The clear thing is this is significantly faster than Visa and an awful lot faster than Bitcoin.

Overall I feel we can really expand this easily; even with just BLS batch sigs (which we need to code, as threshold_crypto does not have these in place) and some parallelism, I would think we could easily 2-3X these figures.

What we are doing here is also (in addition to normal crypto payments) allowing the creation of PUT contracts/payments. So these would have the same throughput, i.e. we can get 100,000 chunks to go through self-encryption, get the names, and then mint the DBCs for them in under a second. Then we can PUT at our leisure. That algo is in docs right now and going into POC any day. I hope T7 has that in place.


Why 100 outputs?

My expectation is:

  • one t per PUT to credit the section wallet
  • upon section split, N x t to issue a reward to each of N nodes in the section

Maybe the 100 is just part of the test rig?


Yes, this is a benchmark for us. All part of testing. We will increase the benchmarking as we go, to show many different processes, i.e. mint several thousand inputs against hundreds of thousands of outputs and so on.