Update 02 February, 2023 [The feb2 testnet - Offline]

Teeny tiny testnet time! Following a successful comnet last week (thanks @josh :metal:) we’re rolling out a new testnet with 1GB nodes. And this time we’re allowing tiny nodes from home too. With additional stabilisation fixes over the last couple of weeks, we’re hopeful of seeing a successful split - so we encourage as many of you as possible to jump on board.

General progress

@anselme reports encouraging news on DKG (elder voting process), forcing DKG termination and key generation whenever a SAP (shared information about current membership) is updated. Stuck votes have been a problem for a while and, while still being tested, this looks like a definite step forward.

The Davids @dirvine and @davidrusu have been exploring the stable set idea introduced last week to see what a non-DKG/BLS elder set could look like and how that could handle forks.

@bochaco is looking at in-memory storage of data and seeing how it performs compared to data written to disk.

@oetyng has been working on simplification of code, especially in comms. In addition to that we’ve been looking into ways to have messages between clients and nodes forwarded without expensive deserialisation at the elders.

And @chriso and @roland have set up OpenSearch on AWS and are just tweaking it to enable highly detailed tracing.

Testnet Ahoy! :sailboat:

OK mateys, this week we’ve launched a new 42 node 1GB testnet into the briny. With a favourable wind, you may be able to join with nodes from home, or from your favourite cloud provider. One advantage of small nodes is that free cloud VMs should now be more viable.

The last comnet was encouragingly stable and apparently failed after filling up with no new nodes to join. Obviously, with small nodes this can happen quite quickly. So this time we’ll be looking to see if we can get a second split to happen without the network falling over. We’ll also be checking out

  • how easy it is to join from home
  • whether the joining-node memory issue has been sorted
  • whether performance is affected as the network fills up
  • whether DBC transfers are working properly
  • how smaller nodes (min 1GB, max 2GB, with elder storage) are working out

For the first time, we have an OpenSearch server set up to help us monitor this testnet. OTLP functionality is built into the safe binary, so that’s another thing we’ll be looking at.

Hopefully we will see a second split, in which case we will be able to test improvements to the relocation process we’ve been working on. You can see if your node has relocated by looking for RelocateStart and RelocateEnd log messages.
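If you want to watch for those messages, a filter along these lines should do it. This is a sketch: the log directory shown is an assumption based on the default node data path mentioned below, and the log lines here are simulated for illustration.

```shell
# Assumed default log location for a home node (adjust for your setup):
LOG_DIR="$HOME/.safe/node/local-node"

# Simulate a couple of log lines rather than reading a real node log:
printf '%s\n' \
  'INFO sn_node: RelocateStart to new section' \
  'INFO sn_node: RelocateEnd joined new section' > /tmp/sample-node.log

# The same filter works on the real logs under $LOG_DIR:
grep -E 'RelocateStart|RelocateEnd' /tmp/sample-node.log
```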

Getting involved

Once again, so we can test what we need to test, the CLI is limited to files of less than 10MB.

To get involved you can follow these instructions to set up the safe CLI. The testnet-name is feb2, and the recommended safe version is 0.69.0.

Joining as a node

To join as a node, once safe is installed and you’ve switched to feb2 as per the instructions, run:

safe node install

and then:

safe node join --network-name feb2 [optional flags]

Success is most likely with a cloud VM, but during the previous comnet folks managed to get in using nodes from home, both with a VPN and without one, so it’s definitely worth a try. Start with:

safe node join --network-name feb2

Then try combinations of --public-addr <your public address>:12000 and --skip-auto-port-forwarding to see if you can find one that works. You may also want to set up port forwarding on your router, but please note that NAT traversal is not implemented and there’s no guarantee of success. This thread could be helpful: SBC Network? NAT nightmares.
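As a rough sketch, the combinations to work through could be generated like this (the address 203.0.113.7 is a placeholder; substitute your own public IP). The script only prints the commands so you can run them one at a time:

```shell
# Placeholder public address; replace with your own before trying for real.
PUBLIC_ADDR="203.0.113.7"

# Build the list of join commands to try, from no extra flags to both:
cmds=$(for flags in \
  "" \
  "--public-addr ${PUBLIC_ADDR}:12000" \
  "--skip-auto-port-forwarding" \
  "--public-addr ${PUBLIC_ADDR}:12000 --skip-auto-port-forwarding"
do
  echo "safe node join --network-name feb2 ${flags}"
done)

echo "$cmds"
```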

Submitting Traces

We now have Open Telemetry enabled for the node binary, so you have the option to submit traces/logs from your node to us. The traces are stored in an index in an OpenSearch cluster we’ve deployed on AWS. To submit your traces, before running the node join command, set the following environment variable like so:

export OTEL_EXPORTER_OTLP_ENDPOINT="http://dev-testnet-infra-543e2a753f964a15.elb.eu-west-2.amazonaws.com:4317"

We’re still getting to grips with how to make use of the data in OpenSearch, so at the moment we don’t have something to show, but soon we will offer read only access to the dashboards. We’d also be interested in any contributions here from people who know or who have worked with things like Elastic/Kibana, who might be able to show us how to do interesting things with the data. If you have any experience here and would like to help, please let us know :muscle: :bowing_man: !

What’s helpful and reporting issues

Right now, keep uploads small: less than 10MB per file.
A temporary limit has been added, and you will get an error if you exceed it.

If you are consistently seeing issues PUTting data or retrieving data you have PUT, please run your command with RUST_LOG=sn_client prefixed to it (on Linux/Mac at least). The output there, and MsgIds that have been sent/failed will be key to debugging.
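For instance, the real invocation would look like `RUST_LOG=sn_client safe files put <file> 2> put.log`. The sketch below simulates a line of client output (modelled on an error reported later in this thread) purely to show one way of pulling the MsgIds out afterwards:

```shell
# Simulated client output; in reality this would come from
# `RUST_LOG=sn_client safe files put <file> 2> /tmp/put.log`.
printf 'Error: cmd MsgId(3b08..ae25) got 0 of 7 ACKs\n' > /tmp/put.log

# Extract the message IDs that were sent/failed:
grep -o 'MsgId([0-9a-f.]*)' /tmp/put.log
```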

We’ll try to report back stored data sizes at nodes as we go, so we can see if there’s a correlation between capacity and reliability.

Please note, nodes are not evenly distributed in XorName space, so with a limited crop, we will not see data uniformly spread across them.

May you have fair winds and following seas.

This is what we’re using to verify data-storage just now.


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!


first !!!


High time I took another podium ← Is it time for your meds again?

So I did :slight_smile:


This test net sounds encouraging already. Great work Maidsafe. I may try to join tonight to help with that second split!


Here’s the initial storage layout. (Worth noting that there have been several rounds of tests as more nodes were added. Earlier nodes will only clear up irrelevant data when they’re nearing the 2GB storage limit, so don’t expect to use these numbers to check out data spread, I’m afraid.)

Storage space usage per node for feb2:

```
node-10: 690M   total
node-11: 604M   total
node-12: 627M   total
node-13: 471M   total
node-14: 565M   total
node-15: 325M   total
node-16: 289M   total
node-17: 624M   total
node-18: 796M   total
node-19: 248M   total
node-1: 302M    total
node-20: 633M   total
node-21: 1.2G   total
node-22: 564M   total
node-23: 671M   total
node-24: 474M   total
node-25: 503M   total
node-26: 325M   total
node-27: 669M   total
node-28: 252M   total
node-29: 518M   total
node-2: 694M    total
node-30: 588M   total
node-31: 419M   total
node-32: 509M   total
node-33: 592M   total
node-34: 280M   total
node-35: 480M   total
node-36: 542M   total
node-37: 480M   total
node-38: 472M   total
node-39: 493M   total
node-3: 1.2G    total
node-40: 547M   total
node-41: 519M   total
node-42: 259M   total
node-43: 309M   total
node-4: 322M    total
node-5: 1.2G    total
node-6: 322M    total
node-7: 1.2G    total
node-8: 504M    total
node-9: 684M    total
```
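A per-node summary like the one above could be produced with a small loop along these lines. This is a hypothetical sketch: the `node-*` directory layout and base path are assumptions, so adjust them for your own setup.

```shell
# Summarise disk usage for each node data directory under a base path.
node_usage() {
  for dir in "$1"/node-*; do
    [ -d "$dir" ] || continue
    printf '%s: %s\ttotal\n' "$(basename "$dir")" "$(du -sh "$dir" | cut -f1)"
  done
}

# On a testnet host this might be:
node_usage "$HOME/.safe/node"
```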

What is that?


All good so far, but I’m getting a 404 error when attempting to download sn_node version 0.78.1 @ https://sn-node.s3.eu-west-2.amazonaws.com/sn_node-0.78.1-x86_64-unknown-linux-musl.tar.gz


Data they do not need to hold. I.e., a 20 node network starts and data is put. Then 20 more nodes are added, and each of the “original” nodes is now responsible for only half the xorname space (on avg; ymmv), so it may hold data it is no longer “responsible” for (not one of the 4 closest nodes).

They keep this data, and can use it to update new nodes on churn, as well as the main data copy holders.

They will only start clearing it out when they’re running out of space for data they are responsible for.
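The closeness idea can be shown with a toy sketch. This uses 8-bit names for readability (real XorNames are much wider), and the node set and data name are made up: a node is “responsible” for data whose name it is among the 4 closest to, by XOR distance.

```shell
# Toy data name (8 bits for readability; real XorNames are 256-bit).
data=0x5A

# Print the 4 node names closest to $data by XOR distance.
closest4() {
  for n in "$@"; do
    printf '%d %s\n' "$(( n ^ data ))" "$n"
  done | sort -n | head -4 | awk '{print $2}'
}

# Hypothetical node names; the 4 printed are responsible for $data.
closest4 0x10 0x58 0x5B 0x70 0xA0 0x5E 0x22 0x59
```

When more nodes join near a data name, older nodes fall out of its closest-4 set, which is exactly the “no longer responsible” situation described above.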

Does that make sense?


The latest release of node is 0.73.2: Release Safe Network v0.15.2/v0.17.0/v0.2.0/v0.78.1/v0.73.2/v0.76.0/v0.69.0 · maidsafe/safe_network · GitHub. Where did you get that link? It’s not in the OP, is it? Looks like I’ll need to update it.


same here it throws this up when i try to install node

ubuntu@oracle-1:~$ safe node install
Downloading sn_node version: 0.78.1
Downloading https://sn-node.s3.eu-west-2.amazonaws.com/sn_node-0.78.1-aarch64-unknown-linux-musl.tar.gz...
   0: Error downloading release from 'https://sn-node.s3.eu-west-2.amazonaws.com/sn_node-0.78.1-aarch64-unknown-linux-musl.tar.gz'
   1: UpdateError: Download request failed with status: 404


Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.


It does, and sounds clever indeed.

And I am very happy to see splitting being tested and enthusiastic to participate myself. However I’ll try to hold my horses until a few of us have joined from cloud VMs, as I don’t want to break it too soon. :sweat_smile:


try safe node install -v 0.73.2


:+1: Thanks


Either already broken or I made something wrong again.

d:\SN>safe files put 85329768_p0.jpg
FilesContainer created at: "safe://hyryyryinzdqskmkbb8qw5jemcjwqsoxnzsxspxr7ezaz
| E | 85329768_p0.jpg | <ClientError: Did not receive sufficient ACK messages from Elders to be sure this cmd (MsgId(3b08..ae25)) passed, expected: 7, received 0.> |

:man_facepalming:t2: or :partying_face: ?

safe node join --network-name feb2 --skip-auto-port-forwarding
Storing nodes' generated data at /home/safe/.safe/node/local-node
Starting a node to join a Safe network...
Starting logging to directory: "/home/safe/.safe/node/local-node/"
The opentelemetry traces are logged under the name: sn_node_Kz4cAniBhJ
Node started
OpenTelemetry trace error occurred. Exporter otlp encountered the following error(s): the grpc server returns error (The service is currently unavailable): , detailed error message: error trying to connect: tcp connect error: Connection refused (os error 111)
safe@ubuntu-2gb-hel1-2:~$ OpenTelemetry trace error occurred. Exporter otlp encountered the following error(s): the grpc server returns error (The service is currently unavailable): , detailed error message: error trying to connect: tcp connect error: Connection refused (os error 111)

Checking now; looks like the OTLP server is not responding, but your node should be OK.


Very odd. Not sure what’s going on there. @chriso can you cast your eye over this when you have a sec? :bowing_man:


I had that too and still a 404 with safe update.


That JustWorks - thanks @neik
I have incremented your beer account.
You now only owe me 1.7 days production from Williams Bros of Alloa.


Am I blind and missing something in syntax? Or does this happen to somebody else?

[petr@blackbox Dokumenty]$ safe node join --network-name feb2 --skip-auto-port-forwarding --local-addr --public-addr
Storing nodes' generated data at /home/petr/.safe/node/local-node
Starting a node to join a Safe network...
error: Found argument '--public-addr' which wasn't expected, or isn't valid in this context

	If you tried to supply `--public-addr` as a value rather than a flag, use `-- --public-addr`

    sn_node --verbose --skip-auto-port-forwarding --local-addr <LOCAL_ADDR>