Thank you to everyone who had a look at the SN Testnet Review spreadsheet. I suspect this will be used more and more as we iron out all the little obvious bugs and UX issues with the testnet in each iteration.

We’ve now replied to everyone who logged an error in there.


Hey @nice we hope to officially build for and support arm platforms at some point, but it’s just not a priority right now.

I seem to remember reading comments in this thread from others who have got it working on arm - have a search through and see what you can find.


It is very silent on the apology front isn’t it.



Usual suspects are very quiet, no strength of character to say sorry… but
I don’t think they had good constructive and honorable intentions anyway.

LOLing @ them.


The error you have is that your target architecture was not arm.

We probably will, but it really is super simple to compile and link for arm. There are several posts showing that in the forum. All good.


will do my homework :smiley:


See: GitHub - happybeing/safenetwork-farming: SAFE Network (Test) Farming

and Raspberry Pi SAFE Thread - #7 by nbsp1

PRs welcome!


This is interesting. I believe it has interesting implications for the worst case catastrophe scenarios that have been discussed on the forum. So 7 lose 2 represents a 29% global catastrophic loss of nodes in an instant and the network keeps on ticking like nothing happened, correct? So whether it’s 7 lose 2 or 9 lose 3, or 12 lose 4, we’re still at the limit of maintaining 66% of the original set rather than ensuring that a 2/3 vote can be achieved with the resulting set. This also means that a larger global catastrophe where 50% or up to 80% of nodes are lost requires a network reboot? Seems reasonable as long as reboot has some guarantees.
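A rough sketch of the supermajority arithmetic being described, assuming consensus needs a 2/3 quorum measured against the original Elder set (function name and structure are mine, just for illustration):

```python
import math

def max_tolerable_losses(elders: int, quorum: float = 2 / 3) -> int:
    """How many Elders can vanish at once while the survivors still
    meet a 2/3 supermajority of the ORIGINAL set size."""
    needed = math.ceil(elders * quorum)  # votes required for consensus
    return elders - needed

# The ratios mentioned above: 7 lose 2, 9 lose 3, 12 lose 4.
for n in (7, 9, 12):
    print(f"{n} Elders can lose {max_tolerable_losses(n)}")
```

This matches the "limit of maintaining 66% of the original set" framing: 5 of 7 survivors is ~71%, still above the 2/3 threshold.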

In the past, the 8 copies of redundancy was theorized. I had always presumed that this protected against data loss for a 75% to 80% global catastrophe. I also got the sense that backup sections would allow for up to 16 redundant copies for greater protection (4 copies per section, one primary section and up to 3 backup sections).

Can you shed more light on this?


**We are now taking Fleming Testnet v1 offline**

This is in preparation for the internal test and public deployment of Fleming Testnet v2


It’s better than that: there are 7 Elders but circa 20 nodes in a section, so we could lose 2 from 20 in that case.

There is also catastrophic recovery to consider. A blip can cause a 90% loss, but if those 90% reconnect quickly, that’s different from nodes being lost forever.

Yes, reboot is a serious one. All nodes might end up moved, but the rule is that nodes must never delete data without republishing it first.

So a few different things at play, but recovery from lost consensus should come first I reckon.

Another thing to consider is that a higher replication factor means less distinct data on nodes, so nodes fill very fast with extra duplicates. If we take replication to its logical end, then all nodes hold all data. For now it’s less than that, but what is safe?

So if we can lose 2 Elders and be OK, can we say we can lose 2 copies of data and still be safe, so keep 3 copies?

There’s a lot of tweaking gonna happen here for sure.

In terms of how much protection we get, it’s down to many factors: more copies, more nodes, more admin.

So our goal should be what is the replication factor that makes Data as Safe as the network? Again though a massive amount of things to consider. I will defo take time and describe this with all its side effects as soon as I can. It’s really interesting.
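As a back-of-envelope for the "how many copies is safe" question above, assuming each copy sits on a node chosen independently (which real section-based placement does not satisfy, so treat this as a loose upper-bound sketch, not the network’s actual model):

```python
def all_copies_lost(p_node_lost: float, copies: int) -> float:
    """Rough chance a given chunk disappears when a fraction
    p_node_lost of all nodes vanish at once, assuming copies are
    placed on independent, uniformly random nodes."""
    return p_node_lost ** copies

# 29% instant loss (the "7 lose 2" scenario) with 3 vs 8 copies:
print(all_copies_lost(0.29, 3))  # ~0.024, i.e. ~2.4% of chunks
print(all_copies_lost(0.29, 8))  # ~5e-05
```

Under this naive model, 8 copies only starts to matter for much larger catastrophes (50-80% loss), which lines up with the earlier post’s intuition about the 8-copy design.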


You guys are an unstoppable machine.

Test nets go brrrrrrrrrr.


The first one I get. That’s when there’s not enough data on the network, a safety guard against sybil attacks, and you need to retry later.

~ # safe node join
~ # cat /root/.safe/node/local-node/sn_node.log
[sn_node] INFO 2021-04-13T10:55:18.014731066+00:00 [src/bin/sn_node.rs:104] 

Running sn_node v0.35.5
[sn_node] ERROR 2021-04-13T10:55:18.338546196+00:00 [src/bin/sn_node.rs:110] Cannot start node due to error: Routing(TryJoinLater)

This next one I don’t understand.

$ safe files ls safe://hyryyry6mw7uiufjbwxapfht8fdy3p89jxyrq7iiem9fx8h8xmwr1s1bm9wnra
Error: Failed to connect: ConnectionError: Failed to connect to the SAFE Network: QuicP2p(UnresolvedPublicIp)

Next up. InsufficientBalance.
I thought we were not at this stage yet.
Can anyone hand me a safenet token?

$ safe files put ~/sur/Bòȥxr/ --recursive
Error: NetDataError: Failed to PUT Public Blob: Transfer(InsufficientBalance)

Network is down…just wait for v2


There were a few sections, so why didn’t Testnet v1 continue on its own?


It may have, but the connectivity bug would kill it anyway.
Full AE will also put paid to some, but we are on that.
Then a bunch of smaller stuff, UX fixes, more cmd lines, maybe a browser. These will all continue I think.
Then we will be testing section recovery and consensus recovery.

When this is unfailable, it’s Fleming. It’s gonna be a wild ride this one, we need to strap in!!!

Seriously I think section recovery and consensus recovery may see a bit more of a delay than just a day or so, but we will see. (BTW I think this goes way beyond any network project out there when we do this)


You may want to consider ensuring tokens are put to one side when that is likely to occur. They may need to be swapped for MAID.

Maybe you’re thinking of Maxwell?


They pulled the plug on it again?

V2 of the testnet is going to be released in a day or two.

What went wrong with V1?

It all went to plan.

They gathered enough information from v1 to make improvements and so launch v2.

I’m sure we will get a list of fixes.
