Update 11th April, 2024

Many of you will have seen that we have a new testnet release out, following successful poking at the recent alpha network. Thanks to all those who have tried it so far. There seem to be one or two teething problems, and we’re currently (at the time of writing) looking into the dead faucet and possible revival mechanisms.

We are pleased to say the node manager is performing well now, so we recommend it as the default way of deploying nodes, as it provides several additional controls, particularly around upgrading. However, the manual method (typing safenode to launch a node) will work too.

General progress

@roland has been making some improvements to the files upload process, including returning a summary after a successful upload, and adding controls over where the output is sent (printed to screen or directed elsewhere). At a lower level, he’s made some changes to the sync process, updating the Rust libraries we use there. Plus he’s been working with @bzee on some AutoNAT/hole punching possibilities.
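As a purely hypothetical illustration of the shape of that change (not the actual CLI types or flags), an upload that returns a summary which the caller can print or write elsewhere might look like this:

```rust
use std::path::PathBuf;

// Hypothetical types for illustration only; the real CLI's names will differ.
struct UploadSummary {
    files_uploaded: usize,
    chunks_stored: usize,
    total_cost_nanos: u64,
}

enum SummaryOutput {
    Stdout,
    File(PathBuf),
}

fn report(summary: &UploadSummary, out: &SummaryOutput) -> std::io::Result<()> {
    let text = format!(
        "uploaded {} files ({} chunks) for {} nanos",
        summary.files_uploaded, summary.chunks_stored, summary.total_cost_nanos
    );
    match out {
        // Print the summary to screen...
        SummaryOutput::Stdout => println!("{text}"),
        // ...or direct it elsewhere, e.g. a file chosen by the caller.
        SummaryOutput::File(path) => std::fs::write(path, text)?,
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let summary = UploadSummary { files_uploaded: 3, chunks_stored: 12, total_cost_nanos: 42 };
    report(&summary, &SummaryOutput::Stdout)
}
```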

Indeed, @bzee has tested AutoNAT over TCP, but combining it with hole punching is problematic just now, so we’re seeing how we might work around this. For QUIC, we’re still waiting for libp2p to provide a more mature Rust implementation of their Go library.
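For the curious, here is a minimal sketch of the building block being tested, using rust-libp2p’s autonat module (assuming the autonat feature is enabled; this is not our actual node wiring):

```rust
use libp2p::{autonat, identity, PeerId};

// AutoNAT asks other peers to dial us back, so the node can learn whether it
// is publicly reachable -- the signal that feeds hole-punching decisions.
fn build_autonat() -> autonat::Behaviour {
    let keypair = identity::Keypair::generate_ed25519();
    let local_peer_id = PeerId::from(keypair.public());
    // Default config: periodically probe other peers and cache the reachability result.
    autonat::Behaviour::new(local_peer_id, autonat::Config::default())
}

fn main() {
    let _behaviour = build_autonat();
    println!("AutoNAT behaviour constructed");
}
```

In the real node this behaviour would be composed into the swarm alongside the other protocols, which is the part that is still being worked through.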

On the testnets, @chriso has been digging into issues with ARM builds and anomalies on Fedora and Windows. He fixed an issue where the faucet and testnet were mismatched, added a --safenode-manager-version argument to testnet-deploy, and is working on instructions showing how to upgrade nodes on beta without having to take anything down.

@anselme has been testing out the DAG code for auditing spends. He’s added tweaks for double spends and poisoning. The latter means ensuring that spends identified as bad (i.e. double spends) that arrive much later do not poison the DAG (imagine you accidentally try to re-spend money you sent elsewhere a year ago: the second spend will not work, but any child spends off your initial spend will be fine). All is looking very nice here.
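To make the poisoning rule concrete, here is a purely illustrative sketch (not the real spend DAG types): a late, conflicting spend of an already-spent input is recorded as a double spend, but the original accepted spend, and therefore all of its child spends, is left untouched:

```rust
use std::collections::{HashMap, HashSet};

// Illustrative only: "input" is the coin being spent, "id" identifies a spend.
struct Spend {
    id: String,
    input: String,
}

#[derive(Default)]
struct SpendDag {
    accepted: HashMap<String, Spend>,      // spend id -> accepted spend
    spent_inputs: HashMap<String, String>, // input -> id of the first accepted spend
    double_spends: HashSet<String>,        // inputs later seen spent a second time
}

impl SpendDag {
    /// Returns true if the spend was accepted into the DAG.
    fn insert(&mut self, spend: Spend) -> bool {
        // Has this input already been spent by a *different* spend?
        let conflict = match self.spent_inputs.get(&spend.input) {
            Some(existing_id) => *existing_id != spend.id,
            None => false,
        };
        if conflict {
            // Record the double spend, but leave the original spend (and
            // therefore all of its child spends) untouched: the DAG is not poisoned.
            self.double_spends.insert(spend.input);
            return false;
        }
        self.spent_inputs.insert(spend.input.clone(), spend.id.clone());
        self.accepted.insert(spend.id.clone(), spend);
        true
    }
}

fn main() {
    let mut dag = SpendDag::default();
    dag.insert(Spend { id: "s1".into(), input: "coin-a".into() }); // accepted
    let ok = dag.insert(Spend { id: "s2".into(), input: "coin-a".into() }); // late double spend
    assert!(!ok);
    assert!(dag.accepted.contains_key("s1")); // original spend (and its children) unaffected
}
```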

Meanwhile @bochaco has been testing a Sybil prevention algorithm based on a recent academic paper that investigates a potential attack on IPFS. When a Sybil attack is detected, nodes are permitted to widen their horizons and connect with others outside of the 20 closest, so that potentially corrupt close nodes can be overruled/ignored. Next step, implementation!
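As a purely illustrative sketch of the widening idea (assuming peers are ranked by XOR distance to the target; the paper and the eventual implementation will differ), the selection step might look like this:

```rust
const CLOSE_GROUP_SIZE: usize = 20;

// candidates: (XOR distance to the target, peer id), computed elsewhere.
// If a Sybil attack is suspected (detection itself is out of scope here),
// widen the horizon so honest-but-further peers can outvote a planted close group.
fn select_peers(mut candidates: Vec<(u64, String)>, sybil_suspected: bool) -> Vec<String> {
    candidates.sort_by_key(|&(dist, _)| dist);
    let take = if sybil_suspected {
        CLOSE_GROUP_SIZE * 2
    } else {
        CLOSE_GROUP_SIZE
    };
    candidates.into_iter().take(take).map(|(_, id)| id).collect()
}

fn main() {
    let candidates: Vec<(u64, String)> =
        (0..60).map(|i| (i as u64, format!("peer-{i}"))).collect();
    let normal = select_peers(candidates.clone(), false);
    let widened = select_peers(candidates, true);
    println!("normal: {} peers, widened: {} peers", normal.len(), widened.len());
}
```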

@jason_paul has implemented setup and deployment tasks for node upgrades involving testnet-deploy and the node manager.

As well as orchestrating all the other activities, @joshuef has been further refining the process of releasing alpha networks based on internal testnets, and making those fully compatible with the beta net, so we can upgrade smoothly. He checked out the effect of adding extra nodes to the previous alpha as it filled up, but alas it was already too full to test the effects properly. We’ll try again on the next one. He did some refactor work to tweak how we choose replication distance and raised a PR so that a new wallet is generated if none exists, following a community suggestion.

And finally @qi_ma investigated failing data location verification tests, tweaked the store cost algorithm, and looked at logging anomalies. Connected peers (reported by libp2p and currently collected via an RPC call) shows how many peers are currently connected to/from our node, whereas RT (Routing Table) shows infrastructure health (i.e. how many nodes we know of, but are not necessarily connected to right now). Plus, in response to a community discussion, Qi raised a PR to notify peers when a node has been flagged as bad: `warn!("Peer {detected_by:?} consider us as BAD, due to {bad_behaviour:?}.")`.
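To make the distinction between the two metrics concrete, here is a purely illustrative sketch (not the node’s real metrics code) of the two views being reported:

```rust
use std::collections::HashSet;

// Illustrative only: two different views of the same node's neighbourhood.
struct PeerView {
    connected: HashSet<String>,     // peers with a live connection right now (libp2p's view)
    routing_table: HashSet<String>, // peers we know of, connected or not (the Kademlia RT)
}

impl PeerView {
    fn report(&self) {
        // The connected count can shrink and grow minute to minute; a large,
        // stable routing table means the node still knows plenty of the
        // network even when few connections are open.
        println!(
            "connected: {}  routing table: {}",
            self.connected.len(),
            self.routing_table.len()
        );
    }
}

fn main() {
    let view = PeerView {
        connected: ["peer-a", "peer-b"].iter().map(|s| s.to_string()).collect(),
        routing_table: ["peer-a", "peer-b", "peer-c", "peer-d"]
            .iter()
            .map(|s| s.to_string())
            .collect(),
    };
    view.report();
}
```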

62 Likes

First again
… i should go out

20 Likes

I think I made the podium! And even after reading!

Fantastic update guys! Loving the progress and participating in the testnets.

23 Likes

Nice update… on an airplane… limited bandwidth… yay to being 3rd on the podium! :smiley:

23 Likes

This is the highlight of my Thursdays!

I cannot wait to see this in the wild. I find myself scratching my head as I read log messages. I think that this will help me a lot.

21 Likes

Nice work team! Step by step the :ant: network is materializing & solidifying.

Cheers :beers:

18 Likes

Thanks so much to the entire Autonomi team for all of your hard work! :man_factory_worker: :man_factory_worker: :man_factory_worker:

And also to all of the moderators and testers! :man_factory_worker: :man_factory_worker: :man_factory_worker:


9 Likes

Great update, the pace is amazing, great job :point_left: :ok_hand: :blush:

Congratulations to the team, a big thank you to the testers and to our moderators who work like ants! :clap: :clap: :clap: :grinning:

I’d like to be sure: does this mean that a double spend is detected and blocked at the moment someone attempts it, or can they make the double spend and it is only detected later, when they make a further payment?

This is strong news, fantastic progress :ok_hand: :clap:


No Safe, no wave.

6 Likes

Thanks, as ever, to ALL who have worked to get us to where we are now.
The response time on issues raised by the testers is truly impressive, and not to forget the other work going into getting a solid CLI experience for the node-manager. Once we are happy with that, I foresee a GUI coming right down the line.

You have no idea how much you lot cheer me up - even on a day like today spent mostly underneath a motor and almost totally AFK.

10 Likes

Could this be used as an amplification attack vector? One malicious node could potentially upset many nodes.

A malicious node joins the network as a good node doing its job, and then after a while sends out these messages to all its neighbouring nodes, at least 20 of them I believe. So one bad node then tells the 20 that all 20 are bad. Multiply that by, say, 20 to 50 malicious nodes placed randomly across the network, then do that from multiple PCs, and it potentially causes trouble. And with over 50 machines (2,500 nodes), statistics says it will happen that 2 or more malicious nodes each end up saying that a particular node is bad, that is, 2 or more malicious nodes are close enough together that they have at least one good node as a common neighbour.

Note: even with a million nodes, having 2 malicious nodes share a good node as a common neighbour is a common occurrence with only 2,500 malicious nodes. 2,500 is 1 in 400 nodes, and statistically, repeating this process once every hour will see it happen many times a day.
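As a rough back-of-the-envelope check of that (assuming node IDs are uniform and close-group membership is independent, which is only an approximation):

```rust
// With 2,500 malicious nodes out of 1,000,000 (1 in 400) and a close group of
// 20, how many good nodes have 2 or more malicious close-group members?
fn main() {
    let p: f64 = 2_500.0 / 1_000_000.0; // chance a given close-group member is malicious
    let k: f64 = 20.0;                  // close-group size
    let p0 = (1.0 - p).powf(k);               // zero malicious members
    let p1 = k * p * (1.0 - p).powf(k - 1.0); // exactly one
    let p2_plus = 1.0 - p0 - p1;              // two or more
    println!("P(>=2 malicious in a close group) ~ {:.5}", p2_plus);
    println!("Expected such good nodes out of 1M: ~{:.0}", p2_plus * 1_000_000.0);
}
```

That comes out at roughly a thousand good nodes (out of a million) with two or more malicious members in their close group at any one time, which is in line with the point above.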

9 Likes

Thx 4 the update Maidsafe devs

Really need the faucet to keep testing :sweat_smile:, unless somebody has some extra coins available

Great progress, we are so close, keep up the good work

Keep hacking and testing super ants

3 Likes

I think this is more of a warning to the node, and not an actionable thing. Even if it were…

A - “Hey, B, I think you’re doing bad due to: XYZ”

B - Huh, weird. XYZ looks good from my end.
B checks with C and D, who report he’s doing fine.

B - “Hey, A, everyone else thinks I’m fine. Maybe you’re the issue”

Even if 1/3 of nodes started spitting out “You’ve been reported bad” to everyone, the consensus would still be “I’m a good node”. And if those bad nodes are just spewing “you’re bad” at everyone but otherwise acting “right” and doing what they need to, so be it. If they start acting bad themselves, then they’ll get weeded out of the network.

I don’t see it being an issue, and it will help node operators diagnose issues with their nodes.

I’m sure I’ve missed something, but on the surface, seems fine.

6 Likes

Oh for sure it is one report in most cases and not something that makes a node bad.

But given this can be done at medium scale, and that there will be cases where nodes are targeted by 3 or more malicious nodes, it can create situations where some nodes will actually be tagged as bad, depending on how much the reports from other nodes factor into the badness algorithm. If there were no effect, then why waste time sending out the messages? Just have nodes query other neighbours when they think another node is bad.

My post was more for the devs to take note and make sure these messages cannot cause trouble. It also increases the amount of work neighbouring nodes have to do if a malicious node sends out these messages too often.

7 Likes

Now I’m curious because I thought these messages were happening anyway

“Hey, watch out for E, he’s not performing well for me”
^^ already a thing?

I thought we were just adding a send directly to E to let them know and logging it. Can someone clarify?

1 Like

I thought it was only when a node detected what it thought was a bad node that it would ask other neighbours, and they would then send that message back to the node asking. And my understanding was that the message becomes part of the bad node detection.

And that is why I assumed that, with this idea in the update, the nodes receiving these new messages would count that as a strike against the node being reported on.

1 Like

This is where I think I read it differently. If this is:
A sees B act bad and tells C, then C reports to B that A put him on his naughty list - I don’t like it.

If it’s:
A sees B act bad, and A tells B “You’re messing up, here’s why” - I like that.

If my nodes aren’t behaving, I want to know what the rest of the group is seeing so I can fix it. If I’m getting reported to the group before I have a chance to even know about it? Not a fan.

EDIT:
Nothing is going to prevent a bad node from sending out fake messages saying I’m running a bad node. This just lets us know, from our logs, that it’s happening.

2 Likes

It’s kind of both. If your node is not performing, it will be blacklisted.

That will be reported back to you, so you can try and act on it. And with that information start a new node and hopefully not have a bad time?

We could offer more insight on developing badness… but I’m not sure it’s worthwhile. And attempting to allow bad peers leeway to fix themselves and come back can get into a whole lot of pickles, I think.

So the current “you’ve been bad, figure out why and come back with a new peer id” seems simplest?

7 Likes

Gotcha.

Final question on the matter:
So from my standpoint as a node runner: if I get 1 or maybe 2 of these, maybe I had a bout of bad connections and my node is probably OK. But if I get a flood, I need to fix things myself?

Or

One message means my node has been reported to everyone and that node should be killed?

5 Likes

Yeh, I think so. If one or two blacklist you, it could have been a temporary bump.

Realistically, if we get a flood of these, the node will shut down and error out in some informative fashion (eventually).

This is all still being actively tested so things may well change here based upon feedback from everyone here!

11 Likes