Brainstorm - The Network on the offensive

Some of these overlap but I’m just throwing things around. Feel free to help refine these ideas if they seem at all useful.


Lie and be lied to.

Nodes that have specifically lied during the voting period should themselves be lied to. An attacker cannot be truly certain of the reliability of their machine. Given this reality, if a node is caught lying it can be demoted from elder to adult without being told the reason. The honest elders then relocate the adult to a section that is informed of the transgression. The dishonest node is tagged as such and never allowed to vote. To avoid alerting the attacker, the new section can falsely grant the dishonest node the status of elder while never actually taking the node’s “opinion” into account. This lets us retain the attacker’s resources for routing and data storage while giving it the illusion of progress.
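A minimal sketch of that flow, with entirely hypothetical names (`Node`, `Section`, `handle_caught_liar` are illustrations, not anything from the actual SAFE codebase):

```python
# Hypothetical sketch of "lie and be lied to": a caught liar is silently
# demoted, relocated to an informed section, tagged, then given a sham
# elder role whose votes are never counted.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.role = "elder"             # the role the node *believes* it has
        self.tagged_dishonest = False   # known only to honest elders

class Section:
    def __init__(self):
        self.members = []

    def add(self, node):
        self.members.append(node)

    def tally_votes(self, votes):
        # votes: {node: choice}; tagged nodes are silently ignored
        return [choice for node, choice in votes.items()
                if not node.tagged_dishonest]

def handle_caught_liar(node, quarantine_section):
    node.role = "adult"                 # silent demotion, no reason given
    node.tagged_dishonest = True
    quarantine_section.add(node)        # relocated to an informed section
    node.role = "elder"                 # sham promotion keeps the illusion
```

The key point the sketch captures: from the attacker’s side the node still reads as an elder, while `tally_votes` never lets its vote count.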


Intent discovery.

To discover the intent of younger nodes, the section could at times temporarily raise some adults to the role of elder and issue a voting test identical to any other legitimate voting event. These tests do no harm to the section itself and help to identify malicious players.
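A toy sketch of such a round, assuming a hypothetical `Adult` type and a test whose correct answer the section already knows (none of this is real SAFE code):

```python
# Illustrative intent-discovery round: temporarily treat adults as
# elders, issue a voting test with a known correct answer, and collect
# the ones that answered dishonestly.

class Adult:
    def __init__(self, name, honest=True):
        self.name = name
        self.honest = honest       # hidden ground truth, for the sketch only
        self.temp_role = "adult"

def intent_discovery(adults, issue_test, correct_answer):
    """issue_test(adult) stands in for sending a real-looking voting
    event; returns the adults whose answers did not match."""
    liars = []
    for adult in adults:
        adult.temp_role = "elder"          # temporary promotion
        answer = issue_test(adult)         # looks like any legitimate vote
        if answer != correct_answer:
            liars.append(adult)
        adult.temp_role = "adult"          # quietly demote again
    return liars
```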

The abyss protocol.

This next idea is difficult, if not impossible, because of the way chunks are XOR-addressed based on their encryption.

The idea is to send ONLY DISHONEST nodes to sections of least activity, or, if possible, to sections specifically designed as quarantine that appear to be legitimate. My reasoning is to keep attackers who attempt to flood the network from knowing their nodes have been discovered, which would prompt them or their script to restart and simply try again. By sending them to some sort of network abyss we can slow down, if not entirely prevent, their restarting their nodes for another attempt at malicious behavior.


Any thoughts @maidsafe @mav @neo @tfa and others?



One drawback: if the attacker has two elders in the section, then it’s undone, since the attacker would be watching for this.

Again, interesting and probably better than the above. But is there a way an attacker might detect when this is being done? Remember, open source software means the attacker knows of the test, and it may be too easy to detect.

I am wondering if the bad actor node can detect this by seeing a relocation after it has been bad for “X” responses. In other words, the node knows it’s been relocated, and if this happens after giving a number of bad responses then it assumes it was sidelined.

A section split would have the node relocated to a “subset” of its current section address.


They are certainly interesting ideas

Maybe another idea along these lines


Allow the bad actor to remain and any gossip to it is minimal and gossip from it is ignored. This gives the node the idea things are quiet but has minimal effect.

Then after a set number of events the node is finally evicted. This ties up the attacking node, which thinks it’s being effective for a while, thus reducing the number of times that instance is retried.
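As a sketch, assuming a made-up `QuarantinedPeer` wrapper and an arbitrary eviction threshold (again, illustrative names only):

```python
# Sketch of "keep the bad actor busy, then evict": gossip from it is
# dropped, gossip to it is a trickle that looks quiet-but-normal, and
# after a set number of events it is evicted.

EVICT_AFTER = 100  # arbitrary threshold for this sketch

class QuarantinedPeer:
    def __init__(self, node_id):
        self.node_id = node_id
        self.events_seen = 0
        self.evicted = False

    def gossip_from(self, message):
        # Anything the bad actor sends is silently ignored.
        return None

    def gossip_to(self):
        # Send just enough traffic to keep the illusion alive, then evict.
        self.events_seen += 1
        if self.events_seen >= EVICT_AFTER:
            self.evicted = True
            return None
        return "heartbeat"
```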

Now all these ideas rely on the attacker having uncoordinated bad actor nodes. When the attacker has coordinated nodes then these methods can be detected, including the abyss, since it would be unusual for them to have so many of their nodes in one section (or a few sections).

Also, to be really effective the attacker would only activate the nodes once there are enough in the section to cause problems.


Ask my wife about how much of a maniac I seemed at 3am laying next to her saying to myself “one idea is to…no no it’s flawed”. :yum:

I knew these were issues, but I think it is impossible atm to remove ALL bad actors. These ideas might at least help to minimize them.

Another one I considered is to use intersection spot checking.

A possible way to weed out malicious nodes is for known well-behaved nodes from several sections to agree on a certain voting test and randomly pick a section to issue it to.


Well, in this case a known honest node could begin the challenge, starting the same way any other gossip event spreads and informing all of the previously honest nodes of the goal: intent discovery. A random set of adults is chosen in the section (if not those closest to elder) and given a voting event. From there it’s a matter of identifying the liars.

But if the attacker only activated one bad node and the others remained in “good” mode, then the attacker could detect this. Or if the bad actor had 34% of elder nodes plus one more elder node, the attacker could have the one extra bad node remain in “good” mode to see if the other bad actor nodes have been found out.


I like this!!

The section could even give it bullshit voting events to mull over to improve the illusion. Let it age, let it vote, let it do everything it expects, with no real effect. Then move it to another section, assuming the ageing algo has rewarded it for good behavior. Section movement is natural SAFE behavior. If not properly coordinated, the bad nodes will remain stagnant. If coordinated, they rely on how thorough their scripts are. I think a significant portion of baddies will disappear with little network overhead.


Detection by coordinated actors is a given. We’ve established that. I want to bring the numbers down. Uncoordinated shits can still in many ways help powerful foes.


What if the network just brings down detected nodes to absolute zero? Even if the coordinated enemy realizes their nodes have been shot down they have to start each node from scratch. Slowing them down for sure. Ageing penalties are too lenient for liars imo. :face_with_raised_eyebrow:


Lie… as in a signed message containing incorrect information?

Hard to imagine why any vault would send a message that isn’t correct. It’s too easy to coordinate the ban because a) the signature on the lying message is an identifier and b) the content of the lying message cannot be forged. There’s no incentive to ever lie. Even malicious behaviour should not be a lie, it should simply be ‘the new truth’.

Perhaps a definition of ‘lie’ is needed. What is a lie and what is not?

Is a message signed incorrectly a lie?

Is a message correctly signed but with content that has invalid signatures a lie?

Is a delayed message a lie?

What lies have I not categorised?

I think lies are a) trivial to detect and coordinate the punishment and b) not going to happen in real life. Maybe I’m missing some nuance to this issue?

I think the idea of secrets and trickery and storing ‘incorrect’ state is very unattractive. It’s a clever mechanism and appealing because of that, but something about having states-within-states is troubling. What if the liar thinks they are lied-to while in quarantine so they produce a second layer of lie state? Not realistic since the first layer should be isolated but I am for some reason troubled by the idea.

Only if they are naively malicious. Any reasonable attack will coordinate and only attack when it’s meaningful. This test would probably not weed out malicious players, only incompetent / low-performing ones. Not a bad result. But since trust is so easy to switch it’s not possible to use past trustworthiness as an indicator for future trustworthiness. I don’t advocate measuring or testing trust (thus opening the debate ‘does age measure trust’).

They will know. A single honest ‘sentry vault’ owned by the attacker would be worth having around to help attackers know what the real state should look like, eg traffic rates etc. As you say, XOR design is what makes this idea very hard to pull off because it distributes everything so evenly.

Overall I really like the innovative thinking, but it seems like a lot of complexity to handle a “normal” part of the network (ie it’s normal for malicious actors to participate). It seems a bit of a level-two conceptual area, probably not appropriate for the base routing layer. But the problem is still very real.

Getting the attacker to continue unwittingly using resources for no real benefit is a great tactic, but I think it’s hard to pull off. I’ll let it rattle around in my head for a while, see what else comes up. Nice topic!


I would call this a “platitudes” protocol with a honeypot attached: document the bad activity unbeknownst to the bad actor but in view of the good actors (elders/adults), so they can adjust the “platitude” messages they gossip to the bad actor to keep it thinking it is still in the game. Every time it acts badly, the behavior gets recorded to a “sidechain/branch” it doesn’t see (the honeypot data on which the elders decide to eject the bad actor), while also keeping track of who it was and its pattern of behavior, should it re-enter under another name and start doing the same bad actor shite…

Unless another elder node informs the newly instated nodes of the test, they will see it as nothing more than part of their new responsibilities. AIC?

From my understanding, an event can be simulated by the members/nodes of the section by sending false storage, join, split, or merge signals. I might have misunderstood, but PARSEC tries to validate the claims of the oldest node in the event chain. This allows the section to use the best-behaved elders to begin the process.

In a network attempting a high degree of control this truth is destructive. Destructive truth should be labeled in some way. I vote for lie.

A lie in this sense is an invalid response to a question whose proper response is already known by the network-elected (i.e. elder) group. Answers essential to network operation would be used to determine malicious intent. No acceptable node should ever have reason to lie about events, apart from testing new nodes, if that makes sense. If an elder node sends a message stating the receiving node just joined the group, the receiving node should be able to claim that event. If it claims the opposite, then the section attacks it, forcing it to play a game of trivia. It never knows when the question is real unless the attacker has a sentry in place. The little scattered devils around this world would need to step it up if they have any chance of survival in the SAFE Network. We need punishment that can’t be easily detected. We can keep these malicious actors as resources until they become unreliable.

Can we obfuscate section member XOR ID within a group by using peer encryption? I’m looking for a way to evade a sentry. If we can find one, we hit even the big players I hope.


Can groups identify disruption? If so can they call out to other groups for help? Can adults be temporarily summoned to help break disruption in some way?

If individual nodes can identify that consensus was not reached after several rounds, it should be possible for them to then read their individual data chains. In doing so, they could direct their messages to the top 60% of nodes whose behavior has been exemplary. Within that subset of nodes another attempt at consensus is made, while simultaneously alerting another section of the disruption event (if consensus is successful, of course).

This would also require that the behavior of peers be logged. A simple sequential boolean of yes/1, no/0 can be recorded in the data chain or section cache. That log would allow the top 60% to be identified and called upon during times of stress.
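A minimal sketch of selecting that top 60% from such a log; the log shape and function name are assumptions for illustration:

```python
# Behaviour log idea: each peer accumulates a 1/0 record per observed
# response, and the best-behaved fraction is selected when consensus
# stalls.

def top_fraction(behaviour_log, fraction=0.6):
    """behaviour_log: {node_id: [1, 0, 1, ...]} where 1 = good response.
    Returns the best-scoring `fraction` of nodes (at least one),
    ranked by their share of good responses."""
    scores = {node: (sum(log) / len(log) if log else 0.0)
              for node, log in behaviour_log.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]
```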

We should also consider adults that hold the greatest number of chunks as candidates for temporary promotion to eldership. A large enemy is likely to create a bunch of small nodes to reduce attack costs and maximize their node numbers.


Hard drive capacity should also be considered before the role of elder is granted, IMO. While large entities have vast resources, current storage technologies are very expensive. Forcing them to hold a number of chunks beyond a certain threshold could be a helpful limiter. My concern is mobile users. For them we could reduce the ageing time to allow them to earn more rapidly, thereby increasing the participation incentive.

They may never become elders for many reasons, including the deployment of a proposal like mine, but they will still serve useful functions, like being called upon for voting based on their stability and performance. How well they behave during temporary eldership can also be logged and factored into a full promotion.

Also, nodes acting on their own will cause few problems. They will be ignored after being found out, and it will take time to restart as a new node and progress in age until they reach elder again and can act badly.

So is the cure worse than the “illness”?

You guys do know I’m looking for solutions. Don’t just shoot me down. I doubt the network defenses are perfect as they are. Help out here. Throw some ideas into the ring. :sweat_smile:


Yes, we realise that. But do we need a solution for singly acting bad nodes? The consensus mechanism will point out bad elders because they do not agree. The ageing mechanism will delay these nodes becoming elders again.

My point was that while other mechanisms can help delay them further, will those mechanisms add a complexity to the logic that is itself another way to attack the sections? We need to explore ways to reduce the bad actor node issue, but also the ways these additional mechanisms may cause problems of their own. So I feel pointing out the issues is also very valid and necessary.


Yes. Like pennies they add up.

Not as things are, from my understanding. They get halved, not restarted from zero. Punishments in some cases should be more severe.
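A toy comparison of the two penalties, under the assumption that age roughly doubles with each relocation (the constants here are illustrative only, not the actual ageing parameters):

```python
# How many relocations does a punished node need to climb back to its
# old age, if age roughly doubles per relocation?
import math

def relocations_to_reach(target_age, start_age):
    """Doublings needed to climb from start_age back to target_age."""
    start_age = max(1, start_age)
    return max(0, math.ceil(math.log2(target_age / start_age)))

# An age-16 liar that is merely halved (to 8) needs 1 relocation to
# recover; one reset to zero needs 4, a far longer climb.
```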

I’m aware of that. I’m throwing in ideas that would hopefully stimulate someone to see the problem from different angles. What was once complex might be simplified by another mind.

Let us. I know I started the topic, but I’m not the only one with battle strategies. Whatever you’re mulling over, throw it in. A half-baked idea could yield positive results. :crossed_fingers:


NO, they get ignored forever, so they need to restart from scratch.

The halving applies when nodes switch off and then back on again. Bad actors are removed completely.


I didn’t know this either! Very cool. Is this in the RFC??
