Analysing the Google attack

OK, please correct me if I’m wrong, but it seems to me that the resource proof happens after the vault has generated the key/name and tries to join the target section. If that’s the case then it’s still relatively inexpensive for a malicious vault to brute-force a ‘vanity’ address that matches its intended target section and only perform the resource proof once.

A way to add proof-of-work into the key generation stage would be to consider the first n bits of the public address as a required PoW, i.e. the first 6 bits must be 0s for it to be a valid address, and only look at subsequent bits for section matching.
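The idea above can be sketched as a brute-force loop: keep generating keys until the derived address has the required leading zero bits. This is a minimal sketch; deriving the address as the SHA-256 of an ed25519 public key is an assumption for illustration, not how the network actually does it.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// hasLeadingZeroBits reports whether the first n bits of b are zero.
func hasLeadingZeroBits(b []byte, n int) bool {
	for i := 0; i < n; i++ {
		if b[i/8]&(0x80>>uint(i%8)) != 0 {
			return false
		}
	}
	return true
}

func main() {
	const powBits = 6 // required leading zeros; the remaining bits would be used for section matching
	attempts := 0
	for {
		attempts++
		pub, _, err := ed25519.GenerateKey(rand.Reader)
		if err != nil {
			panic(err)
		}
		addr := sha256.Sum256(pub) // assumed address derivation for this sketch
		if hasLeadingZeroBits(addr[:], powBits) {
			fmt.Printf("found valid address after %d attempts: %x...\n", attempts, addr[:4])
			return
		}
	}
}
```

With 6 required bits this takes about 2^6 = 64 attempts on average, which also shows why a small fixed difficulty is cheap for an attacker with dedicated hardware.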

Again, I haven’t yet looked at the code, so apologies if this makes no sense or if it already works that way.


This is similar, actually. The resource proof sets the leading zeros and data size. The key the node generates is not the one it is allowed to use. That key gets it to a group, which then finds the new group the peer should generate a key for.


It’s possible. My main concern is that joining should be easier, not harder, i.e. preventing a google attack should not come at the cost of preventing the participation of enthusiastic beginners.

See RFC-0030 Nodes Key As Name. Specifically step 2 - “Group (X) then take the 2 closest nodes to this [random key] and hash the three values (2 closest plus node).”

The targeting can be done by generating keys offline and testing the resulting section per step 2 until a key is found that puts the vault in the desired section.

So it’s a bit more involved than just ‘generate a key for a section’ since it needs to also hash from that section to the desired new section based on the members of that section. Harder than finding a key for relocation, but still possible.
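The offline grinding loop described above might look like this. It is a sketch based on the RFC-0030 step quoted earlier; the exact hash construction, the two ‘closest node’ values, and the attacker’s knowledge of the section data are assumptions for illustration.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// relocatedName follows RFC-0030 step 2 in spirit: hash the joining node's
// key together with the two nodes closest to the random target point.
func relocatedName(nodeKey, closest1, closest2 []byte) [32]byte {
	h := sha256.New()
	h.Write(nodeKey)
	h.Write(closest1)
	h.Write(closest2)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// matchesPrefix reports whether name starts with the given bit prefix.
func matchesPrefix(name [32]byte, prefix []byte, bits int) bool {
	for i := 0; i < bits; i++ {
		mask := byte(0x80 >> uint(i%8))
		if name[i/8]&mask != prefix[i/8]&mask {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical section data the attacker is assumed to already know.
	closest1 := []byte("closest-node-1")
	closest2 := []byte("closest-node-2")
	target := []byte{0xA0} // desired section prefix 101 (3 bits)

	for attempts := 1; ; attempts++ {
		pub, _, _ := ed25519.GenerateKey(rand.Reader)
		if matchesPrefix(relocatedName(pub, closest1, closest2), target, 3) {
			fmt.Printf("targeting key found after %d attempts\n", attempts)
			return
		}
	}
}
```

Because every step is computed locally, the grind runs at whatever key-generation rate the attacker’s hardware allows, which is the core of the targeting concern.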

Yes. So targeting takes the required portion of the network an attacker needs from about 50% to about 5%.

This works very nicely if the random address is indeed random. But it’s proposed that it’s not random. It’s a hash of the attacker’s chosen pubkey + some section data. Since the attacker can choose their pubkey they can manipulate the outcome (assuming they can also know the section data). Importantly, the attacker can do the targeting entirely offline so it’s potentially very fast.

It mostly affects the viability (i.e. cost) rather than the possibility. I have assumed that targeting is possible, even if it’s a bit more fiddly than a simple ‘vanity-gen’ style algorithm. Maybe that assumption turns out to be incorrect.

Yes I think this idea of relocating to ‘help create a split’ contributes well to the definition of ‘best’ section to relocate to. I like this idea.

I guess ‘helping prevent a merge’ is also a very high priority, especially since merges are often more complex and time-consuming than splits, so are less desirable.

My sim treats any departing node as a potential for relocation, so I need to update this to consider the difference of complete vs incomplete and elders vs all nodes. Thanks for the clarification. This will almost certainly affect the numbers, but probably for the worse since it results in less relocation.

I would love to see a proof-of-signature for this (I wrote about this a while ago on the dev forum) so the proof of work is related to real work to be done on the network, i.e. the supplied proof is to find a message that, when signed with the keypair, results in a signature with a certain desired property (e.g. X leading zeros).
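A rough sketch of this proof-of-signature idea, assuming the prover varies a nonce in the message until the (deterministic) ed25519 signature has the required number of leading zero bits; the difficulty value and message format are made up for illustration.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// leadingZeroBits counts the leading zero bits of b.
func leadingZeroBits(b []byte) int {
	n := 0
	for _, by := range b {
		for i := 0; i < 8; i++ {
			if by&(0x80>>uint(i)) != 0 {
				return n
			}
			n++
		}
	}
	return n
}

func main() {
	const difficulty = 8 // required leading zero bits in the signature (assumed value)
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	msg := make([]byte, 8)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(msg, nonce)
		sig := ed25519.Sign(priv, msg)
		if leadingZeroBits(sig) >= difficulty {
			// Anyone holding pub can verify both the signature and the work,
			// so the proof is bound to this specific keypair.
			fmt.Println("proof found at nonce", nonce, "valid:", ed25519.Verify(pub, msg, sig))
			return
		}
	}
}
```

The appeal is that each attempt requires a real signing operation with the very keypair the node will use, so the work cannot be outsourced to hardware that doesn’t hold the private key (though an attacker who holds many keys can still parallelise across them).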

The tests are starting to become incomparable so my focus for the short term is to build a standard set of metrics that can be used for meaningful comparisons between changes. Ideas for metrics are most welcome.


Absolutely, the key here is the random data. There are a few ways to achieve that, or at least sources of random data, such as the last valid block hash (in the data chain). This won’t be agreed by the whole group, but should be agreed by the quorum (that is all that is needed) most of the time. This means that there can be errors when the group is out of sync (normal) and the joining peer’s key is then rejected (from group X). So it has to create another key and try again. It will not be 100% sure to get a key accepted, but there should not be many retries, if any, in normal operation.

Previously we did use the closest 2 peers to the chosen address and hashed all of that with the key to get to the Y group. It’s OK, but not as random as it could be.


I think this has to be a fundamental principle. Once you start down the path of using PoW to solve attacks, where does it stop?

Yes, any attacker can have a smaller number of very powerful number crunchers to solve things for the attacking NODES, and since the NODES for the most part do not need the number crunching, the ratio of number crunchers to NODES is small indeed. Another reason why PoW will not solve the security issue.

EDIT: In times past the system was that a node joining a group (even after relocation) generated their key, and the group it contacts then gave it another key, thus making it harder to target a group. That seems to have changed now, which is a loss in my mind. Even using a function of the node’s key + section data makes it easier to attack.


Certainly, but just not at the cost of the network having a negative effect facilitating the same. A balance here, like with everything else, is key imo. Note I’m not referring to PoW or any arbitrary complication, but just in general: it’s almost easier to ask how any X helps the network than what sounds nicer. Allowing everyone to join with ease, while it sounds great, might also not be the best for the network if these nodes didn’t help much but increased the hops for general operation. Restricting that would thereby also help prevent spam attacks and the like…

I’m slightly confused by the targeting scenarios. Consider an example of a node being able to calculate where it’d be nice for it to join the network, generating the initial key accordingly, and then getting relocated to, say, a pre-calculated target section Y (if the recursive relocation dst is computed too). I’m not sure I get how this affects the network if this node still isn’t an elder of the section, is in this case an infant (new node), and is going to get relocated later on based on churn events and so on. Lack of quorum and potential node age loss (thereby losing elder privileges and so on if trying to restart) should still hold, right? Or are we discussing something else here, in terms of not wanting even a single node to be able to choose even its infant section, if it went about trying to do so?


Just had a chat with @dirvine about ^^ and I think I now get what the discussion was about: with relocation knowledge being locally based, a group of nodes could get relocated roughly together, allowing for potential attacks while the nodes are ageing themselves.

While discussing that, what we were thinking is of course to first see how likely such a thing would be via simulations. But for a moment, if we consider it is likely and we want to avoid it, even if just to be safe, a couple of options of course exist wherein, as the thread so far has explored, we use randomness to achieve the same.

So don’t base relocation on just the section that needs a new node and the range to relocate to, but also structure it such that when a relocation trigger occurs (assuming for now it’s a churn event), Hash(churn-node-id-xor-name) is used as the dst section the relocated node is sent to. However, when the RPC gets to this dst section, they can then do the balanced relocate themselves, which, if still local to them, is hopefully not deterministic from the src.

So in essence, keep balanced node relocation and a relocation range, but make the first hop of the relocation be based on randomness (the hash of the churning node’s name) and then let balanced relocation kick in recursively, which should hopefully spoil being able to compute rough dst locations after the initial join process.
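A minimal sketch of the first-hop idea, assuming the destination is simply the SHA-256 of the churning node’s name (the input and the subsequent balanced step are placeholders, not the actual routing code):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// firstHopDst derives the first relocation destination from the churning
// node's name. Because the churning node is not known in advance, the
// source section cannot pre-compute where the relocated node will land.
func firstHopDst(churnNodeName []byte) [32]byte {
	return sha256.Sum256(churnNodeName)
}

func main() {
	dst := firstHopDst([]byte("churning-node-xor-name"))
	// The dst section then performs its own balanced relocation
	// (recursively, using only knowledge local to it), so the final
	// location stays unpredictable from the src section's viewpoint.
	fmt.Printf("relocate toward address %x...\n", dst[:4])
}
```

The point of the design is the split of knowledge: the src section only knows the hash-derived first hop, while the balanced choice happens in the dst section with information the src never sees.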


Thanks for your interest in all of this!

As I mentioned in another thread on the dev forum, your observation that the simulation doesn’t take off right now is entirely correct. This is because it is outdated, and doesn’t actually use the same rules as @mav’s simulation. We were aware that it needed more work, but other issues with higher priority appeared and updating the simulation has been postponed.

Since we are now ready to resume work on it, let me outline a rough plan for the nearest future here.

First, to address the problem that the initial section doesn’t split, we will modify the rule for splitting a bit. We were requiring enough elders and adults in the section so that the new sections would each have a complete group. This will now be enforced only if we already have a complete group in the current section; otherwise we will just require that the section has enough peers for both of the resulting sections to have GROUP_SIZE+BUFFER peers (as opposed to peers with age > 4).
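The modified split rule might be sketched as follows. The GROUP_SIZE and BUFFER values, and the exact shape of the check, are assumptions for illustration rather than the real routing code.

```go
package main

import "fmt"

const (
	GroupSize = 8 // assumed value for illustration
	Buffer    = 3 // assumed value for illustration
)

// canSplit applies the modified rule: if the section already has a complete
// group, require that each resulting half would still have one (GroupSize
// adults); otherwise only require enough peers of any age for both halves
// to reach GroupSize+Buffer.
func canSplit(peersPerHalf, adultsPerHalf [2]int, hasCompleteGroup bool) bool {
	for half := 0; half < 2; half++ {
		if hasCompleteGroup {
			if adultsPerHalf[half] < GroupSize {
				return false
			}
		} else if peersPerHalf[half] < GroupSize+Buffer {
			return false
		}
	}
	return true
}

func main() {
	// A young initial section with 11 peers per half and no adults yet
	// may now split under the relaxed rule.
	fmt.Println(canSplit([2]int{11, 11}, [2]int{0, 0}, false)) // prints true
}
```

The key change is the `else` branch: a brand-new network whose peers are all below the adult age threshold can still split, which addresses the ‘initial section doesn’t split’ problem.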

Second, and this will be a quick test - we will run our simulation to see what happens if peers join at age 4 instead of 1. I see that @mav already conducted such tests, but getting more independent results will always be beneficial :wink:

Then, when we have those results, we’ll see if we are satisfied with the way ageing works, or if we want some more changes to the rules in order to make network operations smoother.


Us satisfied? You’re a changed man in the new year @bart :wink: I’d suspect we certainly need additional relocation triggers, even something like mutation operations being used for the same (not saying GET here because it’s a free action that can get spammed too, of course), just so we don’t end up with stalling sections, and to facilitate network growth.


Thanks for your clarification and glad to hear what you had to say

To be clear this is what I was referring to.

There will always be a certain amount of work to do anything and the balance will mean we are not looking to PoW to solve security issues which should have a cleaner simpler solution.

One would expect of course there are minimum requirements. We just don’t want to see people joining the network but their NODE at 50+% CPU (or GPU) just to prove they are a good guy and not an attacker. The ‘easier’ is in terms of being able to use cheap hardware, and SAFE using logic rather than PoW to “prove” genuineness (or reduce the effects of bad actors). Also that the node is able to be farming in a reasonable time.


I thought this might be some food for thought. Not sure if this will instigate any ideas…

mods: please move if you think it is off-topic.


This is an excellent point, thanks for wording it so eloquently.

I would like to present an equivalent perspective for ‘joining should be easier not harder’ but from the opposite direction: ‘gaining power here should not make it easier to gain more power there’.

This is why bitcoin mining kinda sucks. It became a race to ASICs and the leader got more advantage over second and third place, which they used to increase their lead further and so on. I’m not sure there’s any way to avoid this, but I think the heuristic ‘joining should be made easier not harder where possible’ is a good start to prevent snowballing benefits to leaders.

Thanks for making this point though. It’s never about absolutes, it’s about balance. We should consider the drawbacks of making it easy to join, not just the benefits. I guess my bias is clearly toward identifying the drawbacks!

It affects the network because the vault will eventually become an elder if it hangs around long enough, and it knows it will be an elder ‘somewhere close’ to where it currently lives (due to neighbourhood-based relocation).

So the attacker is targeting ‘recursive neighbourhoods’ rather than a specific section. The thing is, the degree of recursion is low so the attacker will always stay relatively close to where they start.

This is a clever idea. Gets the vault far away from the neighbourhood but retains the benefit of helping the section most in need.

I feel the same way. I won’t ramble here but this ties in with biomimicry (or in this case sociomimicry?) and my views on economic inequality in society today. We should be targeting a minimum-level-for-all rather than equality-for-all. This is a bit of a hobby-horse of mine so I’ll leave it at that!

The word that comes to mind is ‘efficiency’ and this is where bitcoin and blockchains have dropped the ball bigtime. If it isn’t efficient then one of the big points of difference is lost.

That’s awesome!


Some more things to ponder along these lines in this RFC topic just posted. A bit too technical for here but definitely related.


Glad you’re pushing this.

If this was my area, and I was contributing here, this is exactly the ideal I would push for and try to protect.

Even if the network is slightly slower at the beginning or something, at least it’s properly decentralised, which it needs to be to change the world.

Glad these discussions are happening, would be nice if they happened years ago but all good let’s do what we can!


I think this is what we all aim for :wink:

They did and will for years to come :smiley: :smiley: This story never ends, but we need to just polish these small bits off to allow launch, but it never ends and is always interesting.


Looking forward to the day every MS, Apple, Google, Amazon etc each have a “Routing Maintenance and R&D” division :slight_smile:


I agree that pow can be easily defeated (GPU, FPGA, or even ASICs). I can see alternatives that replace the offline key generation with some kind of interactive process that involves the existing network. There are schemes for cooperative key generation, “proof of wait”, or some other network-interactive process that either delays or randomizes the calculation of the target section for new vaults. I’ve already seen some suggested. Whatever works, just keep your black hat always on and look for the weakest link worth attacking.

Edit: but remember to KISS.

Have a read of @mav’s proposal. I think it satisfies the KISS principle.


@neo yes! Was just reading it. Looks good.


This work you are doing @mav is great to follow, and the code for these simulations has been a real joy to work with.

I’ve re-implemented the google attack simulation in c#, straight off from your repository.
I did not get it 100% right though (see bottom for some details on results). I have never coded a single line in golang, but the problems have more or less all to do with lacking knowledge of other things, I would say.
I know for certain that when I have to introduce some non-existing logic to avoid an error, I did not get it right…

So, I have a couple of questions:

1. What prefix is supposed to be returned when parent() and sibling() are called on the empty prefix?

Some notes about workarounds I did.

The empty prefix would have 0 bits, right? So these two would first of all try to access index -1 when the empty prefix is passed in. (For a while I wondered if Go allows the same kind of indexing as Python, but it seems it does not.)
So, here I have obviously missed something.
Trying a simple workaround, I return the empty prefix as both parent and sibling.

What this results in is that when the empty prefix section should merge and has more than one section, line 93 in network.go will always be true, and we will remove the empty prefix (then again on line 109); then on line 111 the empty prefix section will be added again, with a duplicate list of the original nodes in the section. So, not good.

Taking a step back, maybe the thing is that the empty prefix is not supposed to ever return true on line 85? But looking at the conditions, that wouldn’t make sense, since it just validates the count of nodes with age > 4 and having more than one section. These should both also apply to the first section.

When setting the condition if (section.Prefix.Key != parentPrefix.Key) for adding back the parent prefix, I actually got to the reporting, and came fairly close to your numbers.

But all of this gave a whole lot of warnings like:
Split has vault that doesn't match extended prefix
No prefix for xorname

And this actually leads to next question:

2. Do you have 0 warnings in your latest version?
I’m guessing you do.

Additional note.

Similarly to the indexing problem in parent() and sibling(), the method renameWithPrefix on line 33 in vault.go would, when I re-implement it, get a problem if prefix.bits is longer than the vault name bit length. I cannot tell if this is due to an expectation that the prefix passed in shall never be longer, or that SetBit would allow out-of-range indexing, or if I got something else wrong there. My workaround was to put a condition on the vault name bit length being equal or longer. So, another source of outcome warp there.

Some results when running the above.

As would be expected from all the above, the results are quite warped too. They look fairly similar to yours, except that this is for the non-targeted attack, which gives results close to those of your targeted attacks.
Additionally, the hops are 0 and 1 only.

Google Attack

Targeted  Recursive  NetSize  Test 1     2     3     4  | Avg. Pct
      No        Yes     100k    4219  5883  2184  5842  | 4.4

Distribution of nodes and adults:

Test 1
age  vaults
  1  1468
  2  19113
  3  52377
  4  26276
  5  3442
  6  431
  7  92
  8  20
  9  6

    adults  sections
         9        23
        10        38
        11        56
        12        14
        13        15
        14        12
        15        16
        16        10
        17        11
        18        11
        19        7
        20        6
        21        9
        22        3
        23        6
        24        5
        25        4
        26        2
        27        2
        28        2
        29        1
        30        2
        31        1
        32        2
        33        2
        43        2
        51        1
        57        1
Test 2
    age  vaults
      1  1400
      2  19737
      3  52822
      4  26463
      5  3334
      6  439
      7  90
      8  28
      9  2
     10  3
     11  2

adults  sections
    9         24
    10        31
    11        44
    12        19
    13        16
    14        13
    15        11
    16         5
    17        13
    18         7
    19        10
    20         7
    21         7
    22         7
    23         6
    24         5
    25         6
    26         3
    27         2
    28         3
    29         3
    30         1
    31         3
    33         1
    34         2
    35         1
    38         1
    39         1
Test 3
age  vaults
  1  1138
  2  18437
  3  52255
  4  26027
  5  3286
  6  423
  7  92
  8  20
  9  4
 11  1

adults  sections
     9        34
    10        26
    11        52
    12        12
    13        10
    14        10
    15        8
    16        10
    17        6
    18        10
    19        7
    20        7
    21        16
    22        9
    23        8
    24        2
    25        2
    26        8
    27        1
    28        4
    29        2
    30        1
    31        1
    34        1
    37        1
    43        2
Test 4
age  vaults
  1  1481
  2  19111
  3  52856
  4  26978
  5  3418
  6  470
  7  91
  8  28
  9  7
 10  1
 11  1

adults  sections
     9        29
     10       32
     11       53
     12       14
     13       9
     14       9
     15       11
     16       11
     17       7
     18       11
     19       14
     20       6
     21       9
     22       6
     23       5
     24       4
     25       9
     26       3
     27       3
     28       4
     29       2
     30       3
     31       2
     33       1
     35       1
     43       1
     48       1

Repo here, but as mentioned above, not quite right yet:


This is very cool!

The empty prefix should be returned.

You have identified an error in my code because I don’t check for this boundary condition (see prefix.go sibling() L49 and prefix.go parent() L56). But I’ve not encountered the error so these two methods have never been called on the empty prefix during my simulations.
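A guarded version of the two methods might look like this. It is an illustrative reconstruction (one byte per bit for simplicity), not the actual prefix.go code, with the boundary case handled by returning the empty prefix unchanged.

```go
package main

import "fmt"

// Prefix is an illustrative reconstruction of the simulation's prefix
// type, storing one bit per element for simplicity.
type Prefix struct {
	Bits []byte // each element is 0 or 1
}

// Parent returns the prefix with the last bit removed. The empty prefix
// is returned unchanged, guarding the index -1 boundary case.
func (p Prefix) Parent() Prefix {
	if len(p.Bits) == 0 {
		return p
	}
	return Prefix{Bits: p.Bits[:len(p.Bits)-1]}
}

// Sibling returns the prefix with the last bit flipped. The empty prefix
// has no sibling, so it too is returned unchanged.
func (p Prefix) Sibling() Prefix {
	if len(p.Bits) == 0 {
		return p
	}
	s := append([]byte(nil), p.Bits...)
	s[len(s)-1] ^= 1
	return Prefix{Bits: s}
}

func main() {
	p := Prefix{Bits: []byte{1, 0}}
	fmt.Println(p.Parent().Bits, p.Sibling().Bits) // prints [1] [1 1]
	empty := Prefix{}
	fmt.Println(len(empty.Parent().Bits), len(empty.Sibling().Bits)) // prints 0 0
}
```

Since the simulation never calls these on the empty prefix (the empty section can never coexist with another section), the guard only matters for re-implementations that hit the boundary, as described above.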

This should never happen since the empty section can only exist as one section on the network. When it splits into prefix 0 and 1 all future vaults will join one of those two sections. Those two sections may merge back into the empty section but in that case there is only one section on the network again so no further merges are possible.

There can never be the empty prefix and some other prefix on the network at the same time.

A new node should only be valid for one section on the network at any time. If that’s not true there is a problem with the split / merge simulation code somewhere.

eg if a network has sections for the empty prefix and prefix 0 simultaneously it means a vault with name 0... would match both the empty prefix and prefix 0 which is not a valid state for the network to be in.

I do not see any warnings.

There should never be a prefix longer than the vault name length (vault name length is constant at 256 bits). This would mean there’s approx 2^256 sections in the network which is pretty massive.