Data chains and Safecoin anonymity

There are two points to think of here:

  1. Chains can checkpoint (agreed/known section blocks) or go all the way back to genesis.
  2. A larger number of groups decreases the size of each branch's data.

So as many groups (vaults) as possible is good. Check-pointing is not finalised yet; I have pretty strong feelings on this one, as usual, but don't want to influence the design team. To me, a node/client asking a group for something can send along the last checkpoint it knows for that branch, and be given the chain from that checkpoint up to the current request. In a live network the node can trust that the network (via secure hop messages) will deliver the correct data; in a network collapse/restart, though, where there is mass segmentation and reconfiguration, this process becomes very powerful, at the cost of larger data transmissions.
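
To make that catch-up idea concrete, here is a minimal sketch (my own illustration, not MaidSafe code) of a requester checking that a partial chain it was handed links back to the checkpoint it already trusts. The `Block` struct, the hashing, and the absence of group signatures are all simplifying assumptions.

```rust
// Minimal sketch: a requester that already holds a checkpoint only needs the
// blocks from that checkpoint forward. Hashing and "group signatures" are
// simplified placeholders, not the real datachain types.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash, Clone, Debug)]
struct Block {
    prev_hash: u64,   // hash of the previous block (or of the checkpoint)
    payload: String,  // stand-in for section membership / data events
}

fn hash_block(block: &Block) -> u64 {
    let mut hasher = DefaultHasher::new();
    block.hash(&mut hasher);
    hasher.finish()
}

/// Walk the partial chain we were sent and confirm every block links back
/// to the checkpoint hash the requester already trusted.
fn verify_from_checkpoint(checkpoint_hash: u64, partial_chain: &[Block]) -> bool {
    let mut expected_prev = checkpoint_hash;
    for block in partial_chain {
        if block.prev_hash != expected_prev {
            return false; // chain does not connect to our checkpoint
        }
        expected_prev = hash_block(block);
    }
    true
}

fn main() {
    let checkpoint_hash = 42; // pretend this was learned from a live group earlier
    let b1 = Block { prev_hash: checkpoint_hash, payload: "AddNode(a)".into() };
    let b2 = Block { prev_hash: hash_block(&b1), payload: "Split".into() };
    assert!(verify_from_checkpoint(checkpoint_hash, &[b1, b2]));
    println!("partial chain links back to the checkpoint");
}
```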

tl;dr Chain size is proportional to the number of groups in the most naive implementation, but older sections/data versions should be easily pruned down to almost only the current data that fits a section. I am pretty convinced this can be reduced even further. The design team is very close now to section security and eventually-consistent total ordering. It will get much simpler from here. I feel we have gone a long way towards collusion security and will go much further.

7 Likes

My gut impression is that if we keep the chain complete back to the genesis of the network then it will grow and grow. I realise that traversal back to genesis is not done most of the time the chain is used, but it still needs to be stored, and it grows and grows.

I would think that if a number of groups/sections agree that the chain (their part/branches of it) can be pruned at “x” blocks ago then that is still secure.

Maybe, if necessary, the pruning could write the pruned part to immutable data, so that if the network ever regressed back to needing the pruned part it could still access it, without it being an anchor to drag around and copy when, as you say:

[quote="dirvine, post:19, topic:13930"]
however nodes will require to get the complete chain from genesis for their branch.
[/quote]

4 Likes

Absolutely it is. This is a crucial point though: nodes may have more space to store more, which is good in some ways. To look a bit deeper, consider the section actions:

Add Node
Remove Node
Split
Merge

We can and do control Add Node (and by extension Split), but we cannot control Remove (and by extension Merge). So incentivising nodes to hold as much as possible (back to genesis being the ultimate) matters because mass loss of nodes (a thorn in my side I cannot let go of) means we need to be able to recover from disaster, the worst disaster (possibly) being a huge loss of nodes very quickly, leaving groups with no quorum (we have force merge, a good thing). In these cases the cost of holding the whole branch is worth it. Then there is the notion of a logical group, where we only consider the group in a section closest to the prefix. This means we have almost 50% of nodes not live or active in the network, but still storing this data. That allows 50% node loss; it does not allow 99% loss, but holding back to genesis does.
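
As a toy illustration of that asymmetry (my own sketch, not the routing crate's actual types), the four section actions and which side controls them might be modelled like this:

```rust
// Hypothetical sketch of the four section actions named above; the enum and
// the "network_controlled" notion are illustrative only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum SectionAction {
    AddNode,
    RemoveNode,
    Split,
    Merge,
}

impl SectionAction {
    /// The network can gate adds (and therefore splits), but node loss
    /// (and therefore forced merges) simply happens to it.
    fn network_controlled(self) -> bool {
        matches!(self, SectionAction::AddNode | SectionAction::Split)
    }
}

fn main() {
    for action in [SectionAction::AddNode, SectionAction::RemoveNode,
                   SectionAction::Split, SectionAction::Merge] {
        println!("{:?}: controlled by the network = {}", action, action.network_controlled());
    }
}
```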

This is a part of the network that will be upon us very quickly. I am keen to recover from mass loss or catastrophic failure (force merge to quorum), balanced with 50% segmentation. It's beyond what we need right away, or what is on the radar of many projects, but I am keen that this network does look at the worst situation and recover from it, as I know you are too, Rob. I hope I am giving you the info to help you dig deeper here; it's interesting for sure.

10 Likes

Yes thanks for the info. I have to admit to not having read the opt A/B yet due to other demands on my time. So this has been very helpful.

1 Like

Just a reminder. Snapshotting inherently requires trust. So a number of sections sign a snapshot and declare that this snapshot is the true topology of the network at a given point in time. As a new node coming online, how do I verify that snapshot is authentic and the signatures were from nodes that were valid members of the network when it was signed? Well, that proof was in that chain of blocks you just deleted and replaced with the snapshot…

Having a small number of sentinel nodes maintain the entire chain sounds like an interesting idea. They could monitor snapshots and raise the alarm if someone tries to pass off an invalid snapshot. Adding complexity has its own risks though.

5 Likes

Absolutely. However, if you get snapshots/checkpoints on the live network from a live group (which is detectable) then you are OK. The point with checkpoints is that you will trust your own, which you got from a live network, and these help the network recover. It's not simple, but it's not too difficult either. This is all about recovery, though, and ensuring the network can recover from 80% loss etc. without then segmenting. We will be in that area very soon in design, and the community will be involved, as it's not as involved as the work we are hopefully completing in routing.

4 Likes

I find this discussion very interesting and, if I am allowed, would like to raise a few points @dirvine

  1. Have you thought of maybe hiring a mathematician to help you model this? Chain size proportionality, catastrophic events involving sudden large loss of nodes. You say you are keen to test catastrophic failure, but maybe the way to go is to first approach this issue mathematically rather than empirically.
    Intuitively we all feel that there may be a breaking point if the network loses too many nodes at the same time. It is worth trying to determine accurately beforehand, within a confidence interval, at which point we might have data loss should we lose too many nodes. Does the first data loss occur at 51% node loss, 60%, 70%?

  2. My feeling has always been that the Safenetwork will be more resilient to direct brute-force attacks or DDoS than to anything else. However, the danger might come from something more insidious: a threat from the inside. You have hinted several times now that not all nodes will be specialised in data handling: some nodes will specialise in archives, others will only look at data identifiers. @oillio calls them sentinel nodes. These nodes will be essential to network consistency, and it will be very important that they don't get compromised, so how do we choose them? Is it just based on their rank? SAFE might be the victim of long con jobs when it comes to nodes. What happens when a trusted white node turns black? What happens when a node that has acquired trust over months, and is storing not just data but data identifiers, gets wiped out intentionally? Do archive nodes run on a duplication-set basis as well?

1 Like

Andreas Fackler, the head of the routing section, has a doctorate in mathematics (set theory) from the Ludwig-Maximilians University of Munich.

8 Likes

Yes, and many of the team are very strong mathematically; interestingly, though, that can be an impediment as well. In any case we simulate and test lots of this kind of thing in house. The current simulations for section membership are their own project now, and I feel there will be many more. However, it gets very interesting very fast as human behaviour kicks in. Some people run code for no financial gain, others only for gains. Human behaviour is very difficult to reason about, and doing so using pure maths is not always the best route. It's a tool for sure, but not one to overestimate in all cases. We have had several math-related projects with Scottish universities modelling different parts of the system (usually 6-month projects, up to full PhD sponsorship). So I don't ignore math, though many think I do, just as many think I ignore finance; they are tools, and although powerful tools, they have limits.

Data loss without force merge and various other components would happen at a low percentage of lost nodes. If we implemented a basic system with no force merge, no standby nodes and a few other components missing, we would see not only data loss but network collapse/stall once the lost fraction of a group reaches (group - quorum) / group (whether we consider the group as logical or a full section).
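
To put rough numbers on that stall point, here is a tiny sketch; the group size and quorum below are illustrative assumptions, not the network's real parameters.

```rust
// Illustrative only: group size and quorum are made-up example values.
fn stall_fraction(group: u32, quorum: u32) -> f64 {
    // Once more than (group - quorum) members of a group are gone, the
    // survivors can no longer reach quorum, so decisions stall.
    (group - quorum) as f64 / group as f64
}

fn main() {
    let (group, quorum) = (8, 5); // assumed example values
    println!(
        "with group = {group} and quorum = {quorum}, losing more than {:.0}% of a group stalls it",
        stall_fraction(group, quorum) * 100.0
    );
}
```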

Using data chains, with or without checkpoints, means we can prove data valid with only a single block, even without a running network. So there is a huge opportunity here to get away from many attacks/errors, and they may (perhaps) have math models that we can publish. I am happy if that is the case, but not worried if it is not (like the model of how a bumblebee can fly, or how an ant colony works).

[quote="SwissPrivateBanker, post:27, topic:13930"]
sentinel nodes. These nodes will be essential to the network consistency
[/quote]

We certainly do not know that, and won't know it without a lot of research. The language of the network post, and indeed natural-systems analysis, is not supportive of that kind of thing. If we go up the food chain, for instance from small nodes to leader-based or specialist-node systems, the complexity increases dramatically (imagine we can model and create some bacteria in the lab, but not, for instance, a wolf; the reason is system complexity).

Yes :smiley: these are constant questions we ponder and work with as a team and community. There is more: what about a WannaCry-style attack that replaces vaults with bad nodes, etc.? These are the questions we look at to see whether we have neat solutions (and hopefully document, as we did in the attack section of the wiki). Some we do; but for any that rely or lean heavily on human behaviour and finance, we take the position of detailing and understanding such attacks, and this is why the network must be as autonomous as possible and not allow human intervention / admin / settings to any great degree. It's a constant set of such questions that makes it interesting. What we find, though, is that it pays to focus on the next part and take account of the big picture, but not get drawn into long lists of orthogonal issues, as that gets distracting.

In terms of good nodes going bad, we do mitigate this and will enforce further mitigation. Node age is a component of quorum, but not the only one. An older node will have more trust, but no node will have enough trust to act alone. So dilution of nodes across the range helps considerably (node age is a big help here), but again it is not the only point. The number of nodes XOR-close to decision points (possibly section prefixes) makes collusion extremely hard, and chained decision trees then put many such issues beyond feasible. These patterns and functions are not final, and will likely not be final for many years; however, we can use overly pessimistic values for them in the short term, possibly slowing down the network in return for increased security (in terms of collusion resistance).
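
A hypothetical sketch of what "age is a component of quorum, but not the only one" could look like; the `Elder` struct, weights and thresholds are my assumptions, not the actual design. Requiring both a voter count and an accumulated age weight means neither a single old node nor a crowd of brand-new nodes can carry a decision alone.

```rust
// Hypothetical age-weighted quorum check; names and thresholds are invented
// for illustration only.
struct Elder {
    age: u32,
    voted: bool,
}

/// Require both enough voters (so one very old node cannot act alone)
/// and enough accumulated age weight (so brand-new nodes cannot swamp a group).
fn quorum_reached(elders: &[Elder], min_voters: usize, min_age_weight: u32) -> bool {
    let voters = elders.iter().filter(|e| e.voted).count();
    let age_weight: u32 = elders.iter().filter(|e| e.voted).map(|e| e.age).sum();
    voters >= min_voters && age_weight >= min_age_weight
}

fn main() {
    let elders = vec![
        Elder { age: 9, voted: true },
        Elder { age: 7, voted: true },
        Elder { age: 5, voted: true },
        Elder { age: 5, voted: false },
        Elder { age: 4, voted: false },
    ];
    // Example thresholds only.
    println!("quorum reached: {}", quorum_reached(&elders, 3, 18));
}
```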

There's a ton more, though, and it's really very, very interesting, but not scary (well, not for me). The number of "fixes" is pretty large, but the real fixes are those that reduce complexity where possible; that's the great part, when we find those.

14 Likes

Blimey. That’s my contribution limit reached! :laughing:

1 Like

Hey, thanks for this extensive answer!
I really appreciate that you take the time to provide explanations on this forum, even to non-tech guys like me.
Now I feel miserable for eating up your time.

Looking forward to the next dev update. Cheers.

4 Likes

Don't. These discussions are educational for lots of people, far more than those who post. That's a good use of everyone's time, and I believe that's why David values them. It spreads the knowledge, which is good for the project, for creativity, and for humanity in general.

In my own experience of design and problem solving, explaining and discussing ideas with novices is also valuable for the explainer. It often leads to new ideas either integrating the supposedly naive thoughts of others (most often) or sparking completely new ideas that we can’t attribute and so can claim to be our own :wink:

EDIT: it also gives me opportunities to big up the project :smile:

https://twitter.com/markhughes/status/874557096372690944

Every time I tweet a link to the forum it brings at least 300 twitter bots to the party :wink:

12 Likes

@dirvine I think that you need to keep the entire datachain (including the signature of the sender) because otherwise people would be able to double-spend.

If Mallory pays Alice a token T, and then T goes on its merry way, then a year later Mallory may want to spend T again to Bob. If a section is subverted, Mallory can simply make the section forget all the states of T after it belonged to Mallory.

However, if every member of the section stores the history then it takes only ONE vault to report the double spend to Bob and then Bob will reject the transaction.

I think we already have something proactive like this with Proof of Resource, so that forgetting the chain’s head is caught and reported even before Mallory tries to double-spend to Bob. Any section member should be able to catch any other section member red-handed and kick them out of the consensus for forking (of which forgetting is a special case).

But in order to PROVE that a vault has forked a datachain, you have to have the other fork it previously signed (which it then "forgot"). Right now there is no gossiped proof, only voting. So if a majority of a section is subverted, it's game over.
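
To illustrate the kind of evidence such a gossiped proof would carry, here is a minimal sketch with signatures left out; the struct and field names are assumptions rather than the real datachain types. Two signed blocks claiming the same position with different contents are, on their own, proof that somebody forked or "forgot" history.

```rust
// Minimal sketch of the "one honest vault can prove a fork" idea.
// Signatures are omitted; in reality each block would carry the section's
// signature so the proof stands on its own.
#[derive(Debug, Clone, PartialEq)]
struct SignedBlock {
    index: u64,       // position in the token's history
    payload: String,  // e.g. "T: Mallory -> Alice"
}

/// Returns the conflicting pair if the two blocks occupy the same index
/// with different payloads, i.e. evidence of a fork ("forgetting" included).
fn fork_proof<'a>(a: &'a SignedBlock, b: &'a SignedBlock)
    -> Option<(&'a SignedBlock, &'a SignedBlock)> {
    if a.index == b.index && a.payload != b.payload {
        Some((a, b))
    } else {
        None
    }
}

fn main() {
    let honest = SignedBlock { index: 7, payload: "T: Mallory -> Alice".into() };
    let forged = SignedBlock { index: 7, payload: "T: Mallory -> Bob".into() };
    if let Some((x, y)) = fork_proof(&honest, &forged) {
        println!("fork detected: {:?} conflicts with {:?}", x, y);
    }
}
```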

Yes, this is a process called pruning and floating data. All chunks will be re-signed by new group members, which means the current group are the signers of the chunk. If you then add in pruning, where sections communicate with a new genesis, say at each split, then we are in good shape. This is called checkpointing: you only need to prove from the last known checkpoint forward.

These and a few other mechanisms “may” allow pruning, but we need to get into the detail of those, especially now with PARSEC (which may make it easier, more natural).
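
A toy illustration (not the vault implementation) of "the current group are the signers of the chunk": on a split the surviving chunk is re-stamped by the new group's members, so the older endorsements, and the chain behind them, can be pruned.

```rust
// Toy sketch only: "signers" is a stand-in for real group signatures.
#[derive(Debug, Clone)]
struct Chunk {
    name: String,
    signers: Vec<String>, // stand-in for the group members' signatures
}

fn resign_after_split(mut chunk: Chunk, new_group: &[String]) -> Chunk {
    // Drop the old endorsements and keep only the current group's.
    chunk.signers = new_group.to_vec();
    chunk
}

fn main() {
    let chunk = Chunk {
        name: "chunk-abc".into(),
        signers: vec!["node1".into(), "node2".into(), "node3".into()],
    };
    let new_group = vec!["node2".into(), "node4".into(), "node5".into()];
    println!("{:?}", resign_after_split(chunk, &new_group));
}
```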

1 Like