Back to basics: how Satoshi designed Bitcoin for censorship resistance

Some questions arising from the following article, posted on Twitter by the author bitcoinpasada:

The author argues that there are fundamental benefits, summed up as censorship resistance, arising out of the design (specifically PoW and small block size): while these limit transaction speed, they maximise the number of nodes and therefore make the network effectively impossible to censor or shut down.

Part of this is keeping the minimum bandwidth and computational requirements of nodes sufficiently low that a large number of nodes will always be able to participate, and to "verify for themselves the information that was being transmitted across the network. This includes, importantly, verification of the total supply of coins, and that no doublespends are taking place."

So my first question is whether or how well Safe Network can match those goals: can any Safe Network node verify the total supply and that no double spends are taking place?

I suspect it isn't a simple yes or no, but the author argues this point is fundamental to the success of bitcoin as is, and that it therefore necessitates a limited transaction speed; alternatives that speed things up, using for example PoS, will not be capable of censorship resistance or verifiable to the same degree.

The next point made relates to 'sharding', which is perhaps the most comparable approach to scaling as employed in Safe Network. A possible difference from blockchain sharding is that Safe Network does not require an overall history of all transactions (i.e. no distributed ledger, sharded or not). This may nullify the author's criticism of blockchain sharding designs, but it brings me back to the first question. Perhaps this is where we have a trade-off, in which case in what ways does that matter?

Another question relates to the contention that Sybil resistance in blockchain is dependent on PoW because that limits the amount of data needing to be passed around and processed by individual nodes, and that attempts to get around this approach fail in the goal of Sybil protection. The author gives David Chaum's ecash as an example, in which relying on a smaller number of trusted identities reduces Sybil protection significantly:

The reason such a solution is trusted is because it requires identities, which must be incorporated internal to the network. This could take the form of a hard-coded list of public keys, for example. These identities signing the blocks exist inside the network, in a sense, and given their limited physical number, it is not hard to see that censorship is a real risk. A censor could take over the network identity of a signer, and users would be none the wiser.

Following on is the contention that Sybil protection requires a node selection process based on some scarce resource: PoW or PoS in contemporary blockchain designs, and PoR (Proof of Resource) in Safe Network:

The key to understanding Sybil protection mechanisms is that in order to limit who is able to write to the ledger, they require the "ledger writer selection process" to be tied to some scarce resource. In Proof of Stake, this scarce resource is identity. In Proof of Work, it is energy. Without a link to a scarce resource, there is simply no way to achieve Sybil protection.

Using PoW is seen as a clever way of removing identity from the 'equation' in favour of a scarce resource which exists 'outside the network' (energy and computation), meaning that any 'black box' node that can meet the minimum requirements of these resources can participate and verify.

I want to add storage space (for the ledger/blockchain), not mentioned by the author, as it seems to have a similar bearing to PoW.

The criticism made of PoS is that it also employs identity, distributing it over the staking token holders, and that this leads one way or another to centralisation and "ultimately to oligopolistic or monopolistic control of all the stake and rewards."

It isn't clear to me to what degree Safe Network's approach might also suffer an inevitable centralisation of identity (and with it a loss of censorship resistance), so this may be an interesting direction for critique. For example, another question is whether or not Safe's design is vulnerable to actors earning from the network being sufficiently advantaged over new actors that they can use their earnings to finance an increasing proportion of nodes, to the point where they control a section, several sections, or the network itself?

I think this is an area which MaidSafe are conscious of and have designed for, but we may need to wait for some of the maths to be formalised to answer all these questions properly.

The article is very helpful both in understanding several key issues around the scaling approaches being attempted for blockchain/DLT, and in providing a way to compare and contrast with Safe Network (something which @mav looked at in the distant past but which might well be worth revisiting: The safe network explained using bitcoin terminology).

10 Likes

Roughly in agreement with the article, I'd assert that what is essential to effective decentralisation, in terms of resistance to:

  • transaction censorship by validators and
  • undue concentrations of political power in the governance of the protocol & network
    …is maintaining low barriers to entry for any participant role in the scheme.

A PoS validator who achieves >50% of the stake can maintain their grip without further expense: effectively an infinite barrier to entry. In PoW, of course, there is always more energy and compute power to be had to neutralise the attacker and resume normal operation, without any external coordination required.
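To make that asymmetry concrete, here's a toy simulation. Every number in it is my own illustrative assumption, nothing from the article: under proportional staking rewards an attacker's share never decays on its own, while in PoW honest participants can keep bringing outside hash power online.

```python
# Toy model of attacker persistence under PoS vs PoW.
# All figures are illustrative assumptions, not real network parameters.

def pos_share(attacker_stake, honest_stake, rounds, reward_per_round):
    """PoS: rewards are paid in proportion to stake, so shares never move."""
    for _ in range(rounds):
        total = attacker_stake + honest_stake
        attacker_stake += reward_per_round * attacker_stake / total
        honest_stake += reward_per_round * honest_stake / total
    return attacker_stake / (attacker_stake + honest_stake)

def pow_share(attacker_hash, honest_hash, rounds, honest_growth_per_round):
    """PoW: honest miners can add external energy/hardware each round."""
    for _ in range(rounds):
        honest_hash += honest_growth_per_round
    return attacker_hash / (attacker_hash + honest_hash)

print(pos_share(51, 49, 100, 10))  # stays at 0.51: no further expense needed
print(pow_share(51, 49, 100, 5))   # diluted to ~0.09 as new hash power joins
```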

I'm hazy on the Safe design these days, so please correct me if I'm wrong, but as I recall the following things are true:

  • nodes which provide service to the network have to build a reputation of good behaviour over time to gain an increasing share of data, revenue and influence over others joining†.
  • node count within a section is hard capped (some limit like 32)
  • section count is soft capped by demand for storage

On one hand, time is available to everyone at the same price. On the other hand, given the cap on the number of service providers, the inability of a newer node to gain more track record than an existing one, and the absence of rewards while building that track record, the system seems liable to control by incumbents: a barrier to entry that increases with time. For censorship I believe nodes are kicked out automatically by the PoR routines, but could this be a problem for concentration of political power?

† I have a feeling I'm wrong about who has a say in new members joining a section.

There's no reason to think your concern about choosing joiners is valid. Section elders would have to collaborate to favour any colluding new joiners, which means first you must control a section.

The other points sound about right.

The question I posed at the end was whether the rewards earned by well-behaving attackers (sleepers) would be enough to spin up increasing numbers of sleeper nodes, at a rate that outpaces good nodes. Would this allow an attacker to increase its influence over time to the point where a section is vulnerable, and ultimately multiple sections and the network?

I think that scenario will depend on rewards versus the cost of running nodes. That is, how many new bad nodes can be financed by sleepers, compared to the number of good nodes also vying to join.

That's a hard equation to reason about because higher rewards would increase both the number of good nodes wanting to join, and the number of new bad nodes that could be spun up by sleepers (bad nodes waiting until they can take over a section).
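To get a feel for that equation, here is a toy simulation. Every parameter in it (per-node reward, node cost, honest join rate) is an assumption I've invented for illustration, not a Safe Network value:

```python
# Can sleeper rewards finance new sleeper nodes faster than good nodes join?
# All parameters below are invented for illustration.

def sleeper_fraction(epochs, bad_nodes, good_nodes, reward_per_node,
                     node_cost, good_joins_per_epoch):
    savings = 0.0
    for _ in range(epochs):
        savings += bad_nodes * reward_per_node  # sleepers behave, so they earn
        new_bad = int(savings // node_cost)     # earnings converted into nodes
        savings -= new_bad * node_cost
        bad_nodes += new_bad
        good_nodes += good_joins_per_epoch
    return bad_nodes / (bad_nodes + good_nodes)

# The sleeper fleet grows at roughly reward_per_node / node_cost per epoch,
# so if margins are thin the compounding is very slow:
print(sleeper_fraction(100, 10, 1000, reward_per_node=1.0,
                       node_cost=100.0, good_joins_per_epoch=5))
```

The variable doing the work is the margin of reward over running cost: if farming margins converge towards zero, the compounding all but disappears.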

My gut is that this is not a major concern, but I'm interested to explore that and any other issues raised by the article.

Thanks for the clarification, though collusion still seems like a risk given its ability to keep out the competition.

I don't have detailed enough knowledge of Safe to have a useful opinion on your other point. If the competition is easy to enter then one would expect farming to work like mining, such that overheads and revenues converge. In that state there isn't much of a multiplier effect to worry about.

1 Like

Another thing that makes bitcoin censorship resistant is how hard it is to change the code.

What would happen if the UK government required Maidsafe by law to KYC all their new users, say by making it part of the sign-up process? How does Maidsafe protect against that?

The network is permissionless; Maidsafe are not involved at all in the creation of a new account. So even if Maidsafe wanted to KYC, they couldn't.

If the government forces them to change the code, well the project is open source, so someone could make a fork and things would keep going as usual.

6 Likes

So if someone made a fork, would that make a new network? If so, would all data on the original network only be accessible through the KYC code Maidsafe were forced to write?

I'm really trying to understand how the network will be upgraded in the future. If Maidsafe can change the code very easily after launch then I see that as a problem going forward. I remember a while ago David saying he would like Maidsafe to become just another developer of apps on the network, competing with everyone else, so network upgrades would be done by the open source community. I could see this transition taking a few years. Am I missing something?

1 Like

Yes, seems likely.

Depends on whether everyone decides to support the upgrade. If they do, then yes, the data would live under the KYC version. If not, the network would continue on as before. If support is split fairly evenly, then the network could potentially split into two different versions (maybe? It's not clear an even split would guarantee all data remains available after a fork).

As with any open source project, they can write whatever they want. It doesn't mean anyone will run the new code. (This is why auto-upgrade is frowned upon in distributed consensus projects.)

2 Likes

In its anticipated form, the SAFE Network is just a protocol, run by voluntary contributors/users running implementations of it on their machinery. If/when the SAFE Network gets to that form in a stable, bug-free and functioning manner, then MaidSafe the company isn't in a position to gateway access (aka hold users to account).

The regulatory risk the company faces is whether they undertook an unlicensed securities offering in the past. If that unlikely event happens before a successful mainnet launch, then the network may never be created, as (I believe) no one else is working on it.

1 Like

Censorship resistance is complicated, but I think it primarily comes down to two things:

  1. The ability for users to easily validate the work of the network & punish nodes which try to break the rules. In Bitcoin, full nodes make it very difficult for even centralised miners to cause harm.
  2. The ability to hide which user is making a request or what data is being stored. In Bitcoin, transactions are transparent, so miners can choose to censor them, though this is very rare.

I think we should assume that SAFE becomes centralised in the same way as PoW/PoS systems, because of the economies of scale.

I'm not sure what user validation is in SAFE, or what prevents nodes from censoring storage requests or get requests for public data they don't want to be involved with.

If a node misbehaves in the ways you mention at the end, it will be punished: demoted (which reduces earnings) or ejected from the network (forcing it to start from scratch).

I don't think it's fair to suggest Safe will centralise to the degree of other systems. We'll have to see about this, but we can't assume one way or the other, because it is quite different to those other systems, both of which have strong incentives/mechanisms that centralise and lack the balancing pressures to decentralise which Safe does have:

  • the ability for anyone with commodity hardware to run a node (i.e. extremely low cost of participation)
  • higher profitability for those able to use existing resources (hardware, storage etc.) that would otherwise be wasted, compared with the higher costs faced by those purchasing cloud servers to run dedicated Safe nodes

There are several other factors involved, which is why I don't claim this prevents centralisation, but I believe it is a good reason to think it could, and so I don't accept your assumption that Safe will become centralised like PoW or PoS systems.

1 Like

Who punishes the nodes? Farmers or users? That's the key question for me when it comes to defending users against centralisation.

Participation may seem cheap, and it will be at first. Perhaps it will be cheap forever if SAFE remains small and the farming rewards are low. However, if SAFE gets large and the rewards make it worthwhile, then economies of scale kick in and cloud providers will become the most powerful farmers. Consistent storage and content delivery is already highly optimised by professionals in datacentres, so it won't be possible for home farmers to compete.

Even if farmers are using commodity hardware or their existing hardware, they still require the skills and technology to maintain consistent uptime, which is the bar to being a powerful node. Storage is easy, but my experience in IT infrastructure tells me that uptime is really hard. SAFE may make it easy to store some data as a home farmer, but it can't help with maintaining consistent uptime, so in the end uptime is one of the most powerful centralising factors.

We don't know for sure how centralised SAFE will be, because we don't know how popular it will be, but we should prepare for the worst-case scenario, which goes back to my original question about how users (non-farmers) can validate the work of the farmers.

1 Like

The elders.

Well, for big files Safe may very well be much faster. The first chunk may take slightly longer (400 ms instead of 300 ms), but the rest will be waiting to come down your internet link.

A server is a serial delivery method, with the one server pumping out the file. Safe requests each chunk from the network, and the chunks end up coming from different nodes in parallel.
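A rough sketch of that difference, where fetch_chunk() is just a stand-in that sleeps to simulate a network round trip (the real Safe client API will differ):

```python
# Serial vs parallel chunk retrieval, with a simulated 300 ms round trip.
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.3  # seconds per chunk request; illustrative only

def fetch_chunk(address):
    time.sleep(LATENCY)           # pretend round trip to whichever node holds it
    return b"chunk-%d" % address  # placeholder payload

addresses = list(range(16))

start = time.time()
serial = [fetch_chunk(a) for a in addresses]           # one server, one pipe
print("serial:   %.1fs" % (time.time() - start))       # ~16 x 0.3s

start = time.time()
with ThreadPoolExecutor(max_workers=16) as pool:
    parallel = list(pool.map(fetch_chunk, addresses))  # many nodes at once
print("parallel: %.1fs" % (time.time() - start))       # ~0.3s, limited by our link
```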

If a node responds first, or has more reliably responded in the past, do they gain an advantage over other nodes?

As far as I know, for the test nets, no.

No more than any other that has done the same.

Taking the negative, it does make a difference when nodes do not respond correctly. They will find themselves penalised, and at some stage dropped and needing to rejoin.

It highly depends on where you live. I have no problem getting uptimes measured in months on my home devices (no UPS). I spent 6 years working for a mid-sized ISP and I can tell you there can be orders-of-magnitude differences in internet and power-grid quality between neighbouring towns.

The best defence against centralization is simplicity. A datacenter is a very expensive operation (power redundancy, physical security, climate control, employees, …). If the average Joe is able to run a node and then spend no more than a few minutes a month taking care of it, he is cheaper (profitable at a lower reward value) than a big datacenter.
Instead of uptime, we should talk about availability. The Safe network is highly redundant; nodes don't have to be 99.999% available. I think the network already allows some outages/updates/restarts without losing rewards.
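A back-of-envelope version of that point, with invented monthly figures (none of these are real costs or rewards):

```python
# Break-even comparison with invented figures: a home farmer reusing spare
# hardware has near-zero marginal cost, so they stay profitable at reward
# levels that would push a datacenter operator out.

home_cost = 1.0        # spare disk plus a few minutes of attention per month
datacenter_cost = 8.0  # power redundancy, security, climate control, staff

for reward in (10.0, 5.0, 2.0):
    print("reward %4.1f -> home profit %5.1f, datacenter profit %5.1f"
          % (reward, reward - home_cost, reward - datacenter_cost))
```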

1 Like

This is an important question for assurance and it would be great to keep it simple.

I wonder whether it would be ideal if the supply of tokens was a simple, verified, limited pool (emulating real-world capital and avoiding the error that is fiat): the assigned ownership of each token then known at a single point in time, in a way that cannot be conflicted or duplicated, with ownership simply passed on in a provable way that assures exactly one owner at a time. That tempts a single point of reference, and it is unclear to me whether this is a significantly different model from DBC.

Slightly off topic re economics: while there is a place for fiat leverage providing flexibility, the failure of fiat (or the conspiracy to exploit the RoW) is so wild that reverting to something solid is important now, so that negative feedback can encourage a real economy.

Having something real is more attractive than something imaginary. :thinking:

In order to be a powerful node you must be reliable, or you're left with the problem of low-cost attacks by new nodes.

Pure simplicity would require all nodes, regardless of outages, uptime or performance, to be treated equally by the network, but that's not the case. I understand that elders are the most reliable nodes with the best uptime, and they control the consensus.

Datacentres are expensive because they are protecting uptime, which is the most valuable work on the SAFE network. Professionally run IT services go wrong all the time, but if they are prioritising uptime then they have redundancy, load balancing etc. in order to minimise downtime. Some companies can get downtime to a tiny percentage, depending on how much money they are willing to spend (e.g. hot data centres).

1 Like

As you note:

  • uptime has a cost
  • better uptime gives higher rewards

There's a balance here: once there are enough high-uptime nodes, it costs more to become one than the rewards gained. This 'back pressure' limits the degree of centralisation that can occur.
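In miniature, and again with invented numbers: if a fixed pot of rewards is spread across all nodes, per-node reward falls as nodes join, and each class of operator stops adding nodes once the reward drops to its own cost.

```python
# 'Back pressure' sketch: per-node reward falls as nodes join, so high-cost
# operators hit break-even long before low-cost ones. Numbers are invented.

TOTAL_REWARD = 1000.0  # paid out per period across the whole network

for cost, operator in ((8.0, "datacentre"), (1.0, "home farmer")):
    break_even_nodes = TOTAL_REWARD / cost  # reward per node == cost here
    print("%s stops adding nodes at about %d nodes"
          % (operator, break_even_nodes))
```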

Most cloud services have connectivity and downtime issues. These are actually very hard to avoid even when paying a premium, because there are many causes, and you rely on multiple services and providers to maintain uptime and connectivity.

So even the nodes of those paying for uptime will lose age and/or be ejected from time to time.

This is all good and ensures that everyone has a chance of joining, and earning over time.

These decentralisation factors of Safe are not present in PoW or PoS systems, so we may well expect Safe to be more decentralised than those systems even in this initial design.

If necessary, the network could eject high-uptime nodes periodically to limit the benefits to those with the resources to try to centralise earnings or control. We don't know if or when that might be necessary, but as it benefits the majority, I expect such a change would be accepted (if not part of the initial design).

1 Like

That's an interesting idea about ejecting high-uptime nodes, but I think a consensus rule like this can be gamed and may benefit larger farmers by adding complexity. If I were a large farmer I would look to optimise my farming by rotating through active nodes en masse: basically, continuously switching out old nodes for new. If I were a home farmer I would be forced to run multiple nodes to keep a consistent income stream, which means setting up VMs and orchestrating node shutdowns and startups.