Governance (and government intervention) of updates to the SAFE network


#41

Ok, let me finish by referencing the most scientific-looking study about the Conclave I could find (Consensus-building in Electoral Competitions: Evidence from Papal Elections), which starts with the following:

Papal elections are outstanding grounds to study consensus-building in an electoral competition. In contrast to standard two-round elections, the conclave lasts until a candidate receives two-thirds of the votes. In this paper, we argue that this election process can be viewed as a “war of attrition” between two factions: the “conservatives” and the “progressives”.


#42

Well, maybe, but I’ve found that if you start with a system engineered to deceive and benefit the powerful, you do not end up with a good system. This one has been honed from the time they started doing it, many years after the Catholic system itself was invented, some 1,600 years ago.

Remember, all these cardinals are lawyers in their own right. You cannot be a cardinal without being a lawyer. Just because there are elements of common sense in the 2/3rds rule doesn’t make the process non-deceptive.


#43

RE @JoeSmithJr

the premise that Maidsafe could “forcibly push updates” is false

Is it? It seems like you’ve overlooked all the details above… If you don’t have an authoritative push of updates into the network, then you hit the same governance problem that bitcoin has when it comes to hard forks, otherwise known as backwards incompatible updates.

But with bitcoin you can still derive value from both chains that get forked. With the SAFE network, as discussed above, forking means something completely different. Easy to fork the code, hard to fork the network. Which means that an insecure update could potentially render the existing network data insecure or perhaps compromise future use of it.

This project is open source, so there is no way to do anything forcibly.

Also discussed above - the code is open source, sure. But as far as I understand, you won’t just be able to run any SAFE vault code you want. I thought I had previously seen comments from David suggesting that updates will be automatically injected into the network and that vault nodes will automatically update themselves. Maybe I’m wrong. But if that’s not the case, and you are able to run any vault code you want, then we have the problem where people would have to voluntarily adopt backwards incompatible changes, which is the same governance problem described above for bitcoin, and which could cause major problems for the network.

@dirvine talks about creating a technical solution, which would presumably be some sort of code running at a low level of the network routing/crust code that would automatically test the outputs at various layers and ensure there are no data leaks. But I still wonder who would propose such updates to the network in the first place, and how… Would just anyone be able to provide a new update for the network, to be pushed and rolled out to all? Unlikely, which leads to all the other questions posed in this OP.

It’s likely that the project will be forked into a number of different but, on the network level, compatible versions anyway.

Read my comments about forking above. If I run forked vault code, that wouldn’t necessarily help me access my data on the existing “old fork” of the SAFE network, unless my vault code complies with the existing network’s rules/requirements. Forking open source code is one thing… forking a blockchain is another… but forking a network is completely different.

Everyone keeps saying “oh, it’s open source, therefore people will just fork” - but it’s not a blockchain, so you can’t just fork and preserve both branches of the fork. This is a network we’re talking about… you’d have to create a bootstrapper to copy the old network over to your new forked network, and we’re not even sure that’s possible - I certainly doubt it would be. At the very least you’d need an equally sized network just to host all the forked data, and then who knows about the economic implications of that and of forking the safe coins.

Maybe it’s feasible for different versions to run in parallel if they’re backwards compatible… but I don’t see how this will be possible when you’ve got a backwards incompatible version you want to push out to the community. As @dirvine says above, it’s possible if you govern as a benevolent dictator (though he wouldn’t want to do that), but those only last so long…


#44

It’s demonstrably impossible to enforce something like that. The beauty of open source.

Of course you can, as long as your code can work together with the rest of the network. Breaking changes (hard forks) would have to be carefully planned and scheduled, just like with other distributed systems.

I doubt the community would accept (or that Maidsafe would try to push!) a forced update process similar to what you describe. That would go against the philosophy of the project.

There should be updates, but a forced update mechanism could easily be disabled without affecting the rest of the code. The moment something weird happened within the official code base, a group of active but unaffiliated devs would fork it, roll back the nasty, and replace the official (now distrusted) keys with their own. With enough convincing and proof of mischief, the user community would switch.

Such forks have happened before, for example when Oracle bought Sun and OpenOffice didn’t seem as open anymore, so a group of the devs forked it into LibreOffice. The reason was different, but it’s a good example of how these things go down.

No, that’s exactly the same for bitcoin, ethereum, and so on. All of those forks meant a fork of the data (the “network”) as well. The new blocks in the forked blockchain are not necessarily compatible with the old or the other version.

A fundamental point about the Safe Network is that the clients never send the vaults anything unencrypted.

A compromised vault may leak some information, such as IP addresses or their own encryption keys for the onion-style message encryption, but even then it would need an unbroken chain of compromised vaults for it to matter, and even then they couldn’t disclose the actual data.
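
To make the data flow concrete, here’s a minimal sketch (my own illustration, not the actual SAFE code) of why that holds: the key stays on the client, so the vault only ever stores opaque bytes. The XOR cipher is a deliberately insecure stand-in for real encryption, just to show the shape of the flow.

```rust
// Toy sketch of the "clients never send vaults anything unencrypted" idea.
// The XOR cipher is a stand-in for real encryption; the point is only
// that the Vault type has no way to recover the plaintext.

fn encrypt(plaintext: &[u8], key: &[u8]) -> Vec<u8> {
    // Stand-in cipher: XOR with a repeating key. NOT secure; shape only.
    plaintext.iter().zip(key.iter().cycle()).map(|(b, k)| b ^ k).collect()
}

struct Vault {
    stored: Vec<Vec<u8>>, // only ever holds ciphertext
}

impl Vault {
    fn store(&mut self, ciphertext: Vec<u8>) {
        // A compromised vault can observe sizes, timings, and addresses
        // (the metadata discussed above), but never the content.
        self.stored.push(ciphertext);
    }
}

fn main() {
    let key = b"client-side key that never leaves the client";
    let chunk = encrypt(b"my private data", key);
    let mut vault = Vault { stored: Vec::new() };
    vault.store(chunk);
    println!("vault holds {} opaque chunk(s)", vault.stored.len());
}
```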

I agree that would still be a bad situation, so it’s a good thing you brought it up.

Update process idea: I would think a delayed auto-update process would be preferable. Deterministically built code (vault, client library, etc.) would be pushed out, but it wouldn’t be applied until independent reviewers signed it off. Users could hand-pick their favorite devs whom they trust for such reviews, removing any centralization from the update process.
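
A minimal sketch of what such a user-side policy could look like. Everything here is hypothetical (the reviewer names, the key names, and the policy shape), and real signature verification over the release hash is stubbed out as a plain list of signer keys:

```rust
use std::collections::HashSet;

/// One user's policy for the hypothetical delayed auto-update:
/// apply an update only if it carries the official signature AND
/// sign-offs from at least `quorum` reviewers this user trusts.
struct UpdatePolicy {
    official_key: &'static str,
    trusted_reviewers: HashSet<&'static str>,
    quorum: usize,
}

struct SignedUpdate {
    release_hash: &'static str,
    signer_keys: Vec<&'static str>, // who signed off on this release hash
}

impl UpdatePolicy {
    fn should_apply(&self, update: &SignedUpdate) -> bool {
        let official = update.signer_keys.contains(&self.official_key);
        let reviews = update
            .signer_keys
            .iter()
            .filter(|k| self.trusted_reviewers.contains(*k))
            .count();
        official && reviews >= self.quorum
    }
}

fn main() {
    let policy = UpdatePolicy {
        official_key: "maidsafe-release-key", // hypothetical name
        trusted_reviewers: ["alice", "bob", "carol"].into_iter().collect(),
        quorum: 2,
    };
    let update = SignedUpdate {
        release_hash: "deadbeef",
        signer_keys: vec!["maidsafe-release-key", "alice", "bob"],
    };
    println!("apply {}? {}", update.release_hash, policy.should_apply(&update));
}
```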

Hard forks are usually done in a way that the new version implements both the old and the new protocol, and there’s a deadline at which point the software switches to the new protocol if and only if a significant majority of the nodes run the new version; otherwise the upgrade is aborted.
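
As a sketch of that deadline rule (the 75% threshold and the idea of counting observed peer versions are illustrative assumptions, not anything from the actual design):

```rust
/// A node runs both protocols; at the deadline it switches to the new
/// protocol if and only if a supermajority of the peers it can observe
/// run the new version, otherwise the upgrade is aborted.
#[derive(Debug, PartialEq)]
enum Protocol { Old, New }

fn decide_at_deadline(peer_versions: &[u32], new_version: u32, threshold: f64) -> Protocol {
    let upgraded = peer_versions.iter().filter(|&&v| v >= new_version).count();
    if (upgraded as f64) / (peer_versions.len() as f64) >= threshold {
        Protocol::New
    } else {
        Protocol::Old // fork aborted; keep speaking the old protocol
    }
}

fn main() {
    let observed = [2, 2, 2, 1, 2, 2, 1, 2]; // versions seen from peers
    // e.g. require 75% of observed peers to be on version 2
    println!("{:?}", decide_at_deadline(&observed, 2, 0.75));
}
```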

In the end, the users will decide which fork they want to use. In the beginning, it will most likely be the official version, made by Maidsafe, which I believe will contain code to help keep it up to date. That code will check whether the update is coming from (compiled and signed by) Maidsafe itself, and thus whether it’s trustworthy. I think that answers your question about who can propose updates.

If others fork and compile their own versions, the update keys will have to be changed to their own, of course. (We’re talking about really complex software, so I believe everyone will stick with the Maidsafe core code and maybe layer a different interface or small changes on top of it.)

Which fork will win, whether several of them will coexist on the same network, or whether they will fork into multiple incompatible networks (I bet we’ll have 2-3) - these questions will be answered in the future.

However, these aren’t the kind of things that could be decided by force, and that is my main point.


#45

With blockchain forks, the historical data is public and common to both forks. When forking a blockchain it’s easy to simply copy that historical data to the new miners, but with a fork of the SAFE network the existing/historical data A) wouldn’t be public and B) would need to be moved to a whole new set of nodes participating on your forked network. A means you might not even be able to copy that data to the new network, and B is an economic problem. Blockchains are gigabytes, maybe terabytes - so forking has been relatively easy. But the SAFE network will hold more than just transactional data, so we’re talking about orders of magnitude more data, and therefore it’s much harder to simply create a forked network.

Therefore a new network from forked code would likely have to start from ground zero in terms of historical data. That’s not a deal breaker at all; it’s just very different from the simplicity of forking a blockchain.


And while I dispute a lot of your other points, we seem to be going in circles and my points are not being addressed head-on, so I’ll leave it at this for now.


#46

That’s not how blockchain forks are done. The old data is just there, the same for both the old and the new fork, no copying is involved.

With the Safe Network, any hard fork that would require copying the data would simply not go through; as you wrote, that’s economically infeasible so the fork would be rejected by the users.


#47

If that’s the case for a forked SAFE network, then the vault nodes on the old network would need to participate/communicate with the nodes on the new forked network, potentially speaking different/incompatible protocols. I don’t see how that would work.


#48

Well, that’s exactly how it has to work during the transition phase of a hard fork anyway, except the other way around: the nodes running the new code have to speak the old language until the network as a whole switches over (or aborts the fork).

Basically, the same code that speaks the new protocol has to speak the old protocol as well until the fork is accepted or rejected, based on whether it reached prominence (which you could consider a form of voting) by the deadline. At least that’s how bitcoin hard forks are done and, to be honest, I can’t see how else it could work in a distributed system.


#49

Running a new version of a miner doesn’t necessarily mean you’re running the new backwards incompatible protocol yet. As you say, for blockchain, miners could signal their support for a new backwards-incompatible protocol by running a new version of a miner, which would write backwards-compatible signal data to blocks. If/when there is a majority, all of those miners could be triggered to start using the new backwards-incompatible protocol. The point is, once the backwards incompatible changes are accepted by the majority, the old clients can no longer participate in the network unless they upgrade.
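
A rough sketch of that signalling mechanism; the window size and threshold here are invented for illustration (for comparison, bitcoin’s BIP9 version-bits scheme uses 2016-block windows and a 95% threshold):

```rust
/// Upgraded miners set a signal bit in otherwise backwards-compatible
/// blocks; the incompatible rules activate once enough blocks in a
/// rolling window carry the bit.
fn activates(block_signal_bits: &[bool], window: usize, needed: usize) -> bool {
    block_signal_bits
        .windows(window)
        .any(|w| w.iter().filter(|&&signalled| signalled).count() >= needed)
}

fn main() {
    // last 10 blocks; true = the miner signalled readiness for the upgrade
    let bits = [true, true, false, true, true, true, true, true, false, true];
    // require 7 of any 8 consecutive blocks
    println!("activated: {}", activates(&bits, 8, 7));
}
```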


#50

But of course. Whoever doesn’t follow the majority is left out. No problem with that.


#51

OK, so you now need a majority to fork = governance problems. I guess we’re using the term “fork” loosely here… I assumed we were talking about split chains running separate protocols in parallel, not just soft forks that ultimately end up as merged chains.

As @dirvine suggests above, he still hopes to achieve a technical solution to this, which implies to me that the existing code of the network would make the decision about whether or not to accept an upgrade. Which means, crucially, that the end users wouldn’t make the decision.


#52

But that is exactly what can’t be enforced and, having followed this project for a while, I’m almost sure David doesn’t mean it the way you interpret it, because that would introduce a type of coercion that is very alien to the core philosophy of the project. Now it seems we’re really going in circles…

By the way, voting by accepting or refusing an update is a technical solution. Adding user-selected independent reviewers to the update process on top of the official signatures is also a technical solution.

The important point is to let the users have a say in what can and can’t go.

Well, that’s direct democracy, which may or may not be a problem, but it is certainly a way to make decisions = governance in action.


#53

from here, quoting dirvine


#54

That is the framework that we’re talking within, yes.

How to make sure that stuff that shouldn’t get through that process doesn’t is the question I thought we were discussing. I believe the best way to do it is by a) adding independent reviewers to all updates and, obviously, b) voting on hard forks.


#55

Also from that thread… sorry, I can’t quote all of this properly while preserving the usernames…

[screenshot of quoted posts from the linked thread]


#56

This is certainly a failsafe, and a good point to remember. If a compromise does make it into the network then at least the existing data will stay encrypted.

However, that doesn’t guarantee that future access to that data won’t expose it - e.g., as you suggested, if the new version leaks IPs, or if the vault client has some other unforeseen flaw that exposes identity/data somehow.


#57

I’m not sure if I’m getting what David wrote right, but I’m very skeptical about automated checks. The halting problem, which is about as basic and low-level as it gets, shows that it’s impossible to reason about arbitrary programs in a general sense, and then we haven’t even touched on the problem of defining something as vague as “improve.”

Specific checks, such as for stuff being sent that shouldn’t be sent, are also hard. Presumably, the compromised code wouldn’t send anything until it wanted to, and that may be at a future date or gated by more complex criteria, so the checks wouldn’t pick it up until it’s too late.


#58

This, I agree with you on!!! Couldn’t have said it better myself.

Which leaves us with what I said earlier:

For consensus in blockchain splits, miners can choose which chain to mine. Having a pool of miners divide themselves over update consensus is only a moderate problem for a blockchain. Take, for example, what happened when BCH forked: block times were enormous for a long time, and some dedicated miners slogged through it until the difficulty rebalanced. But for a blockchain the historical data is preserved.

However, for SAFE, this consensus issue seems much more problematic. If all of a sudden half of the nodes split off to another network, I don’t understand how you’d guarantee avoiding data loss, which would be a devastating, potentially terminal, issue for the network.


#59

This is incorrect. Many forks will be for UI purposes and this has no effect on the data.

So it depends on what the fork is for, and what the expected life cycle of the fork is. A UI fork may only live as long as the official version is current (barring minor UI changes, say).

And in theory you could have a hard fork of the code where it never reverts back to the original, but if it keeps up with the updates to the protocols and rules, then there is no need to fork the data at all and it can continue to run on the same network.

There are then soft and hard forks, and now two kinds of hard forks: one where the hard fork can still use the network (& data), and another where it requires a new network. Maybe the first kind could be called a “firm fork”, where the code base is hard forked but the protocols remain the same.

Yes, in theory you could. Each node is a black box accepting input and producing output. Every other node can only see the output the “black box” produces. Now, the safest way to get it “right” is to use the current official version of the code, but it is in theory possible to rewrite the code in, say, C or C++ (or not), add statistics-gathering code, and whatever else you desire. As long as it takes in the input and produces the required output, the other nodes CANNOT know what code is actually running in that “black box”.

But this still presupposes that the network can run two versions, since the code being tested has to talk to the previous version the network is running.

BUT of course, I (or anyone) could take the code and remove the automatic upgrade feature, but still run the rest of the code. And I can 99.99999% guarantee you that for every version released there will be a fork of the code that incorporates that minor change.


As to versions, I would expect that some version changes would make versions older than a certain version/update invalid. So staged updates are best, where the old and new versions of the protocol can co-exist and interoperate. This is often done by the code inspecting a version field and processing the message accordingly; TCP/IP v4 & v6 co-exist, and so do a lot of other protocols.
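
A minimal sketch of that version-field dispatch (the wire format here is invented for illustration):

```rust
// The node looks at a leading version byte and handles both protocol
// generations side by side, the way IPv4 and IPv6 stacks co-exist.
fn handle_message(msg: &[u8]) -> Result<String, String> {
    match msg.split_first() {
        Some((&1, payload)) => Ok(format!("v1 handler: {} byte payload", payload.len())),
        Some((&2, payload)) => Ok(format!("v2 handler: {} byte payload", payload.len())),
        Some((&v, _)) => Err(format!("unsupported version {v}")),
        None => Err("empty message".into()),
    }
}

fn main() {
    let msgs: [&[u8]; 3] = [&[1, 0xAA, 0xBB], &[2, 0xCC], &[9]];
    for msg in msgs {
        println!("{:?}", handle_message(msg));
    }
}
```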

People then have the opportunity to see the new versions in action and decide if they want to upgrade. I would expect, though, that most would be on automatic updates, like Windows, and these users can expect to have some issues from time to time, usually resolved by some sort of rollback. Maybe when starting the node UI it could offer to roll back to the previous version, making it super easy for the user.