SAFE network upgrades

Following the three laws of SAFE:

  1. keep the network safe (no backdoors - no insecure updates - protect the people!)
  2. protect our data
  3. adjust all possible parameters to improve network performance
3 Likes

So let's say we achieve this and some developers develop the next big feature of SAFE. It goes through the SAFE network like a dose of salts and everyone is happy with the new feature.

But what if some other “evil” developer develops a new feature that looks terrific but is in fact a privacy and/or security breaker, and introduces it the same way as the “good” upgrade? The code shows no backdoors etc., but hidden inside is a timebomb, and also an upgrade breaker.

How does the system tell the difference? Does the upgrade have to be “signed off” by noted developers who are trusted???

If external upgrades are introduced to the network automatically, as with the good developers' new feature, then bad upgrades can be introduced the same way, unless there is some authorising method??

Your thoughts?

1 Like

Maybe add a feature from the old systems where the user is allowed to decide whether to upgrade. When someone signs onto SAFE, the client could alert them to a new upgrade and they can decide if they wish to upgrade at that time or not.

If they never upgrade then the message can include the number of times the network has penalised their node/client/vault for remaining on the old version.

This way people can follow the link, which is always there for the current version, and it can then have links to discussions on the upgrade and so on. So if users wish to see what other people's experience with the upgrade was, they can, and then decide from there.

Obviously this can introduce other issues, but they may be less problematic than an automatic upgrade of the whole network with an evil upgrade that prevents any more upgrades except by the evil developer.
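Purely as an illustration, here is a minimal sketch of what such an opt-in prompt could look like at client start-up. The version strings, penalty counter and discussion link are invented placeholders, not part of any existing SAFE client.

```rust
// Hypothetical sketch of an opt-in upgrade prompt at client start-up.
// `UpgradeNotice` and all its fields are assumptions for this example only.
use std::io::{self, Write};

struct UpgradeNotice {
    current: String,
    latest: String,
    penalties_on_old_version: u32, // times the network penalised this vault for lagging
    discussion_link: String,
}

fn prompt_for_upgrade(notice: &UpgradeNotice) -> bool {
    if notice.current == notice.latest {
        return false; // already up to date, nothing to ask
    }
    println!(
        "Version {} is available (you run {}).",
        notice.latest, notice.current
    );
    println!(
        "Your vault was penalised {} time(s) for staying on the old version.",
        notice.penalties_on_old_version
    );
    println!("Discussion: {}", notice.discussion_link);
    print!("Upgrade now? [y/N] ");
    io::stdout().flush().ok();

    let mut answer = String::new();
    io::stdin().read_line(&mut answer).ok();
    matches!(answer.trim(), "y" | "Y")
}

fn main() {
    let notice = UpgradeNotice {
        current: "0.4.1".into(),
        latest: "0.5.0".into(),
        penalties_on_old_version: 3,
        discussion_link: "safe://upgrade-discussion/0.5.0".into(), // placeholder link
    };
    if prompt_for_upgrade(&notice) {
        println!("Downloading {} ...", notice.latest); // real download elided
    } else {
        println!("Staying on {} for now.", notice.current);
    }
}
```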

Actually, in relation to upgrades being automatic, Windows had an example of this, even if it's nothing like what we wish to achieve. But we can still learn from its shortcomings and good points (if any).

Don't “only authorized” updates imply centralization? It doesn't do much good to have open source code if nobody is allowed to fork it, improve it, etc…

Your security concerns are certainly valid. The nice part about the security of SAFE is that it is pretty darn built in. All of the necessary security components for the network need to be inventoried and tested for any and every client. I wonder if there is a possibility of data leaking between the different “personas” within the client… That would be tricky to test for, but it would be manageable in that you could isolate each persona so they are dealing with different chunks of data and have nothing to correlate…

The network could self-monitor and pay for updates. That way each farmer could upgrade when they wanted to, but they would have an incentive to do so… You could even incentivize diversity… So if one strain of client is misbehaving there will be plenty of unaffected ones to pick up the slack…

1 Like

The reason I asked it as an open question is that I have a wide spectrum of possible approaches, but no convincing arguments for either direction or a middle ground. If I try to contrast the two extremes:

  1. Vaults are not persistent, so perhaps a client, as it starts, should query the network for the latest version of the software, download it, briefly test/verify it, and run it. Questions arise here, such as: should there be a single public name that publishes the “official binary”, or should the owner of the vault be “allowed” to choose their own source? The latter almost can't be avoided, so it would seem the default option. This path leads towards diverse, frequent (and possibly smooth) software mutations (a rough sketch of this flow follows after the list).

  2. We build the rules such that there is a code update on a fixed interval (every 6 months); binaries know this, and there is a global vote where the votes cast at the deadline are counted (with a proof of stake); there is a defined proposal period preceding the voting deadline, a defined transition period, and afterwards rejection of the oldest previously supported version. This path is more promising of a stable, more rigid transformation of the network. Questions arise here too: what if an urgent security update needs to be applied; is there a way to move faster?
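As a rough illustration of the first extreme, the sketch below queries a chosen source for a version manifest, downloads the binary, and checks it against the published digest before agreeing to run it. The `fetch_latest_manifest` and `fetch_binary` helpers, the manifest fields and the non-cryptographic hash are placeholders invented for the example, not an existing SAFE API.

```rust
// Sketch of option 1: on start-up, ask the network (via the owner's chosen
// source) for the latest published binary, verify it against the publisher's
// digest, and only then run it. All names and data here are illustrative.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Manifest {
    version: String,
    binary_digest: u64, // stand-in for a real cryptographic hash
    publisher: String,  // stand-in for a signing key / public name
}

fn digest(bytes: &[u8]) -> u64 {
    // DefaultHasher is NOT cryptographic; a real client would use SHA-256 or similar.
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn fetch_latest_manifest(source: &str) -> Manifest {
    // Placeholder: in reality this would be a network GET on the chosen source.
    Manifest {
        version: "0.5.0".into(),
        binary_digest: digest(b"vault-binary-0.5.0"),
        publisher: source.to_string(),
    }
}

fn fetch_binary(_version: &str) -> Vec<u8> {
    // Placeholder for downloading the binary chunks from the network.
    b"vault-binary-0.5.0".to_vec()
}

fn main() {
    // The vault owner chooses which public name to trust as the update source.
    let source = "safe://official-vault-releases";
    let manifest = fetch_latest_manifest(source);
    let binary = fetch_binary(&manifest.version);

    if digest(&binary) == manifest.binary_digest {
        println!(
            "Verified {} from {}; would launch it now.",
            manifest.version, manifest.publisher
        );
    } else {
        println!("Digest mismatch; refusing to run the downloaded binary.");
    }
}
```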

Both are formalised as extremes, and it is false to think that this problem polarises into two such camps; but in trying to explore the question, the contrast can be of help.

2 Likes

I will try; it's late and I had not much sleep last night, so I feel a story coming on :wink: Francis, get ready to fix my grammar again :smiley:

This is the problem to fix.

This means we never fixed it (yet :slight_smile: )

A network, or any autonomous device (like a human; yes, it's arguable whether a single human is autonomous, but …), should be able to discover things that improve it and discard that which harms it.

This is a thought experiment coming up (so beware).

Everything has a purpose and evolves to meet that purpose: hunting, learning, etc. If the network has the purpose of protecting data and looking after more data in meaningful ways (eventually compute), then that's a good start. So a basic ability to not allow corrupt data is a start (we do that). Then adding in ways to mitigate human actions like switching off and on (we do that too). Then a mechanism to reward endpoints that provide resources to help the core purpose (we do that). Then messaging to allow greater use of utility comes along.

So it begins, a quest to program in a reason to survive, not to count numbers or churn through data analysis on command; instead the actual network itself gets into distress when data is lost (like our sacrificial data) and calls out to human operators to farm more (symbiosis). This is not us doing this, but the network itself, without us being involved. No administrators or tweakers of knobs and such, no nuclear shelter bunkers with AC units, but a network that's sneaked onto our computers using resources we were not using.

So people then say, oh, that is every system, but it really isn't: this network will act on its own to fulfil its purpose, to gather and protect data, and it does that not for us, but because its core desire is to gather and protect data. Its code is that purpose.

So with that purpose, not calculated via timers or magic numbers, the system has a very tightly coupled connection of neurons (the groups) connected via millions of synapses (the connections to other groups). This is why it's amazing to us in-house to get so close to the fundamental objects and traits in the code, with no waste and little or no runtime cost.

When this links together and creates something like SAFE, then it's not like a normal computer program or server; it's spread far and wide and can act out its purpose with great clarity. It can do this with people looking at the code and seeing there is no “every 10 minutes do this, every 4 years do that”; instead everything is calculated using these fundamental types that have a unique and sole purpose in the code. On their own they are useless, and even several lumped into a single computer are barely able to function. However, when they start connecting together into a group, they start to be able to make decisions; as the group grows and splits into more groups (like cells dividing), more functionality appears. As this continues, stability becomes apparent and continues to strengthen as the network starts to span thousands of nodes. When it gets to millions of nodes, it appears very powerful indeed.

So the beginning of a network / thing with a purpose is born, and it can satisfy a base purpose: protecting data and communications. In the end what we have is remarkably simple when looked at as source code on a single computer; it's the connecting together that gives the capability.

When we move into computation this picture may change slightly, but this is how I perceive what we are doing. Yes, it's very hard and of course has to be correct, secure and scalable. It is something a bit different though, and the difference will start to become more apparent as researchers get more involved and more people write papers (several PhD students we know of already have their theses on this).

So this core purpose is measurable, and if it is measurable then we can teach the network how to upgrade by running nodes in a sanitised way to participate and confirm they equal or improve on the current network. This means all messages are for this purpose and no more, all actions are confirmed and checked by a close group (they are anyway), and the sacrificial nodes come online a bit at a time. It may require computation, and perhaps only code in upper layers being able to change, or similar, but I believe it can be possible. As I say though, the thinking in this new environment is new and radical to the extent folk call it mad; I also note that I have been in front of a whiteboard with an awful lot of experts, professors and engineers and have always been able to describe the process of SAFE when folk sit and listen (and almost always they have, except for a single bitcoin “expert”). That is compelling and encouraging, I feel. For this reason I believe the challenge of self-diagnosis of upgrades should be possible.
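To make the “measure and confirm” idea a little more concrete, here is a hedged sketch of a canary-style check: the candidate version runs on a few sanitised/sacrificial nodes, a simple health score (data kept, messages answered) is compared against the current version's baseline, and the candidate is only accepted if it does at least as well. The metric, weights and thresholds are invented for illustration; they are not how the network actually measures anything.

```rust
// Sketch: compare a candidate upgrade against the running version using a
// made-up "core purpose" health score gathered from sanitised nodes.
struct HealthSample {
    chunks_held: u64,
    chunks_lost: u64,
    messages_ok: u64,
    messages_failed: u64,
}

impl HealthSample {
    // A single score for the core purpose: keep data and keep talking to the group.
    fn score(&self) -> f64 {
        let data = self.chunks_held as f64 / (self.chunks_held + self.chunks_lost).max(1) as f64;
        let msgs = self.messages_ok as f64 / (self.messages_ok + self.messages_failed).max(1) as f64;
        0.7 * data + 0.3 * msgs // weights are arbitrary assumptions
    }
}

fn average_score(samples: &[HealthSample]) -> f64 {
    samples.iter().map(|s| s.score()).sum::<f64>() / samples.len().max(1) as f64
}

/// Promote the candidate only if its sanitised nodes do no worse than baseline.
fn candidate_acceptable(baseline: &[HealthSample], candidate: &[HealthSample]) -> bool {
    candidate.len() >= 3 && average_score(candidate) >= average_score(baseline)
}

fn main() {
    let baseline = vec![
        HealthSample { chunks_held: 990, chunks_lost: 10, messages_ok: 980, messages_failed: 20 },
        HealthSample { chunks_held: 995, chunks_lost: 5, messages_ok: 985, messages_failed: 15 },
        HealthSample { chunks_held: 985, chunks_lost: 15, messages_ok: 975, messages_failed: 25 },
    ];
    let candidate = vec![
        HealthSample { chunks_held: 998, chunks_lost: 2, messages_ok: 990, messages_failed: 10 },
        HealthSample { chunks_held: 996, chunks_lost: 4, messages_ok: 992, messages_failed: 8 },
        HealthSample { chunks_held: 997, chunks_lost: 3, messages_ok: 988, messages_failed: 12 },
    ];
    println!("Candidate accepted: {}", candidate_acceptable(&baseline, &candidate));
}
```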

10 Likes

I just realised you could then imagine running your vault binary off the network. If working for MaidSafe has not already exposed me to every imaginable interpretation of “bootstrapping”, this is taking it to a whole new level :smiley:

7 Likes

YES, and that's the reason I asked for his thoughts on the matter. At some point we have to have a decision on the validity of any upgrade, and also on whether we should allow each user to decide to accept an upgrade. One of the huge complaints about WIN10 is the automatic upgrading feature, and for very good reason. People wish to make informed choices about upgrading, even if that choice is to do it automatically.

Except it cannot write its own upgrades, so any upgrade is external, and humans can be crafty and fool any internal checks by writing in misleading code which does more than it claims. We need humans to check, and that's why it's open source. But at some stage the code has to be introduced to the system.

Oh, anyone can fork it, improve it, etc. But how does it get checked before it is introduced live to the system??? This is the crux of the problem, not the forking/improving by anyone.

Vaults become microkernels :wink: Fixed code with definable rules loaded at bootup, running in sealed computation units, verifiable with zk-SNARK type logic. It's been discussed a few times (Brian has been chatting about it too). Not so far-fetched really :slight_smile:
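For flavour, a loose sketch of the “fixed kernel, loadable rules” idea: the vault code itself never changes, only a small declarative rule set is read at boot, and it is refused if its digest doesn't match what was published. The rule fields, the comma format and the digest scheme are made up for the example; nothing here stands in for real sealed computation or zk-SNARK verification.

```rust
// Sketch: immutable vault core, with only a declarative rule blob loaded at
// boot and checked against a published digest. Everything here is illustrative.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Debug)]
struct Rules {
    group_size: u8,
    quorum: u8,
    max_chunk_bytes: u32,
}

fn digest(blob: &str) -> u64 {
    // Stand-in for a real cryptographic hash or proof check.
    let mut h = DefaultHasher::new();
    blob.hash(&mut h);
    h.finish()
}

fn parse_rules(blob: &str) -> Option<Rules> {
    // Expect "group_size,quorum,max_chunk_bytes", e.g. "32,28,1048576".
    let mut parts = blob.split(',');
    Some(Rules {
        group_size: parts.next()?.trim().parse().ok()?,
        quorum: parts.next()?.trim().parse().ok()?,
        max_chunk_bytes: parts.next()?.trim().parse().ok()?,
    })
}

fn load_rules_at_boot(blob: &str, expected_digest: u64) -> Option<Rules> {
    if digest(blob) != expected_digest {
        return None; // refuse to boot with tampered rules
    }
    parse_rules(blob)
}

fn main() {
    let blob = "32,28,1048576";
    let published = digest(blob); // in reality published/agreed by the close group
    match load_rules_at_boot(blob, published) {
        Some(rules) => println!("Booting sealed vault with {:?}", rules),
        None => println!("Rule set rejected; vault stays offline."),
    }
}
```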

5 Likes

Actually your story before this was like an updated version of some sci-fi that foretold of networks, but they were mainframe networks (microprocessors didn't exist when these were written in the '50s). Now we are seeing it become a reality that is greater than the writers of sci-fi imagined.

But the part I quoted is the meat of the answer. That provides a lot of confidence that it can be done.

I was attracted to this project because it so wonderfully makes real the idea-sparks I had decades ago (before PCs) about where the networks we were creating would lead.

Sir, you are to be congratulated, even if you don't want to be. I hope you can pull off this autonomous upgrade mechanism, which I am sure you and the MaidSafe team can.

6 Likes

On a philosophical note, here we diverge. :slight_smile: I don't believe in this deep-rooted Aristotelian teleological understanding of purpose. On the contrary, I believe it is fraught with dangers. Luckily we aren't building Skynet just yet, but I am already acutely aware of the risks. Every set goal is susceptible to “perverse instantiation” (as Nick Bostrom would put it; I can strongly recommend his new book). The SAFE network will not be a superintelligence, but it does explicitly aim to be a collective intelligence.

Many examples already exist of collective systems with a set goal producing undesired side-effects; a key example is the world economy driven by the set goal “maximise your profit”. Of course it was (Scottish :wink: ) Adam Smith who instantiated the cultural shift that selfish profit was also beneficial for the whole. To a large extent it has been proven true; our collective wealth keeps rising exponentially, but it equally has unintended side-effects, the main one being climate change.

I have reached the conclusion that the only goal we can set is no goal. I learnt this from my previous experiences in the NGO aid sector; countless people and organisations honestly intend to do good, and go in with well-thought-out goals to help other people. While this does a lot of good, it also ultimately harms the people it is trying to help. My best experience is still when I went for one month to Kinshasa with no agenda, just to live with a host family. Setting no predefined goal is the only way to be open to what is really out there.

As a third philosophical argument, evolution itself has no set end goal. Even thinking of Dawkins' Selfish Gene, this is not a teleological goal. Evolution is a tautology; not-evolution is logically impossible. So in the end, I believe we should not set a goal; it is almost presumptuous to think anyone could. We can only guide evolution.

6 Likes

But here it is assumed that any adverse effects of a new update become apparent within a relatively short time frame. A new exploitable weakness may not be activated for a long time. Or, in the case of SafeCoin-related algorithms (farming rate and such), their economic effects may be beneficial in the short term but harmful in the long term.

I believe that in theory the network can measure it all, but the question is whether it can measure it in time, before unacceptable damage occurs. I don't think we want to go through multiple collapses of the network until a generation evolves that is finally stable.

2 Likes

@BenMS, you should read the following book, if you haven’t already read it…

Amazing in its logical thinking and conclusion.

@BenMS,

While I am VERY attracted to @dirvine's vision, I think his and your very interesting comments here probably deserve their own philosophical discussion thread.

Regards,
Phil.

3 Likes

Another Goldratt quote: “Tell me how you measure me, and I will tell you how I will behave.”

It is very easy to measure the wrong thing and get behaviors that don't end up serving the system as a whole. “Continuous improvement”, for example, is very popular, but in the end most of it is a waste; only improving the constraint in the system ends up adding any value to the entirety of the system.

I like the idea of competing strains of clients. You may have 4-6 different versions each striving to maximize performance in certain aspects of the network… Each would have its own strengths and weaknesses, and if the network veers towards one purpose or another, some clients will be more useful than others… So long as you have variety, the network can evolve towards whatever is needed…

But in order to allow mutant clients you need to build a robust immune system.

3 Likes

This is only true if the credentials required to “login” have not changed.

Also only if the safecoin “registry” for the client has not been changed to a point where it no longer recognises and/or trusts the old “coin registry”.

We cannot be sure at this time that simply downloading the new client will mean the five or more old-version files kept for the client are still valid/trusted.

All I am really saying is that any upgrading has to allow for the import of any old formats for the client data.

Or am I missing something?
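One way to picture the “import old formats” requirement is a simple migration chain: whatever on-disk version the new client finds, it upgrades step by step to the current layout rather than rejecting it. The format versions and fields below are invented for illustration, not the real client data layout.

```rust
// Sketch of backward-compatible client data: old layouts are migrated, not refused.
#[derive(Debug)]
enum ClientData {
    V1 { account_name: String },                          // older layout (illustrative)
    V2 { account_name: String, coin_registry: Vec<u64> }, // current layout (illustrative)
}

fn migrate_to_latest(data: ClientData) -> ClientData {
    match data {
        // Old records get an empty coin registry that the network can repopulate.
        ClientData::V1 { account_name } => ClientData::V2 {
            account_name,
            coin_registry: Vec::new(),
        },
        current @ ClientData::V2 { .. } => current,
    }
}

fn main() {
    let old = ClientData::V1 { account_name: "neo".into() };
    println!("{:?}", migrate_to_latest(old));
}
```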

1 Like

Mathematics is luckily one of the most stable discoveries/inventions at the disposal of humanity, so I think that regardless of upgrades to the implementation, the initial (and subsequent) set(s) of mathematical rules should always be supported. If at any point any such fundamental rules were broken (XOR space, hash name calculation, credentials), then that is a different, new thing, no longer a successor to the SAFE network.

2 Likes

thanks, but I’ll stick to one philosophical post per year. Wait till 2016 :wink:

3 Likes

Okay, here’s an alternative that seems simple yet effective, but may need further discussion.
Credit to @Seneca for giving feedback on this idea.

  • Assuming client/vault binaries are pulled from the SAFE Network.
  • Assuming GET request popularity can be measured in a reliable way.

Then SAFE has the ability to provide both vote measuring per event (via GETs) and autonomous updating for non-voters.

Non-voters - Autonomous Updating
These are people who opt out of voting for whatever reason. They set their client/vault to pull the “most popular” binary. Even those with older versions will auto-update based on the most actively used client/vault. This allows the inactive community to “roll back” to a previous version. The default threshold could be set at 51%, or 80% for conservatives.

Voters - Human Choice Update
These are people interested in effecting change who hopefully understand the new binary. They set their client/vault to notify them whenever a new binary is PUT on SAFE, and have the option to select the new binary.

This solution ONLY applies to backward-compatible binaries. Hard forks must create a new network to avoid disruptive conflicts. If effective, this provides a user-friendly option to evolve SAFE.
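A small sketch of the non-voter path, under the two assumptions above: tally recent GETs per binary version and only auto-switch when one version's share clears the vault owner's threshold (51% by default, 80% for the conservative). How GET counts would actually be gathered is left aside; the version names and numbers are made up.

```rust
// Sketch: pick the "most popular" binary for non-voting vaults, gated by a threshold.
use std::collections::HashMap;

/// Returns the version a non-voting vault should run, if any version's share
/// of GETs meets the threshold; otherwise the vault stays where it is.
fn most_popular(gets_per_version: &HashMap<&str, u64>, threshold: f64) -> Option<String> {
    let total: u64 = gets_per_version.values().sum();
    if total == 0 {
        return None;
    }
    gets_per_version
        .iter()
        .filter(|&(_, &count)| count as f64 / total as f64 >= threshold)
        .max_by_key(|&(_, &count)| count)
        .map(|(version, _)| (*version).to_string())
}

fn main() {
    let mut gets = HashMap::new();
    gets.insert("vault-0.4.1", 1_800_u64);
    gets.insert("vault-0.5.0", 7_900);
    gets.insert("vault-0.5.1-experimental", 300);

    // Default and conservative thresholds from the proposal above.
    for threshold in [0.51, 0.80] {
        match most_popular(&gets, threshold) {
            Some(v) => println!("At {:.0}%: auto-update to {}", threshold * 100.0, v),
            None => println!("At {:.0}%: no version popular enough, stay put", threshold * 100.0),
        }
    }
}
```

With the made-up numbers above, 0.5.0 clears the 51% bar but not the 80% one, so a conservative non-voter would stay on whatever they are currently running.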

6 Likes

Is this really a problem? Or better: how was this ever a problem? Bitcoin is open source, but the majority of users didn't read the code, nor would they understand it if they did. (I certainly wouldn't.) They trust other people. This IS a problem, because we delegate decisions by trusting others instead of double-checking ourselves, but it's also the way people have always dealt with complex issues: find a logical argument for who can be trusted and who cannot. It usually doesn't work out perfectly, but overall it works pretty well, since the potential for manipulation is reduced to a minimum (particularly in comparison to closed source).

The solution presented by @dyamanaka sounds good to me. I'd love to have this kind of feature for steering inflation/deflation.