SAFE network upgrades

It can’t be, but it kinda has to be – they are the ones running the network. Certainly the various pods of developers will know what to program, etc… But the clients will need to decide which version to upgrade to and when.

Eventually I am certain that the baton needs to be passed… You do not want the network to be centralized enough that a government raid in some jurisdiction could find the “downgrade switch” and flip it. Various pods and teams can create versions and the network will accept them so long as they pass tests showing they follow the rules.

I think that self-testing, plus increased rewards for clients that push back the constraints on the network, is the best way to encourage progress… Increased safecoin earnings will probably be the carrot… You need to write a decentralized algorithm that decides where to point the stick…

2 Likes

For a learning network we might need decentralized computing … and it would then become some kind of AI … an unstoppable AI … kind of dangerous (besides, it would be difficult)…

2 Likes

That’s where we live :smile:

5 Likes

Their credentials will still be valid, so they can use any new client. I don’t think we have a problem there.

3 Likes

I can confidently state that there is a strong inclination for MaidSafe to go for a logical switch, not a cryptographic external control switch (as someone / some organisation would have control over the network).

6 Likes

Only if the transaction mechanisms were changed in the update. If the update is about a new algorithm for data upload pricing, it doesn’t matter that much if, for a short while, different groups charge somewhat different rates for uploading than others.

I think there are several different types of algorithms/variables/rules in the network that need different consensus mechanisms in case of (legitimate) update conflicts. Most numerical variables are pretty easy, we have lots of options there (mean, median, etc), and even if different groups use different values it doesn’t necessarily stop the network from functioning correctly. On the other hand, changing rules about for example deletion of content would be far more problematic.
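For the numerical ones, something as simple as taking the median of the values the groups report would tolerate a few outliers or stragglers on an old version. A minimal sketch of just that step (the `GroupQuote` type and the idea of a per-group store cost are placeholders for illustration, not actual SAFE structures):

```rust
/// Hypothetical quote for a numeric network parameter (e.g. a store cost)
/// reported by one group. Not a real SAFE type, purely illustrative.
struct GroupQuote {
    group_id: u64,
    store_cost: u64,
}

/// Agree on a single value by taking the median of the quotes; a minority
/// of groups running a different algorithm (or lying) barely moves the result.
fn agreed_store_cost(mut quotes: Vec<GroupQuote>) -> Option<u64> {
    if quotes.is_empty() {
        return None;
    }
    quotes.sort_by_key(|q| q.store_cost);
    Some(quotes[quotes.len() / 2].store_cost)
}

fn main() {
    let quotes = vec![
        GroupQuote { group_id: 1, store_cost: 100 },
        GroupQuote { group_id: 2, store_cost: 105 },
        GroupQuote { group_id: 3, store_cost: 500 }, // outlier / old version
    ];
    println!("agreed cost: {:?}", agreed_store_cost(quotes)); // Some(105)
}
```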

It may be worthwhile to first attempt to identify which algorithms/variables/rules are most likely to be challenged in the future. I’m sure that many of the Bitcoin experts saw the block size issue coming from miles away.

In SAFE, I think that farming rewards and upload costs are likely to be changed along the road. These are dynamic rules, and we can’t be 100% sure in advance that their effects will be desirable. So if we can decide how conflicts about those two would be resolved, that’s at least something.

What other rules do we think are likely to be challenged in the future?

2 Likes

I am a noob, and I always thought that the network layer of the stack could not be updated because it is the MaidSafe protocol, like the lower layers IPv4 and IPv6. To update the MaidSafe network layer, a whole new protocol would have to be added in, like a patch. The upper layers could be updated because they are not critical to vaults or the foundations of the network, so it would be the user’s choice to update to a newer front-end version.

Anyway, as a user I was hoping that updating MaidSafe would be like updating my OS, where my personal data does not get erased and only the OS itself gets wiped and formatted ready for the new version. I would like all the hubs to first agree that the update is created for the benefit of the user, and then agree to publish and push out the update to the client app section.

I agree and am not surprised that that is the inclination.

Of all of the intricacies and obstacles to be overcome putting the network together, this is the one that has hung in the back of my mind as probably the most problematic. I’m really glad to see it being aired.

3 Likes

Funny though… I used to work for a mobile phone operator (I won’t mention the name). Whenever Apple released an update for iOS, it used to slow down the whole network because everyone had their mobile on automatic update and they all downloaded at the same time… :smile:

I believe something like that could not happen on the SAFE network due to its distributed nature.

4 Likes

Yay for opportunistic caching!

4 Likes

Okay, so let’s follow that fork and see where it goes:

–The choice to accept the update or not would have to be based upon some logic built into the vault/client software, and have nothing to do with the User or Farmer having any direct input, correct? Users and Farmers are, for the most part, not aware of the inner workings and are unable to make a reasonable choice.

–Initially, all nodes will have the same software. Update software would tend to treat all nodes the same, unless there were some machine or OS specific difference. So would polling nodes make a difference?

–Assuming that an update adds a new function, it would need to be taken into account that nodes displaying new behavior might be downgraded by existing nodes, thus making it hard to initiate any new functionality. I’m sure this can be worked around, but I felt it needed mentioning.

–In order not to have human input directly involved in the decision to accept or reject an update, however that is handled provisionally in the meantime, the logical parameters for that decision would have to be programmed into the network from the start. Is it possible to have the logic in place to determine whether unanticipated changes are good or not?

I’m not sure where to go with this, but these seem to be pertinent data that need to be factored in.

3 Likes

I don’t think it is possible to have the network autonomously decide whether to update or not. Sounds like sci-fi.

So the obvious answer to how to steer MaidSafe is to implement stake voting (one safecoin, one vote). This would in effect make safecoin owners shareholders in the first distributed service provider.
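If stake voting were ever tried, the counting itself is the easy bit; the hard parts are everything around it (sybil resistance, privacy of holdings, turnout). A rough sketch of only the tally step, with made-up types:

```rust
use std::collections::HashMap;

/// Illustrative only: a vote for an upgrade proposal, weighted by the
/// voter's safecoin balance at some agreed snapshot.
struct StakeVote {
    proposal: String, // e.g. a hash or label of the proposed upgrade
    balance: u64,     // voter's holdings at the snapshot
}

/// Sum the stake behind each proposal and return the winner, if any.
fn tally(votes: &[StakeVote]) -> Option<(String, u64)> {
    let mut totals: HashMap<&str, u64> = HashMap::new();
    for v in votes {
        *totals.entry(v.proposal.as_str()).or_insert(0) += v.balance;
    }
    totals
        .into_iter()
        .max_by_key(|&(_, stake)| stake)
        .map(|(p, s)| (p.to_string(), s))
}

fn main() {
    let votes = vec![
        StakeVote { proposal: "v0.9".into(), balance: 400 },
        StakeVote { proposal: "v1.0".into(), balance: 250 },
        StakeVote { proposal: "v1.0".into(), balance: 350 },
    ];
    println!("winner: {:?}", tally(&votes)); // ("v1.0", 600)
}
```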

To me this is an idea worth exploring (if it has not already been deemed against the ideological foundation of project SAFE in previous discussion).

I think it can be done, actually. A truly autonomous network implies some ability to self-measure many things; an upgrade can be one of those things. Not simple, but again, it’s a truly autonomous network and we have never had anything like this before, so all the rules are new and will feel weird – ask any engineer who works in the codebase :slight_smile:

We are focused on messaging next, then full safecoin implementation; after that it will be something like archive nodes, computation, upgrades and a few more improvements to routing (it’s very close now to fully type-safe and defined, which is a big thing to get a library to, I think).

Anyway I believe we can achieve self tested upgrades with a clean upgrade cycle that is communicated by the nodes themselves to each other.

Of course we would still have to answer who can supply the code, but if done properly that may not matter. It will require that the network can tell it’s improving, so it will have to include checks that no data is sent where it does not need to go (no back-door or call-home code) and more. Interesting and alarming, but I do believe possible.
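To make “communicated by the nodes themselves” a little more concrete, one way to picture it is a small piece of trial data gossiped between peers; everything below is invented for illustration, nothing like it exists in the codebase yet:

```rust
/// Purely illustrative announcement a node might gossip to its close group
/// after trialling a candidate binary. Field names are invented.
struct UpgradeAnnouncement {
    version: (u32, u32, u32), // semantic version of the candidate
    binary_hash: [u8; 32],    // hash of the code being proposed
    trial: TrialResults,      // what the announcing node measured
}

/// Metrics a node could gather while running the candidate in a
/// sanitised trial alongside its current version.
struct TrialResults {
    chunks_served: u64,
    chunks_lost: u64,
    unexpected_outbound_msgs: u64, // anything resembling "call home" traffic
}

impl UpgradeAnnouncement {
    /// A node only relays the announcement if the trial looked at least as
    /// good as its own current behaviour and showed no leaking traffic.
    fn worth_relaying(&self, current_loss_rate: f64) -> bool {
        let loss_rate = if self.trial.chunks_served == 0 {
            1.0
        } else {
            self.trial.chunks_lost as f64 / self.trial.chunks_served as f64
        };
        self.trial.unexpected_outbound_msgs == 0 && loss_rate <= current_loss_rate
    }
}
```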

13 Likes

Yes it is a bit Star Trek…

Don’t worry - Scotty’s on it :smiley:
Involves something to do with Dilithium Crystals I think …

7 Likes

following the three laws of SAFE

  1. keep the network safe (no backdoors - no insecure updates - protect the people!)
  2. protect our data
  3. adjust all possible parameters to improve network performance

3 Likes

So let’s say we achieve this and some developers develop the next big feature of SAFE. It goes through the SAFE network like a dose of salts and everyone is happy with the new feature.

But what if some other “evil” developer develops a new feature that looks terrific but is in fact a privacy and/or security breaker, and introduces it the same way as the “good” upgrade? The code shows no backdoors etc., but hidden inside is a time bomb, and also an upgrade breaker.

How does the system tell the difference? Does the upgrade have to be “signed off” by noted developers who are trusted???

If external upgrades are introduced to the network automatically, as when the good developers develop a new feature, then bad upgrades can also be introduced this way, unless there is some authorising method?
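If some authorising method is wanted without handing control to a single gatekeeper, one familiar pattern is requiring a threshold of independent, well-known signing keys before a node will even trial an upgrade. A sketch of only that check (the key and signature types, and the verification itself, are hand-waved placeholders, not any particular scheme):

```rust
type PublicKey = [u8; 32];
type Signature = [u8; 64];

/// Illustrative upgrade package: the hash of the binary plus signatures
/// over that hash from several maintainers.
struct SignedUpgrade {
    binary_hash: [u8; 32],
    signatures: Vec<(PublicKey, Signature)>,
}

/// An upgrade is only considered if at least `threshold` of the trusted,
/// independent keys have signed it. The actual signature scheme is
/// abstracted behind the `verify` closure.
fn enough_signers(
    upgrade: &SignedUpgrade,
    trusted_keys: &[PublicKey],
    threshold: usize,
    verify: impl Fn(&PublicKey, &[u8; 32], &Signature) -> bool,
) -> bool {
    let valid = upgrade
        .signatures
        .iter()
        .filter(|(key, sig)| {
            trusted_keys.contains(key) && verify(key, &upgrade.binary_hash, sig)
        })
        .count();
    valid >= threshold
}
```

Whether any such sign-off is acceptable at all is of course part of the question.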

Your thoughts?

1 Like

Maybe add a feature of the old systems where one is allowed to decide whether to upgrade. When one signs onto SAFE, maybe the client can alert the user to a new upgrade and they can decide if they wish to upgrade at that time or not.

If they never upgrade then the message can include the number of times the network has penalised their node/client/vault for remaining on the old version.

This way people can follow the link that is always there for the current version, and it can then have links to discussions on the upgrade, etc. So if users wish to see what other people’s experience with the upgrade was, they can, and then decide from there.
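A prompt like that could be as simple as comparing the running version against the latest one the network advertises and surfacing the penalty count; the types below are entirely made up, just to show the shape of the idea:

```rust
/// Illustrative client-side upgrade notice; none of these types exist in
/// the real client.
struct UpgradeNotice {
    running: (u32, u32, u32),
    latest: (u32, u32, u32),
    penalties_since_release: u32, // times the network docked this vault
}

impl UpgradeNotice {
    fn message(&self) -> Option<String> {
        if self.running >= self.latest {
            return None; // already up to date, nothing to show
        }
        Some(format!(
            "Version {:?} is available (you run {:?}). The network has \
             penalised this vault {} time(s) for staying on the old version.",
            self.latest, self.running, self.penalties_since_release
        ))
    }
}
```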

Obviously this can introduce other issues, but they may be less problematic than an automatic upgrade of the whole network with an evil upgrade that prevents any more upgrades except by the evil developer.

Actually, in relation to upgrades being automatic, Windows is an example of this, even if it’s nothing like what we wish to achieve. But it is still something whose shortcomings and good points (if any) we can learn from.

Doesn’t “only authorized updates” imply centralization? It doesn’t do much good to have open source code if nobody is allowed to fork it, improve it, etc…

Your security concerns are certainly valid – the nice part about the security of SAFE is that it is pretty darn built in. All of the necessary security components for the network need to be inventoried and tested for any and every client. I wonder if there is a possibility of data leaking between the different “personas” within the client… That would be tricky to test for - but it would be manageable in that you could isolate each persona so they are dealing with different chunks of data and have nothing to correlate…

The network could self-monitor and pay for updates. That way each farmer could upgrade when they wanted to, but they would have an incentive to do so… You could even incentivize diversity… So if one strain of client is misbehaving there will be plenty of unaffected ones to pick up the slack…
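A diversity incentive could be as blunt as a reward multiplier that shrinks as more of the network runs the same implementation; the names and the exact curve below are made up, only meant to show the idea:

```rust
use std::collections::HashMap;

/// Illustrative diversity bonus: the smaller the share of the network running
/// your client implementation, the larger the farming reward multiplier.
fn diversity_multiplier(my_impl: &str, impl_counts: &HashMap<String, u64>) -> f64 {
    let total: u64 = impl_counts.values().sum();
    let mine = *impl_counts.get(my_impl).unwrap_or(&0);
    if total == 0 || mine == 0 {
        return 1.0;
    }
    let share = mine as f64 / total as f64;
    // The bonus grows as the implementation's share shrinks, capped so the
    // incentive can't be gamed too hard.
    (1.0 / share).min(2.0).max(1.0)
}
```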

1 Like

The reason I asked it as an open question is because I have a wide spectrum of possible ways in mind, but no convincing arguments for either direction or a middle ground. If I try to contrast the two extremes:

  1. Vaults are not persistent, so perhaps a client, as it starts, should query the network first for the latest version of the software, download it, briefly test/verify it, and run it. Here questions arise, such as: should there be a single public name that publishes the “official binary”, or should the owner of the vault be “allowed” to choose a source of their choice? The latter almost can’t be avoided, so it would seem the default option. This path leads towards diverse, frequent (and possibly smooth) software mutations.

  2. We build the rules such that on a fixed interval (every 6 months) there is a code update; binaries know this and there is a global vote casting where, on the deadline, the votes at that moment are counted (with a proof of stake); there is a defined proposal period preceding the voting deadline, a defined transition period, and rejection of the oldest previously-supported version afterwards. This path is more promising for a stable, more rigid transformation of the network. Questions arise here too: what if an urgent security update needs to be applied; is there a way to move faster?

Both are formalised as extremes, and it would be false to think that this problem polarises into two such camps; but in trying to explore the question, it can be of help.
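To make the second extreme slightly more concrete, the cycle could be expressed as a handful of constants the binaries all know; everything here is a placeholder except the rough six-month figure used above:

```rust
/// Illustrative schedule for the "fixed interval" extreme: each cycle has a
/// proposal window, a voting period, and a transition period. Durations are
/// placeholders; only the rough six-month cycle comes from the text above.
const CYCLE_DAYS: u64 = 183;
const PROPOSAL_WINDOW_DAYS: u64 = 60;
const TRANSITION_DAYS: u64 = 30;

#[derive(Debug)]
enum Phase {
    Proposals,  // new candidate versions may be put forward
    Voting,     // stake-weighted votes are being collected
    Transition, // winner rolls out, the oldest supported version is rejected
}

/// Which phase of the current cycle a given day (counted from some agreed
/// epoch) falls into.
fn phase(days_since_epoch: u64) -> Phase {
    let day_in_cycle = days_since_epoch % CYCLE_DAYS;
    if day_in_cycle < PROPOSAL_WINDOW_DAYS {
        Phase::Proposals
    } else if day_in_cycle < CYCLE_DAYS - TRANSITION_DAYS {
        Phase::Voting
    } else {
        Phase::Transition
    }
}

fn main() {
    println!("{:?}", phase(200)); // day 17 of the second cycle: Proposals
}
```

An urgent security fix doesn’t fit such a calendar at all, which is exactly the weakness the question at the end of point 2 points at.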

2 Likes

I will try, it’s late and not much sleep last night so I feel a story coming on :wink: Francis get ready to fix my grammar again :smiley:

This is the problem to fix.

This means we never fixed it (yet :slight_smile: )

A network or any autonomous device (like a human; yes, it’s arguable whether a single human is autonomous, but …) should be able to discover things that improve it and discard that which harms it.

This is a thought experiment coming up (so beware).

Everything has a purpose and evolves to meet that purpose: hunting, learning, etc. If the network has the purpose of protecting data and looking after more data in meaningful ways (eventually compute) then it’s a good start. So a basic ability of not allowing corrupt data is a start (we do that). Then adding in ways to mitigate human action like switching off and on (we do that too). Then a mechanism to reward endpoints that provide resources to help the core purpose helps (we do that). Then messaging to allow greater use of utility comes along.

So it begins: a quest to program in a reason to survive, not to count numbers or churn through data analysis on command, but instead the actual network itself gets into distress when data is lost (like our sacrificial data) and calls out to human operators to farm more (symbiosis). This is not us doing this, but the network itself, without us being involved. No administrators or tweakers of knobs and such, no nuclear-shelter bunkers with AC units, but a network that’s sneaked onto our computers using resources we were not using.

So people then say “oh, that is every system”, but it’s really not; this network will act on its own to fulfil its purpose, gather and protect data, and it does that not for us, but because its core desire is to gather and protect data. Its code is that purpose.

So with that purpose, not calculated via timers or magic numbers, the system has a very tightly coupled connection of neurons (the groups) connected via millions of synapses (the connections to other groups). This is why it’s amazing to us in-house to get so close to the fundamental objects and traits in the code, with no waste and little or no runtime overhead.

When this links together and creates something like SAFE then it’s not like a normal computer program or server; it’s spread far and wide and can act out its purpose with great clarity. It can do this with people looking at the code and seeing there is no “every 10 minutes do this, every 4 years do that”, but instead everything is calculated using these fundamental types that have a unique and sole purpose in the code. These on their own are useless, and even several lumped into a single computer are barely able to function. However, when they start connecting together into a group, they start to be able to make decisions; as the group grows and splits into more groups (like cells dividing) then more functionality appears. As this continues, stability becomes apparent and continues to strengthen as the network starts to span thousands of nodes. When it gets to millions of nodes then it appears very powerful indeed.

So the beginning of a network / thing with a purpose is born, and it can satisfy a base purpose: protecting data and communications. In the end what we have is remarkably simple when looked at as source code on a single computer; it’s the connecting together that gives the capability.

When we move into computation then this picture may change slightly, but this is how I perceive what we are doing. Yes, very hard, and of course it has to be correct, secure and scalable. It is something a bit different though, and the difference will start to become more apparent as researchers get more involved and more people write papers (several PhD students we know of have their theses on this already).

So this core purpose is measurable, and if that is measurable then we teach the network how to upgrade by running nodes in a sanitised way to participate and confirm they equal or improve the current network. This means all messages are for this purpose and no more, all actions are confirmed and checked by a close group (they are anyway), and the sacrificial nodes come online a bit at a time. It may require computation, and perhaps only code in the upper layers being able to change, or similar, but I believe it can be possible. As I say though, the thinking in this new environment is new and radical to the extent folk call it mad. I also note that I have been in front of a whiteboard with an awful lot of experts, professors and engineers and have always been able to describe the process of SAFE when folk sit and listen (and almost always they have, except for a single bitcoin “expert”). That is compelling and encouraging, I feel. For this reason I believe the challenge of self-diagnosis of upgrades should be possible.
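For what “equal or improve the current network” could mean in code, a toy version of the comparison might look like this; the metrics are stand-ins for whatever the network can actually measure about its core purpose (keeping data, answering requests, not leaking traffic), and none of these names exist in the codebase:

```rust
/// Stand-in metrics for the network's core purpose, gathered from sacrificial
/// nodes running the candidate alongside the current version.
#[derive(Clone, Copy)]
struct PurposeMetrics {
    data_retained_ppm: u64,     // parts-per-million of chunks still held
    median_get_latency_ms: u64, // how quickly data is served back
    unexplained_messages: u64,  // traffic with no protocol reason to exist
}

/// Accept the candidate only if it equals or improves the current network
/// on every axis.
fn equal_or_improves(current: PurposeMetrics, candidate: PurposeMetrics) -> bool {
    candidate.data_retained_ppm >= current.data_retained_ppm
        && candidate.median_get_latency_ms <= current.median_get_latency_ms
        && candidate.unexplained_messages <= current.unexplained_messages
}
```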

10 Likes