Secure Self Update

Long-winded way of saying “connect to the SAFE network and check for new versions”. Presented in RFC format but really just meant for discussion.

Secure Self Update

  • Status: proposed
  • Type: enhancement
  • Related components: vault
  • Start Date: 24-02-2020
  • Discussion: TBD
  • Supersedes: None
  • Superseded by: None


Software components of the network (such as the vault) require periodic updating. This can be done automatically and directly from the SAFE network and should not require interaction with the old internet.


  • The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.


An autonomous self-healing network requires updates to add security patches or new features.

Currently, upgrades are done using the self_update crate, which allows the vault to download new versions from sources such as GitHub or AWS S3.

The self_update crate requires use of the HTTPS protocol and a dependency on OpenSSL, which introduces considerable complexity and legacy code that could be eliminated if this crate is removed.

The changes proposed in this RFC would improve the ability to audit the vault codebase and therefore improve the security of the network.

The short term goal of this RFC is to remove the OpenSSL dependency and the self_update crate.

The long term goal of this RFC is to enable autonomous governance and feature discovery for the network.

Detailed design

The protocol for updating is potentially extremely complex. In a way, the update process determines the governance of the network, so it is an extremely powerful and sensitive aspect of it.

  1. The vault software is started by the user.
  2. The configuration is parsed and the vault uses it to connect to the SAFE network (as a client, not as a vault).
  3. The vault fetches the list of available vault versions.
  4. If there is a newer version:
    1. The new vault version is downloaded.
    2. The updated vault is started.
  5. If there is not a newer version:
    1. The vault initialises as normal.
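The decision in steps 3 to 5 above can be sketched in Rust. This is a minimal illustration with a hand-rolled dotted-version comparison; the names (`Action`, `decide`, `parse_version`) are hypothetical and not part of the actual vault codebase, and the network fetch and download steps are deliberately elided.

```rust
/// Parse a dotted version string such as "0.24.1" into comparable parts.
/// Returns None for anything that is not purely numeric-dotted.
fn parse_version(v: &str) -> Option<Vec<u64>> {
    v.split('.').map(|part| part.parse().ok()).collect()
}

/// What the vault should do after fetching the latest published version
/// (steps 4 and 5 of the flow above).
#[derive(Debug, PartialEq)]
enum Action {
    DownloadAndRestart,
    InitialiseAsNormal,
}

fn decide(current: &str, latest: &str) -> Action {
    match (parse_version(current), parse_version(latest)) {
        // Vec<u64> compares lexicographically, so [0,25,0] > [0,24,9].
        (Some(cur), Some(new)) if new > cur => Action::DownloadAndRestart,
        // Same or older version, or an unparseable entry: start normally.
        _ => Action::InitialiseAsNormal,
    }
}

fn main() {
    assert_eq!(decide("0.24.0", "0.25.0"), Action::DownloadAndRestart);
    assert_eq!(decide("0.25.0", "0.25.0"), Action::InitialiseAsNormal);
}
```

A real implementation would more likely use a dedicated semver library for the comparison; the point here is only that the update decision itself is a small, auditable piece of logic.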

Configuration Requirements

provider: the root of the Name Resolution Service (NRS) for vault binaries. Initially this could be hardcoded as maidsafe, but in the future could (should) be flexible. This is used to generate the standardised location safe://<provider>/vault/versions. This may also be a fixed xorurl, which removes any dependency on NRS.


    "hardcoded_peers": [...],
    "self_update": {
        "provider": "maidsafe"
    }

The vault will check (using NRS) safe://<provider>/vault/versions

The list of vault versions will be plain text of the form

<version>,<xorurl for vault binary>
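As a sketch, the plain-text list in the format above could be parsed like this. The xorurls are placeholders and `latest` is an illustrative helper, not an existing API; malformed lines are simply skipped.

```rust
/// Parse one "<version>,<xorurl>" row from the vault/versions list.
/// Ignores malformed lines by returning None.
fn parse_row(line: &str) -> Option<(Vec<u64>, &str)> {
    let (version, xorurl) = line.split_once(',')?;
    let parts: Option<Vec<u64>> = version
        .trim()
        .split('.')
        .map(|p| p.parse().ok())
        .collect();
    Some((parts?, xorurl.trim()))
}

/// Return the xorurl of the highest listed version, if any.
fn latest(list: &str) -> Option<&str> {
    list.lines()
        .filter_map(parse_row)
        .max_by(|a, b| a.0.cmp(&b.0))
        .map(|(_, url)| url)
}

fn main() {
    // Placeholder xorurls, for illustration only.
    let list = "0.23.0,safe://placeholder-old\n0.24.0,safe://placeholder-new";
    assert_eq!(latest(list), Some("safe://placeholder-new"));
}
```

If signatures were later appended as a third comma-separated field per row, this parser would only need a second `split_once` rather than a format change.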

Initially signatures were proposed for binaries, however this was not found to ensure any greater security from an attacker (see Attacks below). It may be decided that signatures are worth including, which could easily be appended on each row of the vault/versions list as an xorurl.

Starting the New Version

The vault might halt after downloading the new version, notifying the user to run the new version instead. If the old version is started again it would simply redownload the latest version and exit again, so the old binary effectively becomes unusable.

A more sophisticated approach in the future may be to have the vault launcher be only able to coordinate starting new vault processes and stopping existing vault processes, which would allow the newly downloaded vault to be started automatically by the launcher.

This may be further abstracted to have the vault load separate binaries for each feature, so individual features can be updated and loaded on-the-fly rather than bundling all the features into a single vault binary.

Future Directions

  • staggered rollout of updates rather than the whole network at once.
  • hotloading updates without restarting the vault.
  • separation of updates for features / consensus / etc into a more modular design.
  • voting mechanisms for standard / proposed / experimental updates, windows of backward compatibility etc.


Attacks

Change the configuration for provider. This allows an attacker to download arbitrary data to the machine, and possibly execute it. A possible mitigation is to show the provider name before downloading and require confirmation from the user (or a flag --no-update-confirmation to be passed). In theory a properly secured configuration file should be adequate to prevent this, but in practice this has proven difficult.

Change the configuration for initial peers. This allows the attacker to present a fake network to the user where the attacker controls the provider name, and the user will think they are downloading from the correct provider but they are not. This is equivalent to the prior attack because the configuration must be changed, but it allows the attacker to use the correct provider name so confirmation is not a preventative. It might be expected that signatures would help in this case since the attacker could not create a suitable signature, but the signing key would also be part of the configuration so would also be changed by the attacker. The only way to improve this is to have the signing key be separate to the provider/peers configuration, which seems impractical in an autonomous update situation.

Losing control of the NRS name or the list of versions allows an attacker to update vaults. This is an attack on the vault binary provider rather than the consumer. In this case signatures may be useful to include, however there must be strong separation from the signature key and the uploading key. If the NRS account is compromised it seems reasonable to assume the signing key may also be compromised. This is open to debate and possibly a standardised process for releasing vault binaries may be beneficial for provider security.

In summary, attacks come from altering the configuration of the vault operators or controlling the provider.


Since secure self update only requires client-level features (i.e. no vault or consensus features), it could be used by any software that can communicate with the SAFE network, such as SAFE-browser or SAFE-api.


The success of this approach depends heavily on either a) hardcoding values into the binary and securing the binary, or b) securing the configuration file. However, this seems true of any self_update style of feature.


The current alternative is to download using the self_update crate over HTTPS from the existing internet.

A hybrid approach may be possible.

A more community-driven approach may be possible where users source their own updates and do not depend on the vault automatically updating itself.

It may be desirable to add some kind of web-of-trust type features.

Reproducible builds may assist the security of the vault ecosystem. The relation between source code and binary is important, but is out of scope for this RFC.

Unresolved questions

Should the process be known as update or upgrade?

Fault detection and rollback may be a significant aspect of this feature, but are not addressed here. This is especially tricky since some new features may be ambiguous, seen as faulty by some vault operators and as desirable by others (e.g. bitcoin segwit).

What can we learn from existing systems such as bitcoin and IPFS?


How do we control the updating of this resource? To me a cracker would be looking to put a malware version in there.

Who controls this resource? This is more about humans rather than the network.

Since a vault will (eventually) require an account to place funds in, providing the user with information from their account would be a good indicator they are on the correct network, since their account blobs only exist on the correct network. I am assuming no one is capable of duplicating all the chunks in the good network, or even coming close.

Yes, maybe a number of SAFE:// resources must announce the update, making it more difficult for an attacker to get control of a (?super) majority of them. This could come from the current core developers.

Maybe there can be a supervisory module that has basic understanding of what the modules should be doing and alert the user if they break certain rules.


Yeah this is a good question. I guess it could be a multisig append operation? Whatever the case, the concept is similar to how safecoin-is-secured-data and data-is-secured-data and similarly updates-are-secured-data. Having a single everything-is-secure-data philosophy is kinda neat (the SAFE equivalent of unix ‘everything is a file’).

Designing for ‘this particular type of data is super secure but that other type of data is only normal secure’ wouldn’t be terribly sensible. So whatever mechanism is in place, it will need to be universal to the benefit of all data on the network.

Having it initially be maidsafe is a bit like a presidential system - what they say goes - and I think that’s very healthy and desirable for a young network.

But the way the feature is designed it allows advanced users to easily change over to their own personal release system, maybe to incorporate tweaks in the early stages.

Eventually those users may become publicly well known and start being used as a source of binaries for others as well. This begins to distribute the responsibility a little and in a fairly organic way.

The fluidity of the provider configuration is actually pretty neat, and should hopefully evolve a lot like a liquid democracy would. You can vote on issues how you see it (ie build and release your own vaults) or you can delegate someone else to vote on issues on your behalf (ie use their releases). It’s very close to a ‘real’ representational democracy in many ways.


Just to update on this point, we recently got a PR accepted to self_update to avoid this dependency and use rustls, which we’re in the process of upgrading to in our crates :+1:

Just to throw something in, we may want release channels (alpha, beta) as part of update rollouts too, to allow for more testing etc. This has been a great addition for our Electron apps, so baking something like this into the RFC would be grand.

Perhaps easily ‘found’ initially via safe://<provider>/<app>/<channel>/versions, defaulting to the stable scheme above when no channel is provided.

Updates in a given channel can get any update from that same channel/version or more stable. (Alpha apps can get alpha/beta/stable updates from the same version or above, beta from beta or stable…)
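The channel rule suggested here (a vault may accept updates from its own channel or any more stable one) could be sketched as below. The `Channel` type and its ordering are my assumptions based on the suggestion (alpha < beta < stable), not an existing API.

```rust
/// Release channels in order of increasing stability. Deriving PartialOrd
/// on an enum orders variants by declaration order, so Alpha < Beta < Stable.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum Channel {
    Alpha,
    Beta,
    Stable,
}

/// May a vault running on `current` accept an update published on `update`?
/// Allowed when the update's channel is the same or more stable.
fn can_accept(current: Channel, update: Channel) -> bool {
    update >= current
}

fn main() {
    assert!(can_accept(Channel::Alpha, Channel::Beta));    // alpha may take beta
    assert!(can_accept(Channel::Beta, Channel::Stable));   // beta may take stable
    assert!(!can_accept(Channel::Stable, Channel::Alpha)); // stable never takes alpha
}
```

The "same version or above" half of the rule would combine this check with the version comparison already needed for the plain versions list.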


Thanks @mav, very interesting and useful.

This effectively means the binary provider is in control of the version being executed, which may or may not be desirable. Probably desirable early on, but it makes some of the attacks you mention more effective, whereas if some users are allowed to be slow to update it makes it harder for those users to be attacked before awareness of a new attack spreads and can be mitigated.

If this was put in the control of the user, a periodic reminder ‘new SAFE vault version available, do you wish to update?’ would be needed, but this is a familiar pattern these days so I think it would be acceptable.


Something I’ve always wondered is how the individual vaults are going to handle backwards compatibility breaks in the data structures / formats stored.

How would we deal with a scenario where the data stored on a vault is no longer compatible with the network as a whole? Will there be some sort of data upgrade tool?

How will we deal with rollbacks? For instance a scenario where an update fails for an unexpected reason and the network as a whole needs to rollback with some nodes in a partially upgraded state?

How would we deal with a scenario where a subset of the vaults have upgraded and a subset haven’t yet, and they’re trying to talk with mismatched network libraries?


In the short term, we won’t really. We will force updates. As we progress though, this becomes a very interesting problem. Some old code will be insecure and will need to be killed off, but not all old versions will be insecure.

I reckon newer vaults will farm more effectively and encourage upgrades.

In terms of data types, this is also related to CRDTs and merging data types there. There are things like multiformat available, but that leads to a false sense of security. In any case, right now there is no great answer, but it’s likely the only old data vaults will hold is immutable data, and that is easily upgradable and self-validating (the latter makes the former possible).


Is this related to multihash or is it more / different than that?


It’s the same. To me an upgrade must handle much more than only that, so the multiXXX is simply a path to upgrade, but not an actual upgrade. An upgrade should state the new version and include function and type upgrades, not only types, i.e. upgrades need to upgrade logic, regardless of types.


A tiny contribution to the discussion and proposal, not fully clear in my head though: perhaps the safe:// location could be chosen from a configuration as suggested, but it still can go through consensus in the network/section.

I’m imagining (as said) a young network may be ok with all vaults simply upgrading from the safe:// NRS location controlled by MaidSafe, but eventually users of a network (including some other networks, perhaps intranet-like, private networks, etc.) can choose a different safe:// location and get the network/section to vote. In this way, if a node gets attacked, other nodes should be able to either prevent that node from upgrading, or effectively kick it out of the network if it still proceeded and upgraded using that url.

From a user’s perspective this effectively would mean that the user would agree with the corresponding community of the network what’s the url to use for next upgrade, set it in the vault and the vault will try to go through consensus in its section to approve the upgrade, and notify the outcome to the user. This would obviously be possible only if the vault binary is not attacked/overwritten, as you’d need to trust such upgrade logic of the vault’s binary.


Not sure about this. What is the network agreeing on?

The way I see it is the network agrees on rules / behaviours, and it does that by detecting violations and reporting them.

I don’t think it’s possible or desirable to do this by enforcing a certain update location.

So in a way what you suggest (updates going through consensus) is already going to need to happen by default, though not as consensus on the update location but as consensus on the conformity of nodes’ behaviour to majority expectations.

Interesting idea, but overall I’m not sure about the merits of enforcing update stuff before-the-fact.

If it were me I’d run a version of the vault that looked like it did what the network expected with respect to updates, but in reality would hot-swap to my own custom vault behind-the-scenes. I really want to be able to customise my vault binary for ergonomic reasons.