Network updates - how will they be rolled out?

EDIT: moved from another thread

How will changes to the code be rolled out across the network? Let’s say, for example, that the algorithms controlling economics need tweaking or a vulnerability is found that needs patching. Will all nodes be automatically updated from a repo somewhere or will it be down to individual farmers to keep their setup up to date? I don’t recall seeing this discussed. Might be worth a separate topic, thinking about it.


Yes separate topic :wink:

But perhaps initially a new update will be added to GitHub and the vault/node, on starting, will alert us to it. Then we download it from the link supplied and copy it over the top of the current vault/node software (maybe automated by then, and we just execute the update).

I expect that for quite a while there will be a manual part to updating so that people can decide to update now or later.


Perhaps having the latest code could be a key part of PoR (Proof of Resource) to encourage farmers to keep their nodes updated.

There is often good reason to delay updating (Microsoft, anyone?). The reason could be that they will be turning off their computer in a couple of days anyhow and can wait till then. Or they might be super cautious and wait to see what others say about the update. Maybe the update solves a problem that their OS or computer doesn’t have, and they don’t want unneeded downtime.


Who decides what the updates will contain?


My desire is that the network will. So a vault will grab an updated vault, run it in parallel and monitor it for better-or-equal efficiency compared to what it currently has (we need to define efficiency). If after a random time it seems OK, then upgrade. Of course it is much more complex than this explanation, but this is my hope.
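The trial-in-parallel idea above could be sketched roughly like this. Everything here is hypothetical (the metric, the names, the threshold) - "efficiency" still needs a real definition, as the post says - but it shows the shape of the decision:

```rust
// Hypothetical sketch: a vault trials a candidate upgrade alongside the
// current binary and only promotes it once it has proven itself.

/// Toy efficiency metric: fraction of requests successfully served
/// during the trial window.
fn efficiency(served: u64, total: u64) -> f64 {
    if total == 0 {
        return 0.0;
    }
    served as f64 / total as f64
}

/// Decide whether to promote the candidate after the trial:
/// upgrade only on better-or-equal efficiency.
fn should_promote(current: f64, candidate: f64) -> bool {
    candidate >= current
}

fn main() {
    // Pretend both versions handled the same traffic during the trial.
    let current = efficiency(970, 1000);
    let candidate = efficiency(985, 1000);
    println!("promote: {}", should_promote(current, candidate));
}
```

A real version would also randomise the trial length, as suggested above, so the whole network doesn't flip over at once.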


Brilliant: Autonomous updating!


Without eventual auto-updates the network could partition. That would mean data loss.

That, - and more chars

The Safenet source code definitely has to be hosted on SAFE - this includes Rust and/or other languages’ crates/libs/deps/whatever. npm, Cargo and siblings cannot be trusted in their current implementation.

Yes you are right, forgive my brain fart.

As @Antifragile said, every update has to work with the older one. Read David’s post - he says they will work on the same network. It’s a protocol, and the software instances talk to each other. The software version is not what defines the protocol; the protocol version defines the method of interaction.

So only when the protocol changes does this issue of incompatibility arise, and any sensible implementation will ensure that previous versions of the protocol are supported. For example, IPv4 still works even though IPv6 is now the official current version.

Yes, APIs are another area, but I’d say that all previous versions will need to be supported, otherwise in 10 or 20 years we get classic utilities breaking because of unsupported APIs.
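The "protocol version defines the interaction" point above could be sketched as a simple negotiation: each peer advertises the protocol versions it supports and they settle on the highest version both understand. This is a hypothetical illustration, not how any SAFE node actually does it:

```rust
// Hypothetical sketch: two peers agree on the highest protocol version
// that both of them support, so a newer node can still talk to an
// older one over the older protocol.

fn negotiate(ours: &[u32], theirs: &[u32]) -> Option<u32> {
    // Pick the highest version present in both lists; None means the
    // peers share no protocol version at all (a partition risk).
    ours.iter().copied().filter(|v| theirs.contains(v)).max()
}

fn main() {
    let new_node = [1, 2, 3]; // supports protocols 1-3
    let old_node = [1, 2];    // never upgraded past protocol 2
    // They interact over protocol 2, the newest both understand.
    println!("agreed: {:?}", negotiate(&new_node, &old_node));
}
```

The `None` case is exactly the incompatibility scenario discussed below: drop support for too many old versions and stale nodes can no longer negotiate anything.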


What happens if several updates are released and still no manual update is performed by the vault owners? As you said, updates are designed with backwards compatibility in mind - but how far back? If ease of use and participation drive mass adoption (i.e. the average Jane and Joe), then there seems to be a possibility that this large contingent of non-savvy, unconcerned users will keep running increasingly older nodes until their vaults are obsolete. In which case, since they comprise the majority, a network split seems inevitable. Am I wrong?

How long is a piece of string?

But historically, protocols change little over time. The software’s purpose is to package data and messages into a predefined structure and respond correctly to received data/messages.

Often upgrades are to fix bugs or add a new feature, and the protocol does not change. In that case the previous version works just fine - maybe even 10 versions back, without any specific consideration for supporting previous versions. Add to that that upgrades will specifically ensure they support previous versions, and it should be fine for many versions back.

But how many depends on circumstances. For instance, there may be a critical bug that requires everyone to install the new version; in that case only the previous version may be supported, and once enough of the network is at the new version, another version is released that will not work with anything prior to the critical update. Who knows?

A carefully staged update would minimise any such effect. And since the network can survive vaults leaving, people who do not apply the critical update will simply have non-functional vaults and a message telling them an upgrade is needed.

Obviously, if updates are done without such considerations then yes, the network could die from incompatibilities. But seriously, not many products are like that - if any I know of.
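The staged cutoff described above - raise the minimum accepted protocol version only after enough of the network has upgraded, and give stragglers a clear "upgrade needed" message - could look something like this. The constant, names and message are all made up for illustration:

```rust
// Hypothetical sketch: after a critical update, the network raises the
// minimum protocol version it will accept, so stale vaults get an
// explicit upgrade message instead of silently partitioning.

/// Raised only once enough of the network runs the critical fix.
const MIN_SUPPORTED: u32 = 4;

/// Admit a peer, or explain why it was refused.
fn admit(peer_version: u32) -> Result<(), String> {
    if peer_version >= MIN_SUPPORTED {
        Ok(())
    } else {
        Err(format!(
            "protocol v{} is no longer supported; please upgrade (minimum is v{})",
            peer_version, MIN_SUPPORTED
        ))
    }
}

fn main() {
    println!("{:?}", admit(5)); // up to date: admitted
    println!("{:?}", admit(3)); // pre-critical-fix: refused with a message
}
```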
