How do you update an autonomous network?

Full disclosure: I’m an amateur programmer at best.

Launch happens and everything is good; then some amount of time goes by and a critical bug is discovered. All of the ‘seed’ servers have been switched off and the network is functioning as its own autonomous collection of nodes - how do you continue to develop a core network under those conditions?

What do you use to patch that network?

My brain just can’t figure out how something that is actively developed can also be autonomous. I also can’t figure out how you prevent other people from pushing code through the same mechanism you use to patch the active network.

Thanks and take care.


Not sure if I’m addressing your concern here, but MaidSafe is a piece of software that runs on your computer. I believe it will be updated like any other software.

I work with code, but am not officially a programmer, so take this with a grain of salt.

I’ve thought about this as well: how do you update the core software on all nodes? I suspect each node always looks for updates, and when an update is released it propagates throughout the network. This will take time for some nodes (and supernodes) that are farther away, but it’s far more secure than trying to get the average node farmer to manage this task.
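
Here’s a rough sketch in Python of the kind of pull-and-gossip flow I’m imagining - all of the names and message shapes here are mine, not anything from the MaidSafe codebase:

```python
# Hypothetical sketch of update propagation between nodes.
# Nothing here is MaidSafe's actual API; it's just to illustrate the idea.

import hashlib

class Node:
    def __init__(self, peers, current_version):
        self.peers = peers                  # other nodes this node talks to
        self.current_version = current_version
        self.seen_updates = set()           # hashes of updates already handled

    def receive_update(self, update_blob, signature, verify_signature):
        """Called when a peer (or a public share) hands us a new update."""
        digest = hashlib.sha256(update_blob).hexdigest()
        if digest in self.seen_updates:
            return                          # already seen, stop re-gossiping
        if not verify_signature(update_blob, signature):
            return                          # reject anything not signed by the dev key
        self.seen_updates.add(digest)
        self.stage_for_testing(update_blob) # don't apply yet; test it first
        for peer in self.peers:             # pass the update on to everyone we know
            peer.receive_update(update_blob, signature, verify_signature)

    def stage_for_testing(self, update_blob):
        pass                                # placeholder; the testing part is covered further down the thread
```

The important part, as I picture it, is that a node never blindly applies an update it just heard about: it only checks the signature and passes it along, with the actual vetting handled separately.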

Perhaps some MaidSafe developers can elaborate?


I’m not one of the developers. It is a very good question and I don’t know the full details, but I understand that a novel mechanism has been developed to handle this in a decentralised way.

Part of this is that any update has to run alongside the existing code and prove itself before it is allowed to take on responsibility. Obviously it’s more complicated than that, but that’s about as much as I know. I’d like to know the details too, but the team are busy right now so I’m very selective with my questions!


Ah hah! After searching the forum for quite a while I managed to find this post:

I knew I had read about the update mechanism somewhere. So basically, the network receives an update (supposedly through a public share) that is cryptographically signed by the dev team, but before it actually applies the update, it creates a test vault to see whether the update is good. The update must be at least as efficient as the code before it.
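
To make that concrete, here’s a minimal sketch of how I picture that check working. The signature verification uses the `cryptography` library’s Ed25519 API (the real network may use something else entirely), and `run_test_vault` plus the efficiency numbers are placeholders I made up:

```python
# Hypothetical "verify, then prove yourself in a test vault" gate.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_signed_by_dev_team(update_blob: bytes, signature: bytes,
                          dev_public_key: Ed25519PublicKey) -> bool:
    """Reject anything not signed by the dev team's key."""
    try:
        dev_public_key.verify(signature, update_blob)
        return True
    except InvalidSignature:
        return False

def should_apply(update_blob, signature, dev_public_key,
                 run_test_vault, current_efficiency):
    """Apply only if the update is properly signed AND the test vault
    running the new code is at least as efficient as the current code."""
    if not is_signed_by_dev_team(update_blob, signature, dev_public_key):
        return False
    new_efficiency = run_test_vault(update_blob)  # throwaway vault running the new code
    return new_efficiency >= current_efficiency
```

As I understand it, the signature check is also what stops random people from pushing code through the same mechanism: without the dev team’s private key you can’t produce a signature that verifies.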


Thanks for finding this and typing it up!!


David Irvine spoke about this in another thread, which I’ll need to find for you. As far as I remember, he mentioned pushing a sacrificial node to the network that, upon good ranking (or better than average) by other vaults over a random amount of time, would then be accepted as an update. But I also believe this is something to be integrated in the future.
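
Put as pseudocode-ish Python, the acceptance rule I remember would look roughly like this - the function names, ranking scale and round counts are all invented for illustration:

```python
# Hypothetical "sacrificial node" probation check, as I understand the idea.

import random

def probation_passed(get_peer_rank, network_average_rank,
                     min_rounds=50, max_rounds=200):
    """Run the new code as a throwaway node, let peers rank it over a random
    number of rounds, and accept only if it does better than the network average."""
    rounds = random.randint(min_rounds, max_rounds)   # random duration, so it can't be gamed by timing
    ranks = [get_peer_rank() for _ in range(rounds)]
    average_rank = sum(ranks) / len(ranks)
    return average_rank > network_average_rank        # the "better than average" acceptance rule
```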


Who has the ability to push an updated node?

Isn’t it then possible to make efficiency upgrades while introducing small amounts of code that, over time, would create an exploit toolchain or something of that nature?

Thanks for all the replies, and really, I’m with everyone else: this project, if successful, is a game changer.


That’s a good question - one I’m sure has been thought of by the team, but I always appreciate insight as well. I guess what it comes down to is what ranking is based on: speed, or how data is handled? With the right kind of ranking it doesn’t seem like an issue.
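
For example, a toy rank that mixes both might look something like this - the weights and fields are completely made up, just to show that a fast-but-misbehaving vault wouldn’t score well if data handling is weighted heavily:

```python
# Invented ranking formula, purely to illustrate "speed vs. how data is handled".

def vault_rank(avg_response_ms, chunks_served, chunks_lost_or_corrupted):
    """Toy ranking: higher is better. Data handling is weighted more than speed."""
    speed_score = 1.0 / (1.0 + avg_response_ms / 100.0)              # 0..1, faster is higher
    integrity_score = max(0.0, (chunks_served - 10.0 * chunks_lost_or_corrupted)
                          / max(chunks_served, 1))                   # heavy penalty for bad data
    return 0.3 * speed_score + 0.7 * integrity_score
```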

Oh, and I believe the MaidSafe Foundation retains the ability to push updates for a short period after launch, with that responsibility later passing to SAFE Pods.

That thought, or sequence of thoughts, has been on my mind for a long time - I just couldn’t articulate it. Thank you. I’m also coming from a non-coder background. Are there really any non-coders? That’s another discussion. But yes, it’s like asking how you update a virus. We know they evolve, but what if you need more?

It seems like there’s a period of vulnerability, or at least comparatively more vulnerability, until that more robust mechanism is in place? Or maybe the second mechanism is just more efficient?
