Step-by-step: the road to Fleming, 5: Network upgrades

My first thoughts may be too simplistic or obvious, or may rehash things already mentioned, but I’ll risk it: the portion of code (tests) that defines “what” the network must do in the most critical and valuable areas, such as protecting data and access to data forever, must be set in stone. “How” the “what” should be realized most efficiently, adapting over time to a changing environment, could then perhaps be left up to any developer, with the network accepting or rejecting upgrades of the “how”. Even in the case of a fork in the “how” code, it would be good if the two (or more) different versions were still forced to (co)operate on the same data, so that forks in the code would never result in splits of the body of data stored or accessible through each network version.

The reason for setting core parts in stone is that no AI will be able to judge which upgrades will be in the best interest of human beings under future circumstances that we cannot predict today. I have no idea what portion of Safenet should be considered holy immutable ground, but at least some of it should be fixed, to avoid the need of transferring data from one Safenet to another, or having to make Safenet snapshots or backups in order to recover from a series of future upgrades that turn out to be malicious or just disastrous.
The US Constitution and the Supreme Court may be a helpful analogy to the above fixed tests and upgrades. Is it possible to isolate a minimal portion of Safenet and code a constitution for it that could stand the test of time and survive all imaginable exploits by generalized AI? Or alternatively, is it possible to replace a Safenet Supreme Court with a truly distributed alternative that could never be manipulated into the destruction of Safenet?
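The separation described above, a fixed “what” gating an upgradable “how”, can be sketched in code. This is purely illustrative: every name here is hypothetical and nothing below is SAFE Network code. The fixed invariants stand in for the “constitution”; a candidate implementation is accepted only if it satisfies all of them.

```python
class CandidateImpl:
    """A hypothetical, upgradable implementation of data storage (the 'how')."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


# The immutable 'what': invariants any implementation must honour, set in stone.
def invariant_data_survives_roundtrip(impl):
    """Data that is stored must remain retrievable unchanged."""
    impl.put("k", b"payload")
    return impl.get("k") == b"payload"


def invariant_missing_keys_are_absent(impl):
    """The implementation must not fabricate data it never stored."""
    return impl.get("never-stored") is None


FIXED_INVARIANTS = (
    invariant_data_survives_roundtrip,
    invariant_missing_keys_are_absent,
)


def accept_upgrade(impl):
    """The network accepts a new 'how' only if every fixed 'what' holds."""
    return all(check(impl) for check in FIXED_INVARIANTS)
```

In this sketch, forks of the “how” could both pass `accept_upgrade` and so remain interoperable over the same data, while any upgrade violating a core invariant would be rejected outright.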

Yay, yes please this is what we need. :slight_smile:

I think this is a boggling question (to me at least) so we should be particularly open to all questions and ideas. Although I hope we do that anyway here.

I understand the thought about setting core things in stone, and how we set these core criteria at all is my first question. For example: how do you measure ‘in perpetuity’?!

But on the setting in stone point I’m wary. I think we shouldn’t set an AI on a path with no possible way to change it later, a different tougher process yes, but not set in stone forever. Everything changes eventually, even a constitution.


Perpetuity: This is a great point. A target to span human civilizations would be great, but this seems out of scope; a whole other kind of problem. If we had a good long term storage medium, then we could consider how to use it with safenet.

This point also reminded me how terribly fragile and short-lived most human-made systems tend to be, especially those involving software. Considering this, I should be more than happy with a perpetuity equal to one human lifetime.

Then I worry, first, about the longevity of the economics that is supposed to make people maintain nodes, and second about technical progress; a single major invention in storage or communication could force everyone back to the drawing board to start over.

So, for now, we might be talking of a perpetuity of no more than a decade or two? In that case, simply letting Maidsafe decide, with user input, might be just fine. Stage 1. I got it. We can worry about decentralisation later, after launch, correct?

Edit: Durations to measure SAFE Network’s perpetuity against:

| Duration | Years | Note |
|---|---|---|
| Civilization | 336 | Minimum for preservation of knowledge |
| Lifetime | 89 | My kids would have to worry, not I |
| Generation | 31 | Would have to worry about my data |
| OS life cycle | 10 | Better keep main storage on-premise |


Dear David - just wanted to share that Microsoft just announced they are building their decentralized identity (DID) on the Bitcoin blockchain. However, they are working with standards set by the W3C. SAFE can have its own ID, but it is critical that it is built around W3C standards as well. This will become critical for adoption. How cool would it be if a DID created on Bitcoin (on W3C standards) worked on the SAFE network as well, and/or vice versa? We want to make this easy for a user. We can keep the principles of security that SAFE lives by, but also have interoperability with noted standards like the W3C’s. Big firms will be working on making DID adoption easy, and SAFE can ride that wave too in a positive way. Let me know your thoughts. I don’t want SAFE to be sitting alone with its own standards, waiting for the world to come to it. Interoperability with the W3C standard should be a must, I think : )
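For reference, the W3C DID Core specification defines an identifier of the shape `did:<method>:<method-specific-id>`. The following is a deliberately simplified syntax check, a sketch rather than a full implementation of the spec’s ABNF (which additionally allows percent-encoded characters and finer rules for the method-specific id):

```python
import re

# Simplified pattern for W3C DID syntax: "did:" + lowercase method name +
# ":" + a method-specific id (alphanumerics, '.', '-', '_', with ':' allowed
# as an internal separator but not as the final character).
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._:-]*[A-Za-z0-9._-]$")


def looks_like_did(s: str) -> bool:
    """Rough check whether a string follows the basic DID shape."""
    return bool(DID_PATTERN.match(s))
```

For example, `did:example:123456789abcdefghi` (the spec’s own example identifier) passes this check, while a plain string or an uppercase method name does not. An interoperable SAFE ID scheme would presumably register its own method name under this syntax.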


Yes, I had read about these moves and upcoming standards. I like standards where possible, but only when applicable. Blockchain standards would be good, but I am not sure right now. The industry is moving fast, with some really new tech (DKG, SNARKs, etc.), and standards bodies don’t move that fast. So yes, we need to be very aware of these, but we also need to launch quickly, and we have something a bit different from blockchains. Let’s watch this space though, for sure.


Thanks David. There are quite a few companies coming up, all working on DIDs based on standards set by the W3C. One is actually a Cambridge blockchain start-up. I think their mission statement summarizes the best path forward (similar to SAFE): “We envision a future where users have a lot more direct control over their personal data, and we also believe in open, interoperable networks”. Interoperable is the key here. If SAFE IDs are developed based on W3C standards, and/or provide interoperability with W3C-standards-based DIDs, a lot of other solutions/systems may also end up using the SAFE network. Again, like you said, without compromising on security and some of the other features of SAFE.
