Step-by-step: the road to Fleming, 1: The Big Questions: SAFE Fleming and Beyond

PS. On the topic of node ageing and (trusted) elders - what is being considered to guard against the espionage concept of sleepers?
RFC 0045 leaves things a little open - even its mention of the potential to use blacklists raises the question of whether the blacklists themselves could be used as an attack vector.

Consideration of the Tesla Model would be useful here:

  1. Roll out the update to a small control group first. In Tesla’s case this is car owners who work for them. Analyze the feedback, make any necessary changes after a short period of time, then re-release to the control group if necessary.
  2. Then expand the rollout to a slightly larger group, analyze the feedback, and then, finally, send it to the entire network.
  3. Updates should be voluntary for a certain period of time. After that time has expired the client will be required to update or be “orphaned”.
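The staged rollout described above could be sketched roughly as follows. This is purely illustrative: the wave fractions, class name, and orphaning rule are assumptions for the example, not anything the SAFE Network actually specifies.

```python
# Hypothetical sketch of a Tesla-style staged rollout: small control group,
# then a larger group, then everyone, with laggards "orphaned" after a
# voluntary grace period. All parameters here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Rollout:
    nodes: list                       # all node IDs in the network
    waves: list = field(default_factory=lambda: [0.01, 0.10, 1.0])
    updated: set = field(default_factory=set)
    wave_index: int = 0

    def next_wave(self):
        """Offer the update to the next, larger fraction of nodes."""
        if self.wave_index >= len(self.waves):
            return []
        cutoff = int(len(self.nodes) * self.waves[self.wave_index])
        self.wave_index += 1
        # Nodes in this wave that have not yet taken the update.
        return [n for n in self.nodes[:cutoff] if n not in self.updated]

    def accept(self, node):
        """Record that a node has voluntarily installed the update."""
        self.updated.add(node)

    def orphaned(self, grace_expired: bool):
        """After the voluntary period, nodes that never updated are orphaned."""
        if not grace_expired:
            return set()
        return set(self.nodes) - self.updated
```

Whether anything like this is feasible on a fully decentralized network is exactly the open question raised below.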

Not sure if this “controlled” rollout would be feasible in the Safe Network but maybe a variation of it could be implemented.


The punishment of sleeper nodes is covered in the Datachain RFC.

Routing must punish nodes ASAP on failure to transmit a Link NodeBlock on a churn event. Links will validate on majority, but routing will require to maintain security of the chain by ensuring all nodes participate effectively. These messages should be high priority.


Thx - though we might be talking at cross purposes, as the type of sleeper I was thinking about is more akin to the discussion in the section on Archive nodes in the Datachain RFC.

These more reliable nodes will have a vote weight higher than a less capable node within a group. A majority of group members will still be required to agree on votes, though, regardless of these highly weighted nodes. This is to prevent attacks where nodes that have lasted for long periods in a group collude via some out-of-band method, such as publishing IDs on a website and soliciting other nodes in the group to collude and attack that group.
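The quoted rule combines two thresholds: a vote needs a majority of the total vote *weight*, but also a plain majority of group *members*, so a few long-lived, highly weighted nodes cannot decide on their own. A minimal sketch of that check, with invented names and weights (the RFC does not specify this code):

```python
# Illustrative only: a vote passes if it wins BOTH a majority of the total
# vote weight AND a majority of group members. This prevents a small clique
# of high-weight elders from carrying a vote by weight alone.
def vote_passes(weights, voters):
    """weights: dict of node_id -> vote weight for the whole group.
    voters: set of node_ids voting in favour."""
    total_weight = sum(weights.values())
    in_favour_weight = sum(weights[n] for n in voters)
    weight_majority = in_favour_weight * 2 > total_weight
    member_majority = len(voters) * 2 > len(weights)
    return weight_majority and member_majority
```

For example, a single elder holding most of the weight still fails the member-majority test, while a bare member majority with too little weight fails the weight test.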

If nodes can get in and gain a higher level of trust over time then they have the potential to be more disruptive in the future if they go rogue (esp. with collusion).

Routing must punish nodes ASAP on failure to transmit a Link NodeBlock on a churn event. Links will validate on majority, but routing will require to maintain security of the chain by ensuring all nodes participate effectively. These messages should be high priority.

Whilst it makes reference to “Routing must punish …” it doesn’t state the form of the punishment. Can you point me to where the consequences are documented?

P.S. The hyperlink on the word NodeBlock in the quoted section of rfcs/ is broken.


Will it be possible for the network to provide the update, while the nodes are updating?

In case I’m not asking that well… I hope that the updates will be available on the network… Could the network sort of roll over from <50% nodes updated to >50% nodes updated, all on the fly, while providing/hosting the update?

If updates happen automatically, would it likely be staggered? Something like 20% of the nodes at a time?

Well, this is inherent in the concept of Node Ageing. Only those who have demonstrated their good behaviour over time reach the status of elders and participate in the consensus. Of course an elder has a greater capacity to harm, but it is also much less likely to do so.
In the end, preventing a sufficient number of evil elders from colluding in the same section is what will secure the network.

This part is under development and, as far as I know, the punishment has yet to be defined (removal, age reduction, warning, …).


Thanks a lot for all the feedback, I’m really happy to see it :slight_smile:

Definitely a lot of good points, and we are keeping many of them in mind during our investigations.
With this overview post, I did not go into much detail on each aspect.

One thing I tried to do is to sketch the challenges rather than discuss the solutions we are considering, partly because we are still in the process, but also because each topic likely warrants its own post. It is also great to see all the directions this conversation is taking, and the new thoughts it is bringing to our team.


I’m a really non-technical, non-code-literate sort of a person, but I have read a bit about the subject and I have tried NixOS lately. I don’t know if this will have any bearing on the issues you are discussing, but I have a feeling it may be worth reading as you consider possible update mechanisms for the SAFE network.


An interesting philosophical question would be to consider the following two positions:

A. in a decentralized network you trust no one
B. you have to trust someone to use their updates

Sounds paradoxical? The premise of updates seems to suggest there has to be some central figure of trust - let’s say MaidSafe. But then, if we start to introduce this sort of requirement on whom you must trust, some of the niceties of a decentralized network start to break down. If we stick to the principle and don’t do that, then it means anyone can try to advertise updates, and the consensus algorithm needs to be able to figure out which update is ‘real’.

Another way of resolving this paradox might be to build updating as a mechanism outside the autonomous Network itself - e.g., you have to manually download the update program from somewhere, make sure it’s legitimate, and then run it across your client apps. And of course this doesn’t sound very nice or safe.


OR we have to trust the network to tell us whether the update is worth it. The update may come from someone we don’t know and therefore can’t trust, but if the network can run the update in parallel, benchmark it, and test it, then perhaps we can trust the network’s decision. That is all easier said than done, though :slightly_smiling_face:


What if you had to release a version that could not be updated or fixed, other than by choosing not to use it?

If there are time machines, it might be harder to seal the network; will it always be an arms race anyway?

Is a consensus lock good enough when picking that lock will pick all locks? It seems like an upgrade is a sealed fork (one that can’t be upgraded) that people migrate, or somehow bridge their work over, to.

But how do we know we can trust any version, aside from a consensus double-check of the code plus time?

An update may appear to function properly, but introduce deanonymizing back doors that some government has forced a software team to include. Any update will be suspect in that regard, which is why the Bitcoin model is rather nice: the update is put out there for all to see and discuss, and is installed manually by those who feel it is worth it.

Manual updates should keep state actors from applying such pressure in the first place, since shenanigans would be quickly identified by the community (hopefully). Automatic updates seem like they would invite state-actor attention to the software team(s).


Or we rely on a worldwide network of developers to give their opinions of the update. You know, decentralised. Or we do both: the network code says yes, and the devs worldwide say yes.

@lynx, I think you asked a question with the answer in mind and missed the alternatives.

Yes, exactly. It may work better than previous updates but introduce backdoors or even malware. Honestly, I doubt any automatic system short of a full-blown (non-existent) AI could have a chance. It should be possible for the code to automatically verify that the required functionality is better than the current version, but it cannot be sure whether some backdoor was introduced by changing one or two lines of code.

And then we have NEW functionality. How is current software supposed to evaluate it? @Jean-Philippe


One way is a double update. Put the new code in version 2, but that code is for when version 3 is available. So when we update to version 3, the code is already there to understand it. This also allows future upgrades to be known before they happen, and allows a version 2 node to understand version 3 data, etc.
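The double-update idea above can be sketched as a version-tagged decoder table: version 2 already ships a decoder for version 3's data format, so v2 nodes can still read v3 messages once the network moves on. The message formats and field names here are invented for illustration; nothing like this is specified anywhere.

```python
# Hypothetical illustration of the "double update": the node currently runs
# version 2, but its decoder table already includes version 3's (future)
# format, shipped ahead of time as part of the version 2 release.
DECODERS = {
    2: lambda payload: {"value": payload},            # current format
    3: lambda payload: {"value": payload["v"],        # future format, already
                        "sig": payload.get("sig")},   # shipped in version 2
}

def decode(message):
    """Decode a message tagged with its protocol version."""
    version = message["version"]
    decoder = DECODERS.get(version)
    if decoder is None:
        # A version we never shipped a decoder for: the node must upgrade.
        raise ValueError(f"unknown version {version}; node must upgrade")
    return decoder(message["payload"])
```

As the follow-up posts note, this only provides an upgrade path; it says nothing about whether the shipped-ahead code is trustworthy.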


And the issue is that a bad actor version could do this too. Under the guise of adding some useful functionality of course.

So how is the network code supposed to distinguish between the two?

Are we then back to the decentralised developers giving their tick of approval for any proposed upgrade?

Yes, I agree, but I was answering the specific point of how to have the current network “see” new code in the future. That allows an upgrade path, but it does not have any effect on evaluating that code, yet :wink:


Unfortunately I forgot to add the bit about what I said above. Sorry.


As Fleming will involve some basic version of data storage and farming, how can we get initial test coins?


Well, there could be a few methods. Maybe something similar to the invite system, but granting a certain amount of test coins.

For alphas, test coins will not survive past the alpha version, so gifting a fraction of a coin to others not on the forum, so they can create accounts, should not be an issue or a burden for those on the forum.

Maybe each vault started will be gifted a coin for being a node and then earn farming rewards.

Maybe by asking

Maybe per IP address

There are a number of ways and each will have some problems so it will be a case of working out the easiest for both the dev team and the users.

We will probably be told when or just before the alpha is released for testing.


We got an answer in today’s dev update:
