Moving a node to different hardware/location?

I was thinking about this and don't remember it being brought up previously. If there is a relevant older discussion, someone please link it. It may be too early to ask, but it seems relevant to how one would decide to spin up a node in the first place:

Will it be possible to maintain the status of a node if one needs or simply decides to change hardware and continue operating?

Say I'm operating a successful node, having attained Adult or Elder status, and I find a fault in my computer that makes me think it may fail at some point in the future. Do I just need to let it run until the computer fails, or can I port the node to new hardware?

Similarly, if I've got a node running from home and national laws change so that operating a node in my location puts me at risk, do I have to choose between personal safety and shutting it down, or can the node be ported to different hardware, say in another jurisdiction?


Similarly, I don't remember the details, but there's something about how a node that is unavailable for a short time should not lose its kudos. A 10-minute power cut or an ISP glitch perhaps shouldn't penalise good nodes, which would provide an opportunity to switch... though it raises the question of how the network manages being presented with twins.


This is all related to a "key selling attack" and how to mitigate it. There's some discussion, but I am keen to first get the network up and running.

So, a simple key-selling attack:

  1. Publish website wanting to buy Elders or old Adults
  2. Folk sell their key
  3. You pay
  4. You start as their node

Do that several times and you take over a section.
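The steps above can be sketched numerically. This is a hypothetical illustration, not the network's real parameters: it just assumes a section has some fixed number of Elders and reaches consensus with a 2/3 supermajority, so buying a quorum's worth of Elder keys captures the section.

```python
# Illustrative sketch of why key-buying captures a section.
# ELDER_COUNT and the 2/3 quorum rule are assumptions for the example,
# not the implemented network parameters.

ELDER_COUNT = 7                       # assumed Elders per section
QUORUM = (2 * ELDER_COUNT) // 3 + 1   # assumed supermajority -> 5 of 7

def section_compromised(keys_bought: int) -> bool:
    """True once the buyer holds a quorum of the section's Elder keys."""
    return keys_bought >= QUORUM

for bought in range(ELDER_COUNT + 1):
    print(f"bought {bought} keys -> compromised: {section_compromised(bought)}")
```

With these toy numbers, five purchased keys out of seven are enough, which is why repeating steps 1-4 a handful of times per section is so dangerous.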

This is why, right now, nodes are demoted when they are unresponsive. It is also why nodes don't have access to consensus keys (these are volatile and never written to disk).

However, a VM-selling attack makes all of that more difficult to mitigate: the steps above are replaced by simply handing control of the VM to the buyer.

So a lot to consider?

However, much of the recent work is going into making Elders only agree on events, never create them, and this is the route we ultimately need to take. Even so, unresponsive nodes still need to be punished, so moving a node may cost you half its age, or at least some penalty. We cannot expect folk to wait on a node reconnecting to reach agreement, and while your node is off we are in a danger zone, having lost one of the two possible nodes before a section breaks. That's another angle we are working on: how to repair a section. But even then, should you be able to break a section and face no penalty?
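The "half its age" penalty mentioned above could look something like the sketch below. The function name and the exact halving rule are assumptions for illustration, not the implemented algorithm.

```python
# Hypothetical sketch of an age penalty for a node that disconnects
# and rejoins (e.g. when its operator moves it to new hardware).
# The halving rule is an assumption based on the discussion above.

def penalise_age(age: int) -> int:
    """Halve a node's age, never dropping below a minimum joining age of 1."""
    return max(1, age // 2)

# A node of age 16 that moves would rejoin at age 8:
print(penalise_age(16))  # -> 8
```

The point of such a rule is that moving is possible but never free, so deliberately cycling nodes in and out (or selling them) always costs accumulated trust.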

A deep question all round, but first stable network.


Is this true ALL the time or only when the network is in its early stages and sections are at their smallest?


True right now, but we have many aces to play after stable network.


This brings up a question: what happens to people whose ISP changes their IP address regularly? One ISP I had changed the IP address every 24 hours. My current ISP doesn't change it unless the router is turned off for a number of days; it has been the same for years.

Will the node get rejected if its IP address changes?


QUIC has a config option for this, where there is a pre-negotiated changeover that is done securely.


That brings up another question. A change of IP address is usually unknown until something shows it has changed, so it cannot be negotiated prior to the change. Does this work after the change?

DHCP lease expiration can be predicted.
Are there other methods for public IP assignment?
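On the DHCP point: lease renewal is somewhat predictable, because RFC 2131's default timers have the client attempt renewal well before expiry. A small sketch (the function is mine, for illustration):

```python
# Why a DHCP-driven address change can be anticipated: per RFC 2131,
# a client tries to renew at T1 (default 0.5 * lease) and rebinds at
# T2 (default 0.875 * lease), so a node could pre-negotiate a QUIC
# changeover before the lease actually expires.

def dhcp_timers(lease_seconds: int) -> tuple[int, int]:
    """Return the default (T1 renew, T2 rebind) times for a lease."""
    return lease_seconds // 2, int(lease_seconds * 0.875)

t1, t2 = dhcp_timers(86_400)  # a 24-hour lease, like the ISP mentioned above
print(t1, t2)                 # -> 43200 75600
```

This only covers DHCP, of course; a CGNAT remap or PPPoE reconnect gives no such warning, which is where handling the change after the fact matters.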

From Wikipedia:

Another goal of the QUIC system was to improve performance during network-switch events, like what happens when a user of a mobile device moves from a local WiFi hotspot to a mobile network. When this occurs on TCP, a lengthy process starts where every existing connection times out one-by-one and is then re-established on demand. To solve this problem, QUIC includes a connection identifier which uniquely identifies the connection to the server regardless of source. This allows the connection to be re-established simply by sending a packet, which always contains this ID, as the original connection ID will still be valid even if the user’s IP address changes.
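The idea in that quote can be shown with a toy demultiplexer: a server that keys sessions by connection ID rather than by (IP, port) keeps the session alive across an address change. This is not real QUIC, just the concept.

```python
# Toy illustration of QUIC-style connection migration: sessions are
# looked up by connection ID, so the source address can change freely.
# All names here are made up for the example.

connections: dict[str, dict] = {}  # connection_id -> session state

def handle_packet(conn_id: str, source_addr: tuple[str, int]) -> str:
    # Look up by connection ID, NOT by the (IP, port) 4-tuple.
    session = connections.setdefault(conn_id, {"packets": 0})
    session["packets"] += 1
    session["last_addr"] = source_addr  # migration is just a field update
    return f"conn {conn_id}: packet {session['packets']} from {source_addr}"

print(handle_packet("abc123", ("198.51.100.7", 443)))  # e.g. home Wi-Fi
print(handle_packet("abc123", ("203.0.113.9", 443)))   # e.g. mobile network
# The same session survives the address change:
print(connections["abc123"]["packets"])  # -> 2
```

A TCP server in the same situation would see the second packet as belonging to a brand-new (and half-open) connection, which is exactly the timeout-and-reconnect cost the quote describes.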


That's cool. Does it work for IPv4 <=> IPv6 changes too?

Right now, for us, it won't matter as we will be IPv4-only, but it should in theory.


Glad to hear this. It's been a question sitting on the back burner.


The question is: can we make a node redundant by adding a failover node ready to take over in case the first one crashes?

@dirvine, your initial analysis of “key selling attack” risks indicates no, but your Wikipedia quote about QUIC connection identifier indicates yes. So the answer is not clear to me.


At the moment, no, due to us making the consensus key volatile for security. If the key is accessible then humans can get at it, and they do bad things :slight_smile: Seriously though, it's us saying: if you run the correct code you don't get the key, but if you run Byzantine code you of course can.


Maybe in the long term. From the QUIC perspective, the key is a session-based connection ID and, again, is not exposed.

So all possible, but none coded like that yet.


Why not just run two nodes? That way you don't have one lying idle. You'll need some more bandwidth, but that will be paid for through earnings anyway.


A redundant node is useful to minimize risks of losing age and status (adult or elder) if it crashes.


I understand that, but why leave one idle? If you run both, you get redundancy and double the earnings until one fails, after which you've still got one node earning while the second starts again.


The backup node won't be idle, because it has to duplicate the master node's data.

But this is a digression; my question was whether a redundant setup is possible at all (and the answer seems to be: not yet). Reusing the backup node for another task is a separate topic.

This is my first stab at my current design for how I plan to do Safe farming. The hope is to allow the POSIX mount point's storage to expand in real time, if need be across N nodes, while continuing to run a single instance of the Safe daemon.

I am still trying to absorb the specific details from others in this thread on how the Safe daemon's reputation will be penalised, and in general how it will react when the daemon is restarted on another physical node but can still see the same view of the file storage (Safe vault files) and reuse the same WAN IP it had right before the crash and re-spin-up.
