I was thinking about this and don’t remember it being brought up previously. If there is a relevant older discussion, someone please link it. It may be too early to ask, but it seems relevant to how one would first decide to spin up a node:
Will it be possible to maintain the status of a node if one needs or simply decides to change hardware and continue operating?
Say I’m operating a successful node, having attained Adult or Elder status, and I find a fault in my computer that makes me think it may fail at some point in the future. Do I just need to let it run until the computer fails, or can I port the node to new hardware?
Similarly, if I’ve got a node being run from home, and the national laws change in a way that puts me at risk for operating a node in my location, do I have to choose between shutting it down for personal safety and keeping it running? Or can it be ported to different hardware, say in another jurisdiction?
Similarly, I don’t remember the details, but there was something about a node being unavailable for a short time not losing its kudos. So a power cut for 10 minutes, or an ISP glitch, perhaps should not lose us the good nodes. That would also provide an opportunity to switch hardware, though it raises the question of how the network handles being presented with twins.
This is all related to a “key selling attack” and how to mitigate that. There’s some discussion, but I am keen to first get the network up and running.
So, a simple key-selling attack:

1. Publish a website offering to buy Elder or old Adult keys.
2. Folk sell their keys.
3. You start up as their node.
4. Do that several times and you take over a section.
This is why, right now, nodes are demoted when they are unresponsive. It is also why nodes don’t have access to consensus keys (these are volatile and never written to disk).
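To illustrate what “volatile and never written to disk” means in practice, here is a minimal Rust sketch of a key type that lives only in memory, has no serialization support, and wipes itself on drop. The type and method names are hypothetical, not the actual vault code, and a real implementation would use something like the `zeroize` crate to defeat compiler optimisations:

```rust
// Hypothetical sketch: a consensus key held only in memory.
// No Serialize/Deserialize impls, so correct code paths can never
// persist it; the bytes are wiped (best effort) when it goes away.
struct VolatileKey {
    bytes: [u8; 32],
}

impl VolatileKey {
    fn new(bytes: [u8; 32]) -> Self {
        VolatileKey { bytes }
    }

    // Callers interact through methods; the raw bytes are never exposed.
    // A simple XOR fingerprint stands in for real signing here.
    fn fingerprint(&self) -> u8 {
        self.bytes.iter().fold(0u8, |acc, b| acc ^ b)
    }
}

impl Drop for VolatileKey {
    fn drop(&mut self) {
        // Best-effort wipe; real code would use `zeroize` so the
        // compiler cannot optimise this away.
        for b in self.bytes.iter_mut() {
            *b = 0;
        }
    }
}

fn main() {
    let key = VolatileKey::new([7u8; 32]);
    println!("fingerprint: {}", key.fingerprint());
}
```

The point of the design is that “byzantine code can still read the key” (as noted later in this thread) but the honest code path simply has no way to write it out.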
However, a VM-selling attack makes all of that harder to mitigate: the steps above are replaced by simply handing control of the VM to the buyer.
So a lot to consider?
However, much of the recent work is going into making Elders only agree on events, never create them, and that is the route we ultimately need to take. Even so, unresponsive nodes still need to be punished, so moving a node may cost you half its age, or at least some penalty. We cannot expect folk to wait on a node reconnecting to reach agreement, and while your node is off we’re in a danger zone, having lost one of the two possible nodes before a section breaks. That is another angle we are working on: how to repair a section. But even then, should you be able to break a section and face no penalty?
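The “cost you half its age” idea above could be modelled as below. This is a hypothetical sketch, not the actual vault code; note that node age in SAFE-style designs is roughly logarithmic (the work to reach age `a` grows on the order of 2^a), so halving the age is a much harsher penalty than it sounds:

```rust
// Hypothetical penalty rule for a node that disconnects and rejoins:
// "moving a node may cost you half its age or at least some penalty".
// The exact rule and function name are illustrative assumptions.
fn penalised_age(age_before: u8) -> u8 {
    age_before / 2
}

fn main() {
    // Because age is logarithmic in work done, dropping from 16 to 8
    // discards the overwhelming majority of accumulated work.
    for age in [4u8, 8, 16] {
        println!("age {} -> {} after reconnect", age, penalised_age(age));
    }
}
```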
A deep question all round, but first stable network.
This brings up a question: what happens to people whose ISP changes their IP address regularly? One ISP I had changed the IP address every 24 hours. My current ISP does not change the IP address unless the router is turned off for a number of days; it has been the same for years.
Will the node get rejected if their IP address changes?
Another goal of the QUIC system was to improve performance during network-switch events, such as when a user of a mobile device moves from a local WiFi hotspot to a mobile network. When this occurs over TCP, a lengthy process starts in which every existing connection times out one by one and is then re-established on demand. To solve this problem, QUIC includes a connection identifier which uniquely identifies the connection to the server regardless of source. This allows the connection to be re-established simply by sending a packet, which always contains this ID; the original connection ID remains valid even if the user’s IP address changes.
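The core of that QUIC idea is that the server keys its session table by an opaque connection ID carried in every packet, rather than by the (IP, port) tuple. Here is a small Rust sketch of just that lookup logic; it is an illustration of the concept, not a real QUIC implementation, and all names are made up:

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// An opaque per-connection identifier, as in QUIC (simplified to u64).
type ConnectionId = u64;

struct ConnectionTable {
    // Sessions are found by connection ID, NOT by source address.
    sessions: HashMap<ConnectionId, SocketAddr>, // last-seen source address
}

impl ConnectionTable {
    fn new() -> Self {
        ConnectionTable { sessions: HashMap::new() }
    }

    // On every packet, look the session up by its ID. If it exists,
    // just record the new source address: no handshake is redone even
    // though the client's IP may have changed. Returns true if the
    // packet matched an existing session.
    fn on_packet(&mut self, id: ConnectionId, from: SocketAddr) -> bool {
        match self.sessions.get_mut(&id) {
            Some(addr) => {
                *addr = from; // session silently migrates to the new address
                true
            }
            None => {
                self.sessions.insert(id, from); // brand-new session
                false
            }
        }
    }
}

fn main() {
    let mut table = ConnectionTable::new();
    let wifi = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(192, 168, 0, 2)), 4433);
    let mobile = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(10, 0, 0, 7)), 4433);

    table.on_packet(42, wifi);   // first packet: new session
    table.on_packet(42, mobile); // WiFi -> mobile: same session survives
    println!("session 42 now at {:?}", table.sessions[&42]);
}
```

With TCP the move from `wifi` to `mobile` would have killed the connection, because the session is identified by the address tuple itself.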
At the moment no, because we make the consensus key volatile for security. If the key is accessible, then humans can get it, and they do bad things. Seriously though, it’s us saying: if you run the correct code you don’t get the key, but if you run byzantine code you of course can.
Maybe in the long term. From the QUIC perspective, the key is a session-based connection ID and, again, is not exposed.
This is my first stab at my current design plans for how I plan to do safe farming. The hope is to allow for ever-expanding storage of the POSIX mount point in real time, if need be across N nodes, while continuing to run a single instance of the safe daemon.
I am still trying to absorb the specific details from others in this thread on how the safe daemon’s reputation will be penalised, and in general how the network will react when the daemon is restarted on another physical node while still seeing the same view of the file storage system (Safe vault files) and re-using the same WAN IP it had right before the crash and re-spin-up.