SAFE is fast enough now for its critical use case: release it now!

There has to be some downtime allowed, otherwise there’s no way to update the system the node is running on. Also, that approach reads like runtime kernel patching, which is error-prone… (you could quite easily serialize the data and then deserialize & migrate to a new schema in the new executable)

The actual update seems to be rather easy. The problem lies in the consensus on which version should be used. Something like a global CRDT Map<SectionId, VersionThatShouldBeRun> that contains the desired version for each section. After n% signal support for the newer version it switches to that version… synchronized in some way…
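Very roughly, something like this; just a sketch, where `SectionId`, `NodeId`, the vote tally and the threshold are placeholders I’m making up, not anything specified for SN:

```rust
use std::collections::{BTreeMap, BTreeSet};

// Placeholder types: the real network's identifiers would differ.
type SectionId = u64;
type NodeId = u64;
type Version = u32;

/// Per-section tally of "I support version V" signals. A grow-only map of
/// vote sets is CRDT-friendly: merging two replicas is just a set union.
#[derive(Default)]
struct VersionVotes {
    votes: BTreeMap<SectionId, BTreeMap<Version, BTreeSet<NodeId>>>,
}

impl VersionVotes {
    /// Record that `node` signals support for `version` in `section`.
    fn signal_support(&mut self, section: SectionId, version: Version, node: NodeId) {
        self.votes
            .entry(section)
            .or_default()
            .entry(version)
            .or_default()
            .insert(node);
    }

    /// Merge another replica's votes; the union is order-independent,
    /// so replicas converge however the signals arrive.
    fn merge(&mut self, other: &VersionVotes) {
        for (section, per_version) in &other.votes {
            for (version, voters) in per_version {
                self.votes
                    .entry(*section)
                    .or_default()
                    .entry(*version)
                    .or_default()
                    .extend(voters);
            }
        }
    }

    /// The highest version that has reached `threshold` (e.g. 0.67) support
    /// out of `section_size` nodes, if any.
    fn agreed_version(
        &self,
        section: SectionId,
        section_size: usize,
        threshold: f64,
    ) -> Option<Version> {
        self.votes.get(&section)?.iter().rev().find_map(|(version, voters)| {
            (voters.len() as f64 / section_size as f64 >= threshold).then_some(*version)
        })
    }
}
```

The switch itself (every node in the section actually restarting into the agreed version) is the part that would still need to be synchronized somehow.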

I would rather signal support for specific features (the same way Bitcoin does, or Rust with its editions :slight_smile:). The node software has support for that feature already built in, but disabled until there’s consensus for it.
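A rough sketch of that kind of feature-bit signalling (the feature bits and the thresholds below are made-up examples, not anything defined for SAFE):

```rust
// Each node advertises the features its binary supports as a bitmask,
// in the spirit of Bitcoin's version bits. Bit assignments and the
// activation thresholds here are invented for the example.
const FEATURE_NEW_ROUTING: u32 = 1 << 0;
const FEATURE_WIDE_BALANCES: u32 = 1 << 1;

/// A feature counts as active for a section once enough peers signal it.
fn feature_active(feature: u32, peer_masks: &[u32], threshold: f64) -> bool {
    let supporters = peer_masks.iter().filter(|&&m| m & feature != 0).count();
    supporters as f64 >= threshold * peer_masks.len() as f64
}

fn main() {
    let peers = [FEATURE_NEW_ROUTING | FEATURE_WIDE_BALANCES, FEATURE_NEW_ROUTING, 0];
    // 2 of 3 peers signal the new routing: below an 80% bar, above a 60% bar.
    assert!(!feature_active(FEATURE_NEW_ROUTING, &peers, 0.8));
    assert!(feature_active(FEATURE_NEW_ROUTING, &peers, 0.6));
}
```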

One advantage with it being protocols is that a version number can be included in the headers.

This allows newer versions to understand older versions and, where possible, still use them: when talking to a previous-version node, the packet is downgraded to the previous version. Usually this is done in the packet assembly routine, adds no measurable time overhead and very little code. The node’s version would be included in the connection handshake.
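As a rough illustration of that downgrade happening in the assembly routine (the message shapes and version numbers are invented for the example, not actual SAFE messages):

```rust
// The peer's protocol version, learned during the connection handshake,
// decides which wire format to emit. Everything here is illustrative.

#[derive(Clone, Copy, PartialEq, PartialOrd)]
struct ProtocolVersion(u16);

const V2: ProtocolVersion = ProtocolVersion(2);

/// The node's in-memory view of a message (always the newest schema).
struct StoreRequest {
    chunk_id: [u8; 32],
    payload: Vec<u8>,
    priority: Option<u8>, // new in V2; V1 peers don't understand it
}

/// Wire forms for each protocol version still supported.
enum WireMessage {
    V1 { chunk_id: [u8; 32], payload: Vec<u8> },
    V2 { chunk_id: [u8; 32], payload: Vec<u8>, priority: u8 },
}

/// The downgrade lives here, in assembly, so the rest of the code only
/// ever deals with the newest in-memory form.
fn assemble(req: StoreRequest, peer_version: ProtocolVersion) -> WireMessage {
    if peer_version >= V2 {
        WireMessage::V2 {
            chunk_id: req.chunk_id,
            payload: req.payload,
            priority: req.priority.unwrap_or(0),
        }
    } else {
        // V1 peer: drop the field it cannot understand.
        WireMessage::V1 { chunk_id: req.chunk_id, payload: req.payload }
    }
}
```

Keeping the compatibility logic in that one place is what keeps the code overhead small.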

Thus, as long as the older version is still able to be used, the older node can be tolerated. Maybe it does not age if it is too old, as a way to encourage the owner to upgrade.

Also, there will often be upgrades to the code without upgrades to the protocols. For instance, many IPv4 stacks (code) have been through numerous upgrades to improve performance and abilities without any change to the actual packets/protocol. This means that the other nodes do not need to be as concerned with an old node. Although I would include with the protocols such things as ageing protocols and the like, which are not strictly a change in any packet format.

The network could not really run without nodes being able to tolerate old nodes on the network. If there were ever a need for a major change that excludes that happening, then I’d expect an interim update before the massive change, one that basically can work with both older-versioned nodes and the new version. The sequence of upgrades is to upgrade to the interim version, flag this as an urgent upgrade and wait for a period of time, then do the upgrade to the new version that is incompatible with the versions before the interim.

7 Likes

Fair enough about patch and upgrade and not just fork. But are you suggesting that what David set out to do prior to the release of bitcoin and the crypto fad is now recognized to be not possible without the token (coin)?

In the past when I’d put forth the concern that the coin was a distraction, the answer was that the network was possible without it. At this point I am basically hostile to the coin, even if the concept has been improved by calling it a token. I can’t see myself using the network if the coin (token) is involved. I don’t want it. I get there was token ring and all that; it’s not new lingo.
It just doesn’t seem trustworthy if it’s got any of this speculating / metering / toll-roading / PTP / DRM / enclosure / top-down vibe. Plus all that stuff needs much higher speeds to be competitive with clearnet for entertainment, but how do you get that over ISP-owned hardware designed to run the clearnet and already carrying the clearnet?
Also I straight up do not care at all about entertainment as that’s like moving from vital to frivolous.

My hope is that the real version of SAFE is already out there running for the people who really need it. Seems like there is a lot less tech suppression suddenly. For ever they’ve been trying to suppress recognition of quantum effects at meso scale, but that seems to be crumbling. So much orthodox scientific or scientistic dogma seems to be crumbling at the same time propaganda is rampant.

The average virus is (if I am not mistaken) about the same size in lines of code as DOS, around 10k lines. That is what we needed: viral freedom. That was the promise. Secure Access For Everyone wasn’t about divide or enclosure or lock-out, it was about inclusion.

No, I don’t think it’s funny. A secure, private means of communication, widely distributed or easily accessible, is needed. We need that much more than a new internet. It seems that functionality within reason at reasonable speed has been proven, even if not perfected. But a coin and all that seem like bloat and distraction and speculation, really taint. There seems to be this claim that the network cannot scale without a lure in a coin, now called a token to make it less contentious, like token ring or something.

Layman’s questions: are upgrades not the weakest point of attack in the network? Who exactly will be upgrading the network from now into eternity? Could an upgrade be malicious, like one that breaks nodes?

3 Likes

I think having upgrades run in parallel on what David calls “sacrificial nodes”, and accepting them only if the code runs as expected, with no bugs, and is provably better than the previous version, should be viable.

I wonder though if something malicious could be hidden, like a Trojan horse that acts only after updates are complete across the board. Not sure how possible that is.

There is always the option of adding a requirement to stake a large portion of SN to be able to upgrade, or to vote on an upgrade, but I think MaidSafe will do something in typical MaidSafe fashion that is catered to the unique design of the Safe Network and will feel natural and suitable for all participants. Perhaps upgrades could even be quite autonomous if going the sacrificial-node route?

Will be interesting to watch that design challenge be tackled. Team always excites and impresses. :slightly_smiling_face:

4 Likes

I am intrigued by how malevolent node software can infiltrate, what it could do and how to defend against this.

Suppose all nodes updated to a malevolent version; it could do anything. But this is true of any decentralised system, so I’m guessing it’s a solved problem: through open source code, signing by known/trusted developers, and review and validation by known/trusted developers. How do other projects do this?

That seems about it really. If you can defend against that you’re good, because for such an attack to work you need to get past the 51% threshold (though I think this may be 33% for Safe), and to do that in a worldwide network means beating the code validation process.

People not running validated code are playing with fire and will be few and far between, so a threat to themselves but not the network.

6 Likes

Hardware doesn’t buy itself. Neither does power or bandwidth. Good will will only get you so far and then things need to be paid for.

Some folks will want to give just enough in exchange for what they want to use. The token enables that. At the other extremes, some people just want to pay for what they use, and others just want to earn tokens and not use the network at all.

Just expecting the network to be sustainable without these incentives seems very hopeful to me. I doubt there are enough people as passionate about it as you to keep it going. There needs to be something more than good will and charity behind it, imo.

9 Likes

A quote from the Network Fundamentals page on the website:

The project to build the Safe Network has, at its heart, a set of objectives that are still as vital to meet today as they were when we began back in 2006:

  • Allow anyone to have unrestricted access to public data: all of humanity’s information, available to all of humanity.

  • Enable people to securely and privately access their own data, and use it to get things done, with no one else involved.

  • Allow individuals to freely communicate with each other privately, and securely.

  • Giving people and businesses the opportunity for financial stability by trading their resources, products, content, and creativity without the need for middlemen, or gatekeepers.

Which of these would you consider essential for the Network to fulfil when we launch it?

4 Likes

Agreed, this world is not known for its good will. Humanity as a whole is a bunch of greedy bastards. Without incentives nothing goes beyond the experimental stage…

Think of the token as it was originally intended (and still is, for me). Folk provide resources, and they get a certificate or proof that they did that. Then the network gives them resources in return. You use the certificate to prove you stored X MB so you can claim Y MB of storage. All cool.

Now if you run a node you get a cert so can store data. What about folk who cannot run a node though?

So transferable certificates was the idea, you sign your cert over to somebody else. All good.

In 2007 we came up with an electronic currency, but it was frowned upon in the UK (bad men only do such innovations etc. etc.). So bitcoin happened in 2009 and we said there it is, transferable certificates and it’s out, so let’s get back to our original plan and make it easier for everyone.

So SNT happened.

24 Likes

I’m getting nostalgic; it has been a long time since we had an epic Warren rant.

4 Likes

Every one of them, thank you.

1 Like

David. Ok, so it was the original vision. Can’t beat that for context. Thank you.

3 Likes

That’s my thinking too. But I am sure that the upgrades would include signatures from core developers who act as custodians (of the code, not the network). And these people would move on and others would replace them at times. A node would not accept an upgrade without the signatures, and I hope there is some way to delay updates so other people can verify the code first.

Also, there was a thought that the update would be run by the previous version to check whether it did anything strange, and the node would not update if something was found wrong.
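Roughly what I have in mind, as a sketch only; the threshold, the key handling and the abstract `verify` function are placeholders, not MaidSafe’s actual release process:

```rust
use std::collections::BTreeSet;

// Placeholder: custodian keys are identified by 32-byte fingerprints and
// the actual signature scheme is abstracted behind `verify`.
type DevKeyId = [u8; 32];

struct Release<'a> {
    artifact: &'a [u8],
    signatures: Vec<(DevKeyId, Vec<u8>)>,
}

/// A node accepts an upgrade only if at least `min_sigs` distinct custodian
/// keys from the trusted set have signed the release artifact.
fn accept_upgrade(
    release: &Release,
    trusted_keys: &[DevKeyId],
    min_sigs: usize,
    verify: impl Fn(&DevKeyId, &[u8], &[u8]) -> bool, // (key, message, sig) -> valid?
) -> bool {
    let valid_signers: BTreeSet<DevKeyId> = release
        .signatures
        .iter()
        .filter(|(key, sig)| trusted_keys.contains(key) && verify(key, release.artifact, sig))
        .map(|(key, _)| *key)
        .collect();
    valid_signers.len() >= min_sigs
}
```

Rotating custodians would then just mean updating the trusted key set (presumably itself needing signatures from the existing custodians), and a delay window before installing gives others time to verify the code.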

5 Likes

Yeah, I don’t see a problem with that if there’s a mechanism for nodes to rejoin their section.

Backwards compatibility is also “easy” (newer nodes reading older messages), but forward compatibility is hard (older nodes reading newer messages). Take a message that is not just a simple “newer node sends a direct message to an older node” (where the newer node can just produce a message in the older version), but involves signatures etc. that are collected from multiple nodes and then sent to multiple nodes (as most messages are). There’s no way to support a newer data schema and be forward compatible at the same time.

(Forward compatibility is even harder for CRDTs, because you need to process all fields in a message and merge them properly with your local data. You can’t just ignore unknown fields; doing so would cause data loss that is then propagated/gossiped to other nodes. I’m working on a CRDT password manager and had to drop the idea of forward compatibility because it’s so hard to implement.)
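A toy illustration of that data-loss problem (nothing to do with SN’s actual types, just the shape of the issue):

```rust
// A V1 replica of a password-manager entry doesn't know about the
// `totp_secret` field that V2 added. If it ignores the unknown field while
// merging, whatever it strips is exactly what it gossips back out.

#[derive(Clone)]
struct EntryV2 {
    password: String,
    updated_at: u64,             // last-writer-wins timestamp
    totp_secret: Option<String>, // unknown to V1 nodes
}

/// What a V1 node effectively does when merging a V2 entry it only
/// partially parsed: keep the fields it understands, drop the rest.
fn merge_as_v1(local: &EntryV2, remote_as_seen_by_v1: (&str, u64)) -> EntryV2 {
    let (password, updated_at) = remote_as_seen_by_v1;
    if updated_at > local.updated_at {
        EntryV2 {
            password: password.to_string(),
            updated_at,
            totp_secret: None, // the V2-only field is silently lost...
        }
    } else {
        local.clone()
    }
}
// ...and because the merged state is what gets replicated onward, the loss
// propagates to every other replica, including the V2 ones.
```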

How is the support (or, more importantly, rejection) of a change coming from node operators/SN users respected? Is MaidSafe going to say “here is an update that has feature X, it is going to be enabled at timestamp Y”, and everyone who is not in favour of that update gets kicked off the network? Or, if the majority is against it, are the nodes that enable it kicked off at timestamp Y? (“timestamp Y” as a general trigger for an update… there is no notion of time in SN…)

What’s missing for me is a consensus mechanism for how features/protocol changes are going to be enabled network-wide.

1 Like

When an upgrade cannot allow previous-version nodes to work, since they cannot send the correct info, that is a case for the interim-version situation.

There would be different classes of upgrades, which may include but are not limited to:

  • Simple code efficiency and/or bug fixes. No protocol processing/packet changes other than protocol “stack” improvements.
  • Some protocol processing/packet changes to allow additional features, when these features are not deal breakers for using previous versions of the protocol. E.g. adding a field with processing to improve routing from section to section, but the nodes can still use the old method.
  • Upgrading token balances from 64 bits to 128 (or 96) bits. Old nodes cannot handle the new 128-bit balances and are thus incompatible as soon as one balance they look after uses the full 96 or 128 bits.
    • In this case an interim upgrade is needed. For this type of upgrade the change is shipped but not actually used. During a later upgrade (maybe the 5th after) any old nodes that are too old to have the code will be excluded. Basically the later upgrade removes those old nodes from the compatibility list.
  • Protocol changes that are incompatible with the previous version.
    • Again, this can be done similarly to the coin balance example, where the protocol processing/packet is upgraded many versions before it is enabled or perhaps mandated (see the sketch after this list).
  • And then the emergency upgrades to fix an exploit or other critical bug.
    • This will require an interim upgrade that stops the problem from hurting the network and works with both new and old versions. The later new version will have all the fixes in place and eventually take over from the interim version as nodes implement it.
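The “ship the code early, enable it later” pattern from the list above might look roughly like this; version numbers, the activation point and the balance widths are examples only:

```rust
// The wide-balance code ships at one version, but the network only starts
// emitting wide balances several versions later, once nodes older than the
// interim version have been dropped from the compatibility list.

const WIDE_BALANCE_SHIPPED: u32 = 12; // binaries understand u128 balances
const WIDE_BALANCE_ENABLED: u32 = 17; // network starts using them
const MIN_SUPPORTED: u32 = WIDE_BALANCE_SHIPPED;

enum Balance {
    Narrow(u64),
    Wide(u128),
}

fn encode_balance(amount: u128, network_version: u32) -> Balance {
    if network_version >= WIDE_BALANCE_ENABLED {
        Balance::Wide(amount)
    } else {
        // Before the enable point the network never issues balances that
        // overflow 64 bits, so the narrow form is always safe here.
        Balance::Narrow(amount as u64)
    }
}

/// Handshake check: nodes older than the interim version are no longer
/// on the compatibility list.
fn accept_peer(peer_version: u32) -> bool {
    peer_version >= MIN_SUPPORTED
}
```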

So there are a range of upgrades. It would be hoped that once the network is out of infancy that the emergency upgrades will not be needed.

Still, the devs, who are more knowledgeable with the code, will make up their own list of classes for updates and how each class will be done.

Can’t really suggest anything here other than a version number; that being included with all connection handshakes will allow the network to quickly find out the latest version.

Maybe also include in the connection handshake the latest verified version number that has been seen by the node.
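A tiny sketch of what such a handshake record might carry (field names invented; in practice the claimed version would presumably be checked against a signed release record before being trusted):

```rust
// The handshake carries both the node's own version and the newest
// verified version it has heard of, so news of a release spreads with
// ordinary connection traffic.

#[derive(Clone, Copy)]
struct Handshake {
    my_version: u32,
    latest_verified_seen: u32,
}

struct NodeState {
    my_version: u32,
    latest_verified_seen: u32,
}

impl NodeState {
    fn handshake(&self) -> Handshake {
        Handshake {
            my_version: self.my_version,
            latest_verified_seen: self.latest_verified_seen.max(self.my_version),
        }
    }

    /// Learn about newer releases from every peer we connect to.
    fn on_peer_handshake(&mut self, peer: Handshake) {
        self.latest_verified_seen = self
            .latest_verified_seen
            .max(peer.my_version)
            .max(peer.latest_verified_seen);
    }
}
```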

3 Likes

I still hate the idea of network assisted/forced binary updates.

  • Binaries can’t be reviewed properly (source code can). Binaries are platform dependent (unless it’s WAsm). Just let the node operators decide which code they are running on their nodes as long as it’s protocol compatible.

  • Automatic evaluation of binary updates seems like a pipe dream to me, at least until there’s an artificial general intelligence that can read and review the generated machine code, with all its consequences, the same way a human would read the source code.

2 Likes

At least with Safe the upgrade code sits on Safe, cannot be modified, and thus once it’s verified it’s good to be trusted.

Cannot expect people to compile the source code though. We want people to have reliable & secure one-click installs and upgrades.

5 Likes

+1
This is critical in a decentralized network

1 Like