SAFE network upgrades

I just realised you could then imagine running your vault binary off of the network. If working for MaidSafe has not already exposed me to every imaginable interpretation of “bootstrapping”, this is taking it to a whole new level :smiley:

7 Likes

YES, and that is the reason I am asking for his thoughts on the matter. At some point we have to have a decision on the validity of any upgrade, and also on whether we should allow each user to decide to accept an upgrade. One of the huge complaints about WIN10 is the automatic upgrading feature, and for very good reason. People wish to make informed choices about upgrading, even if that choice is to do it automatically.

Except it cannot write its own upgrades, so any upgrade is external, and humans can be crafty and fool any internal checks by writing in misleading code which does more than it claims. We need humans to check, and that’s why it’s open source. But at some stage the code has to be introduced to the system.

Oh, anyone can fork it, improve it, etc. But how does it get checked before being introduced live to the system??? This is the crux of the problem, not the forking/improving by anyone.

Vaults become microkernels :wink: Fixed code with definable rules loaded at bootup, running in sealed computation units, verifiable with zk-SNARK-type logic. It’s been discussed a few times (Brian has been chatting about it too). Not so far-fetched really :slight_smile:
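
As a purely illustrative sketch of that idea, here is what a fixed vault core with rules loaded at boot might look like in Rust. Every name here (`RuleSet`, `VaultCore`, `verify_rules`) is invented, and the verification is a plain sanity check standing in for the zk-SNARK-type proof mentioned above:

```rust
// Hypothetical sketch only: a fixed vault core whose only mutable input
// is a declarative rule set, which must pass verification before boot.

#[derive(Debug)]
struct RuleSet {
    max_chunk_size: usize,
    replication_factor: u8,
}

struct VaultCore {
    rules: RuleSet,
}

impl VaultCore {
    /// Boot the fixed core; it refuses to start unless the rule set
    /// passes verification.
    fn boot(rules: RuleSet) -> Result<Self, &'static str> {
        if !verify_rules(&rules) {
            return Err("rule set failed verification; refusing to start");
        }
        Ok(VaultCore { rules })
    }
}

/// Stand-in for the real check (zk-SNARK-type logic in the discussion);
/// here it is just a plain sanity check.
fn verify_rules(rules: &RuleSet) -> bool {
    rules.max_chunk_size > 0 && rules.replication_factor >= 4
}

fn main() {
    let rules = RuleSet { max_chunk_size: 1024 * 1024, replication_factor: 4 };
    match VaultCore::boot(rules) {
        Ok(core) => println!("vault booted with rules: {:?}", core.rules),
        Err(e) => eprintln!("{}", e),
    }
}
```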

5 Likes

Actually your story before this was like an updated version of some sci-fi that foretold of networks, but they were mainframe networks (microprocessors didn’t exist when those stories were written in the 50’s). Now we are seeing it become a reality that is greater than the writers of sci-fi imagined.

But the part I quoted is the meat of the answer. That provides a lot of confidence that it can be done.

I was attracted to this project because it so wonderfully makes real the idea-sparks I had decades ago (before PCs) of where the networks we were creating would lead.

Sir, you are to be congratulated, even if you don’t want to be. I hope you can pull off this autonomous upgrade mechanism, and I am sure you and the MaidSafe team can.

6 Likes

On a philosophical note, here we diverge. :slight_smile: I don’t believe in this deep-rooted Aristotelian teleological understanding of purpose. On the contrary, I believe it is fraught with dangers. Luckily we aren’t building Skynet just yet, but I am already acutely aware of the risks. Every set goal is susceptible to “perverse instantiation” (as Nick Bostrom would put it - I can strongly recommend his new book). The SAFE network will not be a superintelligence, but it does explicitly aim to be a collective intelligence.

Many examples already exist of collective systems with a set goal producing undesired side-effects; a key example is the world economy driven by the set goal “maximise your profit”. Of course it was (Scottish :wink: ) Adam Smith who instantiated the cultural shift towards seeing selfish profit as also beneficial for the whole. To a large extent it has been proven true: our collective wealth keeps rising exponentially - but it equally has unintended side-effects, the main one being climate change.

I have reached the conclusion that the only goal we can set is no goal. I learnt this from my previous experiences in the NGO aid sector; countless people and organisations honestly intend to do good, and go in with well-thought-out goals to help other people. While this does a lot of good, it also ultimately harms the people it is trying to help. My best experience is still when I went for one month to Kinshasa with no agenda, just to live with a host family. Setting no predefined goal is the only way to be open to what is really out there.

As a third philosophical argument, evolution itself has no set end goal. Even thinking of Dawkins’ Selfish Gene, this is not a teleological goal. Evolution is a tautology; not-evolution is logically impossible. So in the end, I believe we should not set a goal; it is almost presumptuous to think anyone could. We can only guide evolution.

6 Likes

But here it is assumed that any adverse effects of a new update become apparent within a relatively short time frame. A new exploitable weakness may not be activated for a long time. Or, in the case of SafeCoin-related algorithms (farming rate and such), their economic effects may be beneficial in the short term but harmful in the long term.

I believe that in theory the network can measure it all, but the question is whether it can measure it in time, before unacceptable damage occurs. I don’t think we want to go through multiple collapses of the network until a generation evolves that is finally stable.

2 Likes

@BenMS, you should read the following book, if you haven’t already read it…

It is amazing in its logical thinking and conclusion.

@BenMS,

While I am VERY attracted to @dirvine 's vision, I think his and your very interesting comments here are probably deserving of their own philosophical discussion thread.

Regards,
Phil.

3 Likes

Another Goldratt quote: “Tell me how you measure me, and I will tell you how I will behave.”

It is very easy to measure the wrong thing and get behaviors that don’t end up serving the system as a whole. “Continuous improvement”, for example, is very popular, but in the end most of it is waste; only improving the constraint in the system ends up adding any value to the entirety of the system.

I like the idea of competing strains of clients. You may have 4-6 different versions, each striving to maximize performance in certain aspects of the network… Each would have its own strengths and weaknesses, and if the network veers towards one purpose or another, some clients will be more useful than others… So long as you have variety, the network can evolve towards whatever is needed…

But in order to allow mutant clients you need to build a robust immune system.
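
To make that concrete, here is a minimal, hypothetical Rust sketch of such an immune system; the version check, message counters, and 1% tolerance are all assumptions for illustration, not anything the network actually implements:

```rust
// A hypothetical "immune system" sketch: the network tolerates diverse
// client implementations but rejects peers whose observed behaviour
// breaks the prescribed protocol. All names and thresholds are invented.

enum ProtocolVersion {
    Known(u32), // a version the network recognises
    Unknown,
}

struct PeerReport {
    version: ProtocolVersion,
    valid_messages: u64,
    invalid_messages: u64,
}

/// Reject a peer that speaks an unknown protocol or whose rate of
/// malformed/rule-breaking messages exceeds a small tolerance.
fn accept_peer(report: &PeerReport) -> bool {
    let known = matches!(report.version, ProtocolVersion::Known(_));
    let total = report.valid_messages + report.invalid_messages;
    let within_tolerance =
        total == 0 || (report.invalid_messages as f64 / total as f64) < 0.01;
    known && within_tolerance
}

fn main() {
    let mutant = PeerReport {
        version: ProtocolVersion::Known(1),
        valid_messages: 50,
        invalid_messages: 40,
    };
    // A client may differ internally, but breaking protocol gets it rejected.
    println!("mutant accepted: {}", accept_peer(&mutant)); // false
}
```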

3 Likes

This is only true if the credentials required to “log in” have not changed.

It also only holds if the safecoin “registry” for the client has not been changed to the point where it no longer recognises and/or trusts the old “coin registry”.

We cannot be sure at this time that simply downloading the new client will mean that the 5 or more old-version files kept for the client are still valid/trusted.

All I am really saying is that any upgrading has to allow for the import of any old formats for the client data.

Or am I missing something?
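
For what it’s worth, here is a hypothetical Rust sketch of what importing old client data formats could look like; the format versions and field names are invented, but the point is that every historical format keeps an explicit migration path:

```rust
// Hypothetical sketch: an upgrade path that can still read older on-disk
// client data formats. If any old format stops being importable, the
// upgrade silently invalidates the old client files kept on disk.

enum StoredAccount {
    V1 { login_hash: String },
    V2 { login_hash: String, coin_registry: Vec<String> },
}

/// Current in-memory representation after import.
struct Account {
    login_hash: String,
    coin_registry: Vec<String>,
}

/// Every historical format gets an explicit migration to the current one.
fn import(stored: StoredAccount) -> Account {
    match stored {
        StoredAccount::V1 { login_hash } => Account {
            login_hash,
            coin_registry: Vec::new(), // v1 predates the coin registry
        },
        StoredAccount::V2 { login_hash, coin_registry } => {
            Account { login_hash, coin_registry }
        }
    }
}

fn main() {
    let old = StoredAccount::V1 { login_hash: "abc123".into() };
    let account = import(old);
    println!(
        "imported account {} with {} coins",
        account.login_hash,
        account.coin_registry.len()
    );
}
```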

1 Like

Mathematics is luckily one of the most stable discoveries/inventions at the disposal of humanity, so I think that regardless of upgrades to the implementation, the initial (and subsequent) set(s) of mathematical rules should always be supported, regardless of the version update. If at any point any such fundamental rule were broken (XOR space, hash name calculation, credentials), then that would be a different, new thing - no longer a successor to the SAFE network.

2 Likes

thanks, but I’ll stick to one philosophical post per year. Wait till 2016 :wink:

3 Likes

Okay, here’s an alternative that seems simple yet effective, but may need further discussion.
Credit to @Seneca for giving feedback on this idea.

  • Assuming client/vault binaries are pulled from the SAFE Network.
  • Assuming GET request popularity can be measured in a reliable way.

Then SAFE has the ability to provide both: vote measuring per event (via GETs) and autonomous updating for non-voters.

Non-voters - Autonomous Updating
These are people who opt out of voting for whatever reason. They set their client/vault to pull the “most popular” binary. Even those with older versions will auto-update based on the most actively used client/vault. This also allows the inactive community to “roll back” to a previous version. The default threshold could be set at 51%, or 80% for conservatives.

Voters - Human Choice Update
These are people interested in effecting change who hopefully understand the new binary. They set their client/vault to notify them whenever a new binary is PUT on SAFE, and they have the option to select the new binary.

This solution ONLY applies to backward-compatible binaries. Hard forks must create a new Network to avoid disruptive conflicts. If effective, this provides a user-friendly option to evolve SAFE.
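
As a rough sketch of the non-voter rule (in Rust, with all names and the tallying mechanism assumed for illustration), the auto-update decision could look something like this:

```rust
// Hypothetical sketch of the non-voter auto-update rule: tally GET counts
// per published binary and auto-switch only when one source clears the
// user's configured threshold (51% by default, 80% for conservatives).

use std::collections::HashMap;

/// Returns the binary to auto-update to, or None if no source clears the
/// threshold (in which case the user must vote manually).
fn auto_update_choice(get_counts: &HashMap<String, u64>, threshold: f64) -> Option<String> {
    let total: u64 = get_counts.values().sum();
    if total == 0 {
        return None;
    }
    for (binary, count) in get_counts {
        // With any threshold above 50%, at most one source can qualify.
        if (*count as f64) / (total as f64) > threshold {
            return Some(binary.clone());
        }
    }
    None // no consensus: fall back to an active vote/choice
}

fn main() {
    let mut counts: HashMap<String, u64> = HashMap::new();
    counts.insert("vault-v1.0-maidsafe".to_string(), 900);
    counts.insert("vault-v2.0-otherdev".to_string(), 100);

    // Default non-voter (51%): switches, since v1.0 holds 90% of GETs.
    println!("{:?}", auto_update_choice(&counts, 0.51));
    // Conservative non-voter (80%): also switches in this example.
    println!("{:?}", auto_update_choice(&counts, 0.80));
}
```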

6 Likes

Is this really a problem? Or better: how was that ever a problem? Bitcoin is open source, but the majority of users didn’t read the code, nor would they understand it if they did. (I certainly wouldn’t.) They rely on trusting other people. This IS a problem, because we delegate decisions by trusting others instead of double-checking ourselves, but it’s also the way people have always dealt with complex issues: find a logical argument for who can be trusted and who cannot. It usually doesn’t work out perfectly, but overall it works pretty well, since the potential for manipulation is reduced to a minimum (particularly in comparison to closed source).

The solution presented by @dyamanaka sounds good to me. I’d love to have this kind of feature for steering inflation/deflation.

I think that most questions about updates will be about new personas. What if I don’t want to compute anything because I’m already heavy on my CPU? Great if it gets added to the Client and Vault, but I want the opportunity not to join that part of SAFE.

Now another idea: some team of enthusiasts want to build a little blockchain on SAFE because they think it would be awesome. So they create new personas doing just that. Now we have 2 camps, one that says “I don’t want this” and the other saying “yes, it’s a great new thing”. How do we go about that? Vote? Leave one group unsatisfied? Here might be a solution:

People running the Client/Vault can check or uncheck what they want to join. The basic personas are always on; nobody can stop acting as a Vault manager or the like. But you want to join computation? Check the box. Wanna join a blockchain? Check the box. So, next to the 5 or so basic personas, you can actually decide which others you want to join or not. So maybe, when 12 million people use SAFE, only 100K of them are testing out some blockchain thing, 1.2 million are joining computation, etc.
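
A hypothetical sketch of those checkboxes in Rust (the persona names and config shape are invented for illustration):

```rust
// Hypothetical sketch of opt-in personas: a handful of base personas are
// always on, while optional ones (computation, a blockchain experiment)
// are simple checkboxes in the vault configuration.

struct PersonaConfig {
    // Optional personas: off unless the user checks the box.
    computation: bool,
    blockchain_experiment: bool,
}

impl PersonaConfig {
    fn active_personas(&self) -> Vec<&'static str> {
        // Base personas run unconditionally; there is no opt-out.
        let mut personas = vec!["client_manager", "data_manager", "vault_manager"];
        if self.computation {
            personas.push("computation");
        }
        if self.blockchain_experiment {
            personas.push("blockchain_experiment");
        }
        personas
    }
}

fn main() {
    let config = PersonaConfig { computation: true, blockchain_experiment: false };
    println!("running personas: {:?}", config.active_personas());
}
```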

Another thing to add is that a lot of stuff can be built as an App on SAFE. And I also don’t think that we have the same problem as Bitcoin. We don’t have a scalability problem. Just like there are no big fights over BitTorrent: when stuff just works, it works, and people use it.

6 Likes

I don’t understand this. Here’s how I’m reading it…

Let’s assume most people default to this, and a relatively small percentage want to make the upgrade or not decision manually. So we start at v1.0, and 90% have the “pick most popular” setting, and 10% want to choose.

v1.1 comes out, and it is so great that nearly everyone in the “choose” category updates to v1.1. We now have v1.1 at 10%, and v1.0 at 90%.

I don’t understand how the second group ever shifts off v1.0, unless the “threshold”, e.g. 51%, refers to the number of “choosers” on a particular version. I guess that must be what you mean, so this means that the number of choosers becomes a centralisation risk, subject to potential attack, yes? Or… :smile:

Sorry about that,

The exact way GET votes are counted is still to be determined, and we should discuss it further if we go down this road.


Here’s one way I imagine it working.

Event 1 - Genesis
SAFE Vault v1.0 (MaidSafe) … starts GET request counter.

Event 2 - New Source Added
SAFE Vault v1.0 (MaidSafe) … restarts GET request counter.
SAFE Vault v2.0 (Another Dev) … starts GET request counter.

During Event 2… both sources start at 0 and compete continuously. So yes, the “active voters” will initially swing the GET counter when a new binary comes out.

But a non-voter can opt in and “vote” at any time. Why would they do this?

  • Their friend recommended the new version.
  • Their current version breaks.
  • They prefer features from the other version.
  • @polpolrene’s example of one version contributing PC computation while the other doesn’t.

Once the non-voter decides to vote, they GET the version they want, thus casting their vote for that source. They can also “roll back” to the older source for the same reasons above.

How does the non-voter auto-update decide if the threshold is not reached?
If no source achieves their threshold (51% for example), they must “actively” vote/choose which source they want to run. Hopefully they review it before deciding.

Event 3 - New Version Added
SAFE Vault v1.1 (MaidSafe) … counter reset to zero because a new version is PUT.
SAFE Vault v2.0 (Another Dev) … counter reset to zero because a new version is PUT.

In this example, MaidSafe updated from v1.0 to v1.1, which caused the GET counters to reset. Since voting is ongoing, spamming new versions won’t matter. Active voters will follow the developer they like and trust anyway.
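
Here is one hypothetical way the counter-reset rule from these events could look in code (Rust, with invented names; the real counting mechanism is still to be determined, as noted above):

```rust
// Hypothetical sketch of the counter-reset rule: whenever any source PUTs
// a new binary, every source's GET counter returns to zero and the
// ongoing poll restarts.

use std::collections::HashMap;

struct Poll {
    get_counters: HashMap<String, u64>,
}

impl Poll {
    fn new() -> Self {
        Poll { get_counters: HashMap::new() }
    }

    /// A GET of a binary counts as one vote for its source.
    fn record_get(&mut self, source: &str) {
        *self.get_counters.entry(source.to_string()).or_insert(0) += 1;
    }

    /// Event: a new version is PUT. All counters reset, so spamming
    /// releases gains nothing; voters simply re-express their choice.
    fn on_new_version_put(&mut self, source: &str) {
        for count in self.get_counters.values_mut() {
            *count = 0;
        }
        self.get_counters.entry(source.to_string()).or_insert(0);
    }
}

fn main() {
    let mut poll = Poll::new();
    poll.record_get("maidsafe-v1.0");
    poll.record_get("otherdev-v2.0");
    poll.on_new_version_put("maidsafe-v1.1"); // Event 3: everything resets
    println!("{:?}", poll.get_counters);
}
```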

What about GET manipulation to swing the poll numbers?

  1. Spamming GETs is already mitigated by caching.
  2. The poll is ongoing, so they will have to keep doing it.
  3. Users can “actively” vote regardless of what the poll says.

I think a strong community will watch out for each other and warn of any nefarious attempts, before, during, and after a new source attempts to disrupt the Network. We might end up with 50% actively voting in this case. :smile:

3 Likes

The decentralized maintainer federation.

Establish a maintenance federation composed of members voted upon by highly rated vault owners (who together hold more than 50% of the content on the network - the network itself monitors this) and core developers. This election can take place yearly: votes are cast throughout the course of the year and tallied by the network on December 31st. Each voter must submit the public key of their candidate in order for the vote to be valid. Those elected then plug their private key into the network to be validated and assigned the temporary role of yearly maintainer. A minimum of 5 people must hold this position, with a maximum to be determined later.

When critical upgrades (or any, for that matter) must be done, a message is broadcast to the entire maintainer federation, informing each member of the proposed upgrade. Each reviews it on a decentralized discussion board and modifies it in a decentralized GitHub, then among themselves they vote (2/3 quorum, or 66%) on its application to the network. Everyone on the network is then informed of the update, even if their client is set to auto-update by default, and is also given a chance to revert to the previous iteration in case the upgrade leads to catastrophe. The network could even be set to revert to a previous build automatically, or prompt the user, if after self-testing it determines that a serious issue exists in the network (i.e. data reliability, reachability, etc.).
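
A minimal sketch of the 2/3 quorum rule described above, in Rust; the federation size and vote transport are assumptions for illustration:

```rust
// Hypothetical sketch of the federation's upgrade vote: an upgrade is
// applied only when at least 2/3 of the elected maintainers approve.

struct UpgradeVote {
    maintainers: u32, // size of the elected federation (minimum 5)
    approvals: u32,
}

/// The 2/3 quorum rule, using integer arithmetic to avoid rounding.
fn upgrade_passes(vote: &UpgradeVote) -> bool {
    vote.maintainers >= 5 && vote.approvals * 3 >= vote.maintainers * 2
}

fn main() {
    let vote = UpgradeVote { maintainers: 9, approvals: 6 };
    println!("upgrade applied: {}", upgrade_passes(&vote)); // true: 6/9 = 2/3
}
```
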
This is all just a rough idea :slight_smile:

Happily point out the flaws, but please be sure to find solutions for them as well.

I really do hope the above helps. I know it seems reminiscent of a system we’re trying to diverge from, but with the decentralized nature of the network and a few well-thought-out tweaks, I think it can work. For those who want forks that exist simultaneously, they can opt to have both simplified (by default) and advanced statistics about the other network on their dashboard. If statistics are comparably better for the second network for a significant period of time, the client informs the user of this, with an option to migrate their data to the desired network.

2 Likes

I would prefer to have simultaneous software forks on the same network. It seems there is plenty of redundancy (each file is saved on several different vaults). Some clients are going to be better at certain persona tasks than others. Having variety will likely increase network performance. (The strongest link always wins whatever race.)

You would need an immune system that rejects clients that do not follow prescribed protocol behaviors.

2 Likes

Seems perfectly reasonable, though some forks could possibly break vault compatibility. There is also a chance that vault owners might not want to support simultaneous forks running on their pre-established vaults, due to potential damage to previously stable data retention. Too many eggs in one basket :)

1 Like