MaidSafe Progress Roadmap

Do you know how much funding they have, what additional funding rounds they may do, what partnerships they may forge, etc., during this period? Without knowing this information, how will a roadmap let you derive any of it?

MaidSafe is a private company that chooses to share a great deal with this community. They have outlined what alpha 3 and 4 entail and are working to deliver them in 2018. They give weekly updates on their progress towards this. This is very open and useful as it is.

I would welcome a better presented roadmap, but it isn’t the tool to answer the questions you are posing.

5 Likes

I’d be really surprised if they delivered both alpha 3 and 4 in 2018, but extremely pumped if they did! My uneducated guess is alpha 3 in Q3 2018, alpha 4 in mid-2019 and beta in 2020. Don’t hold me to these timelines though :grin:

1 Like

Exactly! And how many white paper crypto projects out there have any kind of proof-of-progress updates, in such an honest, transparent, non-hyped fashion as MaidSafe does?

For me the proof is in the pudding, and this project delivers pudding every Thursday.

This is a technology that is actually getting built. The road is being mapped as they go. To be sure, the destination is well known; it’s just that sometimes an unexpected curve comes along that no one knew was there.

Yet the bus is going to arrive soon, just a few more curves to go.

14 Likes

I’ve always known proper data chains implementation would be a major hurdle from the moment it was announced. Routing in general is a wicked undertaking. Securing it well enough and ensuring post-deployment updates is a mammoth undertaking for a decentralized network with such harsh but necessary node punishments.

Network-wide cascade collapse due to an update to routing/data chains is a very real possibility. Some update delay will likely be necessary to prevent rapid node state change. Ideally updates should be designed to require a few days to take full effect, IMHO.
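Something like the following is what I have in mind (purely illustrative Rust, with a made-up three-day window and made-up names, nothing from MaidSafe’s code): each node picks a random activation delay so the change of state is spread out instead of instantaneous.

```rust
// Illustrative only: stagger activation of an already-distributed update over a
// multi-day window so the whole network never changes state at once.
use rand::Rng;
use std::time::Duration;

/// Window over which nodes stagger switching to the new version (~3 days).
const ACTIVATION_WINDOW: Duration = Duration::from_secs(3 * 24 * 60 * 60);

/// Each node picks a random delay inside the window, so the rollout is gradual
/// and a faulty update surfaces before most nodes have switched over.
fn activation_delay<R: Rng>(rng: &mut R) -> Duration {
    Duration::from_secs(rng.gen_range(0..ACTIVATION_WINDOW.as_secs()))
}
```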

2 Likes

I’m pretty sure MaidSafe has given that protocol upgrade some thought :wink:

And if I had to guess, I’d say an update will be activated after the close group has voted on it and decided to accept it.
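If it were done that way, in rough Rust terms it might look something like this; the types and the two-thirds quorum are just my guess, not the actual routing code.

```rust
// Toy sketch of a close group accepting an update once a quorum of its members
// has voted for it.
use std::collections::BTreeSet;

type NodeName = [u8; 32];

struct UpdateProposal {
    votes: BTreeSet<NodeName>, // members that have voted to accept the update
}

impl UpdateProposal {
    /// Activate the update once more than two thirds of the close group has voted for it.
    fn accepted_by(&self, close_group: &BTreeSet<NodeName>) -> bool {
        let voters = self.votes.intersection(close_group).count();
        voters * 3 > close_group.len() * 2
    }
}
```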

1 Like

Consensus isn’t the worry; it’s the result of the update. A lot of thought went into node age, and the simulations revealed some major issues. Real-world results could impact things dramatically. If failure were to occur, the network needs to recognize it as a design error and revert to the previous state while retaining all data. Recovery is fairly trivial after a sudden node loss, but when a new routing or data chains algorithm is in place after restart, that algorithm must somehow be recognized as the cause of the failure before the network “decides” to revert. In a decentralized network that could be difficult. If the event is appended to the local data chains as it occurs, along with a note of the failure type, the network could restart already aware of the need to revert to the pre-update network state.

Some loss/relocation/node-punishment threshold might need to be established to make sections consider the possibility of an impending network collapse. An easy method would be for sections to self-monitor after an update: if rapid node relocation/loss leads to some unfavorable outcome before a certain number of churn events has passed, then upon reinitialization nodes revert to their previous state automatically. Just a thought…
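To make that concrete, here is a rough Rust sketch of the kind of self-monitoring I mean. The thresholds, names and verdict logic are all invented for illustration; nothing here is MaidSafe’s actual design.

```rust
// Illustrative only: a section watches the churn events that follow an update
// and decides whether to keep it or revert. All constants are made up.
const PROBATION_CHURN_EVENTS: u64 = 100; // churn events to observe post-update
const MAX_LOSSES_DURING_PROBATION: u64 = 20; // losses/relocations that signal trouble

struct PostUpdateMonitor {
    churn_events_seen: u64,
    nodes_lost_or_relocated: u64,
}

enum Verdict {
    StillWatching,
    AcceptUpdate, // probation passed without excessive loss
    RevertUpdate, // too many losses: reinitialize into the pre-update state
}

impl PostUpdateMonitor {
    /// Record one churn event and report whether the section should keep
    /// watching, accept the update, or revert.
    fn record_churn(&mut self, lost_or_relocated: bool) -> Verdict {
        self.churn_events_seen += 1;
        if lost_or_relocated {
            self.nodes_lost_or_relocated += 1;
        }
        if self.nodes_lost_or_relocated > MAX_LOSSES_DURING_PROBATION {
            Verdict::RevertUpdate
        } else if self.churn_events_seen >= PROBATION_CHURN_EVENTS {
            Verdict::AcceptUpdate
        } else {
            Verdict::StillWatching
        }
    }
}
```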

Yep, even for a non-techie like me, I think Alpha 3 will be the “proof of the pudding”. But that’s what this project is all about. Innovation at its best.

3 Likes

It may seem like that, but this is the kind of thing we do a lot. So the overarching design is considered and goes through RFC, but then we do simulations and tests to work out the implementation details. It’s unusual to redesign the feature, but very common to then design the implementation components, which always means tweaks. It is very normal to do that. On occasion we discover we need to do more as we simulate and implement features.

So I feel it is like designing a large network of known parts. That goes like this:

  1. Scope of work [ ± 20% cost/timescales ]
  2. Detailed design [ ± 5% cost/timescales ]

In our design these parts are different, but analogous:

  1. RFC [ ± 20% functionally correct ]
  2. test / simulate / detail Jira tasks [ ± 5% functionally correct ]

We see something similar in the Rust RFC process, but they focus more on doing steps 1 and 2 in the RFC thread itself, which generally includes code all the way through to merging into nightly, etc. In both of the above, though, the time between 1 and 2 can be long, and that time is less predictable when designing things that have never been done, like SAFE; but even in known things like country-wide networks the time can be pretty large.

10 Likes

Timescales aren’t my worry, if I’ve given you that impression. This is a roadmap thread so I’ve digressed. Forgive me. It’s disaster recovery in the face of upgrades that concerns me.

2 Likes

Bingo. Nonlinear, multi-disciplinary, creative and technical work that has never been done before. Combine with that the financial/marketing requirements, the fact that the tech is just plain challenging to do right when it comes to all the security needs, and the obvious observation that MaidSafe probably has the highest standards of any competitor in the crypto space. If you try to add more deadlines, stress and/or reporting requirements, things will just progress slower. They need their space-time. I’m pretty amazed they even do the weekly dev updates (which are great). Forum members need to enjoy the journey. If this were being done in academia it would have taken at least 3x the time to accomplish what they have done so far. They can’t really just bring hordes of new devs into the fold either, since sometimes the fastest way to slow a project down is to bring in more “help”.

@Stark: based on my understanding, team MaidSafe is working towards automatic lossless data recovery, via data chains and other routing mechanisms, in the event of a (short-term) global catastrophe or a total network reboot. These abilities may also allow power users to have a local SAFE network inside the global SAFE network, so they could keep their personal data cache closer to home for when their internet connection goes down from time to time. Routine upgrades seem pretty minor in comparison, and there is a forum thread dedicated to discussing ways to see them through; you should post your concerns there if you don’t think they have already been considered.

3 Likes

Actually it’s pretty easy to program routing tables to converge quickly in the scheme MaidSafe is devising… it’s more like a few minutes worst case, and IMO it’s inverted and OSPF-like as far as convergence is concerned, getting everyone on the same page…

A good bio-mimicry analogy may in fact be Orange Forest Slime Mold.


1 Like

Imagine an update is applied to each node after consensus, and section malformation results in rapid node relocation or loss. If this happens too rapidly, it could be difficult for the data chain to record these events and recognize them as potentially catastrophic.

I propose that upon update consensus, a sort of check/restore point be added to the data chain block for each section. If a predefined number of churn events occurs without resulting in system failure, the network accepts the update as stable; if not, it reverts. Seems simple at first, but consider sections that are terminated before appending this restore block. In this case, upon restart these sections would have to consult several neighboring sections about the pre-failure cause. Their path forward would depend on the responses and the number of queried sections. If a section receives too many responses from other sections that are also without a restore block, then it continues operating in a post-update manner. No good.

With this proposal of a monitoring phase, I believe updates should come in a certain order: when update propagation commences, sections should first acknowledge the update, add a restore entry into the data chain, apply the update, then accept the update as stable after a number of churn events has occurred. This means SAFE must keep an updated benchmark of its performance and security to ensure continued improvement; otherwise it might move backward. This requires some thought I’ve yet to see thoroughly discussed here. It would be nice if we could start taking a crack at this before the need for implementation. @maidsafe @neo @tfa @mav @digpl others?
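Here is a rough Rust sketch of what I’m proposing, just to have something concrete to poke at. The entry types, the churn threshold and the naive majority rule for consulting neighbours are all my own inventions, not MaidSafe’s data chain design.

```rust
// Sketch of the proposed ordering: append a restore point, apply the update,
// then mark it stable only after enough churn events pass without failure.
const STABILITY_CHURN_EVENTS: u64 = 100; // invented threshold

enum ChainEntry {
    RestorePoint { pre_update_version: u64 },
    UpdateApplied { version: u64 },
    UpdateStable { version: u64 },
    Churn,
}

struct SectionChain {
    entries: Vec<ChainEntry>,
}

impl SectionChain {
    /// Steps 1 and 2: record the restore point, then the update itself.
    fn apply_update(&mut self, old_version: u64, new_version: u64) {
        self.entries.push(ChainEntry::RestorePoint { pre_update_version: old_version });
        self.entries.push(ChainEntry::UpdateApplied { version: new_version });
    }

    /// Step 3: after enough churn events without failure, accept the update as stable.
    fn maybe_mark_stable(&mut self, new_version: u64) {
        let churn_since_update = self
            .entries
            .iter()
            .rev()
            .take_while(|e| !matches!(e, ChainEntry::UpdateApplied { .. }))
            .filter(|e| matches!(e, ChainEntry::Churn))
            .count() as u64;
        if churn_since_update >= STABILITY_CHURN_EVENTS {
            self.entries.push(ChainEntry::UpdateStable { version: new_version });
        }
    }

    /// On restart after a suspected failure, a section reverts if it recorded a
    /// restore point itself, or if a majority of queried neighbours did. (This is
    /// exactly where the worry above bites: if most neighbours also lack a restore
    /// point, the section wrongly carries on in its post-update state.)
    fn should_revert(&self, neighbours_with_restore_point: usize, neighbours_asked: usize) -> bool {
        let has_own_restore_point = self
            .entries
            .iter()
            .any(|e| matches!(e, ChainEntry::RestorePoint { .. }));
        has_own_restore_point || neighbours_with_restore_point * 2 > neighbours_asked
    }
}
```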

3 Likes

Updates would be manually applied by the vault operator until such time as automated updates can be developed, as David suggests. Manual updates mean that updating could take months, and I am confident that any such issue would be seen before any section lost data.

For automated updates, the update will have to prove itself before being adopted by the network. So I am confident that your suggestions, or similar or better ones, will be incorporated before automated updates occur.

2 Likes

Then at what point does the network begin functioning under the new parameters? Will two network states run in parallel? Manual updates will likely be applied only by those savvy enough to realize the benefit. What if the majority fail to update? Synchronicity is essential to stability in a decentralized network. Auto updates are the only sensible route for critical components. How should we go about it?

1 Like

Yes. Remember SAFE is a set of protocols and will HAVE to be able to run with two versions at the same time. Even automated updates would require this, as it is absolutely impossible to update all nodes at exactly the same time.

Take, for example, the fact that we are still running both the IPv4 and IPv6 addressing schemes on the internet, and both work fine at the protocol level.
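In rough Rust terms (invented message types, not the actual SAFE wire format), that coexistence just means every message says which version it speaks, a node that understands both dispatches accordingly, and an older node keeps talking the old version until it upgrades:

```rust
// Toy example of two protocol versions living side by side on the same network.
enum WireMessage {
    V1(MessageV1),
    V2(MessageV2),
}

struct MessageV1 { payload: Vec<u8> }
struct MessageV2 { payload: Vec<u8>, extra_flags: u8 }

/// A node that understands both versions handles each on its own code path.
fn handle(msg: WireMessage) {
    match msg {
        WireMessage::V1(m) => handle_v1(m),
        WireMessage::V2(m) => handle_v2(m),
    }
}

fn handle_v1(_m: MessageV1) { /* legacy code path */ }
fn handle_v2(_m: MessageV2) { /* new code path */ }
```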

6 Likes

I don’t think this can work. If data is reverted, this includes safecoin payments, and then there is the problem of the matching rewards received from outside the network (Bitcoin, altcoins, physical property, …) that are not reverted.

2 Likes

Not everything changes after an update. For one, it could be designed to revert without touching transactions. If the network fails and proper recovery doesn’t occur, safecoin transactions will be the least of our worries. In addition, parallel states could cause storage and bandwidth overhead that could in turn cause failure.

1 Like

I think so too. Whenever there is an external effect, reverting would lead to possible inconsistencies (between network and external state).
Compensatory actions, applied to the corrupted state, could account for the network state and (potentially, at least) external effects as well. So there would be only one way: forward.
But the complexity of it is staggering when thinking about cascading effects through the network - without smart solutions anyway.

But I must admit I currently feel like I’m watching the universe through the eye of a needle when reasoning about this. I.e. there is so much I do not know.

2 Likes

From a layman’s perspective, is it accurate to say that Alpha 1 and 2 have already rebuilt the existing internet from the ground up with security as the main driver, as opposed to an afterthought? And that this laid the groundwork for Alpha 3 and 4 to make it completely secure and anonymous? If true, I think non-technical people will really get a sense of the amount of work that has been done, and it gives the project kind of a wow factor.

14 Likes

Yes, and this is why we have had both test and community nets with vaults at home in the past. They just didn’t cope well with high churn (from people leaving/joining often), which Alpha 3 and 4 are looking to solve with data chains, disjoint groups, etc.

I think MaidSafe could present a more glass-half-full outlook! :slight_smile:

5 Likes