Balancing time-to-launch against 100% security with no bugs

First, I want to be clear that I am speaking for myself here, and I am not directly involved in the core dev process. It might also turn out that the devs get it out faster than that.

I also think the devs are sincere in their desire to release as soon as possible and are doing everything they can to achieve it. But the lack of a precise release date also means there is still some uncertainty on the actual amount of work to get there and there is definitely a bit more to do than mere bug fixing.

To put things in perspective, I read Cosmin Arad's thesis a few weeks ago, on Kompics, a framework he designed to make it easier to implement, test, and debug distributed systems. He spent 7.5 years on it, working with the very best in peer-to-peer systems in Stockholm, and published the thesis in 2013. He now works for Google. The time span of his thesis is roughly similar to the time MaidSafe has spent on the problem, and the work happened concurrently. Among the topics he addresses are NAT traversal and scalable key-value stores. By developing a similar system to support self-authentication, decentralized storage, and a crypto-currency, MaidSafe is pretty much on the bleeding edge. That explains the refactorings, as possible simplifications are found along the way. This is a normal part of engineering when few people have explored the design space.

In the meantime, I do think we can contribute as a community to making the project a success sooner, by picking up things the devs do not have time to do and making them happen.

Please elaborate

I personally decided to help clarify the core workings of the system by studying it and creating presentations. The better the system is explained, the easier things get for David, who can focus on other work rather than re-explaining the same things over and over. A better explanation also sets more accurate expectations about the actual capabilities of the first version of the network, so there is less community-expectation management to do, and it gets new developers up to speed faster. All of this has a direct impact on how much focus the dev team can put towards reaching the beta launch.

I invite others to think about what else the project needs that the community might be better placed to address, and to step up and do it.

6 Likes

I think we all respect the effort you're putting in, Erick; the more folk of your caliber contributing, the better.

Great background to Cosmin Arad. I guess David must have known about him… too bad he didn't come across.

The idea of doing the weekly dev updates has helped a lot in informing us how things are progressing in Troon.

ā€¦and in breaking news, David had some time off:

1 Like

I wish people of Cosmin Arad's caliber would join in. But the problem I ran into recently when explaining the system is that, without a clear specification and proofs for the algorithms in the distributed-system part, it comes across as not really serious. The professor I talked to was worried MaidSafe might be reinventing the wheel or, worse, using algorithms that do not work. Someone will have to make a good first attempt at formalizing the algorithms before the research community starts to look seriously at the rest of the system. For the moment I think I am alone in that, and I am no expert, so I am trying to convince new researchers in peer-to-peer systems to pay attention and give me valuable feedback on the formalization of the algorithms. It turns out to be a bit harder than I expected to get professors involved, since they are always too busy; I am having more luck with post-docs at the moment.

2 Likes

I think I remember your post back then on the Google Groups and your thread with David. And of course I didn't point out the old name "perpetual data", my bad. There are some historic videos up on David's YouTube channel from seven years ago; those were my initial videos, from before I found MaidSafe, or roughly around the same time, in the early Wuala days. David's presentation at the Google TechTalk was a nice introduction. There were also the very intriguing videos of Van Jacobson back then, during his PARC involvement, explaining new paradigms of networking (named data networking, content-centric networking and all that). Maybe it is worth noting that since his PARC and NDN/CCN days he has been lured over to Google as a fellow there, so guess who's in for competition if this stuff really takes off. There are plenty of huge corporations and giant players in this game, and I am uncertain what will become of early pioneers such as MaidSafe, or of the enthusiasts, if everybody again falls for the nice lies of the global players and jumps ship from the little groups and projects that originally started all this.

As for launching, or not having usable stuff out for the public, I am also very uncertain. Just compare the path of this project and paradigm to, say, the invention (and inventor) of the WWW, or even to projects like the GNU/Linux kernel. If they had polished their stuff forever, never gotten out a release, never connected even the early dots, never released things in alpha or whatever that stage would have been called, and never evolved continuously, I am certain there would be no World Wide Web today and no alternative Linux operating system. Sometimes you just need to release and use even tiny bits of your production and creation, build a new layer the next year, and abandon faulty stuff again when it's time to do so, when a newer or better idea comes around or another implementation proves better. You cannot sit eternally in your basement and never hand your tool to others to scrutinize and use.

Wuala was first to market; I used it and had plenty of problems, but it evolved, and lately for the worse, the very worse to be honest. Still, it is a huge difference to have something real in your hands rather than only theory; it is something completely different. In my opinion the real 'trouble', the real maturing, will not happen in the labs. Of course some of it will, but the system needs to prove itself and stand its ground once it is out there being used, abused, and misused by millions or billions of users and devices. I don't believe in miracles; I try to think of experiences and technology that I have already witnessed myself. I wonder what this whole journey will look like in retrospect, say ten or twenty years from now. Just compare what early web technology or Linus' early kernel looked like, how it ran and was designed, and how it went on from there. I would have preferred to have had perpetual data released back then, and to have MaidSafe network generation two or three by now.

In the early videos and on the site there were claims of already-running perpetual data clients, yet even after the GitHub publication those early works were never released or put up. Maybe folks would have been interested and bootstrapped a first-generation (or zero-generation) network from those works, and maybe we could have had competing ideas and implementations, or faster usable results for the public. Of course I understand that the things David spoke about (investors, funding, meetings, results, constant oversight, etc.) also contradicted certain alternative approaches. Anyway, the real pain, and the real fun, will only begin when there is a real-world user base for this technology, and god knows what problems, midstream changes, or paradigm shifts the project and the idea will need to take into account further down the road: what will prove unfit for reality and nice only on paper and in theory, or how networking, devices, bandwidth, use cases, or people's behaviour will stand in the way or change things later on. And don't forget about legislation and the leading class out there in all your jurisdictions, countries, and politics, and everything related to them. Compare it to how those have affected the web and internet technology since their invention, the problems we suffer from today, the obstacles we have already overcome, and the paths chosen for us or by us during all those years. I wonder whether the project and overall idea would have attracted a similar developer and enthusiast community, and gained much more momentum, if it had been released much earlier in whatever state it was in.

3 Likes

I agree completely, Andreas (and fully appreciate your patience over the years). The main issue we have, and the reason we are taking the testnet approach, is basically the system itself. As a decentralised network it either works or it does not; if it is partially broken the effects can be catastrophic (no node has all the info; they rely completely on other nodes). I think we are in a really great position now. The vaults, for instance, were completely re-implemented in the last two weeks, compared with the nearly two years it took the previous codebase to reach the same position. This is huge news.

1. The current work is tying Crux in with routing_v2. Where this stands at the moment: we have Crux working, but without rendezvous connections (needed for hole punching). Getting those in place is one to two weeks (two different engineers say one week or two, respectively).

2. Routing_v2 connectivity is being finalised this weekend/Monday. This is where routing meets Crux.

3. Next week we finalise the sentinel and message handling in routing_v2. This will allow testnet2 to complete once Crux rendezvous is done. I think much of testnet3 is already under way.

4. Aware (Bonjour/Avahi) is replacing live port in Crux; this allows nodes to find others on a local network. It is part of testnet3 and about one week's work.

5. In parallel, Niall has created a multi-jail/kernel setup for large-scale testing of NAT traversal in BSD jails. This will be a great test.
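For readers unfamiliar with the rendezvous step mentioned in point 1, here is a minimal sketch of the bookkeeping a rendezvous server does for UDP hole punching. This is a hypothetical illustration in Python (Crux itself is C++, and all names here are invented): the server records each peer's publicly visible endpoint and, once both sides of a session have registered, tells each peer where to send packets so that simultaneous sends open the NAT mappings.

```python
# Toy rendezvous bookkeeping for UDP hole punching.
# A hypothetical sketch, not the actual Crux implementation.

class RendezvousServer:
    """Pairs peers by session id and exchanges their public endpoints."""

    def __init__(self):
        self._waiting = {}  # session_id -> (peer_name, public_endpoint)

    def register(self, session_id, peer_name, public_endpoint):
        """Record a peer's NAT-mapped endpoint as seen by the server.

        Returns None while waiting for the other side, or a dict telling
        each peer which remote endpoint to punch towards once both
        peers in the session have registered.
        """
        if session_id not in self._waiting:
            self._waiting[session_id] = (peer_name, public_endpoint)
            return None
        other_name, other_endpoint = self._waiting.pop(session_id)
        # Both peers now send UDP packets to each other's endpoint at
        # the same time; the outgoing packets create the NAT mappings
        # that let the incoming packets through.
        return {
            peer_name: other_endpoint,
            other_name: public_endpoint,
        }


server = RendezvousServer()
# Alice registers first; the server waits for her peer.
assert server.register("s1", "alice", ("203.0.113.5", 40001)) is None
# When Bob registers, each side learns the other's public endpoint.
plan = server.register("s1", "bob", ("198.51.100.9", 40002))
assert plan == {"bob": ("203.0.113.5", 40001),
                "alice": ("198.51.100.9", 40002)}
```

The actual transport work (timed simultaneous sends, retries, keep-alives) is omitted; the sketch only shows why a third party with a public address is needed before two NATed peers can talk directly.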

The client side has dramatic forward momentum (as the core libs have been refactored). This includes the launcher / RESTful API for data storage and retrieval, and now the POSIX interface is being worked on. The collaboration with Mozilla seems to be progressing very well too, which is a really neat project to be involved with, I think.
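To make the shape of a storage/retrieval API concrete, here is a toy, in-memory, content-addressed store. This is an illustrative sketch in Python with invented names, not MaidSafe's actual code or REST interface: data is split into chunks, each chunk is keyed by the hash of its contents, and retrieval reassembles the chunks from the returned list of hashes.

```python
import hashlib

# Toy content-addressed chunk store -- a hypothetical sketch of the
# put/get shape behind a storage API, not MaidSafe's implementation.

CHUNK_SIZE = 4  # tiny on purpose, so the example shows several chunks

class ChunkStore:
    def __init__(self):
        self._chunks = {}  # sha256 hex digest -> chunk bytes

    def put(self, data: bytes) -> list:
        """Split data into chunks, store each under its content hash,
        and return the ordered list of hashes (a 'data map')."""
        keys = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            key = hashlib.sha256(chunk).hexdigest()
            self._chunks[key] = chunk  # identical chunks dedupe for free
            keys.append(key)
        return keys

    def get(self, keys: list) -> bytes:
        """Reassemble the original data from its chunk hashes."""
        return b"".join(self._chunks[k] for k in keys)


store = ChunkStore()
data_map = store.put(b"hello safe network")
assert store.get(data_map) == b"hello safe network"
```

One design property worth noting: because the key is derived from the content, storing the same chunk twice costs nothing extra, and a fetched chunk can be verified by re-hashing it.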

So it's all go, and I think with help from the community (as we provide installers) we will have an early viable product (like SpaceX launch tests) that gives us all even more visibility.

The recent findings are really important and allow us to add more functionality in a way that is hugely simpler than the previous codebase, which was extremely complex. They also let us easily add in the security tweaks we need to make and, importantly, test them.

So I think we are getting to where you want to be (me too). The issue has been that this is a decentralised network requiring all parts to really work well together, which makes early release very difficult indeed. It needs a larger community with eagerness and belief; we have that now, and this is where the difference lies. It's not us anymore; it is a much larger community who can and will take our test launches and make sure our wee ship does work and returns safely to base.

Nothing will make me happier than getting your message saying "it works, it works", and I am pretty sure that is now not far away; I think we can all see this very clearly now. I hope so anyway.

12 Likes

I should say we first showed the clients working many years back (in Python) to a bunch of investors. The networking would not stabilise, and speaking to Python core devs there was no inclination to fix the memory leaks (in Python). We then chose to write in C++ (a few months after the Google conference). This led to a huge C++ rewrite, as we had a few Python devs, no money, and no C++ experience. (I boarded the plane to that Google conference knowing I did not have the cash to pay the staff that month, which I did fix, as always, but it gives you an idea.) We hoped to launch in 2012, but then I dug into the code and found the vaults had been written (by a single engineer) in a manner that would not have scaled at all well, and in fact relied on sending pings all over the place very frequently. We looked at the vault again and decided to bite the bullet: I deleted the vault codebase and had the team start again with a more clearly defined design. This changed over the last two years and grew arms and legs as many problems were solved with more code. More recently this is the problem I have attacked, and now I am much happier with it.

There is a golden rule in MaidSafe: if any member of the team lies, I will terminate their relationship with us. I cannot tolerate lies, perhaps from lifeboat days, but lies can cost lives and worse. So when we had some bad marketing done in our name it was a horrendous time, balancing survival with what we could put up with. The engineers ignored most of what the web said and just kept their heads down to get the job done (not easy when folks are dancing around the office saying we are worth a billion), but we pushed through and shook off the annoyances only a few weeks/months before the crowd sale. That was the step change, and since then, well, everyone knows :slight_smile: not easy, but definitely forward motion, and measurable at that. I like it now, but I realise the pressure everyone feels as we push launch forward.

15 Likes

So, glad to have made the effort to join in on the dev meetings, even though at 5 AM :slight_smile:
And certainly, the latest updates have been about tying together the layers needed for the SAFE network to happen.

So looking forward to setting up a segment of the test network, and also hoping to see many more nodes running with us: the SAFE network, the decentralized team, global by default.

Biased only towards those nodes that give true, correct data: the SAFE network in a nutshell :sunny:

5 Likes

I wanted to comment, but I already wrote a decent response a while back, so I'll just post a link to it. :smile:

5 Likes

@dyamanaka nice thread; I bookmarked that one

2 Likes