Or it’s on its way to collect
I feel that way too, but I think the key here is that it is just feeling and seeming. We don’t have any guarantees that the current path is going to give us a functioning network. PARSEC seemed to be the solution on paper, but when implemented it had performance issues to the point of failure. As I see it, we might run into performance issues (or some other crucial detail) with CRDT-based solutions as well. And because we don’t know what it takes to make this thing work, we really don’t have any knowledge of the distance to the finish line. I haven’t heard anyone using any “metric” for that distance other than feeling.
By the way, I still don’t know why exactly PARSEC didn’t work. Do we know for sure that it was just the sheer size of the gossip graph, or was it some bug causing memory issues?
Whatever it was, I think the use of CRDTs makes a lot of sense. But reading in the last dev update about bringing in a CRDT specialist because of order-related things makes me wonder if we will eventually go back to using PARSEC somehow, in combination with the benefits of CRDTs.
And if CRDTs alone are the answer, I am a bit scared there may be another project lurking somewhere in the dark. The tech has been around for some time.
My thoughts here don’t stem from any technical knowledge, but these are the kinds of speculations going on in my head, preventing me from buying and driving the price up for my own part.
This could be the case, but I don’t understand why. I sincerely mean that: I don’t understand why a smaller market should equal a lower price. It seems to be the case, because after we lost Poloniex the price dropped immediately… but after a while it recovered without any major news from the project itself.
What is the thinking behind the idea that bigger markets automatically lead to higher prices? I get that shrinking markets are bad news and growing markets are good news, but once the market size is past the “news” stage, why would it matter? It probably does, but I just don’t understand why.
(Just like I don’t understand why table salt is so cheap. The cheapest salt in my local shop costs about 1 €/kg and the most expensive about 23 €/kg. One kilogram is maybe somewhere between 6 and 12 months of household use. No one would give a damn if the cheapest salt were 2 €/kg. So why is it so cheap? It seems to me you could double the price without any impact on sales. You never see an advertisement saying “Hey, we have really cheap salt here.”)
The order issue is not one of achieving order; most CRDTs are ordered via a partial order to get to strong consistency. The order part here is a small issue and exists in every system. I will try to explain.
We have a counter, a simple monotonic counter.
A sends operations to B
A -> B 0
A -> B 1
A -> B 3 ---- Here we cannot apply this so we wait
A -> B 2 ---- Here we have the missing count
With the wait above we have a counter 0,1,2,3
However, we did not need to wait for 2 above. We could have replied to A: “hey, I cannot apply 3, you are missing 2”. Then A could send us 2 and 3 at the same time.
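The exchange above can be sketched in code (a toy illustration only, not the actual Safe Network implementation; all names here are hypothetical): the replica applies ops strictly in sequence and, when it sees a gap, reports the missing sequence numbers back to the sender rather than buffering the out-of-order op.

```python
class Replica:
    """Toy in-order counter replica: applies ops in sequence, reports gaps."""

    def __init__(self):
        self.next_seq = 0   # the next op number we can apply
        self.value = 0      # the counter itself

    def receive(self, seq):
        """Apply op `seq` if it is next in line; otherwise report the gap."""
        if seq == self.next_seq:
            self.value += 1
            self.next_seq += 1
            return None                           # applied
        if seq > self.next_seq:
            # Do NOT cache the op: tell the sender which ops we are missing,
            # so it can resend them together with this one.
            return list(range(self.next_seq, seq))
        return None                               # duplicate / old op, ignore

b = Replica()
b.receive(0)
b.receive(1)
missing = b.receive(3)   # gap: we still need 2, so missing == [2]
b.receive(2)             # sender resends 2 (and 3) together
b.receive(3)
# b.value is now 4, the ops having been applied in order 0, 1, 2, 3
```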
This is the order issue we are fixing. If B cached out-of-order ops, then it’s an attack vector: A can send it millions of out-of-order ticks and never send 2. That way our queues fill up; plus we may have missed a message and will never ask for it!
PARSEC in conjunction with routing worked in such a way that missed messages were a problem. Some would even kill the network. Yes, incredible, I know. When I took over the CTO role it was one of the first things to fix: make messages guaranteed to deliver with probability close to 1, but also, and more importantly, allow nodes to see that a message was missing and ask for it. Even then PARSEC still could not work, purely down to never completing tasks, as it took too long. PARSEC was made (much to my disapproval) production ready. It was months of work, but it was tested using “mock” objects. So: production ready, but never used in production. I won’t say more about that.
We call this lazy messaging, and as we have a partial order in most data we can spot out-of-order or missing data and request it. It’s a very simple thing and obvious when you look. Very basic engineering really: no calculus, no fancy papers, no grand Greek letters and names, so maybe not sexy enough for some engineers to spot. But these basic parts of Safe were not obvious to some members of the team for a long time. Now these points are second nature to everyone on the team. It’s easier that way.
Wow, really good explanation, addressing all the things I have been thinking but not daring to ask.
(Mods: maybe this should be copied / moved / linked to dev update thread?)
Solid update through and through! I’m really interested to see what comes of bringing on a CRDT consultant.
There are so many potential applications for CRDTs in a distributed network. In my mind that implies a lot of room for further innovation and exploration, which only adds further evidence of both feasibility and commercial viability from the perspective of those looking in. Not to mention the excitement of chasing those novel applications once the concepts are solidified.
Agreed. With this work we will have fraud-resistant CRDTs (by having Actors sign ops and replicas sign causal order), and with the deterministic secure broadcast mechanism we also show how the Initiator (Actor/Client) can gather the majority of votes (consensus that an operation is valid) and provide that. So the Actor can say: here is a signed operation, and here is the Authority (NetworkAuthority) showing that I can do this. Now you use your secured CRDTs to provably get strong consistency in your ever-changing network of millions and millions of data items that all follow this pattern.
So this simplifies consistency in a hostile network to some incredibly simple rules.
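A rough sketch of the shape of that idea, with an HMAC standing in for the real signatures (the design described above uses Actor signatures and BLS; everything in this snippet, names included, is a hypothetical illustration): a replica verifies that an op carries a valid signature from a known Actor before applying it, and rejects anything else.

```python
import hmac
import hashlib

# Stand-in for real asymmetric/BLS signatures: an HMAC keyed per Actor.
# This only illustrates "replicas verify a signed op before applying it".

def sign_op(actor_key: bytes, op: bytes) -> bytes:
    """Produce a toy 'signature' over an op."""
    return hmac.new(actor_key, op, hashlib.sha256).digest()

class SecuredCounter:
    """Toy counter that applies only ops signed by a known Actor."""

    def __init__(self, known_actor_keys):
        self.value = 0
        self.keys = known_actor_keys   # actor id -> key

    def apply(self, actor_id: str, op: bytes, sig: bytes) -> bool:
        key = self.keys.get(actor_id)
        if key is None or not hmac.compare_digest(sign_op(key, op), sig):
            return False               # unknown actor or forged sig: reject
        self.value += 1                # apply the (toy) increment op
        return True

c = SecuredCounter({"actor-a": b"secret-a"})
op = b"inc:0"
ok = c.apply("actor-a", op, sign_op(b"secret-a", op))   # valid, accepted
bad = c.apply("actor-a", op, b"\x00" * 32)              # forged, rejected
# ok is True, bad is False, c.value == 1
```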
The end of this mini three-week project cements all of this in place. To me the possibilities for massively concurrent apps are a game changer. An important stepping stone here. I have a next step, but that comes well after launch.
So is the goal to launch a testnet after all this CRDT work is done? As much as I like the excitement in the last few weeks of updates, it’s getting difficult to track which goal leads to a hands-on community network (even if it only ran for a day). safenetwork.tech has been out of date for months; it still references PARSEC. The GitHub project plans also reference PARSEC.
The weekly updates sound great, but they are becoming littered with out-of-date links and materials. Now you have a much more concrete grasp of the outstanding requirements. Having the confidence to state three CRDT requirements seems like a huge leap towards a clear path to what’s left to build.
I feel like it’s time to be brave, and clearly state what’s left until the community is able to help with testing.
Always brave; it’s my nature. I think my country makes us like that.
I just posted text in the in-house channel regarding this:
" we could all get agreement on the crdt/dsb etc. pattern and how that fits with
SectionChain (BLS) as a Section Actor?
If we get the pattern sorted for routing then all other data types are simpler. So we have
- Secured CRDTs
- Lazy messaging (to handle out of order anything)
- Clean definition of Actors
Then our whole code base can be analysed, and where we have data not using these patterns we consider that data inconsistent unless we can prove it’s not, and prove it with no edge cases (so Blobs/SectionChain etc. are OK).
This can give us the implementation rules of the network. It lets us then define test rigs (no seeded RNGs etc.) for proptest/fuzz testing and also firm down the
I hope this gives us provably correct everything and an impl anyone can recognise to extend it."
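As an aside, the kind of property such test rigs could check might look like this (an illustrative sketch only, not the project’s actual proptest setup): a grow-only counter, merged with a max-per-actor rule, converges to the same state no matter what order the ops are delivered in.

```python
from itertools import permutations

# Illustrative convergence property: a grow-only per-actor counter merged
# via max-per-actor yields the same state under every delivery order.

def merge(state, op):
    """Merge one (actor, count) op into a state dict, taking the max."""
    actor, count = op
    merged = dict(state)
    merged[actor] = max(merged.get(actor, 0), count)
    return merged

ops = [("a", 1), ("a", 2), ("b", 1)]

finals = set()
for order in permutations(ops):        # every possible delivery order
    state = {}
    for op in order:
        state = merge(state, op)
    finals.add(tuple(sorted(state.items())))

# Every delivery order converges to the same state: {"a": 2, "b": 1}
assert len(finals) == 1
```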
So we are at the stage of cleaning up and checking the code so that all algorithms are consistent, i.e. now we can formalise the methods. So while we may have internal testnets working again (Friday), and maybe the community very soon, I am focused solely on ensuring formally provable rules across the whole code base.
This member of the community is happy that they are getting on at maximum speed, and prefers that to them spending time updating the website and docs, which few people will encounter until there’s something to attract attention.
Thanks, I think that post will make following the next update and tracking progress easier
This is, I think, mostly due to the team consisting of 98% engineers, plus one admin person to make sure wages get paid and the absolute essentials are covered.
Unfortunately the folk who would normally have looked after these documentation/communication functions are no longer around.
It MIGHT be something the community could pick up, but I suspect whoever volunteered to do it would spend so much time asking David and the others “is this what you really meant?” etc. that we would distract them from what they seem to be getting on with so well.
So maybe we just have to shut up and wait, because by trying to help we would actually delay the introduction of a testnet… Incredibly frustrating, but as I said earlier, we are well over the Florida state line and heading directly for Disney World. Traffic remains light. Some say signs for Disney World long-term parking have been spotted.
I get that all the effort is going into the build, but then what’s the point of a weekly update if the context of the work done, and of the remaining work, is so difficult to follow because there’s no fixed point of reference to a roadmap? (Even if it sounds exciting, I’m not sure how excited to be!) Grouping things into high-level goals (like David’s last post) takes only a couple of minutes and can be included at the bottom of the weekly updates. Then there’s no need to update the websites or documentation, because we are all aware, in plain English, of what the overall outstanding goals are.
Updating the website might be something a community member could do. It shouldn’t be too hard, I wouldn’t have thought. I have a profound dislike of out-of-date documentation and tend to use the state of public documentation as a proxy for the health of a project, and I’m sure I’m not alone in that. I’m crazy busy at the moment, otherwise I’d offer to have a go.
And I’d help you, as I am sure would many others, BUT none of these docs could be released without technical approval from team members, who IMNSHO should just be left alone for the next week or two. I feel we are really getting to the tickly bit now.
There are two types of documentation. One kind is internal: it helps the devs keep track of what they are doing. That you can do however you want.
External documentation is for the public. In a fast-moving project there should not even be much external documentation. Having to update it after each little change slows down the project, and people always worry if the documentation isn’t updated. Bureaucrats love documentation, and that should tell you how unimportant it is. I would cut all the documentation for the parts of the project that change, and provide only very high-level objectives that are easy for the public to understand. You can always provide more technical detail on request.
I think the fundamentals are the critical high level docs. Then there can be levels of more technical docs below that. I really hope the recent work will simplify all of this a lot.
Modern Silicon Valley companies will generally do docs-as-code, so the community can PR updates where needed, taking the burden off the core team, and have it auto-deploy via pipelines upon merge. This is essential for quality public-facing sites with docs these days. I really like how konghq does theirs for the API Gateway product: https://github.com/Kong/docs.konghq.com
Building on that, I just want to point out that the websites are open source, like the rest of the project. If any of the docs aren’t current or detailed enough for your taste and you are inclined towards web development, that’s the link to the Safe Network Tech GitHub page, for anybody reading this who’s interested.
Yep, thanks. I plan to take a look when I have a bit more time. The level of detail is just right, I think, but the PARSEC references need to go.