SAFE Network Dev Update - January 30, 2020

Happy Birthday!

5 Likes

Hi David, it seems everything is getting closer to the Fleming launch!

It would be nice if there were a short summary of what Fleming will be able to do when it launches. I would very much appreciate that!

6 Likes

Except for upgrades, Fleming is everything else SAFE was always meant to be. So probably no upgrades, but otherwise feature complete.

24 Likes

Guys, I am kind of lost, how many months are you from a Fleming RC?

Ah, the million dollar question, again. Answer: no one knows.

3 Likes

Can’t wait, thanks for your persistence :+1:t2::+1:t2::+1:t2:

7 Likes

I’d imagine it goes something like this: suddenly, enormous leaps happen for things currently still in development. Until those enormous leaps basically close the circuit for a component, it seems like there’s an indefinite amount of time left, but there isn’t. The 100% progress points happen unpredictably, so expect major progress at any time. That’s my take on it.

6 Likes

I don’t quite get it. Since money/time is running out, and since there seems to be no approximate fixed timescale, shouldn’t we at least have a sustainable model to finance the project in this phase of research? I followed the project loosely, and one thing that seemed to happen repeatedly were ruptures in the company due to a conflict between burn rate and progress, which eventually led to an immediate need to create more financial resources. I assume it would be much more transparent and sustainable to have a Patreon-style model, or some other kind of income that allows for the time needed to work on this project over the next years on a regular basis.

As I said (I think 3) years ago, I don’t think it is necessarily an issue to take years in development - maybe it has to be done over years. However, I still believe it is an issue if there is no perspective with regard to the resources to finance the needed work. We all know that being close to the end can still mean years more work.

3 Likes

IMO, this is why we need milestones like Fleming. There may be theoretically better ways to do some things, but right now we have no way of doing them. There is also incomplete evidence that it can be done at all.

I always have mixed feelings when I read posts about newer, better ways of doing something. On one hand, it is great that the team is interested and engaged enough to be thinking ahead and moving the state of the art onward. On the other hand, I worry that this is a distraction from delivering something usable now, which can sometimes seem less interesting to the devs.

In the spirit of agile, we should be happy to deliver something usable first, then optimise and improve upon it. This gives stakeholders confidence that progress is moving forward and in the right direction. It also gives users and developers something to build actual, useful content and apps on.

I am sure the focus remains strong on delivery, with acute awareness of funding levels. However, getting something usable out there, something which can be critiqued and tested, will be invaluable. It will show it can be done, along with highlighting areas for improvement. It will give kudos to the team and the project that no amount of writing and presenting can deliver; showing something working is worth orders of magnitude more to everyone outside of the team.

So, please, can we keep the laser focus on getting something out there? It may feel like a compromise, maybe even incomplete, but it will be real. It will be a platform which we can all engage with and move forward on. It would also be hugely exciting to see the local and wider community reaction!

Keep up the great work everyone!

24 Likes

This is 100% the focus, and recently even more so I would say. However, I also think we gain a huge amount when the engineers can widen their scope, learn new things and read constantly about new algorithms (even algorithms over 10 years old). For instance, the CRDTs I constantly go on about (or Cassandra / Dynamo etc.) are something that requires deep thinking about types and the operations on those types. The nice thing is there are provable rules now, so if a type has certain fundamentals then we know it merges, and so on.
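To make the “provable rules” point a bit more concrete, here is a minimal, illustrative sketch in Rust (not MaidSafe/SAFE code, just the classic grow-only counter CRDT). Because its merge is a per-replica maximum, it is commutative, associative and idempotent, so replicas can exchange state in any order, or repeat an exchange, and still converge. The replica names are invented for the example.

```rust
use std::collections::BTreeMap;

/// A grow-only counter (G-Counter), the classic state-based CRDT.
/// Each replica only ever increments its own slot; `merge` takes the
/// per-replica maximum, so it is commutative, associative and idempotent.
#[derive(Clone, Debug, Default, PartialEq)]
struct GCounter {
    counts: BTreeMap<String, u64>, // replica id -> increments seen from it
}

impl GCounter {
    fn increment(&mut self, replica: &str) {
        *self.counts.entry(replica.to_string()).or_insert(0) += 1;
    }

    fn value(&self) -> u64 {
        self.counts.values().sum()
    }

    /// Join (least upper bound): per-key max. Merging in any order,
    /// or merging the same state twice, gives the same result.
    fn merge(&mut self, other: &GCounter) {
        for (replica, &count) in &other.counts {
            let entry = self.counts.entry(replica.clone()).or_insert(0);
            *entry = (*entry).max(count);
        }
    }
}

fn main() {
    let mut a = GCounter::default();
    let mut b = GCounter::default();
    a.increment("elder-1");
    a.increment("elder-1");
    b.increment("elder-2");

    // Exchange state in either direction: both replicas converge.
    let mut a_merged = a.clone();
    a_merged.merge(&b);
    let mut b_merged = b.clone();
    b_merged.merge(&a);
    assert_eq!(a_merged.value(), 3);
    assert_eq!(a_merged, b_merged);
}
```

Once a type obeys those merge laws, no agreement on ordering is needed at all for it to converge.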

Then we have strict order via PARSEC (where our own Pierre really shone in its creation).

They are both different things for different purposes/environments.

However, full order is kind of like wall clock time, total order. We know there is no usable total order on the planet. Time is not a reliable thing, and for very good reason. Even if every clock was synchronised to 100% accuracy (impossible), it would still fail us.

So if we dive deeper, looking at each of these types etc., we start to understand much more about the network, not only SAFE but every network and data type. I see this as taking off a blindfold.

We could have every single thing in SAFE going through strongly ordered consensus, and that is not good. We would have this false belief that no matter what we reach consensus on, it must be true, and that is not the case. If you have one operation that depends on the results of another operation, then you get into all sorts of complicated logic to control the system inputs so they are also ordered, or you throw away valid consensus events, such as a possible double spend detected because a deposit was ordered behind a spend due to the latency of one decision, and so on.

With everything going through strong consensus we could end up with a Twitter-style feed that only allowed 5 followers who could only post once every 30 seconds, and so on. We would need a way to stop clients posting if the post rate overtook the latency of decision making. Again, unless we do more complex code outwith the consensus order alone.

Then CRDT data types: they focus more on weak ordering, or in the case of fast changes, interleaving of local orders when getting to agreement on what order things happened. In fact they don’t really care about order, but they do care about causality.
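To illustrate the causality point, here is another small, illustrative sketch (again not the SAFE codebase, just the well-known observed-remove set). A remove only cancels the add tags it has actually observed, so an add made concurrently on another replica survives, and merging is a simple union that works whatever order the messages arrive in, or however often they are repeated. The replica names and tags are made up for the example.

```rust
use std::collections::{BTreeMap, BTreeSet};

type Tag = (String, u64); // (replica id, local counter) -> globally unique

/// Observed-remove set (OR-Set) sketch. Every add carries a unique tag;
/// a remove tombstones only the tags it has actually observed. That is
/// causality, not order: a concurrent add it never saw is not removed.
#[derive(Clone, Debug, Default, PartialEq)]
struct OrSet {
    adds: BTreeMap<String, BTreeSet<Tag>>, // element -> add tags
    removed: BTreeSet<Tag>,                // observed-and-removed tags
}

impl OrSet {
    fn add(&mut self, element: &str, tag: Tag) {
        self.adds.entry(element.to_string()).or_default().insert(tag);
    }

    /// Remove cancels only the tags this replica has observed so far.
    fn remove(&mut self, element: &str) {
        if let Some(tags) = self.adds.get(element) {
            self.removed.extend(tags.iter().cloned());
        }
    }

    fn contains(&self, element: &str) -> bool {
        self.adds
            .get(element)
            .map_or(false, |tags| tags.iter().any(|t| !self.removed.contains(t)))
    }

    /// Merge is a union of adds and removes; it is commutative and
    /// idempotent, so delivery order and duplicates do not matter.
    fn merge(&mut self, other: &OrSet) {
        for (element, tags) in &other.adds {
            self.adds
                .entry(element.clone())
                .or_default()
                .extend(tags.iter().cloned());
        }
        self.removed.extend(other.removed.iter().cloned());
    }
}

fn main() {
    let mut a = OrSet::default();
    let mut b = OrSet::default();

    a.add("post-1", ("replica-a".into(), 1));
    // Replica B concurrently adds the same element with its own tag...
    b.add("post-1", ("replica-b".into(), 1));
    // ...while replica A removes the copy it has observed.
    a.remove("post-1");

    // Merge in both directions: B's concurrent add survives on both sides.
    let mut merged_a = a.clone();
    merged_a.merge(&b);
    let mut merged_b = b.clone();
    merged_b.merge(&a);
    assert!(merged_a.contains("post-1"));
    assert_eq!(merged_a, merged_b);
}
```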

tl;dr: No silver bullet, but realising, when we look deeper, that there is no such thing as current values; stuff changes, messages drop, they do arrive out of order, local events are out of order in each Elder, etc. So in many cases we can stop caring about order, and that’s fast and easy. In other cases we still cannot reason away wanting strict order, and if those changes are not too concurrent, then we can work with strong consistency for those.

Bottom line, looking deeper will reap much bigger rewards as we complete data types for Fleming. I prefer knowing there are fundamentals we can use, instead of me just saying data should manage itself and its updates and be usable in real time with no waits.

So deeper understanding will speed up release; I don’t see it slowing release at all. If our engineers (me included) know more of all the available weapons, we will be stronger. This space changes very quickly and we have not yet considered starks/snarks/bulletproofs/commitments and more. These will be hugely important as we move on, but after launch. Before launch, the deeper we all understand our data and allowed operations the better, IMO.

I hope this makes sense? I do ramble a lot. I believe that where we understand what we can provide via strong consensus now, we can release that; however, there may be stuff where we don’t want that, and we would not release those APIs, instead having to consider the types themselves and how we can handle them at high speed/low latency, with all the trade-offs that requires.

21 Likes

Whoa - really cool explanation. I think kind of like a “fuzzy logic” thing. We should definitely include something in the marketing to promote this technology when the time is right.

4 Likes

This may be true when it comes to final release. But, as @Traktion points out, intermediate releases of “something useful” are needed. I think it may even be worth it to risk slowing down final release, if that means we can get an intermediate working proof of concept out earlier.

3 Likes

We are currently testing actual vaults :wink: So the drive to a working Fleming testnet is ongoing right now. Farming is still not in place, and the DBC talk is super interesting there for 2 reasons:

  1. Complete anonymity
  2. Speed, we can pay for actions at the destination, rather than in a client’s source section with the actions then being done elsewhere. If we do the latter it all goes through PARSEC, which will have some latency (however small), and we can avoid that.

This is exactly the team rushing to Fleming, but I don’t want them blindfolded; I want us all to approach Fleming and beyond, “eyes wide open”.

19 Likes

Am I reading too much into this or is there real concern about speed?

Looks like many think DBCs and CRDTs will delay Fleming; I hope @dirvine and the team will prove us wrong soon (weeks)…

1 Like

No, not really, but putting everything through anything that has high latency of any kind is a bad design. As far as we know PARSEC is much faster than most CP mechanisms. It’s just too easy, and not wise, to make everything a nail for our hammer :slight_smile: We just need to be able to look deeper at many of these things.

17 Likes

Makes sense to me, especially the making everything a nail for your hammer. PARSEC could be the fastest consensus algo and yet it still has a limit, in its own section, and collectively across sections. Forcing everything through it is a bottleneck when you have another tool that can achieve the same end result in a different way, on its own. If you think about how much power/time/torque is lost in sending engine power through a drivetrain, then CRDT is like putting an electric motor right on a wheel (data types), which is such an enhancement that it gives you a massive edge. Obviously we still need PARSEC for consensus/node ageing to mitigate against Sybil attacks etc., perhaps its most appropriate application.
Then Digital Bearer Certificates (DBCs) are another tool that avoids shoving more through PARSEC: network operations and transactions, of which, with micropayments, there will be many. Plus offline capabilities that could literally make Safecoin the equivalent of cash, able to be either physical or digital, yet always anonymous. :exploding_head:

I want Fleming as much as the next guy believe me but we need to be faster than TOR, faster than Freenet, Zeronet, and the rest and as close to as fast as the current internet. Too slow and the average person won’t care too much about privacy and security benefits.

Look at the speed of progress in the last half year, insane progress. These will be seen as last minute tweaks in a few months, but they are important enough that it wouldn’t be easy or safe to do them with this thing on the road, and once this thing hits the highway there’s no turning back. Think about that. Just a little more patience will pay off immensely. That said, let’s try to get us on the road by summer 2020, when we can ride with the windows down. Okay, enough car analogies.

25 Likes

How big are CRDT and DBC in terms of complexity of coding? Is this as large an area to work out as Parsec or node aging (which took 6+ months at least)? Or maybe the code is not so deep and time consuming? But probably no one can answer that.

They are both much less work (learning and exploring ideas, rather than research/invention like PARSEC), and not essential for Fleming.

Having them in the toolkit will however make it easier and quicker to get to launch, as well as improving the quality of the product by that point.

17 Likes

I thought I’d go through some of the things that are being talked about.

What I think people consider changes here, are among the following:

  • Data Types Refinement
  • Labels and Tokens
  • Data Hierarchy Refinement
  • Permissions UI
  • DBC (Digital Bearer Certificates)
  • CRDT (Conflict-free Replicated Data Types)

Let’s look at each of these, individually and together, and at where and how they fit into Fleming or a later version, as well as the SAFE Network in general.
(Well, we’ll see if I can tie up that bag in the end.)

This is my view, mind you. I’ve not been able to get hold of everyone involved today to sanity-check my assessment of specific work areas, so it might see some updates.


First of all, a short reminder on how all project pieces interrelate:

Basically, there has for some time now been ongoing work in Routing - a fundamental layer of the network - with a dedicated portion of the dev team. This work is dealing with the requirements for Fleming.

Adjacent layers have been continuously maintained during this time (bug fixes, missing pieces, adjusting to the requirements).

Also during this time, the work on the top-most layers has been planned to take on tasks that can run during the course of completion of the lower levels.

Additionally, there are requirements for the MinimumViableExperience that need to be ongoing and moving towards completion, even during Fleming development.


Long term work

For a long-running project, there are a multitude of horizons, timelines and projections for various team goals. Some are pure feature tasks, others are organizational, and in a company dealing in pure development these often interleave and correlate.

So, while there always is - and has to be - a focus on short-term goals such as getting things implemented and “shipped”, there are larger movements in a team that actually affect how well that works out in the end, both in terms of progress and quality of output.

The larger movements are like turning a large ship. You don’t see the turn immediately, and you have to put in the force well ahead of time. And it has to be done constantly, so as to navigate through the seen and unforeseen. There’s no relaxing there, whether “close to release” or not. (Basically, it has to do with the fact that you never really know when that “release” is, so letting attention slip believing “we are nearly there” can result in steering way off course.)

That’s why we see a lot of talk of “new things” and “changes” going on. That is part of steering the ship, and the result of openness about the process, sharing it with the community. It could happen in the background as well.
When they are mentioned, however, or become part of an RFC, it is often just the tip of the iceberg, the final materialization of long-term processes. So it can be a bit misleading to think of them as “sudden changes”.

So, we have this, perhaps a bit silly, allegory of large ship navigation. It’s just a compact way of referring to the above, further down in the text.


Let’s go through all the features, changes and topics we listed in the beginning of this post, and see where in all of this they fit in.

Data Types Refinement

This is two things:

  1. Simplify naming (good for: 3rd party dev UX) and
  2. Simplify code base (good for: network code base maintainability/extendability).

Simplifying the naming is a long-standing request from many people in the community, and that it is set in motion right now is more of a coincidence.

After thorough discussion in the team, the ideas were identified as a desired goal.

What: This is a multifaceted effort. It is part maintenance of the code base, part “ship navigation” of team developer mindset, both of which - as identified above - are always taking place at any given time. There is also a small element of feature enhancement (a minor part of it adds a few capabilities to one of the types).
When: If time allows, for Fleming, otherwise after.

Labels and Tokens

(With risk of needing to update this particular section due to not having participated much in it).

This is also basically two things:

  1. Preventing data siloing
  2. Providing an infrastructure for a permissions system

I must admit that I am not sure about the ins and outs of this part, or the reasons for working on this now as opposed to later, mainly because I have not spent much time there. So, I will need to update this part. But the second part, I think, is tightly connected to the Permissions UI and data management, and it is simply something we must get a solid solution for. There is no lack of work needed for that; we will get as far as the effort we can put in takes us.

What: It is part maintenance of the code base, and part feature enhancement.
When: If time allows, for Fleming, otherwise after.


Data Hierarchy Refinement

Similar to the mind-shift around “ordered” and “predictable” outcomes in a network, this is about changing the way we think about data structures in the network, because the current way has unfortunately not been very well adapted to the workings of a decentralized network (that is what is argued in the RFC, at least). It’s very similar to the discussion around CRDTs, but on a higher level, so dealing with the form of and relationships between data types, and how the infrastructure for that is properly designed.

What: Feature enhancement, and part long term “ship navigation”.
When: After Fleming.


Permissions UI

(With risk of needing to update this particular section due to not having participated much in it).

This is a vastly underestimated area. From the little that I have taken part in, I know that it is enormously difficult to design something that is flexible enough, simple enough, and can withstand changes. It requires an immense amount of thinking even to get it OK, and there is still so much to understand. So we really need to be working on this constantly, and to have it follow through in every step of the network’s evolution.

What: Requirement for MVE
When: Part Fleming, part after Fleming.


DBC - Digital Bearer Certificates

A potential implementation detail of the SAFE economy infrastructure, not yet an obvious choice of path.
It adds capabilities which we don’t have otherwise. At the moment it is unclear exactly how, and what compromises would be needed, if any.

What: Feature enhancement.
When: Not decided.


CRDT - Conflict-free Replicated Data Types

David has done a good job in the posts above of describing why this is something we need to be thinking about, and why we work on it now, as well as before and tomorrow. I’ll refer to those posts.

What: Long term “ship navigation”
When: Not quite applicable, it is a continuous effort with no end product. Some pieces may come in very soon, others much later.


Since I am personally working on 2 of the above areas, I’ll also add a few words about that:

Data Types Refinement and Data Hierarchy Refinement

As part of joining the team, putting in maximum effort to get into the nitty-gritty details of the project and returning maximum benefit from my skills, I have gravitated towards where I can produce the most worth for my hours. That has essentially had to do with system design: making the code base cleaner, simpler and coherent throughout. But not only that, I have wanted to infuse knowledge from the data-intensive messaging systems that I have been working with. A significant portion of the network code base deals with exactly those things, data structures and their relationships, in a setting where messaging is today considered standard. And that’s why we see the system design efforts surface in those areas to begin with.


Tying up the bag

As hopefully the above helps to show, it’s not very clear cut what is happening now, what happened before, what needs to be worked on now, and what is for later. There is also a limit on how many developers you can have working on the exact same thing. It’s not obvious that throwing a couple more developers into an area would speed that development up. People have their different skills and specializations, and it’s not that you can just put people on something and magic happens. Sure, from day to day you absolutely can, and we also do jump between widely varied tasks. But long term, people need to be working on things they master or desire to master, both because that’s how people are happy and stay, and because that’s how they do the best job possible, IMO. And then the trick is to get the right composition of people together.
So a lot of different things need to be moved along together, and there isn’t always an immediate urgent need that we can put every single person on; very often there will be plenty of long-term work that needs constant tending to - from various angles, more or less obvious - and we’re very open about those things. It can then probably seem like there’s suddenly completely new stuff coming up here and there.

Every day I hear the team talking about “How can we make this simpler”, “What can we leave out”, “How can we move faster to release”.
Those things are mostly in the very internal discussions, but it can be seen in GitHub comments here and there as well.

I guess the reason we get to hear more about the “new things going on”, is because that’s part of sharing what’s going on, and also we really like to hear the input on those specific things from the community.


So, those are some of my current views of things here.

41 Likes