Update 30 September, 2021

I didn’t study the Merkle explanation the first time, but after @Blindsite2k’s post above I went back and dug in a bit harder. I’m not a mathematician or a programmer, and certainly could use more definitions, but I found the presentation quite informative. I get a sense of how it works and how much it solves.

Thanks team for making this much detail accessible to the simple folk.

11 Likes

It’s a balance between updates as tutorials and updates that inform people close to knowing the state of the art. It’s hell really, as it’s assumed we can bypass uni degrees and large textbooks to explain things. We get castigated for not saying what we are doing and then castrated for saying what we are doing. So it is a fine balance and we try; it will never please everyone and we don’t even try to anymore.

I hope what we publish gives those technically minded and inquisitive enough info to tell others, “hey, this is important.” Then the convo flows. But as I say, informing everyone of every detail is never gonna happen, no matter how much we want it to.

25 Likes

It’s all written out, and quite well I might add, but the trash can is just a visual aid: a representation of a ‘trash’ node that simulates deletion in the Merkle flow. They literally defined the trash node.
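To make the “trash node” idea concrete, here is a minimal sketch, not MaidSafe’s actual implementation: it assumes an append-only Merkle-style register where nothing is ever erased, so “deletion” is simulated by appending a tombstone entry that points the old entry at a designated trash node.

```python
import hashlib

TRASH = "TRASH"  # sentinel value: entries pointed here are treated as deleted

def node_hash(value: str, parents: tuple) -> str:
    """Content-address a register entry by its value and its parent hashes."""
    h = hashlib.sha256()
    h.update(value.encode())
    for p in parents:
        h.update(p.encode())
    return h.hexdigest()

class Register:
    def __init__(self):
        self.entries = {}  # hash -> (value, parents)
        self.heads = set()  # current tips of the DAG

    def write(self, value: str, parents: tuple = ()) -> str:
        h = node_hash(value, parents)
        self.entries[h] = (value, parents)
        self.heads -= set(parents)
        self.heads.add(h)
        return h

    def delete(self, target: str) -> str:
        # Append a tombstone linking the target to the trash node; the old
        # entry stays in history, but readers treat it as removed.
        return self.write(TRASH, (target,))

    def live_values(self):
        trashed = {p for (v, ps) in self.entries.values() if v == TRASH for p in ps}
        return [v for h, (v, ps) in self.entries.items()
                if v != TRASH and h not in trashed]

reg = Register()
a = reg.write("v1")
b = reg.write("v2", (a,))
reg.delete(b)
print(reg.live_values())  # ['v1'] — v2 now points at the trash node
```

The point of the sketch is only that deletion is a regular append to the Merkle structure, so history stays verifiable while the value disappears from the “live” view.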

I think many can sympathize with you though. I don’t have a strong grasp of it at a purely technical level either, but the more technically minded will, and it’s important that MaidSafe share their progress and designs for those who do understand to discuss and critique.

They are being transparent, and since they are doing mostly nitty-gritty work there isn’t much pudding to be spoon-fed, yet. Once the technical innovations are all sorted and out in the open to settle in, we’ll start getting UI and fun shiny toys. That only gets closer by the day :wink:

But I personally think you should just reread it several times and try to absorb and enjoy these special accomplishments.

We are in a unique position to know how the guts of the network work before it is live and in the limelight. Some people here might even end up being consultants someday just by following these updates and getting a good grasp.

15 Likes

A Merkle reg tutorial!?

“Oh wow! that’s awesome! My mind’s blown to smithereens! Thank you maidsafe!”

3 Likes

On a more serious note, when were the last two testnets and when can we expect the next one?

Is there still one running?

I’m satisfied with my vagrant boxes currently, so I want to test my docker image again.

2 Likes

Not publicly. If you check, though, you can run testnets locally easily enough, and now in CI we can run testnets with a simple PR message that executes them. You can too, but you need your own DO tokens etc.

4 Likes

Continuous integration? Pull request? DO?
I’m a bit lost now, because I can only associate the first two with git runners,
and wouldn’t that mean I would have to write separate GitHub projects for those wanting to start with root nodes?

Sounds like an invite @Josh, if you give it a go I’ll join in!

5 Likes

27 posts were split to a new topic: Testnet tool

I am happy to see all the progress in the updates as well as on GitHub. But I am also a bit lost regarding what the expected next steps are.

I mean that in June/July it was said that stability was the thing lacking before a public testnet. Then the expectation was to have a stable testnet without DBCs integrated yet, and to add them in later. Maybe the plan and the blockers (stability) are still the same, but I think it could be stated a bit more clearly? Because now it seems we are approaching a public testnet by the community first, and that raises the question of why that is possible but not an “official” one. Maybe there are so many warts still that the ‘warts and all’ approach does not quite cover the case yet, but you’d still like to give us something?

I’d like to hear a bit more about how soon your internal testnets fail, and what happens when they fail. What is the failure? Something like “structure breaks after splits” or “structure works fine, but uploading and downloading files does not work reliably”…

3 Likes

Most if not all GitHub commits right now are addressing bugs, and these cause failures. We have a few guys working on test/bug/fix daily. So right now it’s almost all stability, and each commit states what it’s fixing. So best to use that; otherwise, somebody will need to tabulate all the bugs we are fixing to reproduce that info in some easy-to-see whole-picture graphic. I feel it’s best, although tough and distressing, to let the guys keep squashing these issues. It takes a wee while, but that is just our luck more than anything.

While bug hunting like that is happening, we are all distracted and not able to say much more than it’s getting more stable, this beast.

The other part of the team are working on DBCs and looking at how to integrate a totally private money scheme into a network that requires farming and a proof-of-payment or contract-based mechanism to earn tokens. That needs some heads on it, but most heads are bug hunting.

19 Likes

Just to add, bug hunting tends to require consistent focus, as does the design of a crucial feature like the integration of DBCs, so it makes sense that the team are mostly quiet, although we can see the merges happening on GitHub daily, which reflects the progress being made.

I don’t think it’s worth distracting or breaking concentration to give more detail on progress. Maybe someone from the community cares to help others by tracking merges and making a table so this doesn’t impact the team?

8 Likes

It will not cover what fixes are planned for the future.

1 Like

I don’t know about others, but I am not interested in more detail, but more overview.

I mean, there is this very detailed level available on GitHub, and even I as a non-techie can make some sense of it. Then there is a retrospective overview every Thursday, when we are told in broad strokes what has been done the previous week. Then there is a very broad overview of where we are heading, but below that there is a level missing. And I don’t mean a timeline.

I’m not sure if I can even express exactly what I mean, but: there are bugs preventing something from working. What is that something that is expected to work once the bugs are gone? There has to be some target that is achieved once these hindrances are removed. At the moment I don’t know what that is. Some time ago the projected outcome of fixing the bugs was stated more clearly.

But it is not that big a deal, to do the work is more important, I agree.

And actually there is one detail I’d like to know, and that might be simple enough to answer: what are these “client reg / blob tests” in GH Actions, which so often fail on macOS while passing more consistently on Linux and Windows?

1 Like

Whether you call it more detail or more overview is not important because we agree on the main reason this is not satisfying everyone’s curiosity.

Clearly not, but that’s more work so the main point is whether it is worth slowing the team down for this. I don’t think many favour that based on earlier discussions like this.

2 Likes

If the bugs would just line up one at a time then we would know, but you fix one and it can expose another. Otherwise, it’s just a few bugs, you fix them, and there are no more; we all know this is not how it works though :wink:

The names of the tests should tell us exactly what is being tested. Registers are the Merkle registers that are fundamental data types now, and blobs are the binary data. I personally don’t like “blob”; to me it’s all files and folders, and the file content is either a small binary or a data map, and that is it. The content parts from the data map are chunks. So a blob is kinda a binary large object; I am not sure it fits well here, but it’s basically the file. As I say, though, when I think file I think metadata + content, and content == data map. Here, the file content == blob.
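The blob / data map / chunk relationship described above can be sketched roughly like this. This is a simplified illustration, not the real self-encryption code (which also encrypts chunks): file content (the “blob”) is split into chunks stored by content address, and the data map is the ordered list of chunk addresses needed to reassemble it.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real chunks are far larger

def store_blob(content: bytes, chunk_store: dict) -> list:
    """Split a blob into chunks, store each by its hash, return the data map."""
    data_map = []
    for i in range(0, len(content), CHUNK_SIZE):
        chunk = content[i:i + CHUNK_SIZE]
        addr = hashlib.sha256(chunk).hexdigest()
        chunk_store[addr] = chunk         # content-addressed storage
        data_map.append(addr)             # data map = ordered chunk addresses
    return data_map

def fetch_blob(data_map: list, chunk_store: dict) -> bytes:
    """Reassemble the blob's content from its data map."""
    return b"".join(chunk_store[addr] for addr in data_map)

store = {}
dm = store_blob(b"hello safe network", store)
assert fetch_blob(dm, store) == b"hello safe network"
```

In this picture, “file == metadata + content” and “content == data map” just means the file entry holds the small data map, while the actual bytes live as chunks addressed by hash.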

8 Likes

I know projects where the list of to-be-fixed bugs runs to thousands of entries.
When they fix 1 of 1000, the count may become 999 or 1003, for example.
But such a list is still important.

1 Like

The choice is

  1. Find a bug and work on killing it
  2. Find a bug, list it, allocate it to an engineer, fix it, update the list, and repeat.

I find 2 to be tedious, misleading, and only useful for huge teams, but I find huge teams useless at bug hunting like this.

Later, with a stable product, issue trackers etc. are almost invaluable, but right now they are not.

I liken it to telling special forces to go in and rescue some dude versus a full-scale military intervention. The former involves very little detailed planning, as the situation changes at a whim; the latter, masses of lists and paperwork, as the situation changes much more slowly. So it’s horses for courses really, IMO.

18 Likes

Thank you for the heavy work, team MaidSafe! I’ve added the translations to the first post :dragon:


Privacy. Security. Freedom

6 Likes

Does the Merkle register have a maximum number of forks? What happens if attackers keep making multiple forks concurrently before the forks of the register merge?
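For readers unsure what “forks” and “merge” mean here, a minimal sketch, assuming the register behaves like a multi-head Merkle DAG (which is how the updates describe it; this is not MaidSafe’s code): concurrent writes that share a parent create extra heads (forks), and a later write that cites all current heads collapses them back to one.

```python
import hashlib

def entry_hash(value, parents):
    """Content-address an entry by its value and (sorted) parent hashes."""
    h = hashlib.sha256(value.encode())
    for p in sorted(parents):
        h.update(p.encode())
    return h.hexdigest()

class MerkleRegister:
    def __init__(self):
        self.entries = {}   # hash -> (value, parents)
        self.heads = set()  # current tips ("forks" when there is more than one)

    def write(self, value, parents=frozenset()):
        h = entry_hash(value, parents)
        self.entries[h] = (value, frozenset(parents))
        self.heads -= set(parents)  # cited parents stop being heads
        self.heads.add(h)
        return h

reg = MerkleRegister()
root = reg.write("root")
# Two concurrent writers both extend `root` without seeing each other:
f1 = reg.write("fork-a", {root})
f2 = reg.write("fork-b", {root})
print(len(reg.heads))   # 2 — the register now holds two forks
# A merging write citing every current head collapses the forks:
reg.write("merge", set(reg.heads))
print(len(reg.heads))   # 1
```

In a sketch like this nothing bounds the head count per se: each concurrent write adds a head and each merge removes some, which is exactly why the question of attacker-driven fork growth is worth asking.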