It will not cover what fixes are planned for the future.
I don’t know about others, but I’m not interested in more detail so much as more overview.
I mean, there is this very detailed level available on GitHub, and even I as a non-techie can make some sense of it. Then there is a retrospective overview every Thursday, when we are told in broad strokes what has been done the previous week. Then there is a very broad overview about where we are heading, but below that there is a level missing. And I don’t mean a timeline.
I’m not sure I can express exactly what I mean, but: there are bugs preventing something from working. What is that something that is expected to work once the bugs are gone? There has to be some target that is reached once these hindrances are removed. At the moment I don’t know what that is. Some time ago, the projected outcome of fixing the bugs was stated more clearly.
But it is not that big a deal; doing the work is more important, I agree.
And actually there is one detail I’d like to know, and it might be simple enough to answer: what are these “client reg / blob tests” in GitHub Actions that so often fail on macOS while passing more consistently on Linux and Windows?
Whether you call it more detail or more overview is not important because we agree on the main reason this is not satisfying everyone’s curiosity.
Clearly not, but that’s more work, so the main point is whether it is worth slowing the team down for this. Based on earlier discussions like this, I don’t think many favour that.
If the bugs would just line up one at a time, then we would know, but you fix one and it can expose another. Otherwise it would just be a few bugs: you fix them and there are no more. We all know that’s not how it works, though.
The names of the tests should tell us exactly what is being tested. Registers are the Merkle registers that are fundamental data types now, and blobs are the binary data. I personally don’t like “blob”: to me it’s all files and folders, and the file content is either a small binary or a data map, and that is it. The content parts from the data map are chunks. So blob is a kind of binary large object; I’m not sure it fits well here, but it’s basically the file. Although, as I say, when I think “file” I think metadata + content, and content == data map. Here, though, it’s the file content == blob.
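To make that terminology concrete, here is a minimal, hypothetical Rust sketch of the relationships described above (these are illustrative types I've made up, not the actual Safe Network API): a file is metadata plus content, and the content is either a small inline binary or a data map pointing at chunks.

```rust
/// Content-address of a chunk (placeholder for a real content hash).
type ChunkAddress = [u8; 32];

/// A data map: the ordered chunk addresses the content was split into.
struct DataMap {
    chunks: Vec<ChunkAddress>,
}

/// File content is either stored inline (small binary) or via a data map.
enum Content {
    Inline(Vec<u8>),
    Mapped(DataMap),
}

/// A file as described above: metadata (simplified to a name) + content.
struct File {
    name: String,
    content: Content,
}

fn main() {
    // A large file whose content was split into three chunks.
    let file = File {
        name: "notes.txt".to_string(),
        content: Content::Mapped(DataMap { chunks: vec![[0u8; 32]; 3] }),
    };
    match &file.content {
        Content::Mapped(dm) => println!("{} -> {} chunks", file.name, dm.chunks.len()),
        Content::Inline(bytes) => println!("{} -> {} inline bytes", file.name, bytes.len()),
    }
}
```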
I know projects where the list of to-be-fixed bugs runs to thousands of entries.
When they fix 1 of 1000, the list may become 999 or 1003, for example.
But such list is still important.
The choice is:
- Find a bug and work on killing it.
- Find a bug, list it, allocate it to an engineer, fix it, update the list, and repeat.
I find 2 to be tedious, misleading, and only useful for huge teams, but I find huge teams useless at bug hunting like this.
Later, with a stable product, issue trackers and the like are invaluable, but right now they are not.
I liken it to telling special forces to go in and rescue some dude versus mounting a full-scale military intervention. The former involves very little detailed planning, as the situation changes at a whim; the latter involves masses of lists and paperwork, as the situation changes much more slowly. So it’s horses for courses, really, IMO.
Thank you for the hard work, team MaidSafe! I’ll add the translations to the first post.
Privacy. Security. Freedom
Does the Merkle register have a maximum number of forks? What happens if attackers keep creating multiple forks concurrently before the register’s forks merge?
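For anyone unfamiliar with the premise of the question: in a Merkle-DAG-style register, each entry references its parent entries; two concurrent appends on the same parent create two unreferenced tips ("heads", i.e. a fork), and a later append that cites every current head merges them back together. Below is a hypothetical toy sketch of that mechanic in Rust (made-up types, not the real Safe Network register, and it doesn't answer whether a fork cap exists):

```rust
use std::collections::{HashMap, HashSet};

/// Toy Merkle-register-like DAG: entries reference parents; entries
/// nobody references yet are "heads". More than one head == a fork.
struct Register {
    parents: HashMap<u64, Vec<u64>>, // entry id -> parent entry ids
    heads: HashSet<u64>,             // current unreferenced tips
    next_id: u64,
}

impl Register {
    fn new() -> Self {
        Register { parents: HashMap::new(), heads: HashSet::new(), next_id: 0 }
    }

    /// Append an entry on top of the given parents; the cited
    /// parents stop being heads, and the new entry becomes one.
    fn append(&mut self, parent_ids: &[u64]) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        for p in parent_ids {
            self.heads.remove(p);
        }
        self.parents.insert(id, parent_ids.to_vec());
        self.heads.insert(id);
        id
    }

    fn fork_count(&self) -> usize {
        self.heads.len()
    }
}

fn main() {
    let mut reg = Register::new();
    let root = reg.append(&[]);
    // Two concurrent appends on the same parent -> two heads (a fork).
    let a = reg.append(&[root]);
    let b = reg.append(&[root]);
    assert_eq!(reg.fork_count(), 2);
    // An append citing both heads merges the fork back to one head.
    reg.append(&[a, b]);
    assert_eq!(reg.fork_count(), 1);
}
```

The question above is essentially: if attackers race to keep appending on old parents faster than merging appends can cite all the heads, does the head count grow without bound, or is there a cap?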