What’s it all for? There are a lot of economic stresses and strains at the moment, and what we’ve become used to is becoming brittle in places. Hopefully it doesn’t get too bad, but I did wonder this week about the prospect of existing internet servers failing or being lost, and the data on them. Blessed are those who make backups, but getting to a place where we don’t need to worry about where the data is will be nice++.
Safe as infrastructure
Thanks again for all the progress and updates… good to see
Might want to turn this update into a tutorial or FAQ or piece of general documentation or something. Though it needs a link to the glossary for things like what defines a section and such. Like if something needs clientauth + sectionauth, then that begs the question what sectionauth is and what defines a section. Same for the other authorities. I know these sound like stupid questions, but this whole coding process is very abstract at best. As near as I can understand it, you’ve got clientauth, which is the client, an individual’s authority; then you have a section, which is a conclave of regional elders (I have yet to wrap my brain around how XOR space works). And what is DAGauth? What is a DAG anyway? This is why we need a glossary with clear definitions.
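Since the question came up: a DAG (directed acyclic graph) is just a collection of records where each one links back to earlier ones, and no chain of links ever loops back on itself, which is why every entry can be traced back to a starting point. A toy sketch in Python (the names here are mine for illustration, not anything from the actual Safe codebase):

```python
# Toy illustration of a DAG (directed acyclic graph): each entry lists
# the earlier entries it depends on. All names here are hypothetical.

def is_acyclic(graph):
    """Return True if graph (node -> list of parent nodes) has no cycles."""
    state = {}  # node -> 1 (being explored) or 2 (fully explored)

    def visit(node):
        s = state.get(node, 0)
        if s == 1:   # looped back to a node still being explored: a cycle
            return False
        if s == 2:   # already checked, no cycle through here
            return True
        state[node] = 1
        for parent in graph.get(node, []):
            if not visit(parent):
                return False
        state[node] = 2
        return True

    return all(visit(n) for n in graph)

# A DBC-like history: each spend points back at the spend(s) it came from.
spends = {
    "genesis": [],
    "spend_a": ["genesis"],
    "spend_b": ["genesis"],
    "spend_c": ["spend_a", "spend_b"],  # merging two histories is fine in a DAG
}
print(is_acyclic(spends))  # True: every spend traces back to genesis
```

The "acyclic" part is the whole point: a loop in the history would mean a record depending on itself, so a valid history must always pass a check like this.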
I have started to doubt the way development is being done at the moment, I mean the “launch as soon as possible” approach. I think it leads to all aspects of the network being developed in sync, which is in many ways a good idea. But the problem is that it seems to be leading to a loss of incrementality in development, so that all of the possible value is realized at once, but later rather than sooner.
I have to support my living costs by selling my crypto holdings regularly. I am now 100% MAID because I have sold everything else. And I have started to sell those too. It looks like I’m not going to have any left at launch if it takes more than six months to get there. Of course, if there were some advancement that proved itself a bit more in a comnet-type situation, things could be different.
One such thing could be a network that is able to hold itself up without doing much of anything else. I mean something like the “no data” network that was able to get past many splits several months ago:
I thought something like that was cooking when I saw this PR, but it has not had any commits in over a month:
So, what I wish for now is some stripped-down version that could hold itself up in a distributed way, running on machines in our homes. I know that without data, farming, tokens etc. the whole network would be meaningless, but it is also true that without the working structure, all the development on data, DBCs, farming, etc. is useless.
Another thing I’d like to change is developing for Linux/Win/Mac all at once. I am not sure about the magnitude of the effect, but it seems that quite often the CI tests fail on one of the systems, and fixing the problem there can take hours or days. I have said that before, and I was refuted. It may happen again, and I may be wrong here, but the observation that fixing some problem on Mac, for example, when everything else is working, takes time is absolutely true. And when I look back and see all the solutions that have not worked out, I wonder how many hours were wasted trying them on all the platforms?
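For what it’s worth, the kind of gating I mean is easy to express in CI config. This is a hypothetical GitHub Actions sketch, not the project’s actual workflow: the Linux job runs on every push, and the macOS/Windows matrix only starts once Linux is green, so a Mac-only failure never blocks the fast feedback loop:

```yaml
# Hypothetical workflow sketch: fast Linux check first, full matrix after.
name: ci
on: [push, pull_request]

jobs:
  linux:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: cargo test --workspace

  cross-platform:
    needs: linux            # macOS/Windows only run after Linux passes
    strategy:
      fail-fast: false      # let the other OS finish even if one fails
      matrix:
        os: [macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - run: cargo test --workspace
```

That way experimental branches could even skip the `cross-platform` job entirely until the design settles.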
So could we change the development towards a less parallel approach, please? You already found it beneficial to take a step back from multithreading; maybe it would be a good idea to put some threads aside in a metaphorical sense as well?
I posted these comments in hopes of getting an outside point of view on how the network is being developed. I think it could be very helpful for the team to get a different perspective on this. I did get a response asking who would give this outside perspective. My thought was that maybe folk from the Mozilla Foundation or Rust Foundation could help, or point us in the right direction. I get the feeling things have stagnated and an outside point of view would be very helpful. We seem no closer today than a year ago. When we ask when, we are told “when it is working”. It’s been many, many years of this same response. I’m not trying to be offensive or disrespectful to the team. I think a fresh set of eyes on this could change things dramatically, if minds were open to input.
It seems from my layman perspective that the data and DBC work is almost there. I have not heard much about farming, though. I agree that a solid, stable testnet would be amazing. But maybe it makes sense to include data/DBCs in that testnet.
From my perspective, it has always looked like something is almost there. That’s why I have sold other coins, not this one. Well, that and the fact that I don’t like any other project. I don’t really understand, or subscribe to, their goals.
The thing that seems elusive to me is whether the network is able to make decisions about its own structure or not. I am sure I am missing many dots, but the ones I see I connect roughly this way:
April 2021 we had a testnet that was up for five days. It was unstable, and making wider use of AE was thought to be the solution.
Late 2021 it was realized that AE was leaving the network in some kind of split-brain situation, and the consensus/membership work was thought to solve that.
Spring 2022 consensus/membership was realized to still leave the network in an undecided state in some situations.
June 2022 came the new idea to borrow code from Poanetwork, and it was thought that:
Now we are talking about:
All the while when:
This confuses me, and makes me think I am not the only confused one around.
I think some of the coolest recent progress is the work that allows better analytics about what is going on in the network. I’m talking about statemaps, flamegraphs and all that. I am all for them. And of course DBCs, and actually everything else, are very good too. People are doing good work on their tracks; nothing to complain about there. It just seems to me that the efforts could be organized better.
And I am not at all certain that anything is “close”. I think development strategies should be set so that as little time as possible is spent finding the wrong answers before hitting the right one. Because one just cannot know what works until it works. Nothing is ever “close”.
I think we all feel your frustration @Toivo. I imagine the team feels it most. All we can do is keep faith in the team and hope they overcome the unknown unknowns quickly when they arise. Sorry your circumstances mean you are having to sell MAID.
That was a kind response from you, thanks for that.
But I think I raise valid concerns and would like to hear some input on those too. Like, is it really a good idea to develop on three platforms at once? Are there any metrics on how much time has been spent over the years fixing bugs on Win/Mac in to-be-discarded code? I bet months.
And to be clear, I am not criticizing the lack of a direct route to the right solution. It just seems to me that developing on all the systems in parallel makes sense only if the path forward is clear. Different dead ends are to be expected, a natural part of the process. But there is this persistent illusion of “being close” to a solution that is just false. You cannot estimate the distance to the solution before you are certain that it really is the solution, and that only comes in hindsight.
And all that is certainly not the team’s or anyone else’s fault. It’s my poor judgement on how I should or should not invest my money. But of course I raise my opinion here at the moment, because I think that another way forward would move the price up quicker than the current approach. You know, for a while I could keep the faith that eventually, in the not-too-distant future, my economics could get better. Now I am the fan, and the stupidity of thinking that “the thing” is not approaching has hit me full force. I can see it coming.
I do have faith in the individuals on the team and their general intelligence, problem-solving abilities, persistence, morals… etc. I just find the approach at the moment to be too all-encompassing. And I have flipped my opinion here. A couple of years ago I spoke against the “routing only” approach, because that seemed so dry to me, and maybe because I thought the problem would be solved sooner.
Or am I just skewed in thinking that the splitting of sections etc. is THE thing?
I’m not the person to answer those questions, and my guess is only team members would be able to. I was under the impression that we had all the parts in place and “just” needed to make them work together. It does seem to me that some of the parts don’t want to work together, but what do I know. If parts don’t work together for me, I just hit/cut/weld until they do.
And if I were a software developer, I would develop programs that have purpose-built “bugs” in them that a user could banish by banging the keyboard harder, throwing the phone, yelling at the device etc. That’s what good UX would be like.
Way off-topic, but I remember years ago a friend of mine telling me how he really doesn’t like household chores, except beating rugs, because there you can use physical power in a way that is so rare nowadays.
Maybe the random seed for the keys to your Safe could be generated by shaking your phone (violently). There could be some grains floating on the display, and the more force you use, the longer and faster they float around after you shake. Or maybe you break an image of a vase on your display by shaking. …aaand then your key is ready.
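Half-joking aside, sensor noise really is usable as an entropy source. A toy Python sketch of hashing shake samples into a 32-byte seed (the readings and function name are made up; a real implementation should always mix in the OS entropy pool rather than trust the sensor alone):

```python
import hashlib
import secrets

def seed_from_shakes(samples):
    """Hash (x, y, z) accelerometer readings into a 32-byte key seed.

    Toy illustration only: the OS entropy pool is mixed in so a weak
    shake can never make the seed predictable.
    """
    h = hashlib.sha256()
    for x, y, z in samples:
        h.update(f"{x:.6f},{y:.6f},{z:.6f}".encode())
    h.update(secrets.token_bytes(32))  # defence in depth: OS randomness too
    return h.digest()

# Made-up readings standing in for a real sensor stream.
fake_shake = [(0.12, -9.81, 0.03), (4.5, -7.2, 1.1), (-3.9, -11.0, 0.8)]
seed = seed_from_shakes(fake_shake)
print(len(seed))  # 32 bytes, ready to feed a key-derivation function
```

The vase animation would just be the UI on top; the shaking itself is what feeds the hash.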
Hmm… I am actually getting half serious here. Some kind of funny twist could be a nice touch from a marketing point of view; cryptography and security are always so damn serious… I propose everything else is put aside immediately in favor of this.