Just kidding, thank you for your patience with us. At least you know there’s an avid community, just chomping at the bit to get involved, start nodes, and spread the word of freedom to the world once it’s available
Totally understand and to be honest we’re exactly the same, the thought of being able to start to use something we have been working on for close to 10 years is very exciting. We’ll also enjoy spreading the word with you.
If SAFE takes off, then it might start making sense to register the TLD. For now .safenet is less likely; there's little incentive for anyone to apply for such a long TLD. But that can change.
My thoughts as well. It somehow feels wrong to change a technically correct and sensible format because some browsers are very restrictive in their plugin capabilities. Maybe it's my perfectionist attitude compromising pragmatic judgement, but I'm inclined to say, let them adjust to us instead. We'll use Firefox in the meantime.
Besides, it’s probably wise to use a separate browser installation for SAFE surfing anyway, if you care about privacy.
We simply must attract and retain as many users as possible because the success of the network depends on it. Then, if we succeed, we get the chance to write new standards, which I think will be more radical: our own secure SAFE OS with the SAFE Browser, that you can tell to include the old net if you really really must!
The ease and simplicity of the SAFE Network's under-the-hood security is also a very high priority, and it's what will help generic web surfers adopt it. If they can be easily tricked, it's not serving that purpose. I think some browsers like Chrome are inherently restrictive, could change at their own whim, and it's their loss. It's like the team would be doing them a favour, even though you have the opportunity to capture a fraction of Chrome users. The payoff doesn't weigh up for me personally.
The developers who are close to the code have a viewpoint unavailable to all but the most intelligent and enthusiastic few. An insight, even if somewhat vague, might prove useful to those who are invested in the outcome of the project and need to make real-world decisions.
I think guesstimations by those at the coalface would have real-world value. Far better than a blanket ban on this sort of guesswork. This information, even if it proves ultimately inaccurate, is not of zero value.
150% agree, and I applaud your delivery of this very delicate subject. People are hanging off this on a daily basis, and that is pressure for all the devs. IMO the team needs to put out a realistic delivery window of 6-12 months. Hopefully that window can benefit you and make it easier to make your "real world decisions". After a decade & millions of $$ this team has no choice but to get it right … "You never get a second chance to make a first impression" … pressure? Yep.
While I wholeheartedly agree with you, this would seem to be necessary for critical mass of adoption.
Luckily, this won't be baked into the core of the network, but rather into the extension/addon for browsers. I foresee desktop applications (much like apps on smartphones) becoming more popular than whatever can be displayed inside a browser (which is limited).
I would agree if the predictions we could provide were accurate, but I question the value of providing well-meant but misleading information, particularly to people who are making real-world decisions based on the information we provide. When you look at large and successful companies with some of the world's leading talent, they very rarely provide new product timescale predictions. Quite often firms like Apple, for instance, announce a product literally as they launch it. This is also not about timescales applying more pressure; believe me when I say that no one could put more pressure on the team right now than we put on ourselves. It is simply about communicating accurate information. I hope this makes sense.
Comparing this to enterprises like Apple or Google is slightly misleading, I think. Apple doesn't host weekly updates on one of their products. Also, the initial offering started with an estimation of having the product ready at a certain point. Under these circumstances it is understandable that people keep asking, no matter how good the argument is that everything needed to be rewritten after shifting to Rust. While the move may have been a good choice, it spread insecurity, since it shows that long-term plans cannot be trusted. Also, the move from giving estimations to not giving estimates at all will spread insecurity.
Is it really a big deal to say: "we give ourselves 12 months to finish the core network"? As far as I can see, some people currently believe that we are only days from network launch - if that's not true, the record needs to be set straight. I think that giving a more realistic time frame (6-12 months) will rather lower the pressure on the dev team - and I think that's needed to make sure you don't burn out or make mistakes that will lead to major setbacks in the future.
This gives the very binary view that we are keen to avoid. The core network is never 'finished', and what we do and will give a picture of is the planned and iterative roll out of the network. We have started work on a much more detailed and hopefully meaningful roadmap (don't ask me when it will be released) that will indicate the order of the roll out, but it will come without timescales initially. It may be possible to start providing estimates for new features once we have the Rust 5 deliverables in place, but as I've said, until we have that any predictions are meaningless.
Client (A) connects to Relaynode (B). They exchange their PKIs.
The Relaynode also sends quorum_size. From which group is that? And why would A want to know any quorum_size while it's not even part of a group?
Node A does a GetNetworkName address request. Its destination is the NAE Managers. It seems that this communication is encrypted using the public keys of the NAE Managers, because the relay nodes are explicitly called relay nodes in the flow chart. But how did Client (A) get these public keys of its NAE Managers? And how does the Relaynode know who the NAE Managers of Client A are? It does route its messages back and forth. Are the managers just random nodes?
Not poking any Devs here, just some questions for random users who know the answers ;-).
Full Public ID client messages are self-validating (name == public key).
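A minimal sketch of what "self-validating" can mean here, taking the post's "name == public key" literally; the struct and field names are illustrative, not the real routing types:

```rust
// Hypothetical sketch of a self-validating public id: because the name is
// defined to equal the public key, any node can check a message's claimed
// identity without consulting an external authority.
struct PublicId {
    name: Vec<u8>,       // the client's network name
    public_key: Vec<u8>, // signing key bytes (dummy values below)
}

impl PublicId {
    // Valid iff the name really is the public key.
    fn is_valid(&self) -> bool {
        self.name == self.public_key
    }
}

fn main() {
    let good = PublicId { name: vec![1, 2, 3], public_key: vec![1, 2, 3] };
    let bad = PublicId { name: vec![9, 9, 9], public_key: vec![1, 2, 3] };
    assert!(good.is_valid());
    assert!(!bad.is_valid());
    println!("self-validation check passed");
}
```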
Clients will accumulate (this is the sentinel type check, where we ensure a real group sent at least QUORUM messages in agreement back to us) and security-validate messages from network groups. On a new network or a full-blown restart this allows the network to auto-start from any location without people needing to start special nodes.
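The accumulation step can be sketched as follows. This is only an illustration of the idea (count distinct group members agreeing on the same answer until QUORUM is reached); the `Accumulator` type and its method names are made up for the example:

```rust
use std::collections::HashMap;

// Sentinel-style accumulation sketch: a response is only acted on once at
// least `quorum` distinct group members have sent the same claim back.
struct Accumulator {
    quorum: usize,
    // (request id, claimed value) -> distinct senders seen so far
    seen: HashMap<(u64, String), Vec<String>>,
}

impl Accumulator {
    fn new(quorum: usize) -> Self {
        Accumulator { quorum, seen: HashMap::new() }
    }

    // Returns Some(value) once a claim reaches quorum, None before that.
    fn add(&mut self, request: u64, sender: &str, value: &str) -> Option<String> {
        let senders = self
            .seen
            .entry((request, value.to_string()))
            .or_insert_with(Vec::new);
        // A single node repeating itself must not count twice.
        if !senders.iter().any(|s| s.as_str() == sender) {
            senders.push(sender.to_string());
        }
        if senders.len() >= self.quorum {
            Some(value.to_string())
        } else {
            None
        }
    }
}

fn main() {
    let mut acc = Accumulator::new(3);
    assert_eq!(acc.add(1, "node_a", "addr_x"), None);
    assert_eq!(acc.add(1, "node_a", "addr_x"), None); // duplicate sender ignored
    assert_eq!(acc.add(1, "node_b", "addr_x"), None);
    assert_eq!(acc.add(1, "node_c", "addr_x"), Some("addr_x".to_string()));
    println!("quorum reached");
}
```

Counting distinct senders (rather than raw messages) is what makes a faked group, or one chatty node, unable to satisfy the check on its own.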
atm these are signed requests and can be validated at each hop. There are several messages from a->x to ask for a network name, then x->y to store the PublicId (new name), and then the connect messages; each member of y will have exchanged keys and they encrypt endpoints to each other. This disallows nodes in between finding the IP addresses of network nodes.
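Hop-by-hop validation can be sketched like this. Note the heavy caveat: a real implementation uses asymmetric signatures; here a simple hash tag stands in for the signature purely so the relay logic is visible, and all names are invented for the example:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a signature: a hash tag over the payload plus a key.
// NOT real cryptography - it only lets us show the per-hop check.
fn tag(payload: &str, key: &str) -> u64 {
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    key.hash(&mut h);
    h.finish()
}

struct Request {
    payload: String,
    tag: u64,
}

// Each hop (a -> x -> y) re-checks the tag before relaying further;
// a tampered payload is dropped at the first hop that sees it.
fn relay(req: &Request, key: &str) -> Result<(), &'static str> {
    if tag(&req.payload, key) == req.tag {
        Ok(())
    } else {
        Err("invalid request dropped at this hop")
    }
}

fn main() {
    let key = "senders_verification_key"; // stand-in for the sender's public key
    let req = Request {
        payload: "GetNetworkName".to_string(),
        tag: tag("GetNetworkName", key),
    };
    // Hops x and y both validate before forwarding.
    assert!(relay(&req, key).is_ok());

    // A payload altered in transit fails the check.
    let forged = Request { payload: "GetNetworkName-forged".to_string(), tag: req.tag };
    assert!(relay(&forged, key).is_err());
    println!("hop validation sketch passed");
}
```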
The hash of the client's public name == the NaeManagers of that client (this is a client trying to promote to a node).
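In other words, the group address is derived deterministically from the public name, so every node can compute it independently. A sketch of that idea, with the caveat that the real network uses a cryptographic hash for addresses while `DefaultHasher` is just a std-only stand-in, and the function name is invented:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative only: derive a "group address" by hashing the client's
// public name. DefaultHasher stands in for the real cryptographic hash.
fn nae_manager_address(client_public_name: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    client_public_name.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Any node can compute the same group address from the public name
    // alone, so no lookup service is needed to find a client's NaeManagers.
    let a = nae_manager_address("client_a_public_name");
    let b = nae_manager_address("client_a_public_name");
    assert_eq!(a, b);
    println!("NaeManager group address: {:x}", a);
}
```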
Steps of nodes are
Disconnected -> Bootstrap (now a client) -> GetAddress (try and make a node address) -> Node (address accepted and now connected to at least 1 close group member (NodeManager))
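The steps above can be sketched as a small state machine. The transitions are taken from the post; the enum and function names are made up for illustration:

```rust
// Sketch of the bootstrap progression described above:
// Disconnected -> Bootstrap -> GetAddress -> Node.
#[derive(Debug, PartialEq, Clone, Copy)]
enum NodeState {
    Disconnected,
    Bootstrapped,     // connected through a relay, acting as a client
    AddressRequested, // GetNetworkName sent, waiting for an accepted address
    Node,             // address accepted, connected to a close-group member
}

// Each successful step moves to the next state; Node is terminal here.
fn next(state: NodeState) -> NodeState {
    match state {
        NodeState::Disconnected => NodeState::Bootstrapped,
        NodeState::Bootstrapped => NodeState::AddressRequested,
        NodeState::AddressRequested => NodeState::Node,
        NodeState::Node => NodeState::Node,
    }
}

fn main() {
    let mut s = NodeState::Disconnected;
    for _ in 0..3 {
        s = next(s);
    }
    assert_eq!(s, NodeState::Node);
    println!("reached {:?}", s);
}
```

Modelling it as an enum makes the "no skipping steps" property explicit: a client cannot become a Node without first having an address request accepted.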
Sorry for the fast reply, we are all in a Google Hangout (these last few weeks 5 of us have been in the HO from 06:30 all day, and then we get our 8-10 hours afterwards to get some more work done, so mad mad busy, but routing has transformed itself into what it should have been).
Should all be very clear, secure, and maintainable very soon. Very much a case of clearing up some complexity that should not have been there. It allows us to add and ensure appropriate security is in fact in place.