Starting to wonder if a pattern is emerging. I try not to be biased, but I think Safe is a huge threat to other projects, and when ex-engineers move on to other projects they may feel threatened. Safe IS ambitious, and perhaps that, in comparison to other projects, increases their skepticism, but so far the proof that Safe is infeasible seems unfounded.
If I might.
I have the utmost respect for David and the rest of the team, both because of the immense scale of the project they are working on and because of the ethical implications of having a truly decentralized and private Internet.
I have invested (for me) a good chunk of money in the project and have been putting more of my savings into it.
With that said, I’m worried about the lack of a direct reply to a few of the points made in that post.
In particular this one:
An additional problematic area is when nodes connect or disconnect. A group can lose
quorum if enough nodes drop out simultaneously, which would mean that
no more updates can be made to the resource. OR new nodes could
replicate the data from the remaining members of the group which are
below quorum. But if the network were partitioned instead, both sides
of the partition would replicate/repair the quorum and continue to
accept writes. The network would then need an algorithm for merging two
Again, I’m not saying the team should, right now, have all the answers.
But some transparency in what, for me at least, appear as fundamental concerns would be great.
An explanation about a solution currently in place, to point out that this is not a problem.
Or “this is a challenge that we recognize and we are working on it”.
Because each time a fundamental criticism is made I only hear silence.
Again, I have money in the project and I want it to succeed because it’s aligned with values I hold. If I’m raising these concerns, it’s because I want to see the project succeed; I’m not trying to be an anti-crypto shill.
If you have money in it that’s a risk you have decided to take. No one has taken your investment and certainly not guaranteed you anything. This is an open source project. You simply cannot have the expectations that you have.
At the end of the day, this guy is not a reliable source from what I can tell. It reads like lazy disinformation to me. He comes right out and says distributed systems were new to him and that he relies on intuition, and yet he claims to know the problems they have but doesn’t know how to fix them.
What he doesn’t understand is he was working in a compartmentalized environment.
I’m sorry, but that doesn’t address @yvon.fortier’s concern, does it? Although you might be right with your analysis, his question, which needs a more in-depth answer, is:
This, in particular, has been discussed many times on this forum. A group size is chosen in any DHT to be large enough that it’s infeasible to lose the whole group (think of group, quorum, etc. as the same thing) within a refresh time. A refresh is generally 30 or 60 minutes, but in SAFE the nodes are directly connected, so refresh is as close to network speed as possible. This makes losing a group significantly less feasible.
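A rough way to see why the refresh window matters (my own back-of-the-envelope numbers, not measured SAFE parameters): if each node independently drops out with probability p during one refresh window, the whole group of size G vanishes in that window with probability p^G, which collapses fast as detection gets quicker.

```python
# Back-of-the-envelope sketch: probability that an entire group of size
# `group_size` is lost within a single refresh window, assuming each node
# independently drops out with probability `p_drop` during that window.
# The probabilities below are illustrative, not measured SAFE values.
def whole_group_loss(p_drop: float, group_size: int) -> float:
    return p_drop ** group_size

# A slow (30-60 min) refresh vs. near network-speed detection: shrinking
# the window shrinks the per-node churn probability, and the group-loss
# probability drops exponentially with group size.
print(whole_group_loss(0.10, 8))   # slow refresh  → 1e-08
print(whole_group_loss(0.001, 8))  # fast detection → 1e-24
```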
Regardless of that, with disjoint groups and data_chains, merge/split of groups is handled as a natural thing. These make this even better handled, and data chains directly address split/merge/data republish, etc. Network partitioning is always something to consider, like earthquakes, and this is where data republish makes a huge difference. Doing so securely is hard, but we like hard at MaidSafe.
The reasons we don’t directly respond to all such points are:
- They are generally misinformed
- A single person takes out one of our engineers for a while to answer it
- It’s already been answered several times
- It’s a point about probability; all outcomes are possible, but not all are feasible
- It’s not a great way to start looking properly at any subject. Engage, clarify, understand and then critique; that is very helpful though
I take time to answer things where I can, but the cost is very high, so I focus on the forum mostly. Several other email lists and forums have some of us (especially me) going into days- or weeks-long explanations that would be answered much more quickly by reading the papers/wiki or searching here. Initially I did spend that time, but right now, if somebody wished to damage progress, this is exactly what they would do: just state some “fact” that’s not all that relevant, or perhaps not particular to SAFE, and make it sound feasible. It’s very clever, but a time sink. I am not saying this was the case here; I suspect it was just mistakes in areas that could have been cleared up very easily here initially.
My honest opinion these days, at this point is launch, launch, launch and have bug bounties, security bounties etc. and let folk point to a bit of code and show a real world exploit when we are up and running. Right now it’s not worth digging up old and incorrect assumptions and explaining them all to everyone who makes them. During Alpha we have said there are several security updates, the RFC’s clearly state what is being improved and hopefully that helps everyone.
Hope that makes sense.
David already replied, but here are my 2 cents. This is called “churn” (it brings up 50 results in search). It’s when nodes leave and others join the network. It adds to security, because all data stored with these nodes now moves somewhere else. On the other hand, it’s risky, because if too many nodes fail at once a group might go from 10 nodes to only 5. This is handled in this way:
- There’s no fixed quorum size to make a decision in a group, so there’s no need to always have at least 8 nodes in a group to make a decision. If 4 of them drop in an instant, there are still 4 nodes left and they can make a decision if 3 of the 4 agree.
- Nodes have addresses in XOR space, so “close” doesn’t mean a thing in geographic terms. Your close nodes are probably spread all over the world, making it less likely that 4 or 5 of them churn at the same time when a region of a country loses internet access.
- Archive Nodes are implemented. So even if we have a major blackout the network could be booted from 0.
- Connections in SAFE are fast, and “heartbeat” signals are sent all the time to check whether a node is alive. So if 2 nodes drop from a table of 9 nodes, the group knows in an instant. If the group gets too small it can merge with another group (Disjoint Groups), and my guess is that this will happen in under a second.
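To make the XOR point concrete, here’s a toy sketch (my own illustration, not MaidSafe code): “closeness” in a Kademlia-style DHT is the XOR distance between IDs, so a node’s close group has nothing to do with where its members physically sit.

```python
# Toy illustration of XOR closeness: the IDs, group size, and numbers
# below are made up for the example.
def xor_distance(a: int, b: int) -> int:
    # XOR of two IDs, read as an integer, is the Kademlia-style distance.
    return a ^ b

def close_group(target: int, nodes: list[int], size: int = 8) -> list[int]:
    """The `size` node IDs that are XOR-closest to `target`."""
    return sorted(nodes, key=lambda n: xor_distance(target, n))[:size]

nodes = [0b1010, 0b0110, 0b1111, 0b0001, 0b1001]
print(close_group(0b1000, nodes, size=3))  # → [9, 10, 15]
```

Note that 0b1001 (9) is closest to 0b1000 (8) even though 0b0110 (6) is nearer numerically; only the XOR of the bit patterns counts, which is why close-group membership is effectively random with respect to geography.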
This is what’s written by Lee Clagett as well, and I really don’t understand what he is asking for here. Now, I’m not an engineer or coder, but this is what’s said:
This is not a solution to the fundamental churn and partition problem. A
[newer document] mentions group merging, but does not describe how
groups with different states will be resolved.
This is a comment about Disjoint Groups. It makes the assumption that 2 groups should have the same “state”, and therefore consensus on data/decisions and more. This is fundamentally wrong IMO. The idea behind Disjoint Groups is that each group is responsible for a certain range in the address space. So if group A1 becomes too small and the same happens with group A2, then each group still has its own “state” for everything it is responsible for. Group A1 still signs stuff with quorum and group A2 does the same. They can’t conflict, because the group signature (quorum) of each group is “law”, so to speak. And when they decide to merge, they accept each other’s previous signatures and decisions. So IMO the writer of the article doesn’t have a good understanding of Disjoint Groups, and therefore of the current focus of the devs. But he’s free to explain here on the forum if I missed something in his statements.
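A toy sketch of that idea (again my own illustration, not MaidSafe code): if each group owns a distinct prefix of the address space, two sibling groups can never hold authoritative state for the same address, so a merge is just a union with no conflicts to resolve.

```python
# Hypothetical model: a group owns every address starting with its prefix.
# Names (`Group`, `merge`) and the prefix scheme are my own for the sketch.
class Group:
    def __init__(self, prefix: str):
        self.prefix = prefix   # e.g. "010" -> owns all addresses 010...
        self.state = {}        # address -> group-signed data

    def owns(self, address: str) -> bool:
        return address.startswith(self.prefix)

def merge(a: "Group", b: "Group") -> "Group":
    # Sibling groups differ only in the last prefix bit; their address
    # ranges are disjoint, so the merged state is a conflict-free union.
    assert a.prefix[:-1] == b.prefix[:-1]
    merged = Group(a.prefix[:-1])
    merged.state = {**a.state, **b.state}
    return merged
```

Since `"010..."` and `"011..."` addresses never overlap, each group’s signed decisions stay authoritative after the merge; there is no shared “state” to reconcile.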
Are you sure about this? That would mean you could launch a DoS attack on other close-group nodes to take over a group with a minority.
This is what’s in the RFC.
The quorum cannot be a constant anymore, due to varying group sizes. It needs to be a percentage strictly greater than 50% instead, and in a group of size n, a number x of nodes will constitute a quorum if x / n >= QUORUM.
The reason for making the point is this quote:
It reads to me like: we have 8 nodes, 4 of them churn, now we can’t have consensus. This isn’t the case, as quorum is a percentage of the number of nodes, not a fixed number like 8 or 5 or 7. That’s what I mean by no fixed quorum size. Each group finds consensus whether it has 4 nodes or 12. Only at 2 might we have trouble, but that’s why a merge should happen way before we’re at this level. And when a group does get to this level due to extreme churn, it can’t route new updates anymore, because the other close group won’t sign a thing without enough signatures from the sending group.
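A minimal sketch of the RFC’s rule as quoted above: x voters in a group of n nodes form a quorum iff x / n >= QUORUM, with QUORUM strictly above 50%. (The value 0.51 below is illustrative; the real constant is whatever routing defines.)

```python
QUORUM = 0.51  # illustrative value; must be strictly greater than 50%

def is_quorum(votes: int, group_size: int) -> bool:
    # The RFC's rule: x / n >= QUORUM.
    return votes / group_size >= QUORUM

# The same group keeps reaching consensus as it shrinks:
print(is_quorum(5, 8))   # 62.5% of 8 → True
print(is_quorum(3, 4))   # 75% of 4 after heavy churn → True
print(is_quorum(4, 8))   # exactly 50% is not enough → False
```

This is why “8 nodes, 4 churn, no consensus” doesn’t follow: the remaining 4 nodes simply form a smaller group in which 3 agreeing votes clear the same percentage bar.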
It’s unfortunate that you have to do this at this juncture, because your focus should be, as you say, “launch, launch, launch”. I think you understate the cost (it is extraordinarily high) and underestimate the impact of addressing each and every seemingly valid criticism of the Safenet concept.
Not unlike the previous rut, which you got out of, of making forecasts and promises that were not fulfilled, the expectation of timely responses from you and MaidSafe is now present. This is bad news for you, the team, MaidSafe and most of this community. It will become clear that no response means trouble. You are doing a great disservice to yourself, the team and the community, and you are delaying the launch.
Control Your Time, Manage The Communications:
If there is a public claim about MaidSafe, valid or not, you or your designate should formulate a proper public response that is succinct and direct, inviting the claimant to submit their concerns to GitHub, read the wiki, or refer to the bug/security bounties, etc. The SAME MESSAGE goes out for EVERY PUBLIC CLAIM. COPY/PASTE. Use Twitter and link to the original public claim. You will see in time that the real concerns get real attention in the proper forums (GitHub, RFCs, etc.) and the FUD dissipates as attention-grabbing headlines are met with timely, simple and effective responses.
Stop the madness. Stop the instigators. Stop wasting valuable time.
Focus on the network, not the FUDsters.
Great job David.
I think you are correct, but the assertion that David underestimates the cost of replying is probably a bit OTT. I’m sure he does, but when prominent forum members endorse the questions being asked, I’m certain refraining becomes even harder. The turdstirrer wins and we all lose this one.
I said he underSTATES the cost. I believe he “underestimates the impact” of addressing each and every concern. It is way too time-consuming for him or anyone to sit and read/listen to every single item being critiqued and dissected, by respected members or not. This contributes to the project being behind schedule, plain and simple, and these are “distractions” that need to be managed properly.
Hell, most people have no clue that this project even exists and of those that do, few have any clue how it works.
The priority needs to be launch, launch, launch… fuk the noise.
Turdstirrers are easily identified, and they have a distinct smell about them wherever they go. The way to deal with turdstirrers is to make them work for their win. I promise you this particular turdstirrer hasn’t got the backbone or the balls to work for his win. How do I know that? His cowardly methods. Pay close attention to the conspicuously absent response from the unemployed, not-so-prominent forum member and previously employed, misnomered “engineer” who did some dev work for David.
The reality is we are all losing right now. We have no Safenet, we have no Safecoin, we have no forecast for either, and we are spending excessive amounts of time addressing turdstirrers, or prominent members who cite turdstirrers.
Apologies I still had my pre-coffee eyes on.
It was good that the thread was brought to light. It is also good that people feel they can air their honest concerns on this forum. It is great that David’s response hopefully allays those concerns.
It isn’t great that the linked post may have been more FUD than fact, but we can’t help that. The worst thing to do is attempt to silence dissent - people will smell fear when that happens.
This is not about silencing dissent. It’s about properly and efficiently managing communications.
It is impractical to think every public question or concern, FUD or otherwise, is worthy of a detailed response. I’m suggesting people be redirected to the information that will serve as an answer. If it’s a real question, the answer will be found if the claimant is really looking.
I think this needed answering directly. It is different to have an ex-MaidSafe developer making what appeared to be detailed criticism that few here could counter, and I think carries a lot of weight to have it addressed by MaidSafe themselves - only they know the full role, context and background when someone they know and worked with does this. We’ve seen it a couple of times before, and if it isn’t answered properly it leaves open the possibility for those who wish the project harm to harp on about it, without the community being able to point to a definitive rebuttal.
In the above case I was able to do that on the Twitter stream, so it is now there for anyone who comes across it in the future. Others can do so as well if they find it repeated, and David need not get involved again.
When it’s someone who hasn’t worked directly on the code, I’m happy to jump in and try tackling them, but unless you know the code being referred to in detail, it’s hard to do that with someone whose credibility suggests they do have that knowledge.
I see your point and would almost agree if it was posted on this forum or directly to David or a dev.
There is no strategy/criteria for dealing with these situations, and there needs to be one. What you are establishing here is a dangerous precedent: this guy and dude dallylama can post their concerns everywhere, and David and the team must provide detailed responses each and every time.
Hardly, David and team can handle these as they see fit and I think it is wrong to suggest it is logical to respond in the way you say (in the following full quote) based on my opinion or David responding in this case. The first time an ex-employee is rebutted and doesn’t stand up their case makes it much easier for the community to respond if they do the same again, or to ignore them as you suggest.
Sure they can, at the expense of many things. Did David not already express concern that this was taxing? Under other circumstances I would agree with you, but not with the current state of affairs.
Thanks for the reply, David and polpolrene.
Makes a lot of sense.
At the end of the day, at a very high level, it’s similar to any centralized multi-node server architecture, in which several nodes offer a service redundantly and traffic is routed to them through a load balancer device or protocol.
In that situation, you are also at risk of losing the data. If you have a service on 3 nodes and all 3 die before you can add more members, you lose the data/service.
You need to weigh:
- How long the heartbeat packet timer is
- The probability that a node that is down is permanently offline versus just switched off for a few hours
- How critical the data is. Do you want 4 nodes with it? 8? 16? Perhaps users could choose, like you would when backing up your personal data nowhere (downloaded movies), in one place (a moderately important work document) or in 3 locations (personal photographs).
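That last trade-off can be put in rough numbers (a sketch of my own, with made-up probabilities): if each replica is independently lost within one repair window with probability p, you can solve for the smallest replica count whose loss probability falls below a durability target.

```python
import math

# Hypothetical helper: smallest replica count k such that p_loss**k
# (probability of losing ALL replicas in one repair window, assuming
# independent losses) falls below `target`. All numbers are illustrative.
def replicas_needed(p_loss: float, target: float) -> int:
    return math.ceil(math.log(target) / math.log(p_loss))

print(replicas_needed(0.05, 1e-9))  # critical data (photographs) → 7
print(replicas_needed(0.05, 1e-3))  # expendable data (movies)    → 3
```

So under these toy assumptions, roughly doubling the replica count buys many extra orders of magnitude of durability, which is why a per-user choice of replication level is plausible.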
Feel much better now, thank you!
And by the way, if Maidsafe and the community in general believes it’s adequate, I could write, with help of anyone else who wants to help, a document replying to common concerns.
I’ve heard people in the crypto space make the same criticisms time and time again, and it’s very hard to find the proper document to direct them to.