Please Read: Digital Ocean Maintenance Issue


#64

Hello,

I now also get this error with the web-hosting-manager v0.4.4.
I didn’t get it the last time I tried, but that was probably before I made a WebID with the PoC WebID manager app.
So I guess the WebID causes the error and I need a newer version of the web-hosting-manager?


#65

@draw, you should be fine with this WHM and the upcoming v0.5.0.


#66

This WHM works indeed, thanks!


#67

Just going to reference my dev forum post so it can get attention:

Setting up a digital ocean SAFE network for MaidSafe Asia, any assistance or input is greatly appreciated :+1:


#68

I think there is a bug in the safe vault code when several nodes disappear from a section at the same time.

I have created a small test network with min section size = 4 in the routing config file. My network normally has 4 nodes, which means that all data chunks are duplicated 4 times, with one copy in each vault. I have developed a utility to display the number of immutable data chunks and the number of mutable data chunks in each vault. And I observe that these two numbers are always the same in all the vaults.

This kind of setup is useful for observing the number of chunks created by a command: for example, this is how I found this issue.

At one time I deleted 2 vaults by accident in a short interval (I didn’t notice the delay between them). No data was lost, because the 2 remaining vaults still had the same values for these numbers. Then I relaunched 2 vaults (one by one, with enough delay between them), but the 2 new vaults displayed lower values for these numbers, meaning that some data is not stored in these vaults.

This means that some chunks are duplicated fewer than 4 times, which is not a normal state. This fundamental invariant is not respected, which can be the source of subsequent data loss.
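The invariant check described above can be sketched in a few lines (a hypothetical sketch, not the actual utility; the vault names, chunk names, and the per-vault chunk maps are made up for illustration):

```python
# Sketch: verify that every chunk is held by as many vaults as the
# replication factor (here, the min section size of 4).
REPLICATION_FACTOR = 4

def missing_replicas(vault_chunks):
    """Return {chunk: count} for chunks held by fewer vaults than required."""
    counts = {}
    for chunks in vault_chunks.values():
        for chunk in chunks:
            counts[chunk] = counts.get(chunk, 0) + 1
    return {c: n for c, n in counts.items() if n < REPLICATION_FACTOR}

# Healthy network: every chunk present in all 4 vaults.
healthy = {v: {"chunk_a", "chunk_b"} for v in ("v1", "v2", "v3", "v4")}
assert missing_replicas(healthy) == {}

# After relaunching two vaults that never received chunk_b:
degraded = {
    "v1": {"chunk_a", "chunk_b"},
    "v2": {"chunk_a", "chunk_b"},
    "v3": {"chunk_a"},  # new vault, missing chunk_b
    "v4": {"chunk_a"},  # new vault, missing chunk_b
}
assert missing_replicas(degraded) == {"chunk_b": 2}
```

A non-empty result from such a check is exactly the under-replicated state observed here.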

I think this problem is possibly the root cause of the demise of the first alpha 2 network and other past networks. I was considering launching a community network (this test network was in preparation for it), but the discovery of this problem is a blocker for that.

Could someone at @maidsafe analyze this problem?


#69

This could be you losing quorum. With a group size of 4, you need 3 nodes for quorum. So new nodes may not believe the data; usually that would mean a section stalled forever (well, in alpha 2). I hope this helps, but let me know if not.
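The quorum arithmetic above can be made concrete. This sketch assumes a simple-majority rule, which matches the numbers given (4 nodes needing 3) but is an assumption, not the exact routing implementation:

```python
def quorum(group_size):
    """Simple-majority quorum: more than half the group must agree."""
    return group_size // 2 + 1

# With a group of 4, as in the test network above, 3 nodes are needed.
assert quorum(4) == 3

# Losing 2 of 4 vaults at once leaves only 2 nodes, below quorum,
# so the section can no longer make decisions and stalls.
remaining = 4 - 2
assert remaining < quorum(4)
```

This is why deleting two vaults simultaneously is worse than deleting them one at a time: with a delay, the section stays above quorum and can re-replicate before the second loss.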


#70

Thanks for your explanations. I now understand what I observe when I try to reproduce the problem with simultaneous deletion of 2 vaults: the two numbers I mentioned remain at 0, which is a symptom of the state you just described.

In my initial experiment these numbers weren’t 0. But as I said, I didn’t pay attention to the exact timings, and probably the first of the new vaults had time to receive some chunks before the last one died.

Edit: To be clear: what I observed was normal and there is nothing to worry about.


#71

Very troubling info, indeed. I hope this will be remedied when MaidSafe goes live. I haven’t stored any data in the network myself. Do I have to go through the invite process as well?


#72

Yes it will.

For one, the vaults will be in homes, without the need for MaidSafe to run the vaults themselves on Digital Ocean machines (or any datacentre machines), which can have these things happen.

Data republish would solve the issues anyhow.