Please Read: Digital Ocean Maintenance Issue

As everyone is aware, the Alpha 2 Network is currently running on hosted nodes as we continue to build out the functionality in advance of a full release. The Alpha 2 Network was launched in September 2017 using this approach in order to actively develop and test component parts in public as we move towards the full launch of the Network.

Earlier this week, Digital Ocean (where the nodes are hosted) carried out maintenance, which we assume was in part to address a bug they had recently discovered. Following this, we noticed excessive churn in some groups of the Network, bringing with it the possibility of data loss. We have always openly stated that any data placed on the Alpha Network may be deleted at some point as we move through the Network evolution towards launch - but it is somehow more frustrating for us when a third party is the cause! Of course, these are the very weaknesses of centralised storage models that the more fully featured SAFE Network will solve.

As a result, we cannot estimate the impact of these nodes going offline. The whole point of the Network is that no-one (including us) can view or assess the data that has been uploaded. The situation is further complicated by the fact that all data is split into chunks on the Network: if a file on Alpha 2 (not the final Network) is missing even one chunk, it may be unusable in some situations. It is therefore impossible to say what impact this work by Digital Ocean has had. Continuing without responding in some way would therefore make the diagnosis of Network behaviour far more complicated as we move along the road to launch.
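As a rough illustration of why churn can make whole files unusable - a minimal Python sketch, where the chunk size, function names and absence of encryption are all simplifications rather than the actual self-encryption scheme:

```python
# Hypothetical sketch - the chunk size here is tiny and illustrative;
# the real network's self-encryption uses large, encrypted chunks.
CHUNK_SIZE = 4

def split_into_chunks(data: bytes) -> list:
    """Split data into fixed-size chunks, as the Network does conceptually."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def reassemble(chunks: list) -> bytes:
    """Rebuild the file; a single missing chunk makes it unrecoverable."""
    if any(c is None for c in chunks):
        raise ValueError("file unrecoverable: missing chunk")
    return b"".join(chunks)

chunks = split_into_chunks(b"hello safe network")
chunks[2] = None  # simulate one chunk lost to excessive churn
try:
    reassemble(chunks)
except ValueError as e:
    print(e)  # the whole file is lost, not just a few bytes of it
```

The point is simply that the loss is all-or-nothing per file: which files happen to have a chunk in an affected group is unknowable from outside.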

Therefore, we are now planning to carry out a reset of the Network in order to resolve this issue.

This has a few implications that it is important for everyone to be aware of:

  • All user data on the Alpha Network will be wiped

  • Every user will need to go through the invite process again to access the Network

  • The current plan is to do this on Wednesday 29th August (timing tbc)

  • If you have any data that you would like to reuse after this date, please make copies as it will be unrecoverable after this time

Whilst the decision to wipe test data is never easy (particularly given the great content that the Community has put onto the Network to date), it clearly represents the straightest path towards the ultimate release of the live Network. Collectively, this is an opportunity for those who have contributed to re-engage with the Network and to remind ourselves just how easy and compelling things like creating websites can be on the SAFE Network.

We will of course answer any questions you might have in the thread below - but in the meantime, we’d like to take the opportunity to publicly thank everyone once again who has taken part in the Alpha 2 Network to date. And we of course hope to see you as we relaunch the Network and move to the next stage!


Can this be pinned to the top of the forum? There should also be notices on social media and on the GitHub repositories IMO.


Sensible, responsible incident management.
Roll on Wednesday till we can all try again.

Can we get >1k PUTS this time?


I’m not too good with simulations and the finer details of the network, so I’m just putting this out there, but can this accidental “massive churn” be used to prove the simulations are correct, or even to make them more precise?

Either way, though they can be annoying, resets are expected in the alpha and beta stages of most software, so I’m sure we’re all prepared.


And call it Alpha 2.1


I was just thinking this.


While the 1k PUTs limit is a bit annoying, to be sure, MaidSafe is currently footing the Digital Ocean bill for everything we store on the alpha network. Giving every account more PUTs means they have to pay for that much more storage.


Ah - that’s why…

Penny drops - I had it in my head that the 1k limit was something that came in to limit spammy accounts waaaay back when.

OK, OK, request withdrawn.



Well, one thing is certain: this will make us stronger!


Disclaimer: I know this is an alpha network and lots of features are missing, but I still think some kind of geo-sections are worth development effort.

I feel that this highlights the danger in treating a virtual machine the same as a physical machine. The network should implement some kind of geo-sections; you could still use Digital Ocean but would need to host VMs in 8 different data centres. In the end these VMs could all end up on the same physical machines, but there is a much lower chance of 8 physical machines dropping at the same time. If you want to avoid this occurring again, you could put 96 VMs on Digital Ocean and 96 VMs on AWS. I’m not sure it would cost you much more.
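The “much lower chance” claim can be sketched with back-of-the-envelope probabilities. The outage figure below is an assumption for illustration, not a measured number, and it assumes data-centre failures are independent:

```python
# Assumed probability that a single data centre suffers an outage in a
# given window - purely illustrative, not a real figure.
p_outage = 0.01

# All replicas of a group in one data centre: one outage loses them all.
p_lose_group_same_dc = p_outage

# Replicas spread across 8 independent data centres: all 8 must fail
# at the same time for the group's data to be lost.
p_lose_group_spread = p_outage ** 8

print(p_lose_group_same_dc)
print(p_lose_group_spread)
```

Even with generous assumptions, spreading replicas turns a 1-in-100 event into an astronomically unlikely one - which is the intuition behind geo-sections.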

There would be enough latency between these data centres to cluster together machines near each other, thus creating geo-sections. This allows you to test some pretty powerful features of the network. Can it survive losing a data centre? What about 2, 4, or 8? What about losing everything from an entire host? This really simulates what could happen if an entire country were to have a power cut or blackout. How cool is being able to survive that!

Feel free to continue to ignore me, but I think ‘proof of data redundancy’ should be something that’s proven in the public data of the network, which can be queried by anybody at any time to see the health of the network. Knowledge is power, and being transparent with our users with that knowledge should be a core feature of the network. It instils trust and confidence in a trustless network. That’s how you get adoption from the less technically savvy: you clearly show, via a dashboard with a visualised view and stats, how ‘healthy’ the network is.
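As a sketch of what such a queryable health metric might look like - everything here (the replica threshold, the function, the idea that per-chunk replica counts are publicly queryable) is a hypothetical proposal, not an existing network API:

```python
# Hypothetical 'proof of redundancy' summary: given the replica count
# for each chunk (assumed to be publicly queryable), report the
# fraction of chunks that meet a minimum redundancy target.
MIN_REPLICAS = 4  # illustrative target, not a real network parameter

def redundancy_health(replica_counts: list) -> float:
    healthy = sum(1 for n in replica_counts if n >= MIN_REPLICAS)
    return healthy / len(replica_counts)

# A dashboard could surface this as a single percentage:
print(redundancy_health([8, 8, 5, 3, 8]))  # 0.8 -> 80% of chunks healthy
```

A single number like this is obviously crude, but it is the kind of transparent, anyone-can-verify statistic the post is arguing for.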


Might be a reasonable idea for alpha networks operated by Maidsafe, and I am sure they have considered it.

But once vaults are run by people, there should never be a geo problem, because people live in different geo locations.


But farmers farm in the most economical locations: massive data centres operated by Google, AWS, Azure. We risk economies of scale damaging the redundancy of the network, with too many machines clustered too close together.

Remember, Satoshi didn’t think people would ‘mine’ in pools. My wisdom tells me that geo-sections will one day save the network from a terrible disaster. I can only hope that @dirvine will eventually see that ‘proof of redundancy’ is worth the development effort.


This is not a topic for this discussion.

But to answer you simply: you have no proof that people will only use data centres, or that most vaults will be in data centres. The goal and design principle of the SAFE Network is to make it more profitable for people to farm using their spare resources. This will mean that data centre farming will only be done by those who feel that marginal profits are better than none and are willing to pay out big dollars for very low ROI. And even if data centres were the only place farming was done, it would still be distributed, because people use data centres all over the world anyhow.

A data loss situation occurred due to the human nature of running a network at minimal cost to conserve MaidSafe’s resources (in this case, money).

How is MaidSafe’s mentality here different from a farmer trying to make ‘marginal profits’? It is not!
The farms will all end up in the same locations if ‘proof of redundancy’ and rewarding geo-partitioning are not implemented.

‘Not a topic for discussion’ is simply another way of covering your ears and going lalalalalala: MaidSafe is already perfect, it won’t happen, it’s not possible, no need to prove anything to anybody, because we’re awesome. Why not just go ahead and delete all of my posts, as censorship is clearly the next stage of this obvious case of denial.


That sentence doesn’t even make sense. MaidSafe are not earning anything here, so this is just fanciful.

Come on, you know that is not what I was saying. I said that this particular topic is pinned so people can see a problem being fixed; it is not the place to discuss geo diversity in the final system. If you want to continue that discussion, then go back to the topic that already exists for it. We want people to be able to see important information without having to read other discussions in the same topic. Was that the PARSEC and 99% topic?


So what will MaidSafe be implementing to stop this recurring on alpha networks going forward?


Node ageing [edit], PARSEC consensus, data chains etc. Until Safecoin, we have the invitation-server-type stopgaps.


This wasn’t caused by the lack of node ageing, parsec consensus, data chains or invitations.

At the very minimum I would expect your infrastructure to change a little?
Anyone else feel like this isn’t an answer?


No need to act as if people are not trying to answer your good ideas/questions. It’s more like a misreading of the question because of not reading the previous posts.

@dirvine The question was more to ask whether MaidSafe will, on later alpha iterations where only MaidSafe are running the nodes, spread the nodes across 2 (or more) geo locations so that this problem has much less chance of happening - or use any other method to counteract the mass resetting of nodes.