Private<-->Public SAFEnetwork

**Edited post for clarity, hopefully.**
**Edit:** This scenario is framed with the following in mind:

SAFEnet aims to rid the world of servers within 10 years, an ambitious goal for sure. If this goes according to plan, the SAFE network would be super quick and we’d be running apps straight off the network over fibre. Wireless and mobile would be way better, battery life would be great, every device with storage would have a vault, etc.

So it’s somewhere on the way to that sort of environment that I’m placing this scenario. Currently there’s a lot of hype around the Private/Public hybrid cloud (files)…but the next move (for conservative enterprises) might be some kind of Private/Public SAFE network scenario…that’s how this scenario is framed. The mindset of storing ‘files’ is gone…it’s all chunks now.

Larger organisations are rightfully wary of trusting their data to public cloud infrastructure, so they use this same open-source software to build their own private clouds, and sometimes go hybrid with a public cloud.

Fast forward, say, 5 years from now: the business has recognised the benefits of SAFE and has run a test network on their client machines. They want to go 100% internal SAFE, but how do they now get redundancy?

Running a private SAFE network across one large site provides no redundancy in the event that site is wiped out. If running several sites it wouldn’t be a problem.

Conversely, if you ran a single-site business entirely on the public SAFE network and the communications go down, you’re relying on local vaults having all the chunks.

I’m wondering: could the public SAFE network provide redundancy for their private SAFE network (in a one-site scenario), similar to how hybrid Private/Public cloud works now?

I’m not sure I understand this correctly.

Redundancy of one’s data on the SAFE network is guaranteed by the network which makes four copies of each chunk.
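A rough way to see what that replication buys you is sketched below. The per-copy loss probability and the independence assumption are made-up numbers for illustration; only the four-copies figure comes from the paragraph above.

```python
# Rough durability illustration. The "four copies per chunk" figure is from
# the post above; the per-copy loss probability p is an assumed, illustrative
# number, and copies are assumed to fail independently.
p = 0.05            # assumed chance of losing any single copy in a period
copies = 4
print(p ** copies)  # 6.25e-06 chance of losing every copy of a chunk
```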

If you’re wondering about a local copy of one’s data, then you’re talking about redundancy of your local data before it’s sent (uploaded) to the SAFE network. That data is not part of the SAFE network, and for that you’d naturally use whatever data protection methods you use today (RAID, backup, mirroring, and so on).

Depending on your requirements you may want to use a combination of approaches too numerous to mention. Your data would be very redundant on the SAFE network, but the recovery time and the cost required to fetch it may (or may not) be high. This cost includes the cost of the (down)time needed to download the files from the network.
Because it still takes a lot of time to download a crapload of data, and because the cost of downtime is significantly higher than for small enterprises, large enterprises are usually (there will be exceptions) going to keep a local copy separate from the copy on the SAFE network.
There will be enterprises that run directly off the SAFE network, and there will be enterprises that use a variety of approaches even for a single app (depending on cost/revenue/regulations/etc.).

I’ll try and explain myself a little clearer.

All servers are vulnerable, whole files are vulnerable, viruses and malware suck, etc.

So the business eliminates all servers and runs a 100% in-house SAFEnet, say 1,000 machines in one location. Everything is great: data cannot be stolen, data is fully distributed, and the business runs on in-house created apps.

Now, if the business had other locations running 100% in-house SAFE and one of those locations was destroyed, the data should have been sufficiently replicated across the remaining sites.

I’m wondering about the one-location business with 100% in-house SAFE: how do they get redundancy? Maybe you could have some offsite machines as part of the network, configured to store all the chunks…not sure.

I would think they would see the light and want to integrate into the global SAFEnetwork.

If they then go 100% global SAFE network and lose their communication links, they are reliant on locally cached data to continue business during the outage. Would there be sufficient cached data to access all their stuff?

Furthermore, what if the company decides they want to go with the 100% in-house SAFEnet, but somehow replicate into the global SAFEnet for redundancy…is that possible?

Hope that’s a little clearer :slight_smile:

So the business runs completely off the SAFE network and wants a copy of their data.
I’m not entirely convinced this is how people would use the SAFE network, but let’s say there’s a streaming app that serves independent artists. They may want to use the SAFE network for that, but also need a copy of their primary data.

By definition this copy would be better protected (e.g. different custodian/admin), otherwise all other risks (security, etc.) would be shared with the first copy since it’s stored on the same (SAFE) network.

You could fairly easily create a different copy by zipping a bunch of the original files together, or use any other method you like.
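For instance, a minimal sketch of bundling a directory into a single archive, so the second copy’s byte stream (and therefore its chunks) differs from the originals. The paths here are hypothetical.

```python
import zipfile
from pathlib import Path

# Minimal sketch: bundle a directory of original files into one archive.
# "production_data" and "backup_copy.zip" are hypothetical paths.
src = Path("production_data")
with zipfile.ZipFile("backup_copy.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for f in src.rglob("*"):
        if f.is_file():
            zf.write(f, arcname=f.relative_to(src))
```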

But I wouldn’t put this copy on the SAFE network because I’d have to pay more for it (this being a second copy not used for production, I have no use for it on the SAFE network - the only time I’d use it would be to recover one or more production copies of my files).

Even placing the 2nd copy on SAFE would not make sense to me, but you could do that.
As long as the network doesn’t “deduplicate” identical chunks, you could just put a copy of your files in a different directory and you’d have 2 copies. But as I said above, that doesn’t seem to offer much redundancy to me. If there’s a problem with the network (however unlikely that is), then your redundancy is gone.
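The deduplication point is the crux: if the network content-addresses chunks, a byte-identical copy in another directory maps to the same chunk IDs and adds nothing. A simplified sketch (plain SHA-256 per chunk, which is not SAFE’s actual self-encryption scheme) shows the idea:

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # illustrative 1 MiB chunks

def chunk_ids(data: bytes) -> list[str]:
    """Content-addressed chunk IDs: identical bytes -> identical IDs."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

original = b"some production data" * 100_000
copy_elsewhere = original  # byte-identical "second copy" in another directory

# Under deduplication both uploads resolve to the same chunks,
# so the second copy buys no extra redundancy.
assert chunk_ids(original) == chunk_ids(copy_elsewhere)
```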

That depends on your data set size. If your HDD is 1 TB and you put 900 GB on SAFE, you could certainly have quick access to this “cache”. But then there’s no real difference between a “cache” and simply having a local copy, apart from the additional cost of having another copy on the SAFE network.
With the caching approach you’d have a total of 3 copies (production data on SAFE, a local cache of the replica, and the replica on SAFE). That doesn’t seem very economical to me and I doubt enterprises will use it like that.

I don’t see why not, except that it seems expensive. As I mentioned above, if I have 50 TB on SAFEnet for production, I want to have a local copy on different media that does not depend on SAFEnet. If I make this copy on SAFEnet then it doesn’t count, so I’d need to add a local 50 TB “caching” layer (because I can’t know which of the files I’ll need to restore - maybe I’ll need all of them). That takes me back to my earlier question: do I still want to have a 2nd copy on SAFEnet? Probably not, although some people may want to do that (e.g. if they’re afraid the first copy will be “temporarily” taken offline by their friendly government officials, they may want to make a zipped/encrypted archive of all their data and keep a cached copy on site for recovery purposes, but also keep a copy of the zipped archive on SAFEnet in case they want to quickly release those files; but these are weird use case scenarios which are probably few and far between).

P.S. As SAFEnet grows, matures and becomes better understood it would be useful to document various scenarios because people will ask about this all the time.

Yeah, we’re not on the same page unfortunately.

SAFEnet aims to rid the world of servers within 10 years, an ambitious goal for sure. If this goes according to plan, the SAFE network would be super quick and we’d be running apps straight off the network over fibre. Wireless and mobile would be way better, battery life would be great, every device with storage would have a vault, etc.

So it’s somewhere on the way to that sort of environment that I’m placing this scenario. Currently there’s a lot of hype around the Private/Public hybrid cloud (files)…but the next move would be some kind of Private/Public SAFE network scenario…that’s how this scenario is framed. The mindset of storing ‘files’ is gone…it’s all chunks now.

So, no files and no requirement to store files, only chunks.

Edited the OP hopefully for more clarity :slight_smile:

In the long term I agree - it will be as you say or even something more advanced.
And some users will get there way ahead of others - depending on their specific needs and how well SAFEnet and other approaches will work.

My feedback was centered around mainstream enterprise use case scenarios I expect to see on the first production releases of MaidSAFE. The majority will be slow. It’ll take 1-2 years just to get SMBs to start trusting SAFEnet with pre-encrypted backups, let alone production data… For enterprises, at least 2-3 years will pass before 10 out of the Fortune 500 adopt SAFEnet.

OT: if SAFEnet is good, the adult video industry should become the first major user…
That would be a very important milestone, a validation by for-profit enterprises.

Understood and thanks for your input

I’d love to be in a team playing around with this stuff on a decent sized pilot project.


Likewise!

By the way, I don’t know if a “multi-signature” type of data control will be possible (or is possible now).
If there’s only one key and both the first and second copy of production data are on SAFEnet, then the key owner would be a single point of failure. Another risk is the risk of accidental (or deliberate) deletion.
Modern systems (say, Windows Active Directory and NTFS ACLs) provide ways for role based management, multiple accounts can have management rights, keys can be recovered, and so on.
In the absence of such multiple ways to fine-tune data access, I think it’s going to be difficult to expect that many enterprise users will immediately put all copies of their data on SAFEnet.
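Just to make the idea concrete, here’s a minimal sketch of the kind of k-of-n control I mean: a destructive operation only proceeds when enough distinct admins approve it. The names and threshold are illustrative only; this is not an existing SAFE API.

```python
# Illustrative k-of-n approval check, not a SAFE API.
REQUIRED_APPROVALS = 2
ADMINS = {"alice", "bob", "carol"}

def may_delete(approvals: set[str]) -> bool:
    """A delete goes ahead only with approvals from >= k distinct admins."""
    return len(approvals & ADMINS) >= REQUIRED_APPROVALS

print(may_delete({"alice"}))           # False - one admin alone is a single point of failure
print(may_delete({"alice", "carol"}))  # True  - threshold met
```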

However that doesn’t prevent us from brainstorming about use case scenarios where what we (will soon) have is good and economical enough. :slight_smile:

I think the public/private cloud concept breaks down in the face of a successful SAFE network. If a company or group develops sufficient confidence in the technology represented by SAFE, trying to set up a separate clone system will not be advantageous, I think. You’d lose a lot of the natural advantages of the technology by trying to use it on a local or even multi-local basis.

There will be transition pains, but I’m envisioning that in the future someone trying to set up a separate system duplicating SAFE locally would be viewed as putting up an atmosphere dome over their company and saying “This is where our air begins and your air ends.” Yeah, it could be done, but a lot of the advantages of SAFE would be lost, and much less gained. That’s how it looks to me, anyway.

Assuming 10 years from now bandwidth will be plentiful, comms/power always up, and SAFEnet at version 17.9 with dozens of Fortune 500 customers, yes.

Right now or in 2015, no.
a) Cost (unless SAFEnet is cheaper than tape and bandwidth extremely cheap)
b) Crazy admin / admin hit by a bus
c) Pilot error / fat finger
d) Slower recovery performance (vs. local copy) cannot meet RTO (rough numbers sketched after this list)
e) Legal requirements
f) etc.
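To put point (d) in perspective, a back-of-the-envelope sketch of restore time if the whole data set has to come back over the WAN. The 50 TB figure is from my earlier example; the 1 Gbit/s link and perfect utilisation are assumptions, and this ignores protocol overhead and local disk speed.

```python
# Rough restore-time estimate for pulling a full copy back over a WAN link.
# 50 TB is the data set size from the earlier example; the 1 Gbit/s link
# and perfect utilisation are assumptions for illustration.
data_tb = 50
link_gbps = 1

seconds = data_tb * 1e12 * 8 / (link_gbps * 1e9)
print(f"~{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")
# ~111 hours (~4.6 days) - a local copy usually restores much faster
```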

Yet, it doesn’t matter.
Personally as a contributor and crowdfunder, I don’t care and it doesn’t make sense to rush people into it before their time because they simply won’t do it.
Innovators will use it first, then early adopters, and so on. In terms of organization size it’s going to be the same old drill as well: freelancers, small projects, SMBs, and finally enterprises.

Experience has humbled me as to taking any of my own future predictions seriously. But that hasn’t stopped me from trying. :blush:


Yes, I was looking very closely at OpenStack, and that would already disintermediate a fair chunk of IT staff, with DevOps taking over somewhat. But now that I know about SAFE, that makes even more sense and boy…how many IT and security staff are going to be out of a job with that?

It’s really going the way of the DevOps/NoOps guys now with SAFE. Program in C++ or Javascript, know the SAFE architecture inside out, and off you go. Gonna be a lot of re-training going on.

I’m not sure larger businesses are going to go purely GlobalSAFE though (data retention laws etc), hence my OP.

For one-site operations (1,000 machines, for example) I’m thinking servers can still play a role, either with a couple of offsite containerised replications or via established data centres like Rackspace.

On the remote-location servers, you could run something like CoreOS, which uses Docker containers on bare metal (leveraging Linux kernel containment), and run your 1,000 nodes as an active failover network that way.

CoreOS is now offered in the datacentre via Rackspace OnMetal, so you could also run your 1,000 nodes of in-house SAFE as containers in a datacentre as an active failover network.
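As a very rough sketch of what that might look like - the vault image name and volume layout here are pure assumptions, there’s no published image I know of:

```python
import subprocess

# Hypothetical sketch: start N SAFE vault containers on a Docker host to
# act as an off-site failover pool. The "safe_vault" image name and the
# /srv/vaultN volume layout are assumptions, not a published setup.
NODES = 10
for i in range(NODES):
    subprocess.run(
        ["docker", "run", "-d",
         "--name", f"vault{i}",
         "-v", f"/srv/vault{i}:/data",
         "safe_vault"],
        check=True,
    )
```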

Certainly more real world drive space required, but the cost of extra disk would probably be well offset by the reduced staffing requirements.

The other fascinating thing is SAFEcoin…if you left that in the code, how could an internal crypto token system be leveraged, and would SAFE actually function without it in a private SAFE network? :slight_smile:

@janitor said: “I don’t care and it doesn’t make sense to rush people into it before their time because they simply won’t do it. Innovators will use it first, then early adopters, and so on. In terms of organization size it’s going to be the same old drill as well: freelancers, small projects, SMBs, and finally enterprises.”

I agree it’s unlikely larger companies would go all in, but there will be some that run pilots in 2015. There would be nothing stopping a 1,000-seat organization running a pilot in parallel, I would think - at the least firing up a chunk of datacentre containers, simulating their operation, and building a few apps on top.

Some great opportunities for early adopters in the whole SAFEspace ecosystem though…what a disruption it’s going to be. :slight_smile:

Yes, the devs are going to be busy writing tools to move systems across…what a great time to be a programmer. I don’t think I could handle re-training in development now, I’d rather go gold prospecting I think :slight_smile:


I agree. There really is no reason to abandon local ethernet networks, etc., for day-to-day ops in most cases.

And maybe I can see a large operation setting up a SAFE Clone network on a smaller scale. It could even mix trust/no-trust models, being internal. I guess that could make for better internal security. It could have built-in employer spying, yet have everyone else be completely secure.

I’m coming around a bit, but it makes my head hurt.

Yes it’s intriguing to think about the re-purposing that’s going to take place with the various Maidsafe libraries that have been released.

In the local network scenario, given the use of RUDP and the simple topology, maybe network switching/routing and even the desktops themselves could be simplified and made cheaper…a SAFElan-type product utilizing the libraries. I remember seeing David Irvine say that Hollywood should use their RUDP to move their movies around, for instance.
