Private Beta as a RAID Alternative?

After a few months of coincidental and compounding backup and hardware disasters, I have been wondering recently whether I could run a conveniently configured version of SAFE on a private LAN as an alternative to a RAID setup for safety. I am thinking of adding say 5 or 6 RPis + HDs (SSDs?) to my existing collection of about 5 or 6 Linux boxes of varying vintage.

I am thinking of a simple duplication of chunks on two different machines rather than six (or whatever) copies on six different machines.
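SAFE decides chunk placement for you, but the two-copies idea can be sketched on its own. A minimal illustration (node names and chunk IDs are made up, and this is not SAFE's actual algorithm) of deterministically assigning each chunk to two distinct machines via rendezvous hashing:

```python
import hashlib

def place_chunk(chunk_id, nodes, copies=2):
    """Pick `copies` distinct nodes for a chunk, deterministically.

    Rendezvous (highest-random-weight) hashing: each node gets a
    score for this chunk, and the top `copies` scorers hold replicas.
    Adding or removing a node only moves the chunks that scored
    highest on it, so rebalancing stays cheap.
    """
    scored = sorted(
        nodes,
        key=lambda n: hashlib.sha256((chunk_id + ":" + n).encode()).hexdigest(),
        reverse=True,
    )
    return scored[:copies]

# Hypothetical node names for a LAN of RPis and Linux boxes.
nodes = ["rpi1", "rpi2", "rpi3", "box1", "box2", "box3"]
replicas = place_chunk("backup.tar:chunk-0017", nodes)
```

Every client computes the same two replica nodes for a given chunk without any central index, which is the property that makes this style of placement attractive on a small LAN.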

This would be a nice exercise for me and allow me to have some redundancy without needing to set up a RAID array. If it worked I could even spread the LAN over multiple buildings (in the same village) for off-site backup.

Is this idea viable at all?


What are the features you want?

  • encryption?
  • randomness of storage?
  • security from a thief stealing your drives?
  • multiple users?
  • the other features of SAFE, like messaging, IDs, etc.?
  • ???

Some 12 years ago I used a distributed storage system as a demonstration of distributed file storage across a LAN or a WAN, and of how a company could reuse its old PCs to implement it. It actually saved on hardware costs and running electricity costs while providing redundant storage across many locations, with near-LAN speeds at each location.

It was, if I remember correctly, GFS, and it has many configuration options for replication counts, store sizes and so on.

The Global File System Configuration and Administration document provides information about configuring and maintaining Red Hat GFS (Red Hat Global File System). A GFS file system can be implemented in a standalone system or as part of a cluster configuration. For information about Red Hat Cluster Suite refer to Red Hat Cluster Suite Overview and Configuring and Managing a Red Hat Cluster.

Yes it is/was available for non-enterprise use too.

So your answers to the questions above about how you will use your data storage will determine which style of system would suit best.

Better description

In computing, the Global File System 2 or GFS2 is a shared-disk file system for Linux computer clusters. GFS2 differs from distributed file systems (such as AFS, Coda, InterMezzo, or GlusterFS) because GFS2 allows all nodes to have direct concurrent access to the same shared block storage. In addition, GFS or GFS2 can also be used as a local filesystem.
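To make the "shared block storage" point concrete, here is a rough setup sketch (device paths, cluster name, and journal count are placeholders; consult the Red Hat GFS2 documentation for the real procedure on your cluster):

```shell
# Make a GFS2 file system on a shared block device (placeholder path).
#   -p lock_dlm            use the distributed lock manager for cluster-wide locking
#   -t <cluster>:<fsname>  lock table name; <cluster> must match the cluster config
#   -j 3                   create three journals, one per node that will mount it
mkfs.gfs2 -p lock_dlm -t mycluster:shared0 -j 3 /dev/shared_vg/shared_lv

# Mount it on each node; all nodes access the same block device concurrently.
mount -t gfs2 /dev/shared_vg/shared_lv /mnt/shared0
```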

Encryption? Not necessary in this case.

Randomness of storage? I guess so - as long as there is duplication somewhere for redundancy.

Security from a thief stealing your drives? Not necessary in this case - I should have physical security over the hardware.

Multiple users? Not likely.

The other features of SAFE? Not critical.

I thought that if a Beta could be easily configured it would be an interesting/useful exercise for me.

I will check out GFS - thanks!

Do so for your own education and enjoyment.

But by the sounds of it the SAFE network will be slower than GFS2. Also, GFS2 acts like a typical file system, allowing concurrent updates of a file record, with redundancy to the level you want, and it has over 12 years of development behind it and is used elsewhere.

For your needs SAFE is both overkill and too young in development, and SAFE is actually aimed at a different marketplace: one where security is utmost, with multiple users, etc.

Make sure you check out GFS2, as that is the one I used back then, and I'm not sure if GFS is used for "Global File System" anymore. There is another file system (Gluster) that has GFS as its acronym.

I had a quick look - it appears GFS2 is just used to make a very big FS in a cluster. It is not doing what I understand SAFE to do, which is to save multiple copies of file chunks on different nodes . .

Maybe I will just have to think of a stopgap until SAFE goes live . .

You specify the number of copies you want.

I cannot remember if it's on a file basis or a block basis, so I am not sure whether files are replicated whole or block by block (512-byte or 4K blocks).

But it definitely keeps multiple copies of your data and these are spread across the data stores.

You specify the number of copies; you provide the stores (usually one per machine).

And you can even have it keep a copy at each location. I.e. maybe have 1-5 machines at each location and ensure one copy of each file/block is kept at each location, or have 8 copies across, say, 20 locations.
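That per-location guarantee can be sketched too. A toy illustration (site and machine names are invented, and this is not the actual placement code of any of these systems) of keeping exactly one copy of a block at every location:

```python
import hashlib

def place_per_location(block_id, locations):
    """Given {location: [machines]}, keep exactly one copy of the
    block at each location, choosing the machine by hash so the
    choice is deterministic and different blocks spread across a
    site's machines."""
    placement = {}
    for loc, machines in locations.items():
        placement[loc] = min(
            machines,
            key=lambda m: hashlib.sha256((block_id + ":" + m).encode()).hexdigest(),
        )
    return placement

# Hypothetical sites, e.g. buildings in the same village.
sites = {"house": ["h1", "h2"], "barn": ["b1"], "office": ["o1", "o2", "o3"]}
placement = place_per_location("chunk-42", sites)
```

Losing any single site then costs at most one replica of each block, which is the point of pinning a copy per location.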

The beauty is that no matter what, when you request data you get the latest copy, even if another location has just updated a block.


OK then, that sounds like just what I want - I will have a closer look when I get home.

I guess it becomes academic though when SAFE goes live . .

Thanks again!


No reason your local network could not just use the SAFE network when it goes live: greater storage capability, and the potential for your various vaults to earn.

Yes, exactly what I have been intending . .
