After a few months of coincidental and compounding backup and hardware disasters, I have been wondering recently if it would be possible to run a conveniently configured version of SAFE on a private LAN as an alternative to a RAID setup for safety. I am thinking of adding say 5 or 6 RPis + HDs (SSDs?) to my existing collection of about 5 or 6 Linux boxes of varying vintage.
I am thinking of a simple duplication of chunks on two different machines, rather than six or whatever copies on six different machines.
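To make that concrete, here is a minimal sketch of the kind of two-copy placement I mean (a hypothetical illustration only, not SAFE's actual chunk-placement algorithm; the machine names and `replica_targets` helper are made up): hash each chunk's content so every node deterministically agrees on which two machines hold it.

```python
import hashlib

# Hypothetical pool: a few RPis plus the existing Linux boxes.
MACHINES = ["rpi-1", "rpi-2", "rpi-3", "linuxbox-1", "linuxbox-2", "linuxbox-3"]

def replica_targets(chunk: bytes, copies: int = 2) -> list[str]:
    """Deterministically pick `copies` distinct machines for a chunk
    by hashing its content, so placement needs no central coordinator."""
    digest = hashlib.sha256(chunk).digest()
    start = int.from_bytes(digest[:8], "big") % len(MACHINES)
    return [MACHINES[(start + i) % len(MACHINES)] for i in range(copies)]

targets = replica_targets(b"some chunk of a backup file")
assert len(set(targets)) == 2  # two distinct machines hold this chunk
```

Because placement is a pure function of the chunk's hash, any machine can recompute where a chunk lives when it comes time to restore.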
This would be a nice exercise for me and allow me to have some redundancy without needing to set up a RAID array. If it worked I could even spread the LAN over multiple buildings (in the same village http://lev.com.au) for off-site backup.
Do you want the other features of SAFE, like messaging, IDs, etc.?
Some 12 years ago I used a distributed storage system as a demonstration of distributed file storage across a LAN or a WAN, showing how a company could reuse its old PCs to implement it and actually save on hardware and electricity costs, while providing redundant storage across many locations and near-LAN speeds at each location.
If I remember correctly it was GFS, and it has many configuration options for replication counts, store sizes and so on.
The Global File System Configuration and Administration document provides information about configuring and maintaining Red Hat GFS (Red Hat Global File System). A GFS file system can be implemented in a standalone system or as part of a cluster configuration. For information about Red Hat Cluster Suite refer to Red Hat Cluster Suite Overview and Configuring and Managing a Red Hat Cluster.
Yes it is/was available for non-enterprise use too.
So the answers to some questions about how you will use your data storage will determine which style of system would be best.
But by the sounds of it the SAFE network will be slower than GFS2. GFS2 also acts like a typical file system, allowing concurrent updates of a file record, with redundancy to the level you want, and it has over 12 years of development and use elsewhere.
For your needs SAFE is both overkill and too young in development, and SAFE is actually aimed at a different market: one where security is uppermost, with multiple users, etc.
Make sure you check out GFS2, as that is the one I used back then, and I'm not sure if GFS is used for "Global File System" anymore. There is another filesystem (Gluster) that has GFS as its acronym.
I had a quick look - it appears GFS2 is just used to make a very big FS in a cluster - it is not doing what I understand SAFE to do, which is to save multiple copies of file chunks on different nodes...
Maybe I will just have to think of a stopgap until SAFE goes live . .
I cannot remember if it's on a file basis or a block basis, so I'm not sure whether whole files are replicated or individual blocks (512-byte or 4K blocks).
But it definitely keeps multiple copies of your data and these are spread across the data stores.
You specify the number of copies, and you provide the stores (usually one per machine).
And you can even have it such that a copy is kept at each location, i.e. maybe have 1-5 machines at each location and ensure one copy of each file/block is kept at each location. Or have 8 copies spread over, say, 20 locations.
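That location-aware placement can be sketched like this (a hypothetical illustration, not GFS2's actual configuration syntax; the location names and `place_block` helper are invented): group the stores by location and pick one store per location for each block, so every location holds exactly one copy.

```python
import hashlib

# Hypothetical store layout: a few machines per location.
LOCATIONS = {
    "office":  ["office-a", "office-b", "office-c"],
    "home":    ["home-a", "home-b"],
    "village": ["village-a"],
}

def place_block(block: bytes) -> list[str]:
    """Pick one store per location for this block, so each location
    holds exactly one copy regardless of how many machines it has."""
    digest = int.from_bytes(hashlib.sha256(block).digest()[:8], "big")
    return [stores[digest % len(stores)] for stores in LOCATIONS.values()]

chosen = place_block(b"block 0 of some file")
assert len(chosen) == len(LOCATIONS)  # one copy per location
```

With this scheme the replication count falls out of the location map: three locations means three copies, and losing any single building still leaves every block recoverable.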
The beauty is that no matter what, when you request data you get the latest copy, even if another location has just updated a block.
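The effect can be sketched with a version counter per block (again a hypothetical illustration, not how GFS2 actually does it - if I recall correctly its coherence comes from cluster-wide locking rather than versioning): each replica records a version alongside the data, and a read returns the highest-versioned copy it can see.

```python
# Each replica maps a block id to (version, data); a read compares
# versions across replicas and returns the newest copy.
replicas = [
    {"blk-0": (3, b"new data")},   # this location just updated blk-0
    {"blk-0": (2, b"old data")},   # this one hasn't caught up yet
]

def read_latest(block_id: str) -> bytes:
    """Return the data from whichever replica has the highest version."""
    version, data = max(r[block_id] for r in replicas if block_id in r)
    return data

assert read_latest("blk-0") == b"new data"
```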