Tiered Storage in Farming Rig

Excellent reply - thank you … always been impressed with your uncanny ability to think things through to their logical conclusion … looks like the seven years in the wilderness was not wasted time.



No, it won’t reduce it in a cost-effective manner.

My statement assumes zero-latency physical storage. Basically, the current software was designed around security before all else, and then speed of completion, so it's actually quite a simplistic design. If we had more resources, and given the benefit of hindsight, I think a very different design would emerge: something more like an ultra-low-latency hedge-fund platform with hard real-time latency guarantees.

With SAFE it’s all about timely delivery of content to earn coin - the chunks are effectively the gold backing the currency, and you can’t ASIC that.

It's still an open question which coin strategy is best: sheer volume from a small set of nodes (even if the storage is very cold, with a large warm cache in front), or as many nodes as possible without routing through some central core servers. I don't think we know the answer yet, though we all have our hunches.

One of the things I keep pestering David to do is strike a deal with someone like OVH, who have truly enormous spare server capacity. Basically, they'd fire their spare capacity at SAFE and earn coin for it.

Can you prevent that? No. But since OVH is one of the biggest European networks, all European and US East Coast SAFE users would get ultra-low-latency content and terabits per second of capacity. I also know for a fact that Google have looked into similar ideas for their spare server capacity.


Have you tested with something like Fusion-io PCIe memory tiers? … wicked fast (but expensive) … I sometimes combine them with SSDs and HDDs and plenty of DRAM in a four-tier storage architecture and let ZFS handle the intelligent caching out to the 10G ports … works very well for large video files (4K uncompressed), providing continuous playback for editorial, VFX and color correction workgroups, but given the small size of the SAFE chunks it may not be relevant.

Be very interested to know how your process modeling goes around the optimum architecture for increasing chances of PUT and GET hits … will you publish your findings in due course?

Interestingly, the whole of FB is based on PCIe memory tiers … seems your I/O architecture might be more suited to the ‘fast and furious’ OLTP trading desk world rather than the slowly plowed furrows of the local farm.

Precisely. That's what I was getting at in the "Agribusiness" and Enclosure Movement comment … given the propensity of Capital to transform its wealth into competitive technological advantage quickly and efficiently (some might add ruthlessly), where will that leave us peasant farmers, hoeing the land with a worn-out tractor or two?

Any thoughts on what Adam Smith would have to say about the notion of a ‘frictionless distributed digital economy’ dominated by the likes of the Winklevoss twins and their cohorts at Google, AWS, FB and Azure? If there’s money to be made in turning mega Cloud based storage factories into mega SAFEcoin conversion farms, I wonder who will come to dominate the factors of production first and use that technological advantage to continue turning wheat into dough at the expense of the peasants putting bread on their tables? (That’s enough mixed metaphors for one sitting, methinks.)


For folks like Google, PoW mining made sense because they could recover some of their investment in that h/w, but it doesn't work any more: regular servers are simply no longer competitive. PoS is also not ideal, because if they need the servers for their own business activities (and presumably those pay better), they have to delete the data stored on them.

Which is I suppose why Google could be interested in turning their idle server pools into an Ethereum Matrix (mixed PoW/PoS).

To me that doesn’t seem like a good idea:

  1. You don't need to pay a horde of execs to run a MaidSafe farm. As a shareholder I'd go nuts if I saw them using the h/w bought with my money to mine MaidSafe! (I'm not a Google shareholder and never will be.)
  2. There’s no way they can get any decent ROI on that (my arguments are in Feasibility of datacenter farming (and the risk of farmer centralization))
  3. In the very unlikely case that the approach could work for them, it would defeat the whole purpose of MaidSafe. (Additionally, a huge farming player, or several, should be seen as a risk to the well-being of the network.)

@FuLl – This is one of the best threads I've seen on maidsafe.org. Although it's gone off on a couple of tangents from your original question, it's all valuable info for farmers.

My related question is: what about RAID-1? Am I correct in assuming a RAID-1 setup would establish a farming rig’s reputation better than (total storage caps being equal) a single SSD (or HDD), yet say one step down from a ZFS setup?


Agreed - great thread. IMO, home users have the advantage of very low costs - they are selling idle space which otherwise earned them nothing. Hopefully, there will be profit to be made by both big and small operations, for different reasons.


RAID is probably overkill for SAFE. If you lose a drive, you simply have to redownload all the chunks on it again.

BTW ZFS can do RAID too, but far more intelligently and reliably than any hardware or software RAID. A very common ZFS configuration is mirrored vdevs, so you add capacity in units of two drives. ZFS will regularly scrub all content for bitrot and use the good copy to heal the bad one. Unrecoverable read errors only happen at a rate of about 1e-14 per bit, but that works out to just 100 Tb (roughly 12.5 TB) read per bit error on average, which is not a lot by today's standards.
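As a quick sanity check on that figure, assuming the commonly quoted unrecoverable-read-error rate of 1e-14 per bit for consumer HDDs:

```python
# Back-of-the-envelope: average data read between unrecoverable bit errors,
# given the commonly quoted 1e-14/bit error rate (an assumed spec value).

ure_rate = 1e-14                        # errors per bit read
bits_per_error = 1 / ure_rate           # 1e14 bits between errors, on average
terabits = bits_per_error / 1e12        # 100 Tb
terabytes = bits_per_error / 8 / 1e12   # 12.5 TB

print(f"~{terabits:.0f} Tb ({terabytes:.1f} TB) read per bit error on average")
```

So a farm scrubbing tens of terabytes per month should expect ZFS to be healing flipped bits routinely.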


I would imagine RAID10 would do better than RAID1, if only for read performance.

Of course it’d do better, but at a 100% higher cost.
(For a moment we can put aside the fact that “faster reads” most likely won’t create any positive economic effect to the owner).

It’s a question that needs a balanced answer. It’s all about tradeoffs.
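One way to make that tradeoff concrete is a bit of arithmetic. The drive size and price below are hypothetical round numbers for illustration only:

```python
# Compare a 2-drive RAID-1 mirror with a 4-drive RAID-10 (stripe of mirrors).
# drive_tb and drive_cost are hypothetical placeholder values.

drive_tb = 4       # capacity per drive, TB
drive_cost = 100   # cost per drive, arbitrary currency units

def mirror_array(n_drives):
    """Usable capacity and total cost for n_drives arranged in 2-way mirrors."""
    usable = (n_drives // 2) * drive_tb   # half the raw capacity is redundancy
    cost = n_drives * drive_cost
    return usable, cost

raid1 = mirror_array(2)    # (4 TB usable, 200)
raid10 = mirror_array(4)   # (8 TB usable, 400)

print("RAID-1:", raid1, "RAID-10:", raid10)
```

Cost per usable TB is identical; RAID-10 simply doubles both the capacity and the outlay, while striping roughly doubles sequential read throughput. Whether that extra read speed earns any extra coin is the open question above.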