I simply disagree with this sentiment. You are correct that the RAID setup may not be cost effective. It's hard to tell without a network to test. However, a RAID 10 setup would be both reliable and faster than a single drive.
Yes, the network provides the data security, but I'm not talking about that. I'm talking about the reliability of your node playing into how high up the tier it goes, which may affect your ability to earn payouts. It'd have to be tested to see what is most cost effective.
So you are going to improve the unrecoverable bit error rate from figures like 1 in 10^14 to 1 in 10^15. SAFE isn't reading your drives the way a local area network server is being read. You might get 1 chunk read every minute or hour.
A chunk is up to 10^7 bits. So in the worst case a single drive is going to have an unrecoverable read error once in > 1x10^7 chunks read, assuming the read error occurs in the chunk storage and not in other areas on the disk.
As an experienced engineer and sysadmin I could not justify using RAID for these issues in this application. Just buy a better drive.
To put it in perspective, at one second per chunk read, which is considerably faster than the expected average chunk read rate, it will be over 2,700 hours between read errors, which is like 3 per year maximum. Your vault is going to be churned well before that. And 1 error every 4 months isn't going to affect any reliability measure.
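The arithmetic above can be sketched quickly. Figures are the ones from the post (1-in-10^14 URE rate, ~10^7-bit chunks, one chunk read per second):

```python
# Back-of-envelope check of the error-interval figures above.
# Assumptions (from the post): consumer-drive unrecoverable read error
# (URE) rate of 1 bit in 10^14, 10^7-bit chunks, and a pessimistically
# fast rate of one chunk read per second.
URE_RATE = 1e-14      # unrecoverable errors per bit read
CHUNK_BITS = 1e7      # bits per chunk (worst case)
READS_PER_SEC = 1.0   # chunk reads per second

chunks_per_error = 1 / (URE_RATE * CHUNK_BITS)             # ~1e7 chunks
hours_per_error = chunks_per_error / READS_PER_SEC / 3600
errors_per_year = (365 * 24) / hours_per_error

print(f"{hours_per_error:,.0f} hours between read errors")  # ~2,778
print(f"{errors_per_year:.1f} errors per year")             # ~3.2
```

So "over 2,700 hours" and "like 3 per year" both check out, and a slower real-world read rate only stretches the interval further.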
So yes RAID is unnecessary for vaults.
EDIT: By using that extra drive for another vault you will increase the chunks being read and thus your earnings. This would far outweigh any benefit (if any could be found) that RAID would have on earnings.
I played with Storj and Sia over the last year on several Atom computers. After a power cut it takes a long time to get several computers started again… (btw Sia is disgusting for maintenance)
For this reason, I plan to set up a colocation server and use virtual machines on it for SAFE.
I was leaning this way myself, but since I will have an FTTC connection within 12 months and an unlimited data plan for around AUD$90 a month… it makes sense to explore an expandable home setup, as these small machines get better all the time.
But yes, a power outage is the biggest risk from home, which data centres can help mitigate with backup generators and dedicated links.
I got an old UPS with new batteries on my modem, TV and phone, and a new one on my computer. So if we have a power cut of <15 minutes I can keep going without a hiccup. Been through a couple already.
Still, our power is really very reliable, and I have way more downtime due to restarts and rearranging the setup. Even data centres have downtime for servers/instances. They never quote zero downtime, and the cheaper the instance, the greater the chance of some downtime.
It is always best to spread the storage across multiple vaults and have multiple locations for your vaults. Home vaults have the greatest earning potential with near-zero running costs. Data centre instances have the greatest uptime and network speed, but that is offset against the huge (in comparison to home) running costs.
With SAFE there is greater security and anonymity in numbers. The more vaults the better for the network. Personally I will have SBC vaults, my PC vaults (via VMs) and, if profitable at all, data centre instance vaults.
We also have to keep in mind how HDDs work: they use ZBR (zone bit recording) and store more data on the outer tracks, so they're faster when less capacity is used. For the Momentus tested above that means this disk is limited to 40 MB/s when empty. As data is stored, sequential transfer speeds decrease, and most disks show just half the speed when written completely full.
As serious farmers using cheaper spinning disks, sticking to the SAFE ethos of "utilizing unused computing resources" doesn't hold true… as we'll be deliberately under-utilizing these disks to achieve faster access times.
Maybe it's useful, when comparing costs of spinning disks versus SSDs, to halve the capacity of HDDs when doing the cost comparisons.
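The effect of that halving on cost per usable GB can be sketched like this. The prices and capacities are made-up illustrative numbers, not quotes:

```python
# Illustrative only - prices and capacities below are invented examples.
# The point: if you fill a spinning disk only half full to keep its
# sequential speed up, its cost per *usable* GB doubles.
def cost_per_usable_gb(price: float, capacity_gb: float, usable_fraction: float) -> float:
    """Price divided by the capacity you actually intend to use."""
    return price / (capacity_gb * usable_fraction)

# Hypothetical drives at the same price point:
hdd = cost_per_usable_gb(price=100, capacity_gb=4000, usable_fraction=0.5)
ssd = cost_per_usable_gb(price=100, capacity_gb=1000, usable_fraction=1.0)

print(f"HDD: ${hdd:.3f} per usable GB")  # $0.050
print(f"SSD: ${ssd:.3f} per usable GB")  # $0.100
```

Even with the handicap, a big cheap HDD can still come out ahead on raw $/GB; the comparison just gets much closer than the sticker capacities suggest.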
Cable has about half an hour's worth of backup in its systems. I am sure FTTC will have a measure of power backup, even if only 15-30 minutes. Or have you already experienced otherwise?
And of course internet lag will play a bigger role than any difference between inner and outer tracks. And by the time SAFE has been live for a while, SSDs will be in more home computers than they are now.
• Unlike the old network, the fibre-based FTTC network is unlikely to work in the event of a power outage
• Fibre networks require power both at the exchange and in the home to work
• This means all devices connected to the nbn™ network are unlikely to work in a blackout
• If there is no power to the core nbn™ components in the street or our exchanges, there is nothing you can do to restore your connection
• All nbn™ network core components have battery backup in them – however these have a finite life span
So I guess those built-in SBC batteries are indeed worthwhile: they keep the vaults running while the NBN still has a little juice to hold the connection, and if it goes down fully they allow a graceful shutdown of the OS.
I agree with you that video is not the best option, but it was the one I found at that moment. Thanks for the link to the Armbian tests. From what I saw in that video I think the test gives a fair comparison: it shows differences between the products without any significant bottlenecks. Feel free to disagree, and please give specific reasons why you think the test method doesn't give a fair view of performance.
Either way, the test numbers for the Rock64 and also the Odroid-XU4 seem very competitive and promising, from the Armbian forum tests:
Random IO in IOPS (4K read/write) | Sequential IO in MB/sec (1M read/write)
I'm not disagreeing that you may be right. I'm saying that we don't know you are right. Even WD Red drives have premature failures. With a 4-disk RAID 10 setup, you would be virtually bulletproof against hardware failures and read speeds would be essentially doubled.
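A rough sense of the "virtually bulletproof" claim, under assumptions of my own (an illustrative 3% annual failure rate per drive, independent failures, no credit for rebuild/replacement, and ignoring correlated same-batch failures):

```python
# Rough annual-loss comparison: single drive vs 4-disk RAID 10.
# Assumption (mine, illustrative): 3% annual failure probability per
# drive, failures independent, no rebuilds. Real numbers vary by model
# and batch; this only shows the order of magnitude.
p = 0.03                          # annual failure probability per drive

single = p                        # one drive fails -> data gone
pair_loss = p * p                 # a mirror pair is lost only if BOTH drives fail
raid10 = 1 - (1 - pair_loss)**2   # the array dies if either of the two pairs is lost

print(f"single drive : {single:.4f}")   # 0.0300
print(f"RAID 10 (4x) : {raid10:.4f}")   # ~0.0018
```

So the array is over an order of magnitude less likely to lose data in a year; whether that beats simply running four independent vaults on those drives is the earnings question the thread is debating.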
As far as I am aware, both reliability and speed to present a chunk are important factors in being considered a good node that will be promoted, and thus more likely to earn payments. You are quite correct that this may not outweigh the difference from simply putting up another node, but we won't know for sure until the network is live.
Someone else mentioned colocation. You could also rent a bunch of cheap VPSs at ~$3-5/month and send them your own HDDs to be used in them. Many providers are willing to accommodate that, especially with large-quantity orders.
Regardless of the SBC chosen, this would seem relevant:
Back on topic (SBC storage, not filesystem performance): the only reasonable way to compare different boards is to use the same filesystem (ext4, since it's the most robust and not that prone to showing different performance depending on kernel version) and the performance governor, eliminating all background tasks that could negatively impact performance and keeping at least an eye on htop.
If you see one CPU core there being utilized at 100%, you know you are running into a CPU bottleneck and have to take this into account: either by accepting/believing that a certain SBC is simply too weak to deliver good storage performance since it is CPU bottlenecked, or by starting to improve settings, as in Armbian's or my "Active Benchmarking" approach, with benchmarks then ending up with optimized settings. There's a reason "our" images on identical hardware sometimes perform even twice as fast as other SBC distros that don't care about what's relevant.
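The "keep an eye on htop" advice can be automated with a minimal sketch: snapshot Linux's /proc/stat before and after the benchmark and flag any core that was pegged. This is my own illustration, not part of the Armbian tooling; it is Linux-only and uses the /proc/stat field layout from proc(5):

```python
# Sketch: detect per-core CPU saturation between two /proc/stat snapshots.
# Linux-only. Fields per cpuN line (proc(5)): user nice system idle
# iowait irq softirq steal guest guest_nice.

def core_busy_fractions(stat_before: str, stat_after: str) -> dict:
    """Return {core_name: busy fraction} between two /proc/stat snapshots."""
    def cores(text):
        out = {}
        for line in text.splitlines():
            # per-core lines are "cpu0 ...", "cpu1 ..."; skip the aggregate "cpu" line
            if line.startswith("cpu") and line[3:4].isdigit():
                name, *fields = line.split()
                vals = list(map(int, fields))
                out[name] = (sum(vals), vals[3] + vals[4])  # (total, idle + iowait)
        return out

    before, after = cores(stat_before), cores(stat_after)
    busy = {}
    for name in after:
        total = after[name][0] - before[name][0]
        idle = after[name][1] - before[name][1]
        busy[name] = (total - idle) / total if total else 0.0
    return busy
```

Usage would be: read /proc/stat once, run the storage benchmark, read it again, and treat any core above ~0.95 busy as a sign the result is CPU-bound rather than a true storage number.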
Interesting info in this thread. Wanted to add that PCIe card based SSDs have much lower latency than USB3 SSDs, from what I understand. Maybe something to consider.
Beware that system-to-disk latency is different from system-to-network latency.
Yes, one may affect the other, but that is far more likely on a local network than across the internet, as is the case for SAFEnet.