I don’t doubt it. The question is how do you know which one and where to get it?
The focus of the list at https://wiki.debian.org/CheapServerBoxHardware is consumer-ready boards: all boards listed are sold together with a case, and the boards recommended work as-is with Debian (i.e. not requiring a custom-compiled kernel or e.g. an Ubuntu-shipped non-free binary blob).
Correct, Olimex LIME2 boards are built around a 32-bit ARMv7 SoC.
If you want a cheap board that works stably today and will be supported by the upcoming Debian stable release (i.e. for at least 3 more years), then I recommend the Olimex LIME2.
If you want a cheap board that requires either a bleeding-edge OS or “cheating” (e.g. a custom non-mainlined Linux kernel, non-free binary blobs and/or a non-distro-maintained bootloader), but is likely to become stable within the next year - and you are prepared to make a casing for it yourself - then I recommend the Olimex A64.
ARMv5 devices - most boards in the first wave of “desktop-capable” cheap ARM boards - are 32-bit and will not be supported in the upcoming stable Debian release.
ARMv6 devices - mostly only the RPi 1 - are 32-bit and are not (optimally) supported in Debian at all.
ARMv7 devices are 32-bit and will be supported in the upcoming stable Debian release.
ARMv8 devices are 64-bit and will be supported in the upcoming stable Debian release, but are unsupported in the current stable Debian and barely supported in bleeding-edge Debian.
I disagree that 32-bit is problematic in general. Depending on the use case, any board available is problematic, so to discuss this sensibly, please share more details of the concrete use case.
Does anyone know if the specs of the hardware above would be a good fit for a single board with 4 hard drives in a headless setup? In my head I’m thinking of specs like a 1.6-2.0 GHz CPU, USB 3.0 (with USB 3.0 to SATA adapters), either 2 ports plus a USB hub or 4 USB ports for connecting the 4 HDDs, 4 GB of RAM, and gigabit ethernet - overkill?
The specs above may be leaning a little towards next-gen single-board computers.
Low latency seems like something to strive for when it comes to future network hardware optimisation. Love this discussion - I remember discussing hardware with @happybeing in 2014. Having this discussion now gives me so much motivation to think about hardware for the hopefully-soon alpha 3. Does anyone have a general idea of how many hard drives one board can support with low latency and good throughput for about $50-100?
That Olimex A64 looks quite good, but the Raspberry Pi 3 Model B seems almost as good - except for RAM and ethernet - with better connectivity: 4x USB.
I’m intrigued by the Orange Pi - does it fit into a Raspberry Pi case?
Cheap boards are generally built around SoCs optimized for use in phones, i.e. not optimized for multiple parallel fast data pipelines.
For high-speed RAID you need a “server board” rather than a “cheap board”. Price range for server boards is very different.
The most promising (OSHW, ECC memory) relatively cheap ($240) storage-oriented board sort-of-currently available (it is on back-order) is arguably the Helios4: https://kobol.io/helios4/
Thanks, that looks very interesting. It makes sense to go for server boards. It will be exciting to see what load the network will bring depending on how much storage you provide and so on.
I will probably look for something like the Helios4 when SAFE reaches beta; maybe I’ll start experimenting with something cheaper during the alpha 3 test network.
Did some research and found that Rock64 looks quite promising with Open Media Vault (NAS).
Rock64: average 93 MB/s with USB 3.0 and SSD, $59.99-$79.99
Raspberry Pi 3 Model B: average 10.3 MB/s with USB 2.0 and SSD, about $35-40
Banana Pro: average 20 MB/s with SATA and SSD, $60-$65?
UDOO x86 Advanced Plus: 98 MB/s with USB 3.0, SATA, M.2 and SSD, $174
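To put those throughput figures in perspective, here is a rough back-of-envelope estimate (the 1 TB disk size is my own assumption) of how long each board would take to fill a disk at its quoted sequential speed:

```python
# Rough time-to-fill estimate for a 1 TB disk at the sequential
# throughputs quoted above (MB/s figures from this thread).
DISK_TB = 1.0
MB_PER_TB = 1_000_000  # decimal: 1 TB = 1,000,000 MB

boards = {
    "Rock64 (USB 3.0)": 93.0,
    "Raspberry Pi 3B (USB 2.0)": 10.3,
    "Banana Pro (SATA)": 20.0,
    "UDOO x86 Advanced Plus (M.2)": 98.0,
}

for name, mb_per_s in boards.items():
    hours = DISK_TB * MB_PER_TB / mb_per_s / 3600
    print(f"{name}: ~{hours:.1f} h to write 1 TB")
```

At USB 2.0 speeds the Pi 3B needs roughly a day to write a terabyte, while the USB 3.0 and M.2 boards manage it in about three hours - which matters if a vault ever has to be refilled.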
There will probably be much for me to learn about networks and storage in the upcoming alpha 3 test networks.
I’m not sure if a RAID setup is the better choice for a SAFE Vault.
Advantage: staying available when 1 hard drive fails.
Disadvantage: extra hard drives to pay for delivering the same amount of data.
And does it make a difference in what board to choose if one goes for the non RAID solution, but with the same amount of hard disks?
For non-RAID use, I would recommend running 4 independent LIME2 boxes, each with a single disk.
Those recommending boards other than the OSHW boards from Olimex: could you please try to clarify your reasons for that? Otherwise I suspect you pick based on hyped brands rather than technical merits.
…and when you list e.g. speeds, please provide a link to the source.
This is quite unknown territory for me, so I am just looking at different options. Of course I will provide the link.
But it would also be possible to go with, for example, RAID 0 or maybe RAID 10 or similar, to get a larger storage space?
Many things are possible - I believe the better question is what is beneficial.
I dare assume in this forum that these boards are to be used for the SAFE network. In what way is it beneficial to burden cheap boards with computing any form of RAID, as opposed to contributing multiple boards to the global storage pool?
In other words - why introduce local RAID when SAFE implicitly does both spanning (i.e. RAID0) and duplication (i.e. RAID1)?
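To make that point concrete, here is a toy comparison (the 4 TB disk size and the note about network replication are illustrative assumptions, not SAFE specifics):

```python
# Compare the usable capacity of 4 x 4 TB disks offered to a network
# that already replicates every chunk itself. Disk size is an assumed
# example figure.
DISK_TB = 4
DISKS = 4

raid10_local_tb = DISKS // 2 * DISK_TB  # RAID10 halves capacity locally
independent_tb = DISKS * DISK_TB        # 4 separate single-disk vaults

print(f"One RAID10 vault offers the network {raid10_local_tb} TB")
print(f"4 independent vaults offer          {independent_tb} TB")
# Either way the network adds its own duplication on top, so the local
# mirroring in RAID10 buys redundancy the network already provides.
```

Under these assumptions the RAID10 setup donates half the raw capacity of the independent-vault setup for a redundancy benefit the network duplicates anyway.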
If nodes are judged based on reliability and rewarded on speed, something like a RAID 10 setup would make sense to satisfy both those requirements and could lead to more payouts.
True. It will be interesting to see how you evaluate the size of the network(!) speed boost from burdening a cheap board with a) accessing multiple disks over its limited data pipelines and b) doing RAID1 computation - and then set the amount of the reward, ensuring that it is high enough to compensate for the additional complexity in maintenance compared to simply booting up two instances.
Some additional factors (not exhaustive):
- amortization of purchase costs
- maintenance costs
- running costs (power)
- unreliability costs (disk failure/replacement cost + node age loss cost)
- reward factors per different combinations of these criteria
It’s a multidimensional problem.
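As a sketch of how those factors could combine, here is a toy cost model; every price, wattage and failure rate below is a placeholder assumption for illustration only:

```python
# Toy model folding the listed factors into one $/TB/year figure.
# All numbers are illustrative assumptions, not measured prices.
def cost_per_tb_year(board_usd: float, disk_usd: float, disk_tb: float,
                     watts: float, usd_per_kwh: float = 0.15,
                     amortize_years: float = 3.0,
                     annual_failure_rate: float = 0.02) -> float:
    purchase = (board_usd + disk_usd) / amortize_years   # amortized cost
    power = watts / 1000 * 24 * 365 * usd_per_kwh        # running cost
    unreliability = annual_failure_rate * disk_usd       # expected replacement
    return (purchase + power + unreliability) / disk_tb

# e.g. an assumed $45 board with a $100 4 TB disk drawing ~10 W total
print(f"${cost_per_tb_year(45, 100, 4, 10):.2f} per TB per year")
```

With these placeholder inputs the model lands around $16/TB/year; swapping in real purchase prices, measured wattage and local electricity rates is the point of the exercise.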
It will be great to harvest data during the tests based on different configurations and environments, because what works best for solar-powered mobile broadband (if anything) will be different from what works best for an unlimited, reliable mains-powered broadband connection, or for other broadband and power reliability situations.
Thanks, @happybeing, for clarifying the cost equation being even more complex.
Personally I prioritize OSHW-certified hardware, which reduces the options that excite me.
I recommend https://linux-sunxi.org/Sunxi_devices_as_NAS - it is written for boards based on Allwinner SoC but is a good read also if your interest is in other (less free but some better performing) boards - see esp. its link to Armbian tests https://forum.armbian.com/topic/1925-some-storage-benchmarks-on-sbcs/?tab=comments#comment-15265
I have now watched the video linked by @tobbetj - thanks for sharing, but I am sceptical of the methods applied by the author of that video (and am a bit annoyed by video as a format for discussion: it is not easy to quote or link to details, and not possible to quickly “skim through” - I have to consume it at the pace the author chose to deliver, including repetitive “now I am back again on my Windows desktop…” comments). I dearly recommend reading the material written by tkaiser from Armbian (like the one referenced above), which seems more in-depth to me.
Exactly. SAFE is the RAID. I would expect only archive vaults to even consider a RAID setup. Obviously others will do it anyhow, but as you say it’s not needed, has the potential to (sometimes) slow down the process, and raises the cost of a vault unnecessarily.
It’s generally faster to have any one vault on a single drive.
The greatest latency will be the distance from your computer to the average of the section’s computers.
When was the last time you had a disk fault? RAID is generally for large storage: having more volume storage than a single disk’s size, at good speeds, with hot-replaceable drives when one does show a fault. A typical disk is not expected to have a single non-recoverable fault over a year or more of use. So RAID is an expense (power + extra drives) that is not warranted for SAFE. SAFE itself has all the benefits of RAID, so personal RAIDed vaults are not needed for data security. Your earnings will be more affected by internet link dropouts and power issues than by any disk drive issues.
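The disk-fault argument can be put in rough numbers. Assuming an annualized failure rate of about 2% per drive (an assumption, roughly in line with published large-fleet statistics), the chance of any failure in a small setup stays low:

```python
# Probability that at least one disk in a small vault setup fails
# within a year, assuming an annualized failure rate (AFR) of ~2%
# per disk -- an assumed figure, not a measurement.
AFR = 0.02

def p_any_failure(n_disks: int, afr: float = AFR) -> float:
    """P(at least one of n independent disks fails in a year)."""
    return 1 - (1 - afr) ** n_disks

for n in (1, 2, 4):
    print(f"{n} disk(s): {p_any_failure(n):.1%} chance of a failure/year")
```

Even with four drives the yearly failure probability stays under 8% on these assumptions, so the expected cost of occasionally replacing a disk is likely smaller than the reward lost to link dropouts and power outages.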
I bought an old 48 port POE switch to play around with. My thoughts are 1 board + 1 HDD x 48 with a short Cat6 supplying network and power.
A single point of power failure (as is FTTC) but less chance of vault downgrade due to other factors.
Will your POE switch supply enough power? That’s my main concern.
The backplane bandwidth of the switch is a secondary concern - but because it’s for SAFE, and limited by your internet link speed, I doubt the switch’s bandwidth will be an issue.
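The power concern can be sanity-checked with simple arithmetic. The per-node draw and the 370 W total budget below are assumptions (802.3af allows ~15.4 W per port, but many older 48-port switches cap the total PoE budget well below 48 x 15.4 W):

```python
# Sanity check of the PoE power budget for 48 board+HDD nodes.
# Both figures below are assumptions for illustration.
PORTS = 48
PER_NODE_W = 10.0        # assumed draw: small ARM board + spun-up HDD
SWITCH_BUDGET_W = 370.0  # assumed total PoE budget of an older switch

needed_w = PORTS * PER_NODE_W
print(f"Needed: {needed_w:.0f} W, switch budget: {SWITCH_BUDGET_W:.0f} W")
if needed_w > SWITCH_BUDGET_W:
    print("Over budget: not every port can power a node simultaneously")
```

If the real budget is in that range, 3.5" drives on all 48 ports would likely exceed it; lower-draw 2.5" drives, staggered spin-up, or a higher-budget switch would change the picture.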
I would have thought having at least 2 switches, perhaps 2x 24-port switches, might have been a better choice, since it takes time to replace a switch if it’s your only one.
Other thoughts: that’s a hell of a setup you are planning.