Best Single-Board Computers Under $200


#62

It supports 802.3af PoE,

which is fine for network devices that require up to around 13 watts of electrical power.

The Enterasys SecureStack C3G124-48P has an operational power consumption of 583 watts, which is not too bad if maxing out the setup.

I presume a PoE board like the MOD-POE-V2 would be the interface from switch to board.

The maximum output is close to 10W, which means ~800mA in the 12V output mode and ~2000mA in the 5V output mode.
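As a rough sanity check, the figures quoted above can be plugged into a quick budget calculation. This is only a sketch: the 15.4 W per-port figure is the 802.3af sourcing maximum, and the 583 W figure is the switch's quoted total consumption rather than its dedicated PoE budget, so treat the port count as an upper bound.

```python
# Rough PoE budget sketch using the figures quoted above.
SWITCH_BUDGET_W = 583    # C3G124-48P quoted operational consumption (upper bound, not pure PoE budget)
PER_PORT_W = 15.4        # 802.3af maximum sourced per port
MODULE_OUT_W = 10.0      # MOD-POE-V2 maximum output quoted above

# How many ports could draw full 802.3af power at once
max_full_power_ports = int(SWITCH_BUDGET_W // PER_PORT_W)
print(f"Full-power ports: {max_full_power_ports} of 48")

# Current available from the MOD-POE-V2 in each output mode
print(f"12V mode: {MODULE_OUT_W / 12:.2f} A, 5V mode: {MODULE_OUT_W / 5:.2f} A")
```

So even at worst-case draw, well over half the 48 ports could power a board each, and the ~0.83 A at 12 V matches the ~800 mA quoted above.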

Of course if this setup is feasible, it would be a gradual buildup of capacity :slight_smile:


#63

I simply disagree with this sentiment. You are correct that the RAID setup may not be cost effective. It’s hard to tell without a network to test. However, a RAID 10 setup would be both reliable and faster than a single drive.

Yes, the network provides the data security, but I’m not talking about that. I’m talking about the reliability of your node playing into how high up the tier it goes, and may affect ability to earn payouts. It’d have to be tested as to what is the most cost effective.


#64

So you are going to increase reliability from figures like a 1-in-10^14 to a 1-in-10^15 bit error rate. SAFE isn’t reading your drives the way a local area network server is read. You might get 1 chunk read every minute or hour.

A chunk is up to 10^7 bits. So in the worst case a single drive is going to have an unrecoverable error in more than 1x10^7 chunks read, assuming the read error occurs in the chunk storage and not in other areas on the disk.

As an experienced engineer and sysadmin I could not justify using RAID for these issues in this application. Just buy a better drive.

To put it in perspective, at one second per chunk read, which is considerably faster than expected average chunk read rate, it will be over 2,700 hours between read errors which is like 3 per year maximum. Your vault is going to be churned well before that. And 1 error every 4 months isn’t going to affect any reliability measure.
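The arithmetic behind those numbers can be checked directly, using the BER and chunk-size figures from the posts above:

```python
# Back-of-envelope check of the unrecoverable-read-error argument.
BER = 1e-14            # 1 bit error per 10^14 bits read (typical consumer HDD spec)
CHUNK_BITS = 1e7       # a chunk is up to ~10^7 bits

chunks_per_error = 1 / (BER * CHUNK_BITS)      # ~1e7 chunk reads per error
hours_per_error = chunks_per_error / 3600      # at 1 chunk read per second
errors_per_year = 365 * 24 / hours_per_error

print(f"~{chunks_per_error:.0e} chunk reads per error")
print(f"~{hours_per_error:.0f} hours between errors")
print(f"~{errors_per_year:.1f} errors per year, worst case")
```

That gives roughly 2,800 hours between errors and about 3 errors per year at the (pessimistic) one-chunk-per-second read rate.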

So yes RAID is unnecessary for vaults.

EDIT: By using that extra drive for another vault you will increase the chunks being read and thus your earnings. This would far outweigh any benefits (if any could be found) that RAID would have on earnings.


#65

I played with Storj and Sia over the last year on several Atom computers. After a power cut it takes a long time to get several computers started again… (btw Sia is disgusting to maintain)

For this reason, I plan to set up a colocation server and use virtual machines on it for SAFE.


#66

I was trending this way myself, but since I will have an FTTC connection within 12 months and an unlimited data plan for around AUD$90 a month… it makes sense to explore an expandable home setup, as these small machines get better all the time.

But yes, power outage is the biggest risk from home which data-centers can help mitigate with backup generators and dedicated links.


#67

Yes, home vaults really will fly with this.

I’ve got an old UPS with new batteries on my modem, TV and phone, and a new one on my computer. So if we have a power cut of <15 minutes I can keep going without a hiccup. Been through a couple already. :slight_smile:

Still, our power really is very reliable, and I have far more downtime due to restarts and rearranging the setup. Even data centres have downtime for servers/instances. They never quote zero downtime, and the cheaper the instance the greater the chance of some downtime.

It is always best to spread the storage across multiple vaults and have multiple locations for your vaults. The home vaults have the greatest earning potential with near-zero costs of running the vaults. And data centre instances have the greatest uptime and network speed, but that is offset against the huge (in comparison to home) running costs.

With SAFE there is greater security and anonymity in numbers. The more vaults the better for the network. Personally I will have SBC vaults, my PC vaults (via VMs) and if profitable at all, data centre instances vaults.


#68

Olimex LIME2 (and cheaper LIME) boards support Lithium Polymer battery backup, as a very simple UPS :slight_smile:
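For a sense of how long such a LiPo pack keeps a board alive, here is a rough estimate. The battery size, board draw, and efficiency figures below are illustrative assumptions, not LIME2 specifications:

```python
# Rough LiPo-backup runtime estimate; all figures are assumptions.
BATTERY_WH = 3.7 * 1.4     # e.g. a 3.7 V / 1400 mAh pack -> ~5.2 Wh
BOARD_DRAW_W = 2.5         # SBC plus a small drive, lightly loaded
EFFICIENCY = 0.85          # regulator/charging losses

runtime_h = BATTERY_WH * EFFICIENCY / BOARD_DRAW_W
print(f"~{runtime_h:.1f} hours of backup")
```

Somewhere under two hours on a pack that size: plenty to ride out a short outage or shut down gracefully.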


#69

You are right, but at least I intend to put some of my profit back into SAFE by providing maximum stability even if I am at a loss…

Of course I will also start vaults from my home :slight_smile:


#70

Unfortunately FTTC goes down with the grid, unless they plan on having backup generators at the exchange in the future.


#71

Really no point in the situation I’ll be in: Fibre to the Curb in Australia goes down with the grid, as will the PoE switch and broadband router.


#72

We also have to keep in mind how HDDs work: they use ZBR (zone bit recording) and store more data on the outer tracks, so they’re faster when less capacity is used. For the Momentus tested above, that means this disk is limited to 40 MB/s when empty. As data is stored, sequential transfer speeds decrease, and most disks show just half the speed when written completely full.

As serious farmers, when using cheaper spinning disks, sticking to the SAFE ethos of ‘utilizing unused computing resources’ doesn’t hold true… as we’ll be deliberately under-utilizing these disks to achieve faster access times.

Maybe it’s useful, when comparing costs of spinning disks versus SSDs, to halve the capacity of HDDs when doing the cost comparisons.
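That adjustment is easy to express as cost per usable GB. The prices and capacities below are made-up placeholders, purely to illustrate the halving rule:

```python
# Hypothetical $/usable-GB comparison; prices and capacities are placeholders.
def cost_per_usable_gb(price_usd, capacity_gb, usable_fraction=1.0):
    return price_usd / (capacity_gb * usable_fraction)

hdd = cost_per_usable_gb(50.0, 2000, usable_fraction=0.5)  # 2 TB HDD, half used for speed
ssd = cost_per_usable_gb(100.0, 500)                       # 500 GB SSD, fully usable

print(f"HDD: ${hdd:.3f}/GB usable vs SSD: ${ssd:.3f}/GB usable")
```

Even with capacity halved, spinning disks can still come out well ahead per usable GB at these example prices; the point is to compare on the usable fraction, not the sticker capacity.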


#73

Cable has about half an hour’s worth of backup in their systems. I am sure FTTC will have a measure of power backup, even if only 15–30 minutes. Or have you already experienced otherwise?

And of course the internet lag time will play a bigger role than any differences between inner and outer tracks. And by the time SAFE has been live for a while, SSDs will be in more home computers than they are now.


#74

No, my area is planned availability early 2019.

The nbn™ FTTC network in a power outage

• Unlike the old network, the fibre-based FTTC network is unlikely to work in the event of a power outage
• Fibre networks require power both at the exchange and in the home to work
• This means all devices connected to the nbn™ network are unlikely to work in a blackout
• If there is no power to the core nbn components in the street or our exchanges, there is nothing you can do to restore your connection

All nbn™ network core components have battery backup in them – however these have a finite life span

So I guess those built-in SBC batteries are indeed worthwhile: they keep the vaults running while the NBN has a little juice left to keep the connection up, and if it goes down fully, they allow for a graceful shutdown of the OS.


#75

I agree with you that video is not the best option, but it was the one I found at that moment. Thanks for the link to the Armbian tests. From what I saw in that video, I think the test gives a fair comparison: it shows differences between the products without too many significant bottlenecks. Feel free to disagree, and please give specific reasons why you think the test method doesn’t give a fair view of performance.

Either way, the test numbers for the Rock64 and also the Odroid-XU4 seem very competitive and promising, from the Armbian forum tests:

                          Random IO in IOPS    Sequential IO in MB/sec
                          4K read/write        1M read/write

    ODROID-XU4 (USB3/UAS)     4637 / 5126          262 / 282
    ROCK64 (USB3/UAS)         4619 / 5972          311 / 297


#76

I’m not disagreeing that you may be right. I’m saying that we don’t know you are right. Even WD Red drives have premature failures. With a 4 disk RAID 10 setup, you would be virtually bulletproof against hardware failures and read speeds would be essentially doubled.

As far as I am aware, both reliability and speed to present a chunk are important factors in being considered a good node that will be promoted, and thus more likely to earn payments. You are quite correct that this may not outweigh the difference from simply putting up another node, but we won’t know for sure until the network is live.

Someone else mentioned colocation. You could also rent a bunch of cheap VPSs at ~$3–5/month and send them your own HDDs to be used in them. Many providers are willing to accommodate that, especially with large-quantity orders.


#77

Regardless of the SBC chosen, this would seem relevant

Back on topic (SBC storage and not filesystem performance): the only reasonable way to compare different boards is to use the same filesystem (ext4, since it is the most robust and not that prone to showing different performance depending on kernel version) and the performance governor, eliminating all background tasks that could negatively impact performance, and keeping at least an eye on htop.

If you see one CPU core being utilized at 100%, you know you’ve run into a CPU bottleneck and have to take this into account: either by accepting that a certain SBC is simply too weak to deliver good storage performance because it is CPU-bottlenecked, or by starting to improve settings, as with Armbian’s or my ‘Active Benchmarking’ approach of benchmarking and then ending up with optimized settings --> there’s a reason ‘our’ images on identical hardware sometimes perform even twice as fast as other SBC distros that don’t care about what’s relevant.


#78

Interesting info in this thread. Wanted to add that PCIe-card-based SSDs have much lower latency than USB3 SSDs – from what I understand. Maybe something to consider.


#79

I have used DietPi, which is also Debian based and very minimal.


#80

How big is the difference in latency? Any source to share?


#81

Beware that system-to-disk latency is different from system-to-network latency.
Yes, one may affect the other, but that is far more likely on a local network than across the internet, as is the case for SAFEnet.