Advice wanted on node storage filesystem

Thanks for all that :slight_smile:
I just happen to have a 120 GB SSD sitting on a Pi that hasn’t been fired up for months. I’ll attach that to the last spare SATA socket in this box and reformat it to ext4.

Exploring btrfs can wait until later.
I’m not running baby-fleming as such any more. I’ve been running `cargo run --example network_split` from my local safe_network repo dir.
This spins up 30+ nodes across two sections and does some putting and getting to check all is well. Then I can start throwing files at it. So far I can consistently crash it by putting a 2 GB directory of images, even with the changes suggested by @joshuef in Pre-Dev-Update Thread! Yay! :D - #4896 by joshuef

However, if I give it several ~1 GB dirs sequentially it handles them no bother. It does seem to stall if I try to put files which have already been sent to the network, though. I want to eliminate btrfs from my enquiries before I waste the devs’ time with more queries.
So once I get this wee SSD in the box, I’ll run everything again on ext4 and hopefully produce a sensible bug report.

It would be interesting but would it be of much real world value? How many folk other than me and a couple of others are likely to run lots of nodes on one computer post-launch? Once this apparent bug I am seeing is squashed or otherwise explained away, perhaps we should concentrate on performance for one node per box?
