Would it be an SSD killer?

I was running a diagnostic tool and realized that I was being extremely reckless with writes to the drive, considering that both drives in my laptop are SSDs.
I was wondering if vaults would dramatically reduce the life expectancy of SSDs, and if they should be limited to traditional hard drives.


This doesn’t answer your question, but since hard drives do need maintenance, it comes close.
Best Farming Software (Spinrite)

Actually, there is much doubt about the actual effectiveness of Spinrite, and most forensic/data recovery experts despise it as it is useless if not dangerous.

And there is no SSD maintenance that can extend its life. The more writes, the less life.

I’ve been using HDDs since 10MB (megabytes) was the starting point in a desktop PC. Naturally I’ve had a few fail on me in that time. Maybe 75% of the time I upgraded for size/performance and 25% due to disk faults.

I bought my first SSD (Samsung) six or more years ago and am now on my second SSD (also Samsung) on that same machine. The first is still flawless, but I upgraded to get more storage. I’ve never worried about how I use them, and I’m a fairly heavy user (not 8x365) but more than average home user of this machine and drives.

So I’ve never had an SSD failure and in fact, while I’ve now read lots about how they might fail and how to extend their life, I’ve never heard from anyone who has said their SSD failed. Not saying they don’t, but in six years of paying attention to the technology that surprises me.

Now obviously leaving the drive on 24x7x365 is different, but the question will be how much writing goes on, and how to mitigate if this is deemed an issue.

So my thought is: it may be a problem for a farming rig, especially if using cheap drives, but not necessarily, and something we should examine carefully when we have some real rigs running on the beta perhaps.


I bought a couple of OCZ Vertex 2 drives seven years ago to run in a RAID 0 array. One failed for some reason a few years back, but the other still runs and software analysis gives 0 errors on it… I bought another Kingston 2 years back but it already has a couple of errors… I ran them separately for 2 years but lately decided to put them in RAID 0 (hardware RAID) again. I know, tricky, but it has run flawlessly so far 🙂


Did the diagnostic tool identify that the vault was being “extremely reckless”, or did you realise that the diagnostic tool itself was the problem?

My understanding is that a vault writes a chunk once and then only reads it thereafter, until you reset the vault. So I’d expect a vault to do very few writes. Write once, read many is exactly what SSDs excel at. Even resetting them once a day you’d get more than 2000 days from a cheap drive.

Oh, but there is a lot of maintenance that can be done to extend the life; it just depends on how you use the SSD as to what kind of maintenance will help. Most modern SSDs have one form of maintenance built in: wear levelling, which uses the least-written block as the next one to write to.

One form of “maintenance” that can help is to keep the drive clean of unneeded files. The more spare space, the more writes can be done. For instance, if you keep the drive 1/2 full then you double the writes that can be done compared to a drive that is 75% full, because there are double the available blocks to spread the writes over.

For vaults, assume an SSD whose flash blocks can be written 2000 times before failure, and that only the vault is stored on the drive (this applies to USB memory sticks too).

If you used 90% of the drive for the vault(s) and reset them all at once, then you’d be able to fill them up 2222 times. If you used 50%, then it’s 4000 times.

But in fact, if you reset before the vaults were ever filled, it’s a lot more times.
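The fill-count arithmetic above can be sketched roughly like this (the 2000-write endurance figure and the 90%/50% vault fractions are the assumptions from this post, and an ideal wear-levelling controller is assumed):

```python
def max_fills(write_endurance: int, fraction_used: float) -> int:
    """With wear levelling spreading writes across every block, filling
    `fraction_used` of the drive consumes that fraction of one whole-drive
    write cycle, so the number of complete vault fills is roughly
    endurance / fraction_used."""
    return int(write_endurance / fraction_used)

print(max_fills(2000, 0.90))  # -> 2222
print(max_fills(2000, 0.50))  # -> 4000
```

Real controllers also suffer some write amplification, so these are best-case numbers.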

The other situation is when you reset vaults at differing times, in an attempt to keep particular vaults fresh and maximise earnings by resetting vaults whose earnings are very low. The calculation of how many writes then depends on the size of the vaults and how often you reset individual ones. But in this case you would generally be running 24x7, since identifying underperforming vaults would require a reasonable time period (many days). Thus the drive would likely become too small before it dies from too many writes.


Vaults are write once, read many, which is low usage for SSDs, and an SSD would easily outlast a hard drive, which wears from reads, writes, and hours powered on.

And the above assumes that caching is still to be done in memory.


No, I ran the diagnostic tool on my laptop to check the overall health of my drives. With it I realized that I was stressing one too much by running a full Bitcoin node: its life expectancy had dropped from a couple of years to only a month. It was alarming.
So naturally it sparked my worry about what would happen with a vault running 24/7 on an SSD drive.
Even though theoretically it is write once, churn can and will happen, and AFAIK it will rewrite all the space with new data every time we get disconnected.

Ahh OK, yes that is brutal on drives with all the read/write.

So you need to determine how often you expect to be disconnected.

Also, datachains may be used to allow vaults to continue after a disconnect under certain circumstances.

I always understood that a node had to be offline for a certain time before needing to restart (before chains). How much time and how it is determined I am uncertain.

It will take time to fill up vaults, so if you are disconnected and reset multiple times a day, your vaults will only get partly filled. If you are only disconnected for moments at a time, it’s unlikely that would trigger a reset of the vault.

Keeping 1/2 the drive space unallocated, with 5 disconnects/resets a day (and somehow filling all vaults between resets), would see your drive last more than 800 days even on a cheap SSD using NAND that can only be written 2000 times. It would actually last longer, since most blocks outlast the minimum spec.
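That lifetime estimate works out like this (same assumptions as above: 2000-write NAND, half the drive unallocated, 5 full fill/reset cycles a day, ideal wear levelling):

```python
def lifetime_days(write_endurance: int, fraction_used: float,
                  fills_per_day: float) -> float:
    """Worst-case days until the NAND endurance is exhausted: each fill
    writes `fraction_used` of the drive's capacity, and wear levelling
    spreads those writes over all blocks."""
    fills = write_endurance / fraction_used
    return fills / fills_per_day

print(lifetime_days(2000, 0.5, 5))  # -> 800.0
```

In practice the vaults would rarely fill completely between resets, so each cycle writes less than a full fill and the drive lasts proportionally longer.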


Well, in shitty countries with spotty connectivity… getting disconnected is not the exception but the norm.
Let me tell you that where I am now I randomly get latencies up to 3000ms to google.
And if we go to mobile networks, it is even worse.

What does a Bitcoin node do to write so much? Isn’t it only a block per 10 minutes? That’s similar to how a SAFE node writes every time it gets a chunk to store.

Every bitcoin full node writes / stores every transaction. A vault only stores the Safecoin it is looking after (or data chunks also of course).

Yes, every transaction is stored in the block that is written. Do you mean that every pending transaction is written separately to disk as it comes in, then collected and stored in the block once it is found? I would have thought the pending stuff is kept in RAM.

I’m only referring to transactions that are written to the blockchain. You were looking for a comparison of like with like, hence I emphasise Bitcoin v Safecoin (and add general storage separately).

Where a blockchain is also used to store data, you can add that in too and make the comparison with general storage on SAFE.

Either way, each SAFE vault has much less data to write to disk than a bitcoin full node supporting equivalent functionality.

This is what I am trying to clear up. Even if the blockchain were written in full twice (once for incoming transactions in the mempool, once for the block containing them), it would not stress an SSD much, just like writing any other data that is stored. And since it is called the mempool, I think it is only written once. So where does this massive write load that kills the SSD so fast come from?

In my understanding it just fills the disk up and then you must upgrade to a larger SSD. No massive write-and-delete going on.
