SAFE Storage economics - one-time fee, forever service

I am having some doubts after reading that a one-time purchase would grant a user a fixed amount of storage for eternity. I fail to see how this would make economic sense: providing storage incurs an ongoing cost (hardware, electricity…), which cannot possibly be covered by a one-time fee of any reasonable amount (short of a sum large enough that it could be invested, and the proceeds pay for the ongoing service).

I understand that storage keeps getting cheaper over time, but in order to compensate for the lack of recurring revenue, the cost would need to keep converging toward zero (e.g. halving every year) at a consistent rate, forever. Nobody can predict that this will be the case.

I believe that to be viable, SAFE storage must move from a purchase model to a rental model. Otherwise, the cumulative weight of storage that has been purchased over time will eventually offset new revenues, and SAFE storage providers will abandon the network. Or the cost of new storage will become prohibitive (in order to pay for the existing one), and new users will balk and abandon the network (or switch to a newer, younger network that does not have the same liabilities).

This rental model, by the way, would also be the only way to keep some degree of efficiency in the network, letting old data that is no longer needed or wanted simply expire.

Any thoughts?


It is not rewritable storage, so when the person wants to store more they have to pay for the PUTs

… FTFY


Thanks, Neo!

Do SAFE users get money back if they release storage space? Or, in other words, is it cheaper for them to replace existing data with new data, or does it cost the same as storing the new data side by side with the old?

No refunds. Data is never deleted and there is no way to free up space for data that people no longer want stored. They can “give up” access to it (a bit like the “delete” on a hard disk - the data still remains, but the entry for where it is on the disk/safenetwork gets wiped), but it won’t actually free up space. The chunks remain.

Thank you happybeing!

Then my initial question and concern are still valid. How is the network going to pay for the ongoing cost of maintaining all this data? The economics of this look a lot like a pyramid scheme, or a Detroit car manufacturer’s pension plan…

Don’t get me wrong, I love the MaidSafe idea, and I am an investor. I do want this thing to succeed. I just see this one point as a potential fatal flaw, that seems relatively easy to fix.


These costs reduce exponentially, so the cumulative cost curve approaches an asymptote. In other words, there’s a maximum cost to storing a chunk “infinitely”.
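To make the asymptote claim concrete, here is the standard geometric-series argument (my own illustration, not the network’s actual pricing; it only holds if the per-period cost really does keep falling by a constant factor, which is exactly the assumption being questioned above):

```latex
% Total cost of keeping one chunk available forever, assuming the
% per-period maintenance cost c_0 shrinks by a constant factor r < 1:
\[
  C_\infty = \sum_{t=0}^{\infty} c_0 \, r^{t} = \frac{c_0}{1 - r}
\]
% Example: if the cost halves every period (r = 0.5), then C_inf = 2 c_0,
% i.e. keeping the chunk forever costs only twice the first period's cost.
```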

I understand your concern, and I admit that I can’t provide you with proof that it isn’t a real problem. My confidence has been built up over 18 months of discussion on the forum and a good deal of reading before that, but I could certainly be wrong. This is stuff that has never been done before, so it is a gamble - but on the other hand, everything is changing right now, so even the status quo is IMO a gamble. Anyway…

I can summarise the main reasons, though, and they might point you to further research, as these topics have been discussed a lot on the forum:

  • storage technology has continued to advance at an exponential (or similar) rate for decades, and we can expect that to continue over the longer term, though perhaps with pauses and bursts, of course.
  • de-duplication means that we expect the network to end up with a surplus of payments from people who are ultimately storing the same data. For example, imagine backing up your whole system: all those operating system files from Windows and Linux and Mac users - many terabytes from millions of users being stored just once (well, four to six times per file in practice). Then there are all those CDs and DVDs we purchased separately, or films people downloaded multiple times. Everyone pays for their storage, but the costs to the network flatten out once the first copy is stored.
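To illustrate the de-duplication point above, here is a minimal content-addressed store sketch in Python. It is not MaidSafe code; `ChunkStore`, the fixed `REPLICATION` factor and the flat PUT price are assumptions for illustration. The point is that every uploader pays, but identical chunks are only stored (and replicated) once:

```python
import hashlib

REPLICATION = 4  # assumed copies kept per unique chunk ("four to six" above)

class ChunkStore:
    """Toy content-addressed store: identical chunks are stored only once."""

    def __init__(self):
        self.chunks = {}       # chunk hash -> data (stored once)
        self.payments = 0      # every PUT is paid for
        self.stored_bytes = 0  # storage the network actually has to maintain

    def put(self, data: bytes, price_per_put: int = 1) -> str:
        key = hashlib.sha256(data).hexdigest()   # content address
        self.payments += price_per_put           # uploader pays regardless
        if key not in self.chunks:               # only the first copy costs storage
            self.chunks[key] = data
            self.stored_bytes += len(data) * REPLICATION
        return key

store = ChunkStore()
chunk = b"a very popular chunk"
for _ in range(1000):            # 1000 users upload the same data
    store.put(chunk)
print(store.payments)            # 1000 payments collected
print(len(store.chunks))         # but only 1 unique chunk to maintain
```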

Hope that helps. :slightly_smiling:


Wasn’t the idea just to let the first uploader pay for data that gets deduplicated?

That was floated and I think @dirvine liked the idea, but it isn’t my understanding that this is how it will be implemented. I’m not certain though.

This thread might also be helpful.

No, that was rejected because it then allowed people to gain knowledge of what others have uploaded. While that knowledge is small it can still be damaging. Every bit of knowledge you allow to be gained from “meta” information can build profiles etc.

Thank you, dyamanaka,

I read through the thread, and it did not change my mind one bit. In fact, many posters who share my opinion on this topic came up with additional arguments that I had not yet fully considered. Anyway, if the cost to the user does not reflect the true cost to the network, then the network is doomed. It’s like price controls in the old USSR: you can pretend that a stick of butter only costs 10 cents, but if it really costs $1 to produce, then there simply won’t be any butter for sale, and people will spend their lives standing in line in front of empty stores.

It’s a real shame that such a good concept would be doomed in this way. It will take a fork, or a rewrite (maybe using something like Ethereum or Tendermint as a management and payment platform), to get it right…


This is not entirely true, and that is the beauty of the system. Some do not consider all the dynamics at once. I did some preliminary simulations many months ago that show that, unlike fiat, the system balances itself.

For instance, one dynamic is that as coins are given out for farming, they will find their way into two piles. One is “keep it for later” (hoarding) and the other is spending them on PUTs in the near future (even if the coins change hands at an exchange).

Another is that many PUTs never result in more than one GET, if any. It could be shown that a good proportion of current immutable storage is backups that are never accessed, or private files that are rarely accessed. Movies that are accessed often will end up having most accesses satisfied by caching (no vault GETs). And who watches old movies, like Leslie Nielsen’s “Forbidden Planet”, or any other old movie? Very few, really.
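The caching point can be sketched with a toy LRU cache. This is not the actual SAFE caching mechanism; it is just an illustration of why hot chunks generate almost no vault GETs:

```python
from collections import OrderedDict

class RelayCache:
    """Toy LRU cache at a relay node: popular chunks are served from the
    cache, so the vault that actually holds them sees far fewer GETs."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.vault_gets = 0      # how often we had to go all the way to a vault

    def get(self, key: str, vault: dict) -> bytes:
        if key in self.cache:
            self.cache.move_to_end(key)      # cache hit, vault untouched
            return self.cache[key]
        self.vault_gets += 1                 # cache miss, fetch from the vault
        data = vault[key]
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data

vault = {"popular-chunk": b"movie data"}
relay = RelayCache()
for _ in range(10_000):
    relay.get("popular-chunk", vault)
print(relay.vault_gets)  # 1 -- only the first request ever reached the vault
```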

Another is dedup. That new popular video may get uploaded hundreds or thousands of times, but only one store actually occurs, so that popular video really earns the network hundreds or thousands of times the coins it would normally take to store that amount of data.

Another is that storage cost has been halving roughly every 18 months (about a tenfold drop every 5 years) for the last three decades, and solid-state storage is about to speed that up to a halving every year or so. And typically data becomes less used over time. So the ongoing cost that has to be covered by the payment for each stored chunk keeps shrinking.
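As a rough back-of-envelope check on those figures (my own arithmetic, not anything from the team):

```python
# Halving every 18 months really is roughly "10 times in 5 years":
years = 5
halvings = years * 12 / 18             # ~3.33 halvings in 5 years
print(2 ** halvings)                   # ~10.1x reduction

# Cumulative upkeep of one chunk "forever", if the yearly cost starts at
# 1.0 (arbitrary unit) and keeps falling at that same rate:
yearly_decay = 0.5 ** (12 / 18)        # per-year cost multiplier (~0.63)
total = sum(yearly_decay ** t for t in range(200))   # 200 years ~ "forever"
print(total)                           # ~2.7x the first year's cost
```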

Another is the buffer the coin supply provides, allowing time for farmed coins to be spent and for hoarded coins to eventually be used.

Another is that as the number of issued coins increases, the success rate of coin issuance on farming attempts decreases.

As the success rate of farming attempts drops, the fiat value of the coin will rise, and so will the value of the coins that are farmed. It is expected that this will at least make up for the reduced coin issuance, but experience shows that the fiat value typically rises faster than the farming scarcity does.
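A toy model of that issuance feedback (purely illustrative; the real farming-rate and PUT-cost algorithms are different and were still being tuned) could look like this: the more of the total coin supply already exists, the less likely a farming attempt is to mint a new coin.

```python
import random

TOTAL_SUPPLY = 2 ** 32        # illustrative cap, used here only as an assumption

def farming_attempt_succeeds(issued: int) -> bool:
    """Toy issuance rule: success probability shrinks as more coins exist."""
    p_success = 1.0 - issued / TOTAL_SUPPLY
    return random.random() < p_success

issued = int(0.9 * TOTAL_SUPPLY)   # suppose 90% of coins are already issued
attempts = 100_000
wins = sum(farming_attempt_succeeds(issued) for _ in range(attempts))
print(wins / attempts)             # ~0.10 -> scarcity throttles new issuance
```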

And a few more dynamics are there too.

As you can see, it is a very dynamic economic system, and a number of balancing effects occur that cannot and do not exist in that USSR example. Which, by the way, was not as simple as that: since everything else was kept the same, the butter still only cost about the same to make as it sold for, and price controls were not the real reason their economics failed.

Obviously the algorithms for PUT cost and farming rate have to be reasonable, but they don’t have to be perfect.


Do you by any chance know what is going to happen if users send data to each other (as shown in the very early “lifestuff” demo video)?

Also, let’s not forget SAFE coin value :slight_smile:

If you want to send a video to a friend, you send the datamap (or a link to the datamap) of the video. You cannot copy a file to another person’s account; the “copy” is just the datamap, and the chunks are not PUT again.

Messages use SD (structured data) objects for small messages and a datamap for large data/messages. SDs cost nothing to rewrite.
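A rough sketch of why sharing costs nothing extra (hypothetical types; the real datamap structure and messaging API are different): the file’s chunks already exist on the network, and all that travels to the recipient is the small datamap that says which chunks to fetch and how to decrypt them.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataMap:
    """Hypothetical stand-in for a SAFE datamap: pointers to already-stored
    chunks plus the keys needed to reassemble/decrypt the file."""
    chunk_addresses: List[str]
    decryption_keys: List[bytes]

def share_file(datamap: DataMap, recipient_inbox: list) -> None:
    # Sending the datamap (or a link to it) is a small message;
    # no chunk is PUT again, so no extra storage is purchased.
    recipient_inbox.append(datamap)

inbox: list = []
video = DataMap(chunk_addresses=["addr-1", "addr-2"],
                decryption_keys=[b"key-1", b"key-2"])
share_file(video, inbox)
print(len(inbox))   # 1 -- the recipient now has access without any new PUTs
```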

Also I believe that archive nodes need to be figured out before we can claim long-term viability of the Network…


So, just to clarify: practically speaking, giving someone access to a file won’t cost additional safecoins? Neither the sender nor the receiver pays?
