Safenetwork sustainability concerns - bandwidth has an ongoing cost, yet Safenetwork is a pay-once, benefit-forever model


Look @foreverjoyful, here is the solution to your problem.

You want to change Maidsafe’s mind to implement something that changes some of the fundamentals of SAFE and I doubt they will consider it.

So a solution for you (and Anders)

Both of you can implement your ideas as an APP. Users who want rental, and to pay for bandwidth as they browse, can use your APP to upload, and the APP handles the rental. Your APP sits as a layer above the client code and charges the user for the bandwidth they use.

All these payments go to your APP, and your APP pays the network for what is uploaded. Then you can actually profit from all the proceeds, and if the network needs more coins because there are no more uploads, then your APP can donate back to the network (upload duplicates) and still charge the users who use your APP for the bandwidth.

Then you can prove if people prefer that.

And Anders can do similar to make uploads free.

Hint: use MD data for the rented file storage and you can then delete it if the rental is not paid on time.


Well, maybe SAFE shouldn’t become a BitTorrent on steroids. So if it’s possible to implement in a spam-proof way, then PUTs for up to 1 MB can be free, and for larger files there is a cost for PUTs.


You put them in the same bag, but just to clear up any misunderstanding, they have opposite ideas about the cost of uploading and maintaining files in the SAFE network:

  • @anders wants these operations totally free,

  • whereas @foreverjoyful wants additional costs for the uploader (compared to what is currently planned by Maidsafe)

I share @foreverjoyful’s concerns about the sustainability of the pay once and store forever principle, but I find @anders’ idea not viable in a decentralized network.

These ideas necessarily need to be implemented at the network level and not at the app level:

  • Nobody will use @foreverjoyful’s app when they can avoid paying any rent by uploading directly to the network,

  • @anders’s app won’t be able to upload files because it won’t have the funds needed for this if users don’t pay anything.


Bam, this illustrates your lack of understanding, and/or again the misinformation continues.


Exactly, if I were to do it, I’d probably end up forking the network rather than implementing it as an app. It’s impossible to do it as an app and still be viable, as @tfa has said. Although that may not be necessary, since the network, once launched, can still be changed (to an extent), as David has said, if it doesn’t end up working well.


Yeah, it was not the same APP, but two different ones suited to their purposes.

Also, it was to create a way for them to view their ideas in a different light. Joyful might actually come to his senses and realise that people won’t go for rental.

Joyful could definitely do his at the APP level, or even replace the client those people use with what is effectively the client+“joy_APP”, so that people can choose (which he supports) between pay once and rental plus bandwidth payments.

I think it can work. Obviously this needs testing, but the solutions offered by either of these two are not viable.

Think of the old mathematical problem stated along the lines of “if a frog jumps half the distance to a wall, then repeats, will the frog ever reach the wall?” This was posed before calculus, and most said it would reach the wall quickly. The frog was an infinitely small frog, so the physical issues were made irrelevant. So if SAFE charges enough, then it can be a forever payment, since maybe 1/2 or 2/3 is used in the first 2 years but plenty remains to pay for the rest of time.

If the algorithm is implemented correctly, then it exploits the studied effect of data usage: new data is used more, and as it ages its use greatly reduces (not its importance to keep, but its access). So in effect we have that old mathematical problem, where the safecoins used to upload are (network wide) metered out as the data is accessed. Also, the network will never run out of coin, so farmers will always be paid something (eventually), and that something is worth more because it’s harder to get more coin.
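The “frog” argument above can be illustrated numerically. This is only a sketch of the geometric-series idea, not MaidSafe’s actual reward algorithm; the decay rate is an assumption:

```python
# Illustration of the "frog" argument (the decay rate is a made-up assumption,
# not the real SAFE algorithm): each period the network pays farmers a fixed
# fraction of whatever part of the one-off upload payment remains unspent.
payment = 1.0      # normalised one-off PUT payment
decay = 0.5        # assumed fraction of the remaining pot paid out per period
remaining = payment
paid_out = 0.0
for _ in range(50):
    payout = remaining * decay
    paid_out += payout
    remaining -= payout

# Total payouts approach the payment but never exceed it, and something
# always remains to reward future accesses - a payment that lasts "forever".
print(paid_out <= payment)  # True
print(remaining > 0)        # True
```

Like the frog halving the remaining distance, total payouts converge to the original payment without ever exhausting it.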

Anyhow, testing is needed and algorithms can be adjusted later on. But rental and paying to browse is the way to turn people away, and so if the network needs more uploads then rental+bandwidth charges are only going to quicken the demise.

Oh, did I mention, @tfa, that I did simulations of the network “economics” (when SD data was around) and saw some very nice effects of the model used by SAFE? So I have some good data on which to base my belief that it can work, and testing will help tune the algos. This was about two years ago. It took days to simulate a few years. I would change the parameters and see what happened. It worked very well when the multidimensional parameters were used, e.g. the farming rate algo, upload rate, download rate, various exchange rates, people’s reluctance to upload when costs are high (and the other way), and a few more dimensions to the dynamics.


Oh, that’s interesting, why didn’t you ever mention that before? Can you release some data regarding the simulation? How was it done? What did you assume about people not uploading when cost was high? As in, what type of parameters did you set for how the simulated people react? And what type of parameters did you set for the download rate or upload rate algorithm?

Did you also do the simulation taking into account the safecoin price fluctuations, and hence the fiat storage price, etc.?


Not any more. It is out of date and was never tabulated. It was to answer my curiosity.

A “C” program that used a state style of thing, where one parameter was how many transactions occurred between states. Obviously the fewer, the longer the simulation took, but this didn’t really change the results unless I set it to a silly high number of transactions. I had an object style of thing where I added the different dimensions to the system as I worked out how to model them. I was up to quite a number before other things stopped me continuing.

The program was written on a machine that I don’t have anymore, and done while watching the test matches 2 or 3 years ago. I always planned to update the program to use a more efficient system so I could make it do a smaller number of transactions between reporting states. And I planned to use updated network plans, including node ageing etc., and increase the initial vaults, since people have stated they plan to commit more than I thought possible.

The reason I didn’t mention it to you was that I didn’t have the evidence, and frankly it wasn’t necessary and would not have changed the outcome; but tfa, who knows me better, can take it with a pinch of salt.


No worry. As I said earlier in this topic, the problem is that Maidsafe didn’t provide any numerical simulations. The simulations should take into account many parameters, but especially the distribution of downloads depending on the age of data.

A very simplified one is to suppose an exponential growth of stored data with the hypothesis that data is only downloaded in the 3 months that follow its upload. This means that data can give rewards only during a quarter and then is a pure burden after this period. I know this is very rough, but I don’t think that it is too pessimistic on average.

Let us call:

  • YGrowth: annual growth

  • QGrowth: quarterly growth = (1 + YGrowth)^(1/4) - 1

  • Burden: portion of data that rewards nothing to farmers = 1 / (1 + QGrowth)

These formulas programmed in an Excel file give the following results:

YGrowth QGrowth Burden
10% 2.4% 97.6%
20% 4.7% 95.5%
30% 6.8% 93.7%
40% 8.8% 91.9%
50% 10.7% 90.4%
60% 12.5% 88.9%
70% 14.2% 87.6%
80% 15.8% 86.3%
90% 17.4% 85.2%
100% 18.9% 84.1%
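For reference, the table can be recomputed from the two formulas above with a few lines of code (only the formatting is mine):

```python
# Recompute QGrowth and Burden from YGrowth using the formulas above:
#   QGrowth = (1 + YGrowth)^(1/4) - 1
#   Burden  = 1 / (1 + QGrowth)
for ygrowth in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    qgrowth = (1 + ygrowth) ** 0.25 - 1   # quarterly growth
    burden = 1 / (1 + qgrowth)            # share of data that rewards nothing
    print(f"{ygrowth:.0%}  {qgrowth:.1%}  {burden:.1%}")
```

The first line printed is `10%  2.4%  97.6%` and the last is `100%  18.9%  84.1%`, matching the table.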

The download distribution should be refined, but these numbers are not reassuring for the pay once, store forever principle, with a heavy burden even with steady exponential growth.

I leave more precise simulations in Maidsafe’s hands, in particular with a more accurate download distribution that gives not only the burden part but also the farming rate for the active part.


I made similar calculations, with similar conclusion, more than two years ago.

About the data recycling: I made a small Excel file to calculate the global percentage of garbage data over the years, based on the rate of network growth and the annual rate of garbage data. Depending on the two variables, the percentage tends to different limits. Some examples:

With 50% Net rate Growth
10% annual garbage -> 25% final garbage
20% annual garbage -> 42% final garbage
30% annual garbage -> 56% final garbage
50% annual garbage -> 75% final garbage
80% annual garbage -> 92% final garbage

With 100% Net rate Growth
10% annual garbage -> 18% final garbage
20% annual garbage -> 33% final garbage
30% annual garbage -> 46% final garbage
50% annual garbage -> 66% final garbage
80% annual garbage -> 88% final garbage
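The quoted limits can be reproduced with a simple iterative model. This is a reconstruction (the poster’s actual spreadsheet is not available): assume the network grows by `growth` each year, and then a fraction `garbage_rate` of all live data, including that year’s uploads, turns permanently into garbage:

```python
# Hedged reconstruction of the spreadsheet model (an assumption, not the
# poster's published formulas): each year the stored total grows by `growth`,
# then `garbage_rate` of the live (non-garbage) data becomes garbage that
# stays stored forever. The garbage share converges to a limit.
def final_garbage(growth, garbage_rate, years=200):
    """Long-run garbage share under the assumed yearly growth/decay model."""
    total, live = 1.0, 1.0
    for _ in range(years):
        added = total * growth                       # new uploads this year
        total += added
        live = (live + added) * (1 - garbage_rate)   # part of live data dies
    return 1 - live / total

# With 50% net growth this yields ~25%, 43%, 56%, 75%, 92% for annual garbage
# rates of 10%..80%, matching the quoted table up to rounding.
for rate in (0.10, 0.20, 0.30, 0.50, 0.80):
    print(f"{rate:.0%} -> {final_garbage(0.5, rate):.0%}")
```

In this model the limit works out to garbage_rate × (1 + growth) / (growth + garbage_rate), which also reproduces the 100%-growth column.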

But I still do not see any problem in the pay once store forever principle. A very high percentage of old data will never be accessed, but that is totally indifferent to the farmers because their profit remains unchanged.

Well, the random distribution based on the XOR directions should give equal benefits to all farmers. Some may have bad luck one day but, I think, there is a very strong tendency towards a fairly uniform distribution.

Of course Maidsafe can do all kinds of simulations, although I am somewhat reluctant, for two reasons. One, because many times I prefer to follow logical reasoning rather than simulations that may contain structural failures. The second because, in decentralized computing, simulations have a bad tendency to coincide very little with the real world.


Is this growth of data or of farmers?

Also, “the data is not accessed after 3 months” is too gross an overstatement, and I’d have a lot of problems even suggesting it as an extreme edge case. The studies are more like: in the first 6 months the data is accessed half the amount it ever will be, then in the next 6 months another 30% or so.

The range in the first year is something like 40-80% of all accesses, depending on the data, and that is only valid for around 80-90% of all data. The other 10-20% is things like regularly accessed programs and operating system files that are currently in use; these were not included in the studies for obvious reasons, and I would not include them in any SAFE storage usage either.

Also, you have not considered even remotely the different pattern of database-style data in MDs. This will be huge, and things like search engines will have their own usage patterns; they may even access large databases during searches in a way that never follows the drop-off we see with static data.

So maybe use a function that shows data usage dropping off at one rate for the first 6 months, then another rate for the next 6, then another for 4 years; do that for 80% of the data, and have the other 20% drop off more linearly over 5 years to, say, about 75% of all accesses. Data is never considered to be never accessed again.
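That suggested shape could be sketched as a piecewise cumulative-access function. All rates and breakpoints below are illustrative placeholders taken from the rough figures above, not from any study:

```python
def access_share(months):
    """Cumulative share of a file's lifetime accesses after `months`,
    blending 80% 'fast-decay' data with 20% 'slow-decay' data.
    All numbers are illustrative placeholders, not measured values."""
    # 80% of data: half its accesses in the first 6 months, ~30% in the
    # next 6, and the remainder spread over the following 4 years.
    if months <= 6:
        fast = 0.5 * months / 6
    elif months <= 12:
        fast = 0.5 + 0.3 * (months - 6) / 6
    elif months <= 60:
        fast = 0.8 + 0.2 * (months - 12) / 48
    else:
        fast = 1.0
    # 20% of data: roughly linear decline, reaching ~75% of its lifetime
    # accesses after 5 years; the rest trickles in later, so usage never
    # drops to zero.
    slow = min(months / 60, 1.0) * 0.75
    return 0.8 * fast + 0.2 * slow

print(access_share(12))   # ~0.67, within the 40-80% first-year range above
```

After 5 years the blend sits at 95%, leaving a small residual so no data is ever modelled as “never accessed again”.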

Also, the study said that if data is publicly available then it is at best an approximation, as a lot of media gets a resurgence from time to time.

NOTE1: also, this study did not even consider public data that is deduplicated. So one movie might have constant usage over 2 years because the audience is varied and learns of its existence (or the desire to watch it) over time. An extreme example would be a Disney style of kids’ movie, which will be watched every year it exists by an approximately increasing number of kids, the increase due to population growth.

NOTE2: There is also the effect of audience increase. As SAFE users increase over the years, so too will a lot of public files be accessed by these new people, and the “Disney” effect will occur to some extent for other media for a decade as people start to use SAFE. So unfortunately we cannot fully follow that study’s pattern for all data.

These two notes are two of the reasons that a 3-month period of usage followed by none is an extreme edge case that would not happen for a network in any sort of growth.

Also, I didn’t mention that in the study (and I wish I could find it again) different types of data (movies, vid clips, podcasts, business-type documents, backup files, etc.) all have different and quite varied drop-offs. In the simulation I did, I actually had a calc sheet where I entered all the various parameters over time, so I could more accurately simulate the effects of different data sets being added and then dying off.


Then it makes no sense for all the garbage to be there occupying space FOREVER. Safenetwork is designed so everyone’s spare storage space gets used in a meaningful way; instead, space that could otherwise store something useful at least some of the time is filled with data that will never be accessed, yet is maintained for eternity to come by the network. That makes the network more inefficient than just having spare hard drives lying around with storage space.


Ummm, why don’t you talk about how data is valuable, even if you don’t find it valuable? You don’t need to read very far, or look far in a Google search, to see that data is the new currency. Data is valuable not only to advertisers but to everybody. Your accounts are valuable, your tax filings are valuable, your videos of your parents will be valuable later in your life when they have passed on. But no, they aren’t valuable, are they? Remember that storage is growing by 10 times every 5 years, and could grow faster when SSDs take over next year, at up to 5 times a year (40 TB SSDs are being worked on and expected for release in a year or a bit).


Well, consider that you earn safecoin for devoting resources to the network that you aren’t currently using. Then the network charges you in safecoin. And consider that once you upload that photo to the SAFE network, it no longer has to take up space on your phone and can be accessed from anywhere. And consider you’ll ONLY be charged if that photo has never been uploaded by anyone else before. I download a lot of memes from the net. There’s a good chance a meme has been downloaded and uploaded by others. So odds are SOMEONE has a copy of half of my meme collection somewhere. If the SAFE network can successfully identify which ones have already been uploaded, that saves me safecoin on uploads. Same with music or random music videos or something. You only get charged for UNIQUE data uploads. Also consider that keeping data on your hard drive ISN’T free, because eventually you do run out of space and need to buy a bigger hard drive. This is an especially acute problem for those collecting large video files (karaoke, anyone?)…


No, the user is charged even if an exact copy is on the network. It has been confirmed that this will be the case. This is another aspect of why the model will work: the more popular a file, the more often random people will also upload it. Reasons for it include keeping full anonymity and security, plus the benefits to the “economics”. The charge covers the network processing your upload even if it is not finally stored (twice), plus the expected downloads from a more popular file.


Okay, my mistake. But then how does deduplication work?


Yikes, that’s an interesting observation. You mean, I take it, that only new files are read much, and older files become less and less read, thereby generating fewer GETs. But couldn’t farmers be rewarded even for cached data? Cached data is continuously shuffled around on the network, I guess, and generates GETs and farming rewards pretty evenly distributed among all the farmers.


Yes, this won’t work though; someone has to pay. If it’s like this, then everyone can just do this and store everything for free. For everyday people, the amount of safecoin you farm is likely not going to meet the demand of a normal person storing data, unless you store only tiny bits or a lot of people store new data on the network.

Why would you upload it and pay if this data has been uploaded already? You just need to access it. In fact, there will be an app for you to check whether what you are about to upload has already been uploaded, across all the public files. If you post a popular meme or other popular photo, then you just need to use the app to find the access link, and not spend any safecoins to upload it. With an app like that, pretty much all uploaded data would be unique. And if you talk about the case of two private files being exactly the same, that’s extremely unlikely. I think most photos people take are unique; most data they upload is too.


Sorry if this has already been discussed, but I assume that the majority of GETs overall on the network will be for cached data. And rewarding farmers for those GETs will then ensure that the farmers keep earning safecoins at a steady rate. The cached data will not be counted as proof of resource, though. Instead, the farming reward for a GET is determined by how much non-cached data the farmer has stored, and it doesn’t matter how old that data is; it still counts as the same resource as newer data.

So when a farmer earns safecoins for a GET for cached data, that reward will depend on the reputation level and the total amount of stored non-cached data (both old and new, it makes no difference) the farmer has. Thereby there is no need for special archive nodes or something like that for old data.


The PUT cost is to store something forever (or at least the life time of the network). Farmers know this and users know this. There is no surprise, so the cost will necessarily be baked into the farming reward.

You can argue that it will be prohibitively expensive to store something forever, but to claim it is garbage after X months/years/decades/centuries is impossible; the contract between the user and the network is to store the data forever, and the network cannot and should not judge the motivations for this.

Perhaps access details have been put into escrow as part of a will and testament. It may not get accessed for decades. However, its value is obvious to those associated with it.

The SAFE network is all about persistent storage. It remains to be seen how the economics will pan out, but it is a key feature and the architecture pivots around this. To reiterate why:

  • anonymity: the network does not know who has access to what data, never mind for how long

  • security/permissions: the network does not know account balances, as it does not know who owns the safecoin. It cannot transfer them elsewhere either, as it cannot sign the transfer of them without the private key.

  • simplicity: the network does not need to track who has access to what and when. If the data exists, access is possible.

  • indirection: data maps just address data on the network. Data is shared by 0-n data maps. The network cannot read or write a data map, as they are encrypted (for private data at least), so cannot know who has access to what, even with access to the data map raw data.

  • account security: private data stored by a user can only be decrypted when a user provides their login credentials. The network doesn’t know these and nor should it for security reasons.

I am sure there are many other reasons too. It fundamentally changes the way the network works, due to its distributed nature. There are layers to separate concerns and limit what the network can do without the user’s permission, and for good reason too.

This is why persistence is so key to the network and why it has been discussed and analysed over and over for feasibility. It wasn’t adopted lightly on a whim, and we will not know the results until we get closer to release. I believe the arguments for it working are sound (as I have outlined at length in this thread), but be under no illusion that it would be easy to change without a lot of effort and compromise.

If you want a secure, distributed, autonomous network, some difficult design decisions and compromises need to be made. I am sure there will be forks that try to reach a different set of goals, but those designed in right now are Maidsafe’s.