Safenetwork sustainability concerns - Bandwidth has an ongoing cost, however Safenetwork is a pay-once, benefit-forever model

That means some people would get 59 days of rental instead of 30. Remember, you are making every single chunk stored on the network subject to this rental, and every MD subject to it too. What of the xyz tokens worth hundreds of dollars that I sent you, but that you were unaware of until I reminded you? By the time I reminded you, those tokens had been deleted because you never paid the rent.

It is such a can of worms; the examples of problems caused by rental are endless. Every APP that stores MDs for you has to pay rental, and what of all the business cases where data is lost because the rental was missed (due to some misconception or glitch)?

We then need companies to explain it to people and businesses, and to spell out where the business is liable to pay the rent and where the customer is.

What of the last will & testament that isn’t discovered until the personal papers are found, detailing that the will is now stored on the SAFE network? But oh wait, it has been 31 days and it was deleted, because no one knew rental was required.

2 Likes

You’re assuming it’s a rental model now… It’s not. It’s still pay once for forever data. The rental model is simply an add-on to give people an additional option. When they store, they have to clearly choose it. And if they choose it, then they should be able to handle the consequences of it.

Also, with the time frame, it can be for one year. So 10% for one year isn’t bad, plus an extra 10% on the first-time initial storage cost.
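
As a rough sketch of that arithmetic (the 10% figures are from the post above; the base price is a made-up placeholder, not a network parameter):

```python
# Rough sketch only: P is a notional one-time "pay once, store forever"
# price; the 10% figures come from the post, everything else is assumed.
P = 1.00
initial_fee = 0.10 * P        # one-off surcharge on the first upload
rent_per_year = 0.10 * P      # yearly rent at 10% of the base price

def rental_total(years):
    """Total paid under the rental option after `years` years."""
    return initial_fee + rent_per_year * years

# Renting stays cheaper than paying once until roughly year 9:
for y in (1, 5, 9, 10):
    print(y, rental_total(y))  # ~ 0.2, 0.6, 1.0, 1.1
```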

For the user that is already possible: just remove the datamap link from your list (directory) of files. That is an effective way of deleting. If the file is private then no one can ever access it.
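
A minimal sketch of what that looks like, assuming (hypothetically) that a user’s directory is just a map from file names to datamap addresses; none of these names are the real SAFE API:

```python
# Hypothetical illustration, not the real SAFE API: the directory is a
# mapping from file names to datamap addresses.
directory = {
    "holiday.jpg": "datamap_addr_1a2b...",
    "old_notes.txt": "datamap_addr_9f8e...",
}

def forget(name):
    # The self-encrypted chunks remain on the network, but without the
    # datamap their keys and order are unrecoverable, so for a private
    # file nobody can ever reassemble it: effective deletion.
    directory.pop(name, None)

forget("old_notes.txt")
print(directory)  # {'holiday.jpg': 'datamap_addr_1a2b...'}
```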

3 Likes

Not continually (emphasis is mine). In my proposal, this is done only when a section is about to run out of space, which should never happen according to your reasoning about exponential growth.

2 Likes

Without the data map, how do you know which chunks to decrypt?

RSA, ECC etc. are probably not secure against quantum computing, but our chunks (AES internally + XOR of the previous chunk’s hash) are quantum resistant. There are a few other things in the mix, but you get the idea.

[edit: I should add that the session packets etc., where you keep the data maps, are quantum resistant when encrypted with AES types. I am looking to use a method of private encryption that is similar to an ECDH capability but using quantum-resistant protocols all the way, so quantum proofing is well within our reach for all data.]
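
Symmetric primitives like AES are generally considered quantum resistant (Grover’s algorithm only halves the effective key length), which is the point being made here. A toy sketch of the chunk scheme as described above (AES on each chunk, then XOR with the previous chunk’s hash); the real self_encryption code derives keys and handles sizes differently, so treat this purely as illustration:

```python
# Toy sketch of the described scheme: AES-encrypt each chunk, then XOR
# the result with the previous chunk's hash.  The hash choice and the
# stand-in cipher are assumptions for illustration only.
import hashlib
from itertools import cycle

def toy_self_encrypt(chunks, aes_encrypt):
    """chunks: list[bytes]; aes_encrypt: a bytes -> bytes cipher."""
    out = []
    prev_hash = hashlib.sha3_256(chunks[-1]).digest()  # wrap around
    for chunk in chunks:
        enc = aes_encrypt(chunk)
        pad = cycle(prev_hash)                 # repeat hash to length
        out.append(bytes(b ^ p for b, p in zip(enc, pad)))
        prev_hash = hashlib.sha3_256(chunk).digest()
    return out

# Identity stands in for AES here; a real version would derive the AES
# key from the chunk's own content hash so identical files deduplicate.
demo = toy_self_encrypt([b"chunk one", b"chunk two"], lambda b: b)
```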

16 Likes

I’m assuming you still need a data map, even if the data is small, otherwise multiple people would not be able to reference the same data. I’m sure someone who knows the details better can confirm though.
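
For what it’s worth, the shared-reference part follows from content addressing; a minimal illustration (the hash choice is my assumption):

```python
# Chunks are content-addressed: identical bytes hash to the identical
# network name, so two uploaders end up referencing one stored copy.
import hashlib

def chunk_name(data: bytes) -> str:
    return hashlib.sha3_256(data).hexdigest()

alice = chunk_name(b"the same small file")
bob = chunk_name(b"the same small file")
assert alice == bob  # both data maps point at the same chunk
```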

3 Likes

The SAFE Network’s inspiration from nature is a significant aspect that initially drew my attention to this project years ago. However, the ever-growing nature of the data in the network does seem to be an exception to that, which (to my understanding) runs counter to other patterns we observe in nature. I think this merits consideration, and that we should welcome critical eyes—from both supporters and detractors. (I think even detractors can add positive value to this community and the SAFE project.) As a daily reader and member of this forum for over three years, I ask these questions as a supporter:

Regarding the principles and sustainability of not deleting data: Physicists still debate whether ‘information’ is ever really lost on a fundamental cosmic scale (i.e. are all past states of the universe theoretically derivable from its present state). However, I think the information stored in an autonomous network like SAFE might be more analogous to the information stored in DNA. And, certainly, DNA adds and loses information over time and over generations. One of my main draws to SAFE (and one of my main interests in general) is digital preservation—so I understand the desire for permanently preserved information. Yet, if a central component of biology is its ability to adapt and trim unneeded information over time, shouldn’t we ponder if SAFE (which takes much of its inspiration from biology) is contrary to one of biology’s fundamental characteristics?

(tl;dr: Does the SAFE Network’s unidirectional growth run contrary to biological evolutionary principles?)

Regarding the economic dependence on ever-increasing storage: Many in this community seem to share concerns about the current sustainability of the global economy. Indeed, it seems that many people in general are becoming interested in cryptocurrencies and decentralized projects largely because of their worries about the global economy. Although data storage capacity is increasing quickly now, and has been for decades, we have also had a relatively stable global economy during that time. In the event of a major economic downturn, there would seem to be significant risk that the pace of technological development might also be adversely affected (especially technology like storage, which relies on physical resources and growing economies). If the SAFE Network is seen, at least partly, as a defense against global economic risks, doesn’t it seem risky to make it so dependent on future technological/economic growth (growth that so many in communities like ours seem to doubt)?

(tl;dr: Does the SAFE Network’s economic viability depend on the stability and growth of the global economy?)

The success of the SAFE Network is very important to me, as I believe it is to most members here. Yet I do have to admit that these are two concerns I’ve had for a while. Naturally, I hope that my worries are either based on misunderstandings, or that these issues will be solvable.

7 Likes

Really hope SAFE bypasses ISPs. There is no reason that in the future we should have to pay a toll road to communicate.

Especially one that lobbies against free speech, spies on us, censors (non-neutrality), steals our attention by promoting interruptions, loses our data, and makes us pay through arbitrary caps to see ads it is already profiting from. It insists it must have arbitrary profits for its local monopolies, yet its arbitrary ‘premiums’ create incentives that disincentivize reaching adequate service thresholds. It tries to suppress competitors, who must use the public internet to reach customers in its region, and it holds the internet hostage by trying to force higher-priced bundles of redundant, obsolete products on people.

SAFE should enable cord cutting. Interference mesh, line-of-sight optical and LiFi all enable this. Maybe someday the main net will consist of just handsets.

Step by step.

SAFE will be the network protocol. Others will have to use that and provide bandwidth outside of ISPs, and have a model to sustain it. One day in the future we will have interfaces which are a set of entangled endpoints (maybe 100, maybe 1000), with the other ends being points across the globe. Now that is a distributed network.

3 Likes

Entangled points. Recent Advances in Post-Quantum Physics | Cosmos and History: The Journal of Natural and Social Philosophy

1 Like

This doesn’t exactly solve the problem… because why would people bother deleting it? They may forget… UNLESS, of course, you attach a refund when they delete their data, proportional to the amount of time the data has been on the network. Then it’ll kinda solve part of the concern and make the network potentially grow a bit faster.
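
One possible reading of that refund, sketched with invented numbers (the post only says the refund should depend on how long the data was stored; the linear decay and the 10-year horizon are my assumptions):

```python
# Hedged sketch: the refund decays linearly over an assumed
# amortisation horizon, so early deleters recover most of the PUT fee.
AMORTISE_YEARS = 10

def refund(put_price, years_stored):
    unused = max(0.0, 1 - years_stored / AMORTISE_YEARS)
    return put_price * unused

print(refund(1.0, 2))   # 0.8 -> deleted early, most of the fee back
print(refund(1.0, 12))  # 0.0 -> long-stored data earns no refund
```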

Wow, I somehow missed this! This is an interesting idea… although, why would you want to be re-uploading the data? Uploading costs a lot more than simply continuing to store the data, so skipping the re-upload would save bandwidth and unnecessary upload time.

But the fact that sections can erase data means that a single section which is in trouble for any reason (even pure chance) could do huge damage to a lot of files, because a single chunk has been erased.

In fact, it could be an attack vector that allows someone, even without getting control of the consensus, to have the power to make critical data disappear.

2 Likes

I see this as the deal breaker for this “solution”. I am sure the network will have algorithms to deal with a section becoming “full”.

To be able to sell a secure network it also has to include data security and not allow accidental (deliberate in this “solution”) deletion of any chunk.

The temporary chunk idea is a far better “solution”, in that any temp chunks that have passed their event “time” limit can simply be removed.
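
A sketch of the bookkeeping the temporary-chunk idea implies, assuming expiry is counted in section events as suggested; the field names and mechanism are guesses at one possible shape:

```python
# Illustrative only: a vault-side record for a "temporary chunk" that
# lapses after a given section event count.
from dataclasses import dataclass

@dataclass
class TempChunk:
    name: str
    data: bytes
    expires_at_event: int   # section event count at which it lapses

def prune(store: list[TempChunk], current_event: int) -> list[TempChunk]:
    # Expired temp chunks can simply be dropped; permanent immutable
    # chunks never live in this store, so they are untouched.
    return [c for c in store if c.expires_at_event > current_event]
```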

But still, I think if the network has any sort of adoption then this (needing to delete immutable data) will never be an issue.

3 Likes

It’s only for simplification purposes. A re-payment protocol could be implemented in the long term, but in the short term, simply re-uploading chunks will have the same result. I will also add that the payment is supposed to cover the cost of:

  • Bandwidth to upload chunks (when files are uploaded)

  • Storage of data

  • Bandwidth to download chunks (when files are read)

The first part is only a fraction of the total cost, and there is no proof it is the main one. I say this because there aren’t any network economy simulations provided by Maidsafe, which is a major problem and is the reason why this topic is so long and doesn’t go anywhere.
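
A back-of-envelope decomposition of those three components (every unit cost below is a placeholder, precisely because no real simulations have been published):

```python
# Back-of-envelope only: all unit costs are invented placeholders.
# The point is the structure: a single PUT payment must fund an
# open-ended tail of storage and read bandwidth.
REPLICAS = 8  # copies kept per chunk

def lifetime_cost(chunk_mb, years, reads_per_year,
                  upload_per_mb, store_per_mb_year, download_per_mb):
    upload = chunk_mb * REPLICAS * upload_per_mb            # paid once
    storage = chunk_mb * REPLICAS * store_per_mb_year * years
    reads = chunk_mb * reads_per_year * years * download_per_mb
    return upload, storage, reads

# With these made-up unit costs, the one-off upload is a small fraction:
print(lifetime_cost(1, 20, 50, 0.01, 0.005, 0.001))  # (0.08, 0.8, 1.0)
```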

It’s just that this case (a section running out of space) must be managed by deleting older chunks rather than losing random ones. But no worry for you: you’re among those who advocate that this case will never happen.

Anyway, deleting a chunk should be a last-resort action. My initial proposal (Sacrificial data vs non-permanent data) was a complement to a first level of security brought by sacrificial data. That mechanism has been removed, but it must be replaced by something else.

In this situation, merging sections and eliminating the most problematic nodes seems to me a much better solution. Even in the case of a massive lack of space, a restart of the network seems less problematic.

My opinion is that I do not think it will happen, but if it does, the network is doomed and erasing some chunks will not save it.

Agree.

3 Likes

Have you ever thought about what happens when the network shrinks in capacity? Honestly. Your assumption so far has been that the Safenetwork will expand in network capacity forever.

But what if it doesn’t? Farmers slowly quit after the network hits its peak of storage and farmers. Farmers slowly quit as there are many more GET requests, which cost bandwidth, than PUT requests paying to store data on the network. Their profitability decreases, and maybe their ISPs start to shut them down or charge extra because they used too much bandwidth under fair-go policies, even on an “unlimited” plan (who knows what could go wrong that results in network capacity decreasing). What would happen? Storage cost increases to raise farmers’ pay and hence attract more farmers? But what if, at the same time, that makes people more reluctant to store data on the network because it costs more? This is another reason why I proposed adding another option when storing.

Basically, AS IS, the network can NOT sustain itself in circumstances where people have stored a lot of data and then stop storing new data, instead just constantly accessing their existing data. Dropbox can, and all other cloud services can, because they charge a monthly recurring fee and limit bandwidth. Please don’t overlook this issue, as it can be more serious than you think. Once farmers start dropping out you’ll have redundancy issues as the network gets full. Everyone’s data could be at risk. And they can’t even help it! They may not have new (or much new) data to store. They just want to have access to their existing data. They want to pay for their existing data but cannot.

Of course, they can re-upload their data, but it won’t even help, as the same data will be detected and deduplicated to the same copies. So they would have to change their data a little, then zip it and re-upload it, to ensure their data is safe. This is way too much work for the average person compared to just having a second option: a rental model where they know their data is definitely safe. You could have the code do something like this: when the network gets full and it comes to it, the network prioritises maintaining the 8 copies of data belonging to the people paying rent over those who paid one time. This isn’t really a bad option. And this shouldn’t happen anyway according to you, as the network keeps expanding, so there would be no difference between the people that paid once to store forever and those that choose to pay small amounts in increments.
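
The feedback loop being described can be made concrete with a toy model (every coefficient below is invented; it illustrates the dynamic, not a prediction):

```python
# Toy feedback loop: fewer uploads -> less farmer income -> farmers
# leave -> less spare capacity -> higher PUT price -> fewer uploads.
capacity, stored, put_price = 100.0, 60.0, 1.0
for year in range(5):
    uploads = max(0.0, 10.0 - 5.0 * put_price)  # demand falls with price
    stored += uploads
    income = uploads * put_price                # farmers earn on PUTs only
    capacity += 2.0 * (income - 5.0)            # join/leave on profitability
    spare = max(0.01, capacity - stored)
    put_price = 60.0 / spare                    # fuller network, pricier PUTs
    print(year, round(capacity), round(stored), round(put_price, 2))
```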

Yes.

EDIT: I have a little time to answer

You have described the doomsday scenario that I and others have thought about in the past and alluded to in the posts above, and why some of it is unlikely.

You suggest that ISPs will throttle back unlimited bandwidth.

  1. If they did this then obviously there would be fewer farmers, but the profits for them under the current model would be higher, so they can afford to add more vaults, and others who initially could not farm because of costs now can because the income is higher. It’s dynamic, and dancing to boot :stuck_out_tongue_closed_eyes::rofl:
  2. AU has already been through the cycle: ISPs offering unlimited, p2p abusing that unlimited, ISPs restricting to quotas, and now again providing REAL unlimited. The USA has to catch up, since they are about 5 to 10 years behind in this particular cycle. Europe is already past AU and even provides 1 Gbit/sec unlimited. I even gave you proof that AU is increasing quotas and reducing costs, and this is after bittorrent strained the ISPs previously offering unlimited. The ISPs have now met the demand and are providing unlimited again. Our USA links may slow at peak times, but that is a common problem worldwide.
  3. Rental and charging viewers/browsers/downloaders for bandwidth and storage is only going to hasten the decline in use of SAFE, which will result in uploads drying up and then farmers leaving once rewards become so scarce that even the higher price of SAFEcoin due to scarcity is not enough. Maybe in a decade.

If you levy rental and bandwidth charges against the browsers and downloaders, then they will simply reduce their usage and eventually stop using SAFE, which precedes the death of SAFE. Currently anyone can browse SAFE without even an account, let alone paying anything. As soon as you charge the viewer/browser/downloader for bandwidth, they have to have an account and sign in just to use SAFE for browsing/viewing/downloading. Surely your psych training tells you that that is a barrier too far for the casual reader/browser/viewer/downloader.

Once they stop, companies move their websites to a new platform or just back to the traditional internet, and SAFE slowly dies. Even if it was flourishing, once you charge to view, the doors may hit the last people to use SAFE in the backside. Don’t believe me? Then do a survey of 100,000 people in 100 nations and see for yourself. But surely any seasoned interneteer will have seen that model come and go quickly. Microsoft tried it rather shamelessly in the mid ’90s and failed spectacularly, because people bypassed them and used another system.

You CANNOT charge people for viewing. It doesn’t work, it has failed in the past, and it will fail now and in the foreseeable future. Charging for bandwidth usage when viewing is foolishness; just ask Microsoft how it went when they tried to charge to view the internet.

It is the 10-year anniversary of this presentation; I don’t think the message has changed much, if at all.

4 Likes