Safenetwork sustainability concerns - Bandwidth has an ongoing cost, however Safenetwork is a pay-once, benefit-forever model


#224

About 4-5 meters I think :wink:

We don’t do timescales, but we are certainly deeply discussing data chains, with simulations, debates and detailed design before it’s implemented. Still happening in great detail. It means the implementation will be much faster, but this is all necessary tough work we are doing right now. It’s great we have 2 working options and are focussing on the simplest with the most natural design (easiest, as we copy nature as opposed to creating magic numbers; it’s a thing for me, always).


#225

Cool, sounds great :slight_smile: Is there any plan or approach on MaidSafe’s side to seek cooperation with big brands from the automotive, health, and other industries?


#226

I still contend that the simplest and most straightforward method is to have a “temp” class of immutable file storage.

  • The user specifies this type of storage when uploading. Maybe even a 20% reduction in upload costs to encourage its use.
  • It has a limited lifetime.
  • If a second upload of the chunk occurs then it’s marked as permanent.
  • It cannot be deleted by anyone, solving the issues of ownership and of people trying to delete another’s data. For example, random delete attempts on random chunks occasionally getting a hit, e.g. a crafted node that takes note of chunks requested, after which the attacker uses that list to attempt deletes and cause issues.
  • The lifetime is measured in network events; the chunk is expected to last a minimum time and then becomes available for deletion when available space drops.
  • Deletion is performed by simply not replicating the lifetime-expired “temp” class chunk when the vault is turned off. Thus the network does not need to scan for these expired “temp” chunks and simply doesn’t replicate such a chunk.
  • EDIT: Only allowed for private files. We do not want public files to be broken, even after 50 years. Someone might be doing a history report and need that once-used public document that explained the formation of xyz company.
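The lifecycle above can be sketched in a few lines of Rust. Everything here is illustrative, not any real SAFE Network API: the `ChunkClass`, `Chunk`, and `should_replicate` names are invented, and lifetimes are counted in abstract "network events" as the post suggests.

```rust
// Hypothetical sketch of the proposed "temp" storage class.
// All names are illustrative assumptions, not a real vault API.

#[derive(Debug, PartialEq)]
enum ChunkClass {
    /// Deletable once `expires_at_event` network events have passed.
    Temp { expires_at_event: u64 },
    Permanent,
}

struct Chunk {
    class: ChunkClass,
}

impl Chunk {
    /// A second upload of the same chunk promotes it to permanent.
    fn record_duplicate_upload(&mut self) {
        self.class = ChunkClass::Permanent;
    }

    /// Deletion is passive: an expired temp chunk is simply not
    /// replicated when its vault goes offline. No scanning needed.
    fn should_replicate(&self, current_event: u64) -> bool {
        match self.class {
            ChunkClass::Temp { expires_at_event } => current_event < expires_at_event,
            ChunkClass::Permanent => true,
        }
    }
}

fn main() {
    let mut chunk = Chunk { class: ChunkClass::Temp { expires_at_event: 1_000 } };
    assert!(chunk.should_replicate(500)); // still within its lifetime
    assert!(!chunk.should_replicate(1_500)); // expired: silently dropped

    chunk.record_duplicate_upload(); // someone else uploaded the same chunk
    assert!(chunk.should_replicate(1_500)); // now permanent, always replicated
    println!("temp-class lifecycle checks passed");
}
```

The key design point the sketch captures is that deletion costs the network nothing: expiry is only ever checked at the moment a replication decision would have been made anyway.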

#227

As I understand it, currently if a file is requested many times it is copied across to more vaults to meet demand. Does a user delete event feed back into this popularity calculation? Can you treat a delete request as a negative download request and start removing data from various vaults? Not a lower-demand distribution but an actual initial storage usage reduction signal?

I don’t know the file distribution algorithm, would it be possible to use it exactly in reverse so that when the last person places a “delete update” to the data, the data is fully removed?


#228

One concept suggested was that up to 20 IDs were kept, and if there were more then the chunk was permanent. Those up to 20 IDs could request deletion, and when none are left it’s deleted. Or was that for SDs, and thus no longer applies?

The idea of a “temp” class allows the data to simply not be replicated once its lifetime has expired. Thus when a vault is turned off those expired “temp” chunks are not replicated elsewhere. This provides a graceful method of deletion without any effort by the network. No need to scan vaults for expired chunks or anything.
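The "up to 20 IDs" idea recalled above can also be sketched. The cap, the `OwnedChunk` type, and the method names are all assumptions for illustration; the post itself is unsure whether this was ever a real design.

```rust
// Illustrative sketch of owner-counted deletion: track uploader IDs
// up to a cap; beyond the cap the chunk is too popular and becomes
// permanent; below it, the chunk is deletable once no owners remain.
use std::collections::HashSet;

const MAX_TRACKED_OWNERS: usize = 20; // cap mentioned in the post

struct OwnedChunk {
    owners: HashSet<u64>, // uploader IDs, tracked only while few enough
    permanent: bool,
}

impl OwnedChunk {
    fn new() -> Self {
        OwnedChunk { owners: HashSet::new(), permanent: false }
    }

    fn add_owner(&mut self, id: u64) {
        if self.permanent {
            return;
        }
        self.owners.insert(id);
        if self.owners.len() > MAX_TRACKED_OWNERS {
            // Too popular to track: becomes permanent, owner list dropped.
            self.permanent = true;
            self.owners.clear();
        }
    }

    /// An owner requests deletion; returns true once the chunk is
    /// removable (no owners left and it never went permanent).
    fn remove_owner(&mut self, id: u64) -> bool {
        if self.permanent {
            return false;
        }
        self.owners.remove(&id);
        self.owners.is_empty()
    }
}

fn main() {
    let mut chunk = OwnedChunk::new();
    chunk.add_owner(1);
    chunk.add_owner(2);
    assert!(!chunk.remove_owner(1)); // one owner still holds it
    assert!(chunk.remove_owner(2)); // last owner gone: deletable

    let mut popular = OwnedChunk::new();
    for id in 0..=20 {
        popular.add_owner(id); // 21 distinct owners exceeds the cap
    }
    assert!(popular.permanent);
    assert!(!popular.remove_owner(0)); // permanent chunks cannot be deleted
}
```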


#229

We have a similar approach, a flag is set when more than one peer stores the same chunk. Until then it is unique, but it does have edge cases.


#230

I suppose the big difference is that I don’t say uniqueness is the criterion; rather, the user decides if the chunk is to be stored on a temporary basis. Only if 2 people store the same chunk is the temporary status removed, if it was there.

Then a countdown of network events is used to determine when it’s available for removal.

The reason for this difference is that it will be impossible to determine if a unique chunk is a never-viewed cat video that can be deleted or a person’s last will & testament that isn’t accessed for 30 years until it’s needed to be read. To the network they will both be non-accessed chunks that are unique, without any indication of the chunk’s value.

You just cannot judge this, and nor can the network. So isn’t it better to let the uploader decide if the file (chunks) is temporary or not? And give an incentive for uploading temporary files.


#231

I think we need to remain conscious of the desire for full persistence too. Do we really want temporary files floating about, considering they will result in broken links/URLs (404s)?

In short, while we may technically be able to remove data, is it even desirable? Is there a cost beyond the technical when data goes away? IMO, immutable, persistent data is a very powerful concept and it may not be good to dilute this.


#232

My opinion is that we can maintain no deletion. I really do believe this is feasible, especially as the network grows beyond a small size.

But if deletion is needed then if we adopted the temp class for say private files then we have a reasonable balance.

For public files, deletion should not be allowed, for the very reason you mention. In theory, private files of the temporary class should not affect public SAFE (web) pages, since by definition they are not meant for public viewing. But I’m sure there will be the edge case where a public page references a private image.


#233

I agree with the central point, but the design as it stands won’t solve this, because websites are served using mutable data structures and those can be edited, so we’ll get broken links … unless we have some automatic archiving (maybe as an option) that creates an ongoing record of those structures which can be replayed. I don’t know if that is planned, but it sounds like it might be an application for data chains.


#234

And the very reason so many of us are 100% behind you guys.


#235

Just a philosophical thought on ‘copying nature’ and ‘deletion’: Is deleting stuff following the way of nature / Does nature ever actually ‘delete’ stuff?

I would argue that from a subjective/experiential point of view, which I’d venture to say is the way of nature in general, nothing is ever deleted. Stuff that happened did happen: you thought that thought, you did that deed. You can transform/transmute it, for sure, but never ever erase it.

Now one could counter-argue that what I just wrote about takes place at a different level than the one SAFE operates at, and express wariness of the human fondness for universal/absolute rules. Or/and, also, that deleting data is really transforming it. OK I’m not sure this goes anywhere so I’ll stop – thanks for your attention anyway!


#236

What do you mean? It is absolutely being used; I can store my private keys there, and private photos of my past memories, and still want to look at them whenever I like! How can you say no new data = network not being used? That’s absurd…


#237

You said there has NEVER BEEN a time when that has happened… Did you not?


#238

Ha got me.

Yea, I got tripped up on the wrong number of negatives. Yes, I meant that it’s always been that more people want to buy land than actually do.

Although seriously, it was such an obvious typing mistake on my part, wasn’t it? But you are right.

Still, the point stands that land ownership is a poor example.


#239

What if we gave data uploaders the OPTION to label their data with those data types?


#240

Hmm, yeah, maybe you get some safecoins back, like 10% of what was originally spent, when you delete data; that’s not such a bad idea.


#241

Hmm, what if you gave people, when uploading the data, the option to “retract” their data and get a portion of their safecoins back whenever they want? That type of data also has a time limit, let’s say 10 years. Of the safecoins you pay to store the data, the portion that can be retracted goes on hold in a separate wallet for 10 years; when you retract, it goes back to you, and if the 10 years expire it goes to the farmer. Another type of data is just like the original Safenetwork data: no time limit, not retractable, but it costs a little more to store. Would that work better at all?
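The escrow idea above reduces to a single settlement rule, sketched below. The 10-year hold and the refundable portion come from the post; the type and function names are invented for illustration and counted here in abstract network events rather than wall-clock years.

```rust
// Back-of-the-envelope sketch of the retractable-upload escrow:
// the refundable portion sits in a hold until either the uploader
// retracts (refund) or the hold expires (farmer keeps it).

struct RetractableUpload {
    refundable: u64,   // safecoin portion held in a separate wallet
    expiry_event: u64, // network event marking the ~10-year limit
}

enum Payout {
    Uploader(u64), // retracted in time: refund goes back
    Farmer(u64),   // hold expired: refund released to the farmer
}

fn settle(upload: &RetractableUpload, retracted: bool, now: u64) -> Payout {
    if retracted && now < upload.expiry_event {
        Payout::Uploader(upload.refundable)
    } else {
        Payout::Farmer(upload.refundable)
    }
}

fn main() {
    let upload = RetractableUpload { refundable: 50, expiry_event: 1_000 };
    // Retracted before expiry: the uploader gets the held portion back.
    assert!(matches!(settle(&upload, true, 900), Payout::Uploader(50)));
    // Never retracted: after expiry the hold is released to the farmer.
    assert!(matches!(settle(&upload, false, 1_200), Payout::Farmer(50)));
}
```

Either way the coins leave the hold eventually, so the network never carries an unresolved liability forever.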


#242

Do you mean something like


#243

Yes, something like that would help, for example! Now we’re finally getting somewhere. Instead of making it cost less, why not make it a recurring fee you have to pay to sustain the data? Like 5% of the upload cost, paid as a monthly recurring fee. I think many people would prefer that anyway.
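The 5%-per-month suggestion has a simple break-even point worth noting: at that rate, 20 months of fees equal the full one-off upload price, so which model is cheaper depends entirely on how long the data lives. The numbers below are purely illustrative.

```rust
// Quick arithmetic for the "5% per month" recurring-fee suggestion.
// The upload cost of 100 safecoin is a hypothetical example figure.

fn recurring_total(upload_cost: f64, monthly_rate: f64, months: u32) -> f64 {
    upload_cost * monthly_rate * months as f64
}

fn main() {
    let upload_cost = 100.0;
    // At 5% per month, 20 months of fees equal the full one-off cost...
    assert_eq!(recurring_total(upload_cost, 0.05, 20), 100.0);
    // ...so short-lived data is cheaper on the recurring model,
    assert!(recurring_total(upload_cost, 0.05, 6) < upload_cost);
    // while long-lived data ends up paying more than pay-once.
    assert!(recurring_total(upload_cost, 0.05, 60) > upload_cost);
}
```

That break-even is why the choice between the two models maps naturally onto the temp/permanent split discussed earlier in the thread.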