How do they know what data you have access to?
This isn’t a problem for MaidSafe to fix.
This is just a risk that simply exists in the universe, and there’s really no technological answer to it, besides the fact that SAFE data is completely anonymous, so they won’t know whose account hosts what chunks of data (unless you go telling everybody!).
Can’t they simply start putting people under surveillance to find out?
It will be much harder, perhaps impossible, on the SAFE network.
But it is easy on the current internet.
Meh, there’s a general election soon in the UK. It’s a bit like telling voters they can leave the EU if they vote for whoever; no one seriously believes the UK business community would allow it to actually happen. The same goes for banning encryption. They also can’t ban specific apps and not others under EU rules.
Let’s say you have some long-lived data that was abandoned by its owner. The network is constantly working to move this around between vaults… for eternity… The owner of that data only paid once to have it stored. At some point, doesn’t the cost to the network outweigh the price charged to the user who initially stored this data? Doesn’t this problem grow over time as an increasing amount of data is abandoned and never deleted by its original owner? Do we end up with a network where a large percentage of the data is abandoned “garbage” that we are obligated to pass around between vaults for eternity?
Perhaps, but memory, storage, bandwidth, etc. all increase exponentially over time, which combats this issue very effectively.
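To make that argument concrete, here is a toy calculation (my own illustration, not anything from the SAFE design docs): if the cost of keeping a chunk online halves every couple of years, then the total cost of keeping it forever is a convergent geometric series, not an unbounded sum. The halving period is an assumption for illustration.

```python
def perpetual_storage_cost(first_year_cost, halving_years=2.0):
    """Total cost of storing a chunk forever, summed year by year,
    assuming the yearly cost halves every `halving_years` years."""
    r = 0.5 ** (1.0 / halving_years)  # per-year cost multiplier
    # Sum of the geometric series first_year_cost * (1 + r + r^2 + ...)
    return first_year_cost / (1.0 - r)

# Keeping a chunk costs 1 unit this year; costs halve every 2 years.
total = perpetual_storage_cost(1.0, halving_years=2.0)
print(round(total, 2))  # 3.41 units: finite, despite storing forever
```

So under that (optimistic, but historically plausible) assumption, a one-time payment of a few years’ worth of storage cost can cover storage “for eternity”.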
my usb stick has a folder on it named “old usb stick” … and in it there is a folder “old data”, and in that another folder “backup data” … plus there is an Ubuntu install on this stick, plus it is half empty, plus three full backups of my master’s thesis data
… that doesn’t prove anything … but I’m very positive the amount of data floating around won’t be a huge problem … at least it hasn’t been in the past
I suggest you look up “archive nodes” in relation to this. The behaviour is not finalised, but the issue is to be addressed without ever deleting data but storing it efficiently according to the likelihood of it being needed.
I’m thinking that abandoned data is only a problem because it accumulates indefinitely. Your replies convince me that this will not be a problem any time in the near future, i.e. decades. I’m still curious whether it might become an issue centuries from now. Currently storage does get exponentially cheaper, but our storage demands have also kept pace with our storage ability over the years, and we can’t expect storage technologies to improve forever. If global storage capacity on the network ever did level off in the far future, as global population also levels off, then as people die, all the data they ever stored in their lifetime gets left permanently on the network. As generations go by, that adds up to many lifetimes’ worth of abandoned data, on a network that would have to keep growing its storage capacity just to hold completely useless, inaccessible data from all past generations.
Perhaps future generations will implement political solutions to this problem and require people to pass their private keys in their will so their data can be deleted after they pass on? But then such political solutions would compromise the security of the data. Perhaps a secure application can be written to only release keys long after a person’s life expectancy?
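The “release keys long after a person’s life expectancy” idea could be sketched as a simple time-lock: a holder that refuses to reveal a stored key until a far-future date. This is purely my own illustration; the class name and release policy are invented, and nothing like this exists in SAFE today.

```python
import datetime

class TimedKeyRelease:
    """Hypothetical sketch: seal a private key until a release date
    chosen to be well past the owner's life expectancy."""

    def __init__(self, secret_key, release_date):
        self._secret = secret_key
        self.release_date = release_date

    def retrieve(self, today=None):
        """Return the key only once the release date has passed."""
        today = today or datetime.date.today()
        if today < self.release_date:
            raise PermissionError(
                "key is sealed until %s" % self.release_date)
        return self._secret

vault = TimedKeyRelease("owner-private-key", datetime.date(2150, 1, 1))
# vault.retrieve() raises PermissionError until 2150.
```

A real version would of course need the sealing enforced cryptographically or by the network itself, not by a single program the heirs could simply patch.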
wuhuuu - a smart contract, not a bad idea
ps: but I personally would prefer everything being erased instead of released
I think it’s impossible to erase with the current design (or any later change). Data can be made inaccessible, but it can’t be erased (removed from vaults), because there is no way to know that a chunk of a file of yours is not also still needed as part of other active files.
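A minimal sketch of why this is so, using a toy content-addressed store (the real network’s self-encryption also encrypts each chunk, which this deliberately omits): chunks are stored by content hash, so two files containing the same chunk share one stored copy, and a vault holding a chunk cannot tell how many files still reference it.

```python
import hashlib

store = {}  # chunk hash -> chunk bytes (a toy "vault")

def put_file(data, chunk_size=4):
    """Split data into fixed-size chunks, store each under its
    SHA-256 hash, and return the file's data map (ordered hashes)."""
    data_map = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        store[h] = chunk  # identical chunks collide on purpose: dedup
        data_map.append(h)
    return data_map

map_a = put_file(b"AAAABBBB")
map_b = put_file(b"AAAACCCC")  # shares the b"AAAA" chunk with file A

assert map_a[0] == map_b[0]    # one stored chunk, two files need it
assert len(store) == 3         # AAAA, BBBB, CCCC: only 3 chunks kept
```

If the owner of file A could force the b"AAAA" chunk to be erased, file B would silently be corrupted, which is why making the chunk unreachable is the only safe form of “delete”.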
oh - thanks … that makes sense - yes
Thanks happybeing. Do you know if this is still the plan? I’m starting to think this is a major advantage of SAFE vs. other solutions. E.g. you pay to put the data up once and you can rely on it being there, even 10 or 50 or 150 years later vs having to pay per month like one has to on Dropbox and other services. I still think having a force delete might be a good idea, but keeping the data around by default could be a great feature for a lot of use cases where the data might not be accessed frequently, but you want it to exist when looked for regardless of whether you’ve remembered to keep paying for its storage.
Yes it is still the plan
- for immutable data you can effectively delete by “forgetting” the means to access it (removing the reference from your data map). Without the reference, nobody can access it. A hard disk delete does something similar, but there the data can still be recovered.
- for structured data, I think there will be some form of delete, but I don’t know details.
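The “delete by forgetting” point for immutable data can be sketched like this (simplified: real SAFE self-encryption also stores decryption info in the data map, which is omitted here). The chunk stays on the vaults, but once the reference is dropped there is no practical way to name or find it again.

```python
import hashlib

vaults = {}  # network-side chunk storage, keyed by content hash

def store_chunk(data):
    """Store a chunk under its SHA-256 hash and return the hash,
    which is the reference kept in the owner's data map."""
    h = hashlib.sha256(data).hexdigest()
    vaults[h] = data
    return h

my_data_map = [store_chunk(b"secret document")]

# "Delete": forget the reference. The chunk still sits in `vaults`,
# but without its hash nobody (including the owner) can retrieve it.
my_data_map.clear()
```

Unlike a hard disk, where a deleted file’s bytes can often be carved back out, here finding the data would mean guessing a 256-bit hash, so forgetting the reference is an effective delete.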