So it seems deletion of data is not possible, at least not yet?
I’d say the space it takes is more or less negligible, but since data has to be duplicated whenever a node holding it has network issues, what comes to mind is a network that is constantly busy duplicating content it already has, and this gets worse the more content the network holds. Or am I over-estimating that effect?
@eblanshey You’re calling Maidsafe ‘the project’ and you’re also saying ‘The Maidsafe Network’. In my opinion, the network is called the SAFE network and I think that it’s important to start making a very clear distinction between Maidsafe (The company that is building the network) and the SAFE network (what is actually the network).
We should definitely give credit whenever one can and where it is due. Indeed, ‘SAFE network’ would represent a collective of people, the adopters of the technology, yet the network is MaidSafe, luckily, and has everyone in mind…
But Maidsafe is the company that is building the SAFE network.
Nick Lambert says:
I have started to go through the same process for ‘safecoin’ and ‘Safe Network’, with limited success. I was unable to overcome objections to ‘safe network’ and it was ultimately refused, although I have passed through the first stage with ‘safecoin’. However, there is an EU community mark for safecoin that is likely to hamper international progress. I may try to get around this by filing for protection on the safecoin logo itself as a backup.
Yepp, I think @happybeing was referring to the fact that there is no way for the network to automatically ‘forget’ data; say someone doesn’t access a piece of data for a year, should it be deleted from the network?
Currently, this is not the case. Though an individual person can delete their own data.
This is really old, but I just noticed it. Currently there is no way to remove data from the network, and there isn’t expected to be in the near future. This command simply creates a new Container (directory) version that has an empty slot for the specified key, but the prior versions of the container (directory) can still be retrieved from the network if desired.

Eventually the history “ages” off (we cannot store an infinite history), making it partially invisible, but it can still be retrieved from the network if the address + decryption keys are known. I suspect this will be useful for programmers: you’ll want to hold onto the DataMap of the previous file until the new file has been confirmed as stored. This allows an unlimited number of container versions to accumulate since you originally started, while you still have access to the data.

Removing files when the directory ages off is therefore problematic: there is no way to know whether every participant has stopped using the data, which could force the programmer to store the entire contents in memory while updating (depending on their situation).
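The append-only versioning described above can be sketched as a toy model. All names here are illustrative, not the actual MaidSafe API; the point is that a “delete” only appends a new version with an empty slot, and history stays retrievable:

```python
# Toy model of an append-only versioned container. "Deleting" a key
# creates a new version with that slot emptied; prior versions remain
# retrievable by index. Names are hypothetical, for illustration only.

class VersionedContainer:
    def __init__(self):
        self.versions = [{}]  # version 0: empty directory

    def put(self, key, datamap):
        new = dict(self.versions[-1])
        new[key] = datamap
        self.versions.append(new)

    def delete(self, key):
        # Appends a version with an empty slot; history keeps the entry.
        new = dict(self.versions[-1])
        new.pop(key, None)
        self.versions.append(new)

    def get(self, key, version=-1):
        return self.versions[version].get(key)

c = VersionedContainer()
c.put("file.txt", "datamap-abc")
c.delete("file.txt")
print(c.get("file.txt"))             # None in the latest version
print(c.get("file.txt", version=1))  # datamap-abc, still in history
```

This is why “deletion” frees a directory slot without removing anything: anyone holding the old version index (plus the decryption keys) can still reach the data.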
The other problem with deletion (permanently removing data from the network) is ensuring that no one is using that chunk of data, and verifying this in a way that doesn’t reveal user information. This can be complicated.
Also, the container (directory) serialisation itself will not be deleted either when it ages off. It’s unlikely that two people have chunks that merged in this situation, but technically it is possible, so I think it must be treated exactly like blobs (files).
The introduction of alternative options for collective systems, and the proposed benefits compared to the currently known and used global internet and the first wave of digital currencies, is welcome. Monitoring from outside sources has always been at the frontier of compartmentalising societies, and there has always been a great divide over who holds authority and control over how, where, when, and in what direction one can act, most apparently financially.
If it is indeed true that deletion isn’t possible, could that be a problem?
I assume that in the SAFE network, a user has a certain amount of storage he can use, depending on what he earned / paid.
And if you remove a file, you get that space back to store something else, or not?
If there are users that very frequently replace their (visible) storage, will there be enough storage?
Or is there an additional cost to replacing often?
Can it be a way of attacking the SAFE network?
I know that part of the solution is deduplication, but evil minds can try to attack with constantly uploading new, unique data.
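Deduplication helps because identical content maps to the same network address and is stored once; uploading constantly unique data defeats it, which is part of why uploads cost resources. A minimal sketch of content-addressed deduplication (the real network uses self-encryption so identical plaintext converges to identical chunks; this sketch skips encryption entirely):

```python
import hashlib

# Minimal sketch of content-addressed storage: a chunk's address is the
# hash of its content, so identical chunks occupy space only once.
# Attacker-generated unique data gets no dedup benefit.

store = {}

def put_chunk(chunk: bytes) -> str:
    addr = hashlib.sha3_256(chunk).hexdigest()
    if addr not in store:      # duplicate chunks cost no extra space
        store[addr] = chunk
    return addr

a = put_chunk(b"same data")
b = put_chunk(b"same data")    # deduplicated: same address, no new storage
c = put_chunk(b"unique data")
print(a == b, len(store))      # True 2
```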
Wow, this is the first time I’ve read about not being able to delete your own data on the network. This is something that should be implemented as fast as possible, because it both frees up resources and makes people feel they have control.
No matter how many times you say a file is private because it is encrypted, some people won’t be convinced.
I for one know I won’t use this for anything truly private if deletion is not possible, because who knows what happens in a decade or two on the cryptanalysis front.
I don’t believe you get anything back. This has been discussed a lot on the forums already. I think it’s really a big grey area that no one’s 100% sure about in terms of sustainability (until we actually try it out).
The idea is that everything is versioned and kept in history. Over time, if the network runs out of storage space, then old data that has been “deleted” using the API call will get replaced. At least that is the way I currently understand it. Someone correct me if I’m wrong. In the future they might also implement auto-garbage collection.
There is always a concern about that, but I believe David designed the system so that recovery of meaningful data through broken encryption would still be next to impossible, due to the self-encryption and dispersal of the chunks throughout untraceable nodes across the globe. You have to have the keys to the data map for that file for broken encryption to be meaningful, and if you have the keys to the data map, who needs to break the encryption?
This is largely true, but some work could be done on directories, which store the chunk information. Ultimately the data is serialised as a single element and encrypted with a block cipher (AES). The size of this chunk itself could give away its purpose and lend itself to a brute-forcing attack. Once the serialisation format is finalised, someone should look at padding these chunks, either by a random amount or up to the largest chunk size (1MB). I’m not sure of the best approach (there may be a third, better way yet). This is something I’ve been meaning to email the team about internally, in case they haven’t discussed it before.
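Padding to the maximum chunk size could look roughly like this. This is a hypothetical sketch, not the actual MaidSafe serialisation format: a length prefix followed by random filler, so every padded chunk is exactly 1MB and the ciphertext size leaks nothing about the directory’s contents:

```python
import os

CHUNK_SIZE = 1024 * 1024  # 1 MB, the maximum chunk size mentioned above

# Illustrative length-hiding padding: a 4-byte big-endian length prefix,
# the data, then random filler up to the fixed chunk size. Not the
# actual MaidSafe format.

def pad(data: bytes) -> bytes:
    if len(data) + 4 > CHUNK_SIZE:
        raise ValueError("data too large for one chunk")
    prefix = len(data).to_bytes(4, "big")
    filler = os.urandom(CHUNK_SIZE - 4 - len(data))
    return prefix + data + filler

def unpad(chunk: bytes) -> bytes:
    n = int.from_bytes(chunk[:4], "big")
    return chunk[4:4 + n]

padded = pad(b"directory listing")
print(len(padded) == CHUNK_SIZE, unpad(padded))  # True b'directory listing'
```

The trade-off versus random-amount padding is bandwidth: fixed-size padding hides the most, but every small directory costs a full chunk on the wire.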
I botched this slightly, you could still brute force a chunk in a file, but you would only get part of the file. Depending on the situation this would be useless or helpful.
Another interesting thought I had was to store a random 256-bit value in a file, then run ChaCha over the remainder of the file before giving it to the self-encryptor. Not something I would recommend doing automatically.
In my last post I forgot about the XORing at the end. That’s probably what you were referring to initially. It would make things more difficult if AES were broken, but it could unintentionally leak information about adjacent chunks if the implementation isn’t done carefully.
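The XOR step works roughly like this: each encrypted chunk is additionally XORed with a pad derived from hashes of its neighbouring chunks, so even a broken block cipher leaves an attacker needing the adjacent chunks too. A simplified sketch (real self-encryption derives its AES keys and the obfuscation pad from the neighbour hashes in a specific way; the key-stretching here is illustrative only):

```python
import hashlib

def xor_pad(data: bytes, *neighbor_hashes: bytes) -> bytes:
    # Stretch the neighbouring chunks' hashes into a pad of the right
    # length, then XOR it into the data. XOR is its own inverse, so the
    # same call both obfuscates and recovers.
    seed = b"".join(neighbor_hashes)
    pad = b""
    counter = 0
    while len(pad) < len(data):
        pad += hashlib.sha3_256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, pad))

h1 = hashlib.sha3_256(b"chunk-1").digest()
h3 = hashlib.sha3_256(b"chunk-3").digest()
obfuscated = xor_pad(b"chunk-2 plaintext", h1, h3)
recovered = xor_pad(obfuscated, h1, h3)  # same pad, XOR undoes itself
print(recovered)  # b'chunk-2 plaintext'
```

The leak worry above is visible in the structure: the pad depends only on the neighbour hashes, so anyone who learns those hashes can strip the XOR layer, and a careless derivation could reveal relationships between adjacent chunks.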