I’ve read RFC 54 and 55 a couple of times and also looked over a fair amount of MaidSafe code on GitHub. I figured I’d share some thoughts and feedback.
It sounds like elders have to get involved every time a user wants to PUT deletable data (“Unpublished ImmutableData”) on the network. That seems to imply:
- Assuming a lot of people want to use this feature (e.g. for backups), the network could require a high ratio of elder nodes to vault nodes.
- The network could have scalability problems with elders having to receive, encrypt, and broker messages for so many chunks.
I’m hoping next week’s RFC will shed some light on this when it comes out. On a related note, here’s some of what I don’t yet understand:
- How come unpublished ImmutableData can be unencrypted but published ImmutableData cannot?
- Why should network access (GET operations) be restricted? If the chunk was encrypted by the elder, it can only be read by the owner anyway, right? My assumption is that the elder uses the owner’s public key to encrypt the chunk, e.g. using an operation such as Sodium’s crypto_box_seal() (a minimal sketch of that operation follows this list).
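To illustrate that assumption, here’s a minimal sketch in Rust using the sodiumoxide binding to Sodium (the elder/owner scenario is my guess, not anything confirmed by the RFCs): the elder seals the chunk to the owner’s public key, and from then on the ciphertext is useless to anyone without the owner’s secret key, no matter who GETs it.

```rust
use sodiumoxide::crypto::{box_, sealedbox};

fn main() {
    sodiumoxide::init().unwrap();

    // The owner's encryption keypair; the elder only ever sees the public half.
    let (owner_pk, owner_sk) = box_::gen_keypair();

    // Elder side: crypto_box_seal needs nothing but the chunk and the owner's
    // public key (an ephemeral sender keypair is generated internally).
    let chunk = b"unpublished immutable chunk";
    let sealed = sealedbox::seal(chunk, &owner_pk);

    // Anyone may hold or fetch `sealed`; only the owner can open it.
    let opened = sealedbox::open(&sealed, &owner_pk, &owner_sk).unwrap();
    assert_eq!(&opened[..], &chunk[..]);
}
```

If that’s the mechanism, then restricting GETs at the network level looks like belt-and-braces rather than a confidentiality requirement.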
I had the same thought when I read the RFC yesterday, and it left me concerned about the idea of controlling GET access at the network level.
I saw a similar point/question raised in RFC 54: adversaries could collude outside the SAFE network to share chunks stored in their vaults. I feel that part wasn’t fully addressed by @dirvine’s response. On the other hand, that question assumes the data stored in vaults is potentially unencrypted and readable by the vaults (the upcoming RFC is supposed to make vaults unable to read the unpublished chunks they store).
The following comment helped me better understand the motivation behind this RFC:
If all this is largely just to guard against vaults storing risky unencrypted data (I’m guessing illegal content uploaded by adversaries), it seems to me that even a vault could simply encrypt the chunk with the owner’s public key before saving it to disk (i.e. without requiring an elder’s resources). The owner would have to undo one more layer of encryption when they GET their data back, but I don’t think that should be a blocker (if anything, it seems easier and lower-overhead than developing an Owner-Get messaging protocol). A sketch of this alternative follows.
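Here’s a minimal sketch of that vault-side alternative, again with sodiumoxide (the uploaded chunk is treated as opaque bytes, since in SAFE it would typically already be self-encrypted client-side):

```rust
use sodiumoxide::crypto::{box_, sealedbox};

fn main() {
    sodiumoxide::init().unwrap();
    let (owner_pk, owner_sk) = box_::gen_keypair();

    // What the owner uploads; in SAFE this would usually already be
    // self-encrypted ciphertext, treated here as opaque bytes.
    let uploaded = b"client-side ciphertext".to_vec();

    // Vault side, at PUT time: add a layer keyed to the owner before the
    // chunk ever touches disk. No elder involvement required.
    let stored_on_disk = sealedbox::seal(&uploaded, &owner_pk);

    // Owner side, at GET time: peel off the vault's layer first, then carry
    // on with the usual client-side decryption.
    let fetched = sealedbox::open(&stored_on_disk, &owner_pk, &owner_sk).unwrap();
    assert_eq!(fetched, uploaded);
}
```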
This could be made deterministic (e.g. the owner generates a second keypair and includes the generated “private” key as part of the chunk). Multiple vaults could then hold byte-identical encrypted chunks (and know they’re valid). The owner’s software would be able to calculate the exact encrypted result stored on a vault’s drive and send a signed hash of it. That would let a vault replicate the data to other vaults, along with proof from the owner that the encrypted copy is correct. And, further out, a proof-of-storage feature could conceivably build on this. A rough sketch of one way to do it is below.
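This is only my guess at how such a scheme might be wired up (the derivation choices are mine, not from the RFCs), and it’s a slight variant of the idea above: rather than embedding the second private key in the chunk, it derives the keypair from the chunk’s hash, which achieves the same determinism. Every party holding the chunk and the owner’s public key computes the same ciphertext, whose hash the owner can then sign.

```rust
use sodiumoxide::crypto::{box_, hash::sha256, sign};

// Derive the "second keypair" from the chunk's own hash, so any party
// holding the chunk and the owner's public key computes byte-identical
// ciphertext.
fn deterministic_seal(chunk: &[u8], owner_pk: &box_::PublicKey) -> Vec<u8> {
    let seed = box_::Seed::from_slice(&sha256::hash(chunk).0).unwrap();
    let (eph_pk, eph_sk) = box_::keypair_from_seed(&seed);
    // The nonce is derived from the ephemeral public key, which is shipped
    // with the ciphertext, so the owner can reconstruct it on GET.
    let nonce =
        box_::Nonce::from_slice(&sha256::hash(&eph_pk.0).0[..box_::NONCEBYTES]).unwrap();
    let mut out = eph_pk.0.to_vec();
    out.extend(box_::seal(chunk, &nonce, owner_pk, &eph_sk));
    out
}

// Owner side: recover the ephemeral public key and nonce from the stored
// blob, then open with the owner's secret key.
fn deterministic_open(stored: &[u8], owner_sk: &box_::SecretKey) -> Option<Vec<u8>> {
    let eph_pk = box_::PublicKey::from_slice(&stored[..box_::PUBLICKEYBYTES])?;
    let nonce = box_::Nonce::from_slice(&sha256::hash(&eph_pk.0).0[..box_::NONCEBYTES])?;
    box_::open(&stored[box_::PUBLICKEYBYTES..], &nonce, &eph_pk, owner_sk).ok()
}

fn main() {
    sodiumoxide::init().unwrap();
    let (owner_pk, owner_sk) = box_::gen_keypair();
    let (owner_sign_pk, owner_sign_sk) = sign::gen_keypair();

    let chunk = b"unpublished chunk";
    let stored = deterministic_seal(chunk, &owner_pk);
    assert_eq!(deterministic_open(&stored, &owner_sk).unwrap(), chunk.to_vec());

    // The owner signs the hash of the deterministic ciphertext; a vault can
    // pass (stored, sig) to a peer as proof the encrypted copy is correct.
    let stored_hash = sha256::hash(&stored);
    let sig = sign::sign_detached(&stored_hash.0, &owner_sign_sk);
    assert!(sign::verify_detached(&sig, &stored_hash.0, &owner_sign_pk));
}
```

The usual convergent-encryption caveat would apply: anyone who can guess the chunk’s contents can reproduce the ciphertext and confirm the guess, which seems acceptable for data the owner uploaded themselves.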