RFC 55 - Unpublished ImmutableData


From RFC:

There SHALL only be one time update of owner(s), which happens during the creation, and this type is non-transferable.


What about?:


My point was, I see no benefit to restricting access to files at the network level. I don’t think it’s a good enough solution. A good solution is an elder or vault adding a layer of encryption to the file. My argument is that encryption makes these proposed network access restrictions irrelevant, while also solving the problem of vault owners being held liable for naughty data uploaded by adversaries.



I agree, but there is a subtlety here. The access restriction for unpublished data just makes it impossible to share that data publicly on SAFE. This is important for the network fundamental that no published data can ever be removed.


Ah, I can appreciate that motivation. I hadn’t thought of it much.

Still, I think an encrypted chunk can be considered unpublished without restricting GET access. Anyone who wants to decrypt the file must have the private key of the file’s owner.

I don’t think an owner could effectively make Unpublished ImmutableData available to the public. Perhaps to a private circle of friends or associates, but I think that’s about it. The power to read is also the power to delete. If the owner posted their key on a public page, all it would take is one troll to sign a DELETE request and send it to the network.
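The "power to read is also the power to delete" point can be sketched with a toy model. Everything here is a hypothetical illustration, not the SAFE API: `ToyNetwork`, `store`, `get` and `delete` are invented names, and the SHA-256 counter keystream is a stand-in cipher, not production crypto.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream -- illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

class ToyNetwork:
    """Stand-in for the network: unrestricted GET, key-gated DELETE."""

    def __init__(self):
        self.chunks = {}  # address -> (owner_token, ciphertext)

    def store(self, owner_key: bytes, plaintext: bytes) -> str:
        ciphertext = encrypt(owner_key, plaintext)
        address = hashlib.sha256(ciphertext).hexdigest()
        # The network records only a hash of the owner's key.
        self.chunks[address] = (hashlib.sha256(owner_key).digest(), ciphertext)
        return address

    def get(self, address: str) -> bytes:
        # GET is unrestricted: anyone can fetch the ciphertext.
        return self.chunks[address][1]

    def delete(self, address: str, key: bytes) -> bool:
        # Proving knowledge of the read key is enough to delete, so
        # sharing the key to let someone read also lets them delete.
        token, _ = self.chunks[address]
        if hashlib.sha256(key).digest() == token:
            del self.chunks[address]
            return True
        return False
```

So if the owner posts the key publicly, any holder can both `decrypt(key, net.get(addr))` and call `net.delete(addr, key)`, which is exactly the troll scenario above.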


Yes, the problem for the network is that it does not really know what is encrypted; you can alter encryption entropy, etc., to get around any checks.


Agreed. I believe the only way for a vault to be sure is to add an encryption layer itself. Or to take an elder’s word for it that they encrypted the file. I’m looking forward to reading the next RFC when it comes out.


But you can share the private key to allow others to read it.

It would have been good to have a method that allowed multiple owners for a collaborative project, so that members could leave or join without those who leave retaining access once removed.

At this stage, to do collaborative work, the files either have to be published data or private ID keys have to be shared. Either way, anyone leaving the group will retain access to the data. This is a problem.


Possibly. The idea here is that if you ever had access, you always have it, but any new updates are not available to you. It is much simpler if we do that, and possibly even safer, as it is likely true that access once means access forever.


Ah yes, I had not thought of that. I agree it doesn’t make much difference then, since they could have copied all those files.

But because the person leaving had the keys (they were able to access the files), they can access future updates unless the group changes keys when someone leaves. I guess they would, but it is a bit of mucking around; hopefully APPs are made to make this easy.


100%, this is our current approach. We make it easy to have new keys, but we don’t try to re-encrypt older stuff. It seems wrong when you first hear it, but I do think it is more honest.
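That "new keys going forward, no re-encryption of old data" approach could be sketched like this. `GroupKeyring` and its methods are invented names for illustration, not anything from the SAFE codebase:

```python
import os

class GroupKeyring:
    """Toy key-rotation model: one symmetric key per epoch, rotate on departure."""

    def __init__(self):
        self.epoch = 0
        self.keys = {0: os.urandom(32)}  # epoch -> key for uploads in that epoch
        self.members = set()

    def add_member(self, name: str) -> None:
        self.members.add(name)

    def remove_member(self, name: str) -> dict:
        # The leaver keeps every key up to the current epoch (they could
        # have copied the files anyway), but never sees the next key.
        leaver_keys = dict(self.keys)
        self.members.discard(name)
        self.epoch += 1
        self.keys[self.epoch] = os.urandom(32)
        return leaver_keys

    def current_key(self) -> bytes:
        # New uploads are encrypted under this key only.
        return self.keys[self.epoch]
```

Old files stay readable to the leaver, which matches "access once means access forever", while anything uploaded after the rotation is encrypted under a key they never held.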


If you introduce delete for unpublished data, then the APP could allow the group to delete all the files and re-upload the current files with a new key when someone leaves. Yeah, I know it’s an expensive way, and would it really be necessary?


(Side note: maybe a better name would be ‘private and public immutable’, because it is published… just in a private manner.

At least to me, unpublished data sounds like local data somehow…)


Yes, we looked at that, but it is not really private, as the Elders can see it; it just cannot be published on SAFE. We would prefer ‘private’ to mean that only those you choose can see (read) it, if that makes sense. So if you encrypt it and make it unpublishable, then it is private; otherwise it is published (anyone can get it). So ‘private’ is more like ‘encrypted’, and this is a confusion we need to resolve. We define ‘unpublishable’ as able to be deleted/edited, but ‘publishable’ as meaning it can never be edited or deleted.


Protected then maybe?


Just to be clear the default is to self encrypt all data including unpublished data, isn’t it?


Yes, the default API is to self-encrypt all data. The metadata (mutable and append-able data) is not self-encrypted though. What we can do there is limit the capability (to arrays) and also use the unpublishable type for MD, as it mutates. Append-able data can be either published or not, though.


Our conversation earlier left me with the impression, per the Fundamentals, that everything is encrypted by default unless you actively bypass something.

I think we need to be clear if that’s the case or not! Can somebody explain exactly what if anything is not encrypted by default?

Now I’m confused again and I don’t know if what I’ve been telling people is correct.


Data (files) is encrypted by default; it goes through self-encryption. The metadata there is a data map, and that is not encrypted. But if you have a directory, then the data from that dir are encrypted into the directory data (as we treat that as a file). You can bypass the high-level API and call store on an immutable data chunk directly, so a bypass is possible, but the default is encrypted.

Now metadata: this is append-able/mutable data etc. These are not encrypted by default (although apps can enforce that) and should not hold data (i.e. don’t try to store files in append-able data etc.), so you can consider these metadata; they will generally contain pointers to “stuff”. These are not encrypted by default because, by default, they are meant to be readable by consumers, if that makes sense.

The balance is important. Data (files) are encrypted by default, but some may try to bypass this and directly store an immutable data chunk (that is bad), and some may also try to store files (data) in append-able data or mutable data chunks, which is just another bypass. That is kind of like folk storing images in a blockchain: you can, but should not. This is why we need to ensure the metadata types (append-able and mutable) are just pointers to actual data.

This is probably us trying to be as open as possible: data is encrypted by default, but there are currently some ways bad apps could try to bypass that. Later we can possibly make it so that vaults do some more magic to refuse unencrypted data, remove the possibility of bypassing the high-level API, or something else; but again, data is encrypted by default, and it is critical this is the case. Bad apps will appear though, and if they can bypass the API then they might try to bypass the default mechanism. I would hope such apps are seen as bad, at least as a first line of defence.

EDIT: Think of it this way: all data is encrypted, but the network will have pointers to data. The pointers are not encrypted; they are pointers. The thing they point to is data, and that data is by default encrypted. All we are saying here @happybeing is that if we consider any bytes on the network as data, we do not encrypt it all; if we consider files as data, then we do encrypt it all by default. So “100% of files encrypted” is maybe a better statement. Anyway, let us know what you think.
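As a rough sketch of the self-encrypt-plus-data-map shape described above: the file is split into chunks, each chunk's encryption key is derived from the hashes of neighbouring chunks, and the data map (which is not itself encrypted) records what is needed to reassemble the file. This is a heavily simplified stand-in, not MaidSafe's actual `self_encryption` algorithm: real chunks are much larger, the key derivation differs, and the XOR keystream is a toy cipher.

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real chunks are far larger

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter keystream XOR -- illustration only."""
    out = b""
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def self_encrypt(data: bytes):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    pre = [hashlib.sha256(c).digest() for c in chunks]  # pre-encryption hashes
    store = {}      # network side: address -> ciphertext chunk
    data_map = []   # plaintext metadata: NOT encrypted by default
    for i, chunk in enumerate(chunks):
        # Key derived from the neighbouring chunks' hashes, so the data map
        # plus the stored chunks is all you need to reassemble the file.
        key = hashlib.sha256(pre[(i - 1) % len(pre)] + pre[(i + 1) % len(pre)]).digest()
        ciphertext = xor_stream(key, chunk)
        address = hashlib.sha256(ciphertext).hexdigest()
        store[address] = ciphertext
        data_map.append((i, pre[i], address))
    return data_map, store

def self_decrypt(data_map, store) -> bytes:
    pre = [p for _, p, _ in data_map]
    out = b""
    for i, _, address in data_map:
        key = hashlib.sha256(pre[(i - 1) % len(pre)] + pre[(i + 1) % len(pre)]).digest()
        out += xor_stream(key, store[address])
    return out
```

The point of the shape is visible here: the stored chunks are ciphertext, while the data map is readable pointers, exactly the "pointers are not encrypted, the thing they point to is" distinction.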


Of course :wink:

I’m a reasonably technical chap and have stored files in MD but am still unclear and confused by this. I can trust that it’s sensible, but don’t yet properly understand the implications of certain metadata not being encrypted. I think it is potentially in conflict with the ‘everything is encrypted by default’ statement, for anyone who doesn’t fully understand what is not encrypted by default!

Again, I think we need to be clear about this. I think you’ve explained the basis, but it isn’t clear without then digging into the technical details.

Let me say what I think/hope this means without actually digging in:

  • “the datamap is not encrypted” means that for any immutable data, there is a map of pointers to chunks, which are the self-encrypted data of the file. Or, in the case of small files (<3Kb?), the datamap itself will contain the encrypted data of the file. No other metadata is included in the datamap (so no filenames, dates etc).

I’m not sure the last sentence is true, so please correct if not.

  • when using the APIs with private containers (e.g. _publicNames), the keys and their entries are by convention encrypted, and an app can apply this convention to MDs in order to keep their entries private - but encryption of MD entries and values is not done by default, it is an explicit action by the app. So the Web Hosting Manager and anything creating or updating a private container will decrypt/encrypt MD entries explicitly (i.e. keys and/or values).

  • when using the APIs with public containers (e.g. _public), including MDs using the NFS emulation, the keys and values of the MD are not encrypted by default. This means that anyone who has access to such an MD will be able to read the keys and their values. If the value points to an Immutable Data file though, this will be encrypted by default. This means that someone can read the entire file and directory structure of a SAFE NFS style file system if they have the address of the root MD, even if they don’t have permission to read the files. That’s contentious if so, so I’m stating it in case it is wrong and needs to be corrected.
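The convention in the last two bullets could be sketched like this. `MutableData`, `insert`, `lookup` and the SHA-256 keystream are hypothetical illustrations of the idea, not the safe client libs API:

```python
import hashlib

def seal(secret: bytes, data: bytes) -> bytes:
    """Toy deterministic XOR-keystream encryption -- illustration only.
    Applying it twice with the same secret returns the original bytes."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class MutableData:
    """Toy MD container: entries encrypted only if the app supplies a secret."""

    def __init__(self, secret: bytes = None):
        # secret is None for a public container: entries stay plaintext.
        self.secret = secret
        self.entries = {}

    def insert(self, key: bytes, value: bytes) -> None:
        if self.secret:
            # Encryption is an explicit app action, per the private-container
            # convention; it is not done by the container itself by default.
            key = seal(self.secret + b"k", key)
            value = seal(self.secret + b"v", value)
        self.entries[key] = value

    def lookup(self, key: bytes) -> bytes:
        if self.secret:
            return seal(self.secret + b"v", self.entries[seal(self.secret + b"k", key)])
        return self.entries[key]
```

A "public" container leaves keys and values readable by anyone who holds the MD's address, while a "private" one stores only ciphertext for both, and only an app holding the secret can look entries up.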

I think explanations on this level will give clarity for users and developers to understand this area, so it may be worth fleshing the above out into a reference as well as correcting anything I’ve misrepresented, and adding what I’ve missed.


Yes, this is correct.

Yes, we do need to improve. Some of the confusion is caused by the network library structure. We can boil it down to Vaults on one side and safe client libs on the other. The latter spreads upwards to many libraries/apps/APIs etc., as we all know.

So safe client libs can enforce a certain API, and this is all clear. We sometimes look at this part and ask what the API enforces, and in many ways it is valuable to do that.

However, the lowest-level API on the network is the Vaults and what RPCs they will accept. What I mean is, people could just ignore safe client libs altogether and create an app that talks directly to vaults (no RDF etc.). This is what I am focussed on a lot, and it does confuse (I accept that). I think what you have said is 100% on target, but we need to get to the point where those vault RPCs can be much more strict about the data they allow to be stored. That would certainly be a bad app, but it could be done, and even though normal users would not use such an app, bad folk would use it to target the network.

I think as we launch we will see much more of safe client libs ending up in vault code. By that I mean the things we wish to enforce will happen not in safe client libs but in vaults; safe client libs, or any alternative, would then be forced to use the API as it is intended. This part will be really nice to look at, especially as we have so many folks to look at it. Internally I am pushing everyone very hard to get all features out initially, then to work through much of this, plus using secure enclaves for some Elder code to prevent some other attacks, and a few other bits; but when we have the network out and running, all of this is much simpler to attack.

Sorry for the large text blob; it’s just a wee dump of thoughts around this subject to give more info to folk, though probably to confuse a wider audience as well :slight_smile: I am not sure; in any case, it should be useful info.