Web Apps and access control

@ustulation Aside: for testing SAFE Drive, is it currently (alpha 2) possible to access any private containers? I’ve tried the standard names but they didn’t work so I’ve assumed they are not yet created in the account.

3 Likes

You need to leak the datamap to leak the file. A DataMap is just a structure with records. You can further put it as unencrypted ImmutableData into the network and put the name of that ImmutableData into a private container thinking you are safe, but as you can guess, you aren’t. Sure, no one can read your private container to get the name of the ImmutableData representing your datamap. But anyone could come across that datamap by chance anyway and then use it to read the file.

So if you’re storing the datamap inline in a private container, you don’t need to encrypt it separately, as it will be encrypted along with everything else in that private container as usual. If you’re storing only the name of the datamap in a container, make sure the actual datamap itself is encrypted with something.

To leak the files, a bad app needs to make known either the keys of the private container storing the datamap, or the keys used to encrypt the datamap itself. All in all: protect the datamap to protect the files.
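
To make that concrete, here is a minimal sketch of sealing a datamap before storing only its name, using Node’s built-in `crypto` module; the `DataMap` shape and helper names are invented for illustration and are not the real safe_client_libs API:

```typescript
// Hypothetical sketch: seal a datamap with AES-256-GCM before putting
// it on the network as public ImmutableData, storing only its name.
// Uses only Node's built-in crypto; the DataMap layout is invented.
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

interface DataMap {
  chunks: { hash: string; size: number }[]; // illustrative record layout
}

function sealDataMap(map: DataMap, key: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(JSON.stringify(map), "utf8"), cipher.final()]);
  // iv + auth tag + ciphertext travel together; the key does not.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function openDataMap(sealed: Buffer, key: Buffer): DataMap {
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.subarray(0, 12));
  decipher.setAuthTag(sealed.subarray(12, 28));
  const pt = Buffer.concat([decipher.update(sealed.subarray(28)), decipher.final()]);
  return JSON.parse(pt.toString("utf8"));
}

// The key lives inline in your private container, so it is covered by
// the container's usual encryption; the sealed blob can sit anywhere.
const key = randomBytes(32);
const sealed = sealDataMap({ chunks: [{ hash: "abc123", size: 1024 }] }, key);
console.log(openDataMap(sealed, key)); // readable only with the key
```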

2 Likes

Thanks.

This is the issue I’m trying to establish. So it is possible for a malicious app, once granted access to a private container, to leak the files through a low-bandwidth channel, because it can obtain the datamap decryption key as well as the datamap address? I.e. it doesn’t have to copy even the datamap, just a couple of strings.

Just to be absolutely clear! :slight_smile:

1 Like

Just like my idea of a malicious APP that is also a great image file viewer/manipulator. Tricking people into using that APP means the APP gets access to those images, private or not. And then it leaks the datamaps by some method.

1 Like

Yes it can. If it can read the datamap because it could read the parent container, that’s game over. If you want to share read access to the parent container but not, say, an inline datamap there (which is just one of the contents of that container), then further encrypt the datamap before putting it into the container and put the keys elsewhere. It’s just a matter of mix and match really. Since MutableData is currently just a map of key-value pairs, you can have different encryption schemes for different records and share only the ones that you need to. That way a container is not uniformly encrypted with just one key; instead, every record is encrypted with a different key. This, for example, is the strategy we use for the access container in safe_client_libs for authenticator-app interaction (where an app can only read the record pertaining to it, although the same access container is shared among all the apps by the authenticator).
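
A sketch of that per-record idea, with a plain key-value map standing in for the container (the names are illustrative, not the actual MutableData or access-container API):

```typescript
// Sketch: per-record encryption over a key-value container, in the
// spirit of the access-container strategy described above. The plain
// Map stands in for MutableData; names are not the real API.
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

function encryptRecord(value: string, key: Buffer): Buffer {
  const iv = randomBytes(12);
  const c = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([c.update(value, "utf8"), c.final()]);
  return Buffer.concat([iv, c.getAuthTag(), ct]);
}

function decryptRecord(sealed: Buffer, key: Buffer): string {
  const d = createDecipheriv("aes-256-gcm", key, sealed.subarray(0, 12));
  d.setAuthTag(sealed.subarray(12, 28));
  return Buffer.concat([d.update(sealed.subarray(28)), d.final()]).toString("utf8");
}

// Each record gets its own key, so sharing the container does not
// share its contents: each app is given only the key for its record.
const keys = { appA: randomBytes(32), appB: randomBytes(32) };
const container = new Map<string, Buffer>([
  ["appA", encryptRecord("appA's private container locations", keys.appA)],
  ["appB", encryptRecord("appB's private container locations", keys.appB)],
]);

// appA can read its own record but sees only ciphertext for appB's.
console.log(decryptRecord(container.get("appA")!, keys.appA));
```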

6 Likes

The posts above also give an answer to a question I asked a while back:

But for this security to work, you have to check really carefully which access you give to which app in the Safe Browser:
[screenshot: Safe Browser app permissions dialog]

Also, in Safe Browser under Safesites → ‘WebIdManager Permissions’, only ‘_public(Names)’ and ‘App’s own containers’ are currently available for Safe sites.

1 Like

@ustulation did you miss this?

2 Likes

Ah sorry - I think an app can ask the authenticator for access to any private dir, and if granted by the user it’ll have it. The containers are created right when you create your account, and if that fails midway for some reason, then the next time you log in, operation recovery will continue where it left off and complete it.

@nbaksalyar/@Krishna could maybe give better details I suppose

4 Likes

I’ve just tested the default private container ‘_documents’ successfully: compare safe://to-do and safe://docs.to-do.
I’ve changed the two ‘_public’ instances to ‘_documents’, as well as the appInfo id and name, compared to here. I have no doubt the other default private containers will work the same way, but here on hub.safedev.org I see the container ‘_pictures’ mentioned, which I don’t find in the source code of safe_client_libs. That is something to check @dgeddes: remove _pictures from the website, or add it to the code?

5 Likes

Please make sure this is not what’s happening to you: Container Access - any recent DOM API changes? - #5 by bochaco - Support - Safe Dev Forum. We still need to implement the different types of pop-ups we used to have in our Beaker version.

As per the _pictures, I think @draw is right, @nbaksalyar?

Edit: just reported it here to follow up @draw : https://github.com/maidsafe/safe_client_libs/issues/680

4 Likes

Thanks @draw - thine will be done!
David

3 Likes

If I’m following this right, it seems quite an interesting issue.

One of the reasons given for keeping data forever on the network is that it formalises the fact that on the clearnet someone can keep it forever once it’s out in the open anyway. Similarly, it seems that once we give an app access to our private data we only have it on trust (unless we study the code) that the app won’t siphon that data off. Thus the advance of SAFE is essentially that the default for an app is to not need to own anyone else’s data. Is that right?

It strikes me also as slightly similar to trying to prevent the spread of digital recordings. Spotify (for example) can code it so that you can download a song but only play it through their player, but there is no way they can stop you taking the audio output and re-recording it.

I suppose the benefit of data is that it doesn’t necessarily need to be consumed in human readable form like the music example above, which perhaps gives the possibility of some kind of more secure ‘blind’ interface in the future, such as homomorphic encryption.

Not sure if that reading of the issue is right, but just thought I’d post to try and clarify the issue for myself as much as anything, and maybe keep the discussion going on!

4 Likes

The Enigma project enables ‘secret contracts’, which allow smart contracts to interact with user data without actually knowing what the data is. I think/hope that some day SAFE will end up adding this sort of capability, mashed together with SOLID in some way.

1 Like

Disclaimer: I am a fan of the general direction of the Android permission model, not trusting local apps by default, etc.

I can see where you want to go with this idea, but I can’t really imagine how it could be implemented. The app itself is a black box, once it has access to, say, your personal photo library, it can send the files directly, or process (in the last example, encrypt) them and then attempt to send that.

What the parent process can possibly see is “App XY is trying to access photos” and “App XY is trying to send this blurb of data to this address”. We currently can’t reliably analyze what the app does, or what the blurb of data it sends contains.

Thus, the only thing that can really be implemented is a general permission to send some data to another place. However, in this case user fatigue and decreasing attention to permission popups would be even worse than if the hurdle were at the point of reading data, since the vast majority of apps need to send data somewhere in order to function. But with a little common sense, even the average user can figure out that no, the note-taking app really doesn’t need access to their contacts.

To make such a permission model feasible, we’d need much more knowledge of and control over what an app does with the data it has access to. Functionality could be implemented that is able to “track” what data an app accesses and what it does with it: sort of like a flag that sticks with the data while it is being modified (while not undergoing any modifications itself), and stays visible so we can detect it in outgoing traffic. To be able to do that, we need to be in control of all frameworks, APIs, etc. that the apps have access to - basically the entire ecosystem. Because if the app manages to get the data into any framework that isn’t prepared for these flags and effectively removes them, it all falls to pieces.
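
A toy version of such a flag might look like the following; to be clear, nothing like this exists on SAFE today, and as noted above it only works if every framework the data passes through propagates the flags:

```typescript
// Purely hypothetical: a taint flag that travels with data through
// transformations, so outgoing traffic can be checked for sensitive
// origins. No such mechanism exists on SAFE today.
class Tainted<T> {
  constructor(public value: T, public origins: ReadonlySet<string>) {}

  // Any transformation keeps the accumulated origin flags.
  map<U>(fn: (v: T) => U): Tainted<U> {
    return new Tainted(fn(this.value), this.origins);
  }

  // Combining two values merges their flags.
  static combine<A, B, C>(a: Tainted<A>, b: Tainted<B>, fn: (x: A, y: B) => C): Tainted<C> {
    return new Tainted(fn(a.value, b.value), new Set([...a.origins, ...b.origins]));
  }
}

// The outgoing-traffic hook refuses payloads derived from private data.
function guardOutgoingWrite(data: Tainted<Buffer>): void {
  if (data.origins.has("private:photos")) {
    throw new Error(`blocked: payload derived from ${[...data.origins].join(", ")}`);
  }
  // ...otherwise hand data.value to the network layer...
}

const photo = new Tainted(Buffer.from("raw pixels"), new Set(["private:photos"]));
const resized = photo.map((b) => Buffer.concat([b, Buffer.from(" (resized)")]));
guardOutgoingWrite(resized); // throws: the flag survived the transformation
```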

Then there’s also the fact that apps could try to maliciously cause user fatigue. If a lot of popular apps start requesting permission to send lots of things (they might not need) very often, and possibly deny all functionality if not all permissions are granted, a lot of users would opt for the functionality of the app over their privacy. Eventually, they’d just click “OK” on every window that pops up.

Going by a “permission to access” model, at least denial of functionality could be prevented by simply feeding apps empty lists/fake data when they get something denied. Then the app wouldn’t be able to tell whether what it sees is real or fake data and couldn’t force the user to grant any permissions.
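
A sketch of that feed-fake-data idea, with invented names (this is not an existing SAFE permission API):

```typescript
// Sketch of "deny by faking": a denied permission yields plausible
// empty data instead of an error, so the app cannot tell it was
// refused and cannot hold functionality hostage. Names are invented;
// this is not an existing SAFE API.
type Permission = "contacts.read" | "photos.read";

interface Contact { name: string; id: string }

class GatedStore {
  constructor(private granted: Set<Permission>) {}

  readContacts(): Contact[] {
    // A denial looks exactly like a user who has no contacts.
    return this.granted.has("contacts.read") ? this.realContacts() : [];
  }

  private realContacts(): Contact[] {
    return [{ name: "Alice", id: "safe://alice" }];
  }
}

const store = new GatedStore(new Set<Permission>(["photos.read"]));
console.log(store.readContacts()); // []: the note-taking app is none the wiser
```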

2 Likes

(I realise you might know this already, but others might not)

Only if it has permission to write. And hopefully we can even quantify what sort of things it can write to, for example only write to objects you still own and that are doubly encrypted with your public key.

The APP runs on your device so the APP writer can only get access if the APP writes something they can access and read.

The problem you mention is of course possible if it’s reasonable for the APP to write data not owned by you. But in many cases the APP doesn’t need this, so some education is definitely needed so that people are more aware when an APP asks to write 3rd-party data not owned by them.

Very true, and it has been on my mind too that it could end up as a constant stream of permission prompts to answer.

It is going to be extremely important that from the start we have APP writers who write APPs that do not ask for any permissions they don’t need (unlike Android apps), so that there is always a good alternative to the personally invasive ones. One of the reasons Android apps ask for so many unneeded permissions is advertising revenue and data selling, in order to earn from the app. At least with SAFE there will be APP developer rewards, and the good APPs won’t need that advertising or data-selling revenue, and hopefully not those permissions either. The bad apps might continue to try to earn the Android way, but with some education and good app stores people should have a clear and obvious choice.

Maybe we could create a sandbox for these APPs, so that while they think they have all the permissions they asked for, they are really running against a mock network and only have whatever access to the real network that we want them to have. So give them all permissions, but maybe only allow, say, reading of your data.

Also this sandbox would be a good way to evaluate what exactly they are trying to reveal to the world.
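
As a sketch of how such a sandbox might sit between the APP and the network, with the client interface and names invented for illustration:

```typescript
// Sketch: the APP is handed a client that looks fully privileged but
// records every operation and lets writes hit only a mock store.
// The interface and names are invented for illustration.
interface NetworkClient {
  read(address: string): Buffer | undefined;
  write(address: string, data: Buffer): void;
  sendMessage(recipient: string, data: Buffer): void;
}

class SandboxClient implements NetworkClient {
  readonly log: string[] = [];
  private mockStore = new Map<string, Buffer>();

  constructor(private real: NetworkClient, private allowRealReads: boolean) {}

  read(address: string): Buffer | undefined {
    this.log.push(`read ${address}`);
    // Reads may pass through to the real network if permitted...
    return this.allowRealReads ? this.real.read(address) : this.mockStore.get(address);
  }

  write(address: string, data: Buffer): void {
    // ...but writes and messages only ever reach the mock, while the
    // APP believes they succeeded.
    this.log.push(`write ${address} (${data.length} bytes)`);
    this.mockStore.set(address, data);
  }

  sendMessage(recipient: string, data: Buffer): void {
    this.log.push(`message to ${recipient} (${data.length} bytes)`);
  }
}
// After a trial run, the log shows exactly what the APP tried to
// reveal to the world.
```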

4 Likes

Isn’t the problem we are discussing more about what the app writes and where we can intercept in a meaningful way, and less about where it writes to? The only scenario that is a problem for us is when the app tries to write data somewhere that more people than just you have access to.

The problem with this is that it’s very hard to mock something the app developer is in control of. If the app expects a cryptographically signed response of some sort, there’s no way to mock that.

Sandbox in general - yes. IMO anything that isn’t a critical system component should run in a sandbox. But measuring what is revealed isn’t possible without something similar to the aforementioned flags - otherwise, the app can encrypt the data it is trying to send and we no longer have any way to tell what the blurb of data contains.

1 Like

You seem to have answered it for yourself. If you can control where it writes to, and if encryption with your public key is used, then you do control who can read the data.

Also, for instance, if the APP writes a private file for you, then as long as the datamap isn’t given to a 3rd party, the file stays private to you.

So if you can control where and how it writes data, then there is potential to prevent it leaking data. Say we have a scenario where the permissions allow it to write a private file but nothing else. Then, by allowing private file writes (including writing the filename to your personal encrypted container) and no other writes, it cannot leak the datamap: it cannot write the datamap address to MDs (no permission) or send a message (no permission). This would be valid for word processors, image processors, etc.

That is why it runs in a sandbox: the sandbox controls any external access, and the APP does not know. Yes, I realise that with some tricky tests it may be able to tell, but in general the APP will not know. Nothing is perfect, but this is pretty close.

This could be seen as a way of knowing it is a bad app, depending on the source of that response.

But where is it going to get one? Safe doesn’t typically expect stand-alone servers to respond with anything in real time, so this is automatically a red-flag-to-a-bull event. Only some very specialised APPs would ever need this, or could even do it, by having a stand-alone server/machine waiting to respond, and unless the user expects this it should mark the APP as bad.

Another issue is that the APP has to be able to decode this message, so the keys needed are within the APP (or read by the APP). Thus, when someone flags this APP as bad, some geek is likely to reverse-engineer out the key and make it known, so that all APP stores drop that APP.

Writes to MDs not owned by you, and messages sent, are the two major ways that data can be leaked. Writing to an MD owned by you will not leak data unless it is either found by chance or was previously known. Checking for unencrypted writes to your MDs can show potential leaks of your data. And the easy way to see if something is encrypted with your public key is for the sandbox to try to decrypt the writes using your keys.

Obviously, if data is written with keys other than your keys, then it is potentially leaking.

So the sandbox could be run in different modes (different sets of actual permissions allowed) and could detect potential attempts at data leaking (or a badly written APP).
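
A sketch of that audit step, assuming for illustration a symmetric user key and AES-GCM (the real network’s scheme may differ): the sandbox flags any write it cannot decrypt with the user’s key, plus any write to an MD the user doesn’t own.

```typescript
// Sketch of the decrypt-check above: flag any outgoing write that is
// to a foreign MD, or that the user's own key cannot decrypt. AES-GCM
// and a symmetric user key are stand-ins for the network's real scheme.
import { randomBytes, createDecipheriv } from "crypto";

const userKey = randomBytes(32); // illustrative user key

function decryptsWithUserKey(sealed: Buffer): boolean {
  try {
    const d = createDecipheriv("aes-256-gcm", userKey, sealed.subarray(0, 12));
    d.setAuthTag(sealed.subarray(12, 28));
    d.update(sealed.subarray(28));
    d.final(); // throws if the payload was not sealed with userKey
    return true;
  } catch {
    return false;
  }
}

function auditWrite(address: string, payload: Buffer, ownedByUser: boolean): void {
  if (!ownedByUser) {
    console.warn(`LEAK? write to foreign MD at ${address}`);
  } else if (!decryptsWithUserKey(payload)) {
    console.warn(`LEAK? write to ${address} not readable with the user's key`);
  }
}

auditWrite("safe://some-md", randomBytes(64), true); // warns: not sealed with userKey
```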

2 Likes

This conversation reminds me why Android went with a JVM, using it to define the sandbox and an ecosystem for apps to dwell in.

Maybe this link will be useful to the debate: Security tips  |  Android Developers

Also, Ubuntu has Apparmor which attempts to do something similar with native code: AppArmor - Ubuntu Wiki

However, considering that SAFENetwork apps may run in a multitude of different environments, it is going to be difficult to achieve either. There could be a chosen platform or framework required in order to be a certified app of sorts, but it would need to run on multiple OSes. Given resources, picking an existing sandbox would probably be the only realistic option too. All of which points towards the same conclusion Google reached with Android: adopt the JVM, which is already designed to run on any established hardware or OS.

The JVM doesn’t mean just Java these days either; a plethora of languages are supported.

Obviously, native apps can be written which would bypass said sandbox, but the user would forgo the security and flexibility it would bring.

I would say though that even trying to create a JVM based SAFENetwork framework would be a large undertaking. Maybe people are thinking super long term here, but I would reiterate a need to be realistic and pragmatic over what can be achieved.

Of course, just having JavaScript web apps in the browser gives you a sandbox which web apps can work within. Perhaps limiting the scope of trusted apps to those which can run in the browser is a good initial model, and one which can be supported without a huge lift by SAFENetwork developers.

4 Likes

Regulating the app so it can’t write to any place other than your private files will prevent the app from leaking data, yes. But it’s an all-or-nothing permission. So if the app is acting/writing on more than just your own files (which is pretty common), you’ll likely want some more fine-grained control over what kind of data the app is able to write somewhere else. Say you want to share your photos with someone else, and allow the app to do that, but for whatever reason it also tries to share your contacts list, and that is something you’ll want to prevent. Because it needs it to write the photos, the app already has permission to write somewhere someone else can see. Now, how do you make it write only the private data you want it to write to that place?

Well, that’s the beauty of asymmetric cryptography: digging through code won’t get you the private keys. Or rather the signing keys, if we’re talking response validation.
Even without standalone servers, any app that has online interaction between users could make this possible. Make the client app for every user generate a private/public key pair, share the public key, and have other clients verify whatever data is being shared with the public key. Voila, you’ve got an app that expects signed responses you can’t mock.
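
A sketch of why that defeats mocking, using Node’s built-in Ed25519 support (the message shape is invented): without the sender’s private key, a sandbox cannot forge a response that verifies.

```typescript
// Sketch of why signed responses defeat mocking: only the holder of
// the private key can produce a signature that verifies, so a sandbox
// cannot forge peer responses. Uses Node's built-in Ed25519 support;
// the message shape is invented.
import { generateKeyPairSync, sign, verify } from "crypto";

// Each client generates its own key pair and publishes the public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const message = Buffer.from(JSON.stringify({ from: "alice", body: "shared album" }));
const signature = sign(null, message, privateKey); // Ed25519 takes a null algorithm

// The receiving client verifies with the sender's public key.
console.log(verify(null, message, publicKey, signature)); // true

// A sandbox that fabricates a response has no private key to sign with:
const forged = Buffer.from(JSON.stringify({ from: "alice", body: "forged" }));
console.log(verify(null, forged, publicKey, signature)); // false: rejected
```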

I also think we shouldn’t rely on manual moderation for security: when the number of active developers and available apps grows, it is very possible that there would not be enough resources to accomplish such a thing.

Agreed, but it is very hard to modify something as essential as a permission system later on. When there are a lot of apps that already expect to automatically have access to everything, we can’t introduce a tighter privacy model without breaking those apps. So if this is to work, there needs to be something from the start, or we’ll be dealing with incompatible/legacy apps forever after.

If a proper framework is too much work for now, there could be a mockup framework that apps are forced to use: they’d have to use the framework to access stuff, but the first version of the framework wouldn’t regulate anything and would have no real backend. Then the backend of the framework could be added piece by piece so that the user actually has to grant permissions, and no app would break because they’d already be using the framework’s interfaces.
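
A sketch of that interfaces-first approach (the broker interface is invented for illustration): apps code against it from day one, and enforcement is swapped in behind it later without breaking callers.

```typescript
// Sketch of "interfaces first, enforcement later": apps code against a
// permission broker from day one; v1 allows everything, and a strict
// backend is swapped in later without breaking callers. The broker
// interface is invented for illustration.
interface PermissionBroker {
  request(permission: string): Promise<boolean>;
}

// Version 1: no real backend, everything is granted.
class PermissiveBroker implements PermissionBroker {
  async request(_permission: string): Promise<boolean> {
    return true;
  }
}

// Later: an implementation that actually asks the user, behind the
// exact same interface.
class PromptingBroker implements PermissionBroker {
  constructor(private askUser: (p: string) => Promise<boolean>) {}
  request(permission: string): Promise<boolean> {
    return this.askUser(permission);
  }
}

// App code is identical under both brokers.
async function openContacts(broker: PermissionBroker): Promise<void> {
  if (await broker.request("contacts.read")) {
    // ...read contacts...
  }
}

openContacts(new PermissiveBroker());
```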

Part of the wider security model for apps will require the use of a repository or SafeStore of applications that have been vetted by the community. This requires that the apps be open source so that the code can be audited. MaidSafe has led the way on this through their use of an open source model for the core network libraries.

Debian has struggled with their free, contrib, and non-free repositories for years. From a user perspective it is a bit of a mess. Given the nature of Safe, is there any reason for an UnsafeStore? I don’t think so. I’m not sure how best to “lock the apps open”, though, other than being first to market with the SafeStore app.