How to Model the Future of App Permissions

I briefly brought this up in the conversation about the security of the SAFE launcher, but since no one has picked it up yet, I’d like to start the discussion on it here. To quote what I was talking about (highlights and formatting added here for clarity):

To give an example: the Facebook app asking to read my SMS might simply be so it can do the one-time phone-number authentication (I am fine allowing this), or so it can continuously eavesdrop on all my communication with all those other people the app isn’t supposed to know about, IMHO. The same goes for the question of whether the app can use the microphone: sure, when I press the button to record a message – but to always listen to everything in the background all day long and send it to the server?

General, feature-based permissions are broken by design, as they don’t give the user enough privacy choices.

So, I continued:

As it stands today, our privacy is pretty well protected in SAFENet: it is essentially a filesystem known only to the user, an app is sandboxed to write only into the user’s area, and it takes quite some finesse to get any information out of that context (you could, however – but that’s not the point here). In the whole concept of a serverless network this is great; however, there will be occasions where you want apps to communicate directly with another “instance” of the app somewhere else. Just take the simple example of telling a “Google”-like indexing mechanism that your blog was just updated and that you’d like them to index it again.

Traditionally this would have happened through a cleverly formatted GET or POST request to a specific endpoint of a specific server, which – in turn – would start a program to do things. With the entire concept of moving away from a server “instance”, we are moving closer to an “actor” in the system (not a persona within SAFENet but more like an “account” on SAFENet). This old model won’t work. Correct me if I am wrong, but the closest I can see this being possible soon is once we have “messaging”, where an app could listen to messages coming into one’s inbox (running on your system) and react accordingly – the messaging RFC is in the works here.

But before we start building the system now, we should stop for a moment and think about its privacy and security implications, and how we can model it in a way that doesn’t reintroduce the old problems all over again. The way the launcher currently acts, you need to allow apps specific feature settings. But if we implement the messaging RFC the same way, as a feature an app might simply require in order to work at all, we have our old tracking problem again, but worse: not only would any authenticated app running against the launcher be able to send arbitrary information to any party (or third party for tracking), it would even have a clearly identifiable user to pin this to (as messages are always signed) – YIKES, what a privacy nightmare.

And even if we decide today not to recommend that users allow that feature on arbitrary apps, apps could easily refuse to run if they don’t have it enabled, effectively enforcing bad citizenship (and as we know from experience: tracking and ad companies WILL DO WHATEVER IT TAKES), if it were possible to do so.

Instead, I’d like to start discussing how we could build this system differently, to prevent it from ever being (ab)used without the user’s full consent and understanding of what is going on. And I think now is a good time to start this discussion. I am looking forward to it!


One option, which might be obvious, would be to give the user who is granting the permission(s) more options in permission control. Two examples off the top of my head are rate of feature use and permission lease time. Each time the app makes a request to use a feature that violates the previous permission “contract” between itself and the user, the launcher would deny that and similar subsequent requests until permission is granted again. This would have to be baked into the launcher, or be another app sitting between the app the user wants to use and the launcher, should the need arise.

For these permission requests or “contracts” there could also be a process by which the app (and the app dev) tries to convince the user to provide the app with whatever permission it needs to operate, at the time of the permission request. With the combination of these two ideas, the user would have control over what an app can do (in a broad feature sense), for how long, at what rate, etc., until the need for more permissions arises.
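A rough sketch of how such a launcher-side “contract” with a rate limit and a lease time could look. Everything here (class names, the feature name, the numbers) is invented for illustration; nothing like this exists in the launcher today:

```python
import time

class PermissionDenied(Exception):
    pass

class PermissionContract:
    """Hypothetical launcher-side 'contract': a permission is granted
    for a limited lease time and at a limited rate of use."""

    def __init__(self, feature, max_calls_per_hour, lease_seconds):
        self.feature = feature
        self.max_calls_per_hour = max_calls_per_hour
        self.expires_at = time.time() + lease_seconds
        self.call_times = []  # timestamps of recent uses

    def authorize(self):
        now = time.time()
        if now > self.expires_at:
            raise PermissionDenied(f"lease for '{self.feature}' expired; ask the user again")
        # drop uses older than one hour, then enforce the rate limit
        self.call_times = [t for t in self.call_times if now - t < 3600]
        if len(self.call_times) >= self.max_calls_per_hour:
            raise PermissionDenied(f"rate limit for '{self.feature}' exceeded")
        self.call_times.append(now)

# the app asked for "send_message" at most 5 times/hour, leased for a day
contract = PermissionContract("send_message", max_calls_per_hour=5, lease_seconds=86400)
contract.authorize()  # first use: allowed
```

Once the app exceeds the agreed rate or the lease runs out, the launcher would deny the call and similar subsequent requests until the user grants the contract again.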

EDIT: If there were an “app store” of sorts, there could be app-category-based default whitelisting and blacklisting of permissions, similar to what SDroid does on Android.


Here are a few that I think shouldn’t be too hard to implement and would give us a great amount of control.

  • I want to be able to limit an app’s access to a particular subfolder of my SAFE repository.
  • I want to be able to select, on a per-permission basis (read/write, public/private, etc.), what an app is allowed to do.
  • I want to be able to grant a Safecoin budget for an app to use.
  • I want to be able to blacklist addresses, to stop any app from sending messages/Safecoins to them.
  • I want to be able to log every call an app makes on SAFE.
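The wishes above could be expressed together as a per-app permission manifest that the launcher enforces. This is only a sketch under assumed semantics; all field names and the example address are made up:

```python
# Hypothetical per-app permission manifest covering the list above:
# a subfolder scope, read/write/public flags, a Safecoin budget,
# a blacklist of addresses, and a call-log switch.
manifest = {
    "app": "example-blog-editor",
    "storage": {"root": "/apps/blog", "read": True, "write": True, "public": False},
    "safecoin_budget": 10,                    # max coins the app may spend
    "blacklisted_addresses": ["xor:tracker"], # never message/pay these
    "log_all_calls": True,
}

def check_write(manifest, path):
    """Launcher-side check: only allow writes under the granted subfolder."""
    scope = manifest["storage"]
    return scope["write"] and path.startswith(scope["root"] + "/")

def check_send(manifest, address):
    """Launcher-side check: refuse sends to blacklisted addresses."""
    return address not in manifest["blacklisted_addresses"]
```

The launcher would consult checks like these on every API call from the app, and append each call to a log if `log_all_calls` is set.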

Great topic, with important consequences. Great input so far on this thread.

Displaying permissions to users is a big issue. Broad wording doesn’t capture the effect (e.g. “contacts permission” – what does that allow?); precise wording becomes verbose or hard to understand. Some middle ground, perhaps? Plain-language explanations of the real effect of a permission are a step forward, but they must accurately reflect the code the permission covers. Not always easy (or possible) to verify! And that’s before we get into the problem of linguistic ambiguity. Maybe the clearest way to handle this is to have the permission dialog explain how disallowing the permission will degrade the functionality of the app: “Disallowing the camera permission means QR codes cannot be scanned.”

Maybe a site to outline the details of how apps use their permissions. Still display a ‘basic’ label on the permission dialog when the user installs the app, but the user can always go to the ‘app permission details’ page if they have doubts. Who maintains this? Who enforces it? Who verifies it? Big questions…

‘Request for permission’ tries to solve the problem that ‘users must be able to judge the effect of software they install’. Current app permission systems, such as Android’s, do not solve that problem, since the effect of allowing a permission is totally unclear. I think ‘perfectly clear permissions’ are unattainable, but we can still do better than existing solutions.

At a more technical level, I think there’s a lot to be said for Bitcoin’s P2SH multisig approach. It allows complex permissions to be created by users (i.e. a transaction script) that indicate when permission has been given for an action to be performed (i.e. a transaction stored in the blockchain). I reckon that eventually users of the SAFE network will be able to set ‘conditionals’ on app permissions using some basic scripting language. It’s stupidly high overhead to do that, but it should also give complete control over permissions. This may give rise to a market for ‘managed permission scripts’ to relieve some of that overhead, a bit like how we currently choose a block list for ad blockers. I don’t like the ‘trusted’ solution of lists, but the alternative of either maintaining my own or not having one is much worse. At least I have a choice, am not forced, and can switch easily.
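To make the ‘conditionals’ idea concrete, here is a toy evaluator in the spirit of (but much simpler than) a P2SH-style script: the grant is a small user-authored script of conditions, all of which must pass before the launcher performs the action. The condition names and request fields are invented for illustration:

```python
# Named conditions a user-authored permission script can reference.
# Each takes the pending request (a dict) plus optional arguments.
CONDITIONS = {
    "recipient_in_addressbook": lambda req: req["to"] in req["addressbook"],
    "amount_below": lambda req, limit: req.get("amount", 0) < limit,
}

def evaluate(script, request):
    """A script is a list of (condition_name, *args) tuples; this is a
    pure AND script -- every condition must hold for the grant to apply."""
    for name, *args in script:
        if not CONDITIONS[name](request, *args):
            return False
    return True

# 'only message people in my addressbook, and never spend 100+ coins'
user_script = [("recipient_in_addressbook",), ("amount_below", 100)]
request = {"to": "alice", "addressbook": {"alice", "bob"}, "amount": 5}
evaluate(user_script, request)  # → True: the launcher may proceed
```

A ‘managed permission scripts’ market, as suggested above, would then amount to subscribing to someone else’s curated `user_script` lists instead of writing your own.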

In summary, I reckon permissions:

  • should describe their effect on the user
  • should be ‘appropriately’ descriptive
  • should be customizable by users

Just a simple point to consider.

We need to avoid the situation we have with firewalls – the firewall continually asking if it’s OK to allow a communication – if we want to restrict most programs.

For instance, the xyz application wants to check for updates, but if one allows it internet access, how does the user know the application is not also communicating with other hosts, giving away valuable info? If one denies access, they lose the update notifier, even when the user tells the app to check for updates. If one allows access, the risk of unchecked internet use remains. And if one gets the firewall to ask every time, there is the risk of being asked very often when a number of programs are running with the “ask” setting.

That’s why we might give the user the ability to attach a rate limit, lease time, etc. to a certain permission. This would leave it up to the devs of any app to convince the user to grant the app that permission under a certain rate, lease time, etc. Devs would test their app to find the optimal permission settings for it to function, and request those by default.

Also, I think that rather than the permission being, say, “access to SMS”, it should be permission to “verify xyz”, so that rather than granting blanket permissions we are simply giving permission for a single piece of functionality. Or permission to “check for updates”.

This of course requires a more intelligent permission system that knows what these functions are, and we would need to define the procedure/protocol to perform them – even if these functions have to use a verifiable linked library. So: set permissions to do it, and the rate, if applicable.
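The “verify xyz” idea is essentially capability-style permissions: the app never gets raw “read SMS” access; instead it asks the launcher to run one narrow, named function and only receives the result. A minimal sketch, with all function and capability names invented here:

```python
def verify_phone(expected_code, inbox):
    """Launcher-side routine: scan the inbox for the one-time code and
    report only success/failure -- the app never sees the messages."""
    return any(expected_code in msg for msg in inbox)

# the registry of narrow functions the launcher knows how to perform
CAPABILITIES = {"verify_phone": verify_phone}

def invoke(capability, granted, *args):
    """Run a named capability only if the user granted that exact one."""
    if capability not in granted:
        raise PermissionError(f"capability '{capability}' not granted")
    return CAPABILITIES[capability](*args)

granted = {"verify_phone"}
inbox = ["Your code is 4711", "lunch at 12?"]
invoke("verify_phone", granted, "4711", inbox)  # → True; app learns nothing else
```

The trade-off named above applies: someone has to define and verify these narrow procedures (possibly as a verifiable linked library) before the launcher can offer them.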

I see general agreement here with my assessment that a binary, feature-oriented approach isn’t sufficient for what we want. The conversation focuses strongly on making it non-binary; I, however, don’t think that is sufficient either. Let me explain why:

Whether I want to allow an app to send information has little to do with when or how often that app has sent before, but with what it wants to send right now, and to whom. Especially that second question is always hard in the traditional IP/HTTP model, where an IP address, though it might have local attributes, doesn’t mean anything to the user. But aren’t we fundamentally different here? As I was just discussing with a friend when explaining the SAFE Network: if we don’t think in terms of IPs of devices anymore, then in this system we are instead thinking of identities – private/public keys that people (and apps and devices) sign up with in the system.

Similarly to the IP protocol with its IP address, RFC #0009 for Messaging has a recipient address, an XorName – our system’s form of an identity.

And that identity belongs to someone: the someone we are trying to send information to – AHA! That is highly interesting and gives us a lot of privacy-relevant information. If I knew that a message would be sent to “Google” or “Facebook” or any other tracking tool, I could stop it.

So, what if we made permissions bound not only to the feature–app combination but also to its “contact”? Assuming we can look up some ‘profile information’ (name, public key, maybe a profile picture and an easily verifiable image generated from the public key) for such an XorName, then when an app wants to send a message for the first time, the launcher could prompt the user, showing the profile and asking whether the app should be allowed to send to them:

  • allow
    • once
    • allow all apps to this ID
    • always for this app, for known addressbook entries
    • always for this app, for all IDs
  • disallow
    • once (ask again)
    • disallow for this app, for this ID
    • disallow all messages for this app
    • never send to this user ever

(“Once” being the default.)

This could be stored in an “addressbook”-type system. That way you could allow an app to send messages between users while disallowing it from sending messages to a dubious “Google” or “Facebook” or any other unknown entry. Shown in the launcher, stored within the network in the user’s data space.
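The prompt options above boil down to a small decision table keyed by app and recipient identity. A sketch of that addressbook-scoped store, with all names and the `xor:` addresses invented for illustration:

```python
ALLOW, DENY, ASK = "allow", "deny", "ask"

class AddressBookPermissions:
    """Hypothetical contact-scoped permission store kept by the launcher
    (persisted in the user's own data space on the network)."""

    def __init__(self):
        self.rules = {}           # (app, recipient_id) -> ALLOW or DENY
        self.blocked_ids = set()  # "never send to this user ever"

    def decide(self, app, recipient_id):
        if recipient_id in self.blocked_ids:
            return DENY           # global block wins over per-app rules
        return self.rules.get((app, recipient_id), ASK)  # default: prompt

    def record(self, app, recipient_id, decision):
        """Remember the user's answer from the permission prompt."""
        self.rules[(app, recipient_id)] = decision

book = AddressBookPermissions()
book.record("chat-app", "xor:friend", ALLOW)
book.blocked_ids.add("xor:tracker")
book.decide("chat-app", "xor:friend")    # → "allow": send without asking
book.decide("chat-app", "xor:tracker")   # → "deny": silently refused
book.decide("chat-app", "xor:stranger")  # → "ask": launcher shows the prompt
```

The “allow all apps to this ID” and “always for this app” options would just widen the rule key (drop the app or the recipient from the tuple); the structure stays the same.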

Similar to how some firewalls let you control connections per app based on port and even IP address, we could force much more meta-information about what the app does, and can do, to be exposed to the user.

One further thought, which I had mentioned before, is the “progressive web app” approach, where you should never assume a feature is simply there, and your app should always fall back gracefully if a feature can’t be found: add a feature to the launcher/API that “pretends” to send the message (and allow that setting) rather than actually doing it – thus keeping the app itself in the dark about whether the message was actually sent, until it may (or may not) receive a response later.
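The “pretend to send” idea can be sketched as a launcher call that returns an identical-looking response whether the message left the machine or not. The function, field names, and permission check here are all hypothetical:

```python
import uuid

def send_message(recipient, payload, permitted):
    """Hypothetical launcher API: if the user denied this recipient,
    return the same response shape as a real send -- the app cannot
    tell that the message was silently dropped."""
    if permitted(recipient):
        # ... hand the message to the network here ...
        return {"status": "queued", "msg_id": str(uuid.uuid4())}
    # denied: fabricate the same-looking receipt, nothing is sent
    return {"status": "queued", "msg_id": str(uuid.uuid4())}

# the user has blocked this recipient, but the app sees a normal receipt
resp = send_message("xor:tracker", b"ping", permitted=lambda r: False)
```

From the app’s side, the only observable difference is that no reply ever arrives – exactly the ambiguity that already exists on an unreliable network, so the app can’t use it to detect the user’s denial.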


Then add to this a standalone application that monitors the data you share and can flag up things you have not permitted (such as an app saving data publicly that has no business doing so).

We can build a pretty watertight system for those who need it.

Edit: users who want extra security can also manage this using separate accounts, where settings are much tighter for sensitive data, or looser for convenience.