Web Apps and access control

Part of the wider security model for apps will require the use of a repository or SafeStore of applications that have been vetted by the community. This requires that the apps be open source so that the code can be audited. MaidSafe has led the way on this through their use of an open source model for the core network libraries.

Debian has struggled with their free, contrib, and non-free repositories for years. From a user perspective it is a bit of a mess. Given the nature of Safe, is there any reason for an UnsafeStore? I don’t think so. I’m not sure how best to “lock the apps open”, though, other than being first to market with the SafeStore app.

I would like us to focus on this - web apps - as that is the scope of the key question I raised early on (see this post) and which we have been exploring.

If people want to explore the wider question wrt desktop apps, let’s spin off a new topic and cross-reference where helpful. Clearly they are related and both need to be scoped out and understood.

So I’ve changed the title to reflect the Web apps focus.

I agree that you raise an important issue about user fatigue, but I think we can mitigate that. I already have some thoughts in response to your highlighting the problem, but I don’t want to spend time on that for now. I’m too focused on code and have to limit the discussions I get into, so don’t take my silence as disinterest; I’m just getting other things done.

So just to push back a little on your position, let’s consider how ‘all or nothing’ it really is.

Firstly, the user can be asked if any choice should be remembered (eg a checkbox which defaults to off), and warned of the consequences when they change that default. They should also be able to change it, reset to defaults etc. whenever they want, all in one UI (the Authenticator).

Secondly, we can differentiate between different containers, and use this to refine how careful we are with each app.

So for example, let’s say we add a _secureDocuments container alongside _documents. We can now treat apps differently depending on whether they have access to the former or not.

By which I mean: when an app that has been granted access to _secureDocuments tries to share data, we are more restrictive (eg prompt the user to confirm it’s OK), but if not, we are permissive (eg the default is not to ask, or to ask just the first time, or to ask unless the user has checked the ‘don’t ask again for this app’ box etc).
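
To make the idea concrete, here is a minimal sketch of how a per-container sharing policy could be chosen. The container names come from the examples above; everything else (types, function names) is hypothetical and not the actual Authenticator API:

```rust
use std::collections::HashSet;

/// How the Authenticator might react when an app tries to share data.
enum SharePolicy {
    AlwaysPrompt, // strict: confirm every share
    PromptOnce,   // ask the first time, remember the answer
    Allow,        // permissive default
}

/// Decide how strict to be, based on which containers the app was granted.
fn share_policy(granted: &HashSet<String>) -> SharePolicy {
    if granted.contains("_secureDocuments") {
        // Access to a sensitive container => always confirm shares.
        SharePolicy::AlwaysPrompt
    } else if granted.contains("_documents") {
        SharePolicy::PromptOnce
    } else {
        SharePolicy::Allow
    }
}

fn main() {
    let granted: HashSet<String> = ["_documents".to_string()].into_iter().collect();
    match share_policy(&granted) {
        SharePolicy::AlwaysPrompt => println!("prompt on every share"),
        SharePolicy::PromptOnce => println!("prompt once, then remember"),
        SharePolicy::Allow => println!("share without prompting"),
    }
}
```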

I may not continue discussing the UX aspect, important though it is, because I think it is probably soluble. But solving it is pointless unless we can figure out whether it is feasible to sufficiently restrict apps sharing/leaking data in the first place. I’m not yet convinced we can, but from the discussion so far it seems people mostly agree it is worth exploring. So I will certainly join in trying to find ways to do so.

5 Likes

What a nightmare…

But that doesn’t allow me to say that app XY can share my documents but not my photos (assuming it is an app that has access to and locally works with both). Or am I missing something here?
Just “XY has access to A, B and C. Is it allowed to share data?” I personally like privacy controls to be more fine-grained than that.

I would just add that keeping the door open for WebAssembly web apps is important for future proofing. Similar security conversations are going on for wasm, and lots of exciting progress is being made in the Rust wasm space and elsewhere…

4 Likes

Technically you don’t need shared accounts for many of the applications discussed, because it will be possible to have multiple owners for a MutableData object on the network. In fact, it’s already defined this way, but for now it’s not possible to have more than one owner.

We had this before with pre-Authenticator apps: all writes and requests were proxied through SAFE Launcher and it served as a gateway that could manage permissions. However, this approach was deemed inefficient (you can read more about the reasoning behind the change here), and it’s not really necessary for fine-grained permissions control because all writes from apps will go through MaidManagers (the vault persona handling users, apps, and permissions).

Consequently, MaidManagers have a wealth of information about app activity. So, for example, if an app creates a MutableData object without the user’s knowledge, the user’s MaidManager will still know that the app did that – and can pass that information on to the app’s owner (the user).
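
As a rough illustration of the kind of per-account activity log a MaidManager persona could keep and surface to the account owner, here is a sketch with hypothetical structures (this is not actual vault code):

```rust
/// One record of an app acting on the user's behalf.
struct MutationEvent {
    app_id: [u8; 32], // hash of the app's public key (hypothetical scheme)
    target: [u8; 32], // XOR address of the MutableData that was touched
    action: Action,
}

#[derive(PartialEq)]
enum Action {
    Create,
    Update,
    Delete,
}

/// A per-account log the MaidManager persona could keep.
struct MaidManagerLog {
    events: Vec<MutationEvent>,
}

impl MaidManagerLog {
    /// Everything a given app has created, even without the user's knowledge.
    fn created_by(&self, app_id: &[u8; 32]) -> Vec<&MutationEvent> {
        self.events
            .iter()
            .filter(|e| &e.app_id == app_id && e.action == Action::Create)
            .collect()
    }
}

fn main() {
    let log = MaidManagerLog {
        events: vec![MutationEvent {
            app_id: [1; 32],
            target: [9; 32],
            action: Action::Create,
        }],
    };
    println!("app created {} object(s)", log.created_by(&[1; 32]).len());
}
```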

So really there are several possible levels of control here: a client level (an app requesting permissions from the Authenticator), a network level (MaidManagers + MutableData permissions), an encryption level (i.e. when you decide to share a data map with someone else), and a browser/web app level (considering that web apps run in the browser’s sandbox environment).

The same idea about MaidManagers applies here: it’s totally possible to impose fine-grained control over what an app can or cannot do on the network. It’s just that the set of rules we can apply now is somewhat limited by the permission controls of MutableData.

It certainly is: all DOM API function calls ultimately pass through the SAFE Browser, and we have many options for handling or controlling them on the browser side. I think it should even be seen more from the standpoint of UX than technical ability :slight_smile:

It’s more challenging with desktop apps, however, but I think that containerisation of apps will help tremendously: with technologies like Flatpak or Snap on Linux we can virtualise an app’s environment and disallow access to the clearnet or to a user’s disk entirely.

16 Likes

Woah, this is brilliant!

[quote=“nbaksalyar, post:82, topic:26023”]
The same idea about MaidManagers applies here: it’s totally possible to impose fine-grained control over what an app can or cannot do on the network. It’s just that the set of rules we can apply now is somewhat limited by the permission controls of MutableData.
[/quote]

Thanks @nbaksalyar, you’ve maid :wink: my day.

So the answer to my key question is actually better than I had hoped, because it means that a user can limit or monitor what data a Web app can send, and where.

Plus, this works for desktop apps too as far as SAFE traffic goes, but obviously not if an app sends data out of band, via HTTP or IP sockets, for example.

This is BIG news. :slight_smile:

It means that we can now think about the usefulness of this facility and try to design suitable UX. Again, I suggest we keep this topic just to web apps for that discussion.

9 Likes

I think UX, and the associated concerns above re: permission fatigue, are probably the biggest hurdles (outside of enabling the technical side of things). How to keep things clean and clear, given murkiness that (I’d wager) most people are just not interested in.

There are other options that could be available re: vetting of apps and permissions. WoT (as Paul Frazee is punting for Beaker permissions: https://twitter.com/pfrazee/status/1058830217798660096?s=19) could help simplify which apps to trust, outwith any AppStore setup. So you don’t necessarily need one single authority to say ‘this is okay’, as with an app store. But you could opt to automatically trust things if both @nbaksalyar and @happybeing do (for example :stuck_out_tongue: ).

(though that may be a bit down the line).

10 Likes

We can tell that an app wrote data somewhere, but we can’t tell what that data was, can we? So the previous example of the app somehow encrypting/obscuring data it should only work with locally and then writing it somewhere else still holds, and we can’t implement an “Allow sending X data to Y” with just this.
I mean, I really would like to see a “Send elsewhere” permission model instead of “Read”, too, but as of now I still don’t see how it could be implemented.

To be effective, this would need to be enforced for all apps trying to access the SAFE Network, right? And there may be cases where the user actually wants the app to have access to local storage. So there’d probably need to be an interface that manages that, too.

On that side, I think web apps and desktop apps should be treated the same from a user perspective; the only technical difference would be the environment: one lives inside the browser, and the other within a container. Both should basically have the same level of access (as little as possible, including limitations on getting user identifiers like hardware information, etc.) by default.

I think that these should only be an “implementation detail” and not something the average user can see. Having more than one place to manage app permissions will be confusing to anyone who isn’t willing or able to learn about the system in more depth. An exception would maybe be separate lists of applications that have access to something, versus other people who have access.

Another question I have: How fine-grained should access be? “Give access to private storage” sounds too broad.
My suggestion here would be to only give access to folders. Any app has exactly one folder it has access to by default: its installation dir. Access to one folder obviously also means access to all its subfolders, so there should probably be some sort of incentive for apps to request the most specific folders possible - like only “Images”, or even only “Images from vacation 2016”, instead of the user’s whole home/private folder.
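
A sketch of that folder-scoping rule, assuming hypothetical request types and Unix-style paths (SAFE containers aren’t literally filesystem folders, so this is illustrative only):

```rust
/// Hypothetical request: an app asks for the narrowest folder it needs.
struct FolderAccessRequest {
    app_name: String,
    folder: String, // e.g. "/home/images/vacation-2016"
}

/// Access to a folder implies access to its subfolders, so a granted
/// path authorises any request whose target lies inside it.
fn covers(granted: &str, requested: &str) -> bool {
    requested == granted || requested.starts_with(&format!("{granted}/"))
}

fn main() {
    let req = FolderAccessRequest {
        app_name: "PhotoFixer".to_string(), // hypothetical app
        folder: "/home/images/vacation-2016".to_string(),
    };
    // A grant on "/home/images" covers the narrower request...
    assert!(covers("/home/images", &req.folder));
    // ...but not an unrelated folder such as the user's wallets.
    assert!(!covers("/home/images", "/home/wallets"));
    println!("{} gets access only under /home/images", req.app_name);
}
```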

2 Likes

Revisiting this in my response to @piluso’s post, a couple of thoughts on the UX of a sharing-based permissions model, rather than the Android-style access-based permissions model currently adopted by SAFE Browser.

One sharing permissions model could be a…

Semantic Firewall

[cross posted from my response to @piluso’s post]…

One way is to request permissions only when sharing data, rather than before allowing access. I think this is worth exploring because it can avoid the need for apps to have any restriction on access, so no permission requests are needed until data is exposed. This abandons the Android app permissions model in favour of something more like a firewall - perhaps only one app is a ‘sharing / publishing’ app, which others invoke when sharing, or which the user can use directly to publish files, folders etc.

With this in place there’s an incentive for apps not to expose data unless necessary because asking permission unnecessarily will annoy the user and make the app less attractive. Many apps will be able to avoid asking permissions altogether, because they can do their work without exposing user data. This is a good thing!

We can then imagine that most apps will be just data analysis and mashing, and that a different app will be activated when the user wants to publish or share data.

This app could work like a firewall that prompts when needed, but it could also have rules set, with suitable defaults depending on the data: its MIME type, for RDF the kinds of ontology it uses, even the semantic properties it contains, how much data is involved, which folder it is stored in, etc. So the firewall might be relaxed about blog comments, cautious about spreadsheets, and strict about wallets and money, diaries etc.

I just invented the semantic firewall! :wink: Oh wait… Semantic Firewall
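
Here’s a minimal sketch of what such firewall rules could look like. The rule types are hypothetical, matching only on MIME type and folder; a fuller version could also match RDF ontologies, data size, and so on:

```rust
#[derive(Clone, Copy, Debug)]
enum Verdict {
    Allow,  // relaxed, e.g. blog comments
    Prompt, // cautious, e.g. spreadsheets
    Block,  // strict, e.g. wallets, diaries
}

/// A rule matches on properties of the outgoing data, not on the app.
/// `None` means "don't care" for that property.
struct Rule {
    mime_prefix: Option<&'static str>,
    folder_prefix: Option<&'static str>,
    verdict: Verdict,
}

struct Outgoing<'a> {
    mime: &'a str,
    folder: &'a str,
}

/// First matching rule wins; the safe default is to prompt the user.
fn check(rules: &[Rule], data: &Outgoing) -> Verdict {
    for r in rules {
        let mime_ok = r.mime_prefix.map_or(true, |p| data.mime.starts_with(p));
        let folder_ok = r.folder_prefix.map_or(true, |f| data.folder.starts_with(f));
        if mime_ok && folder_ok {
            return r.verdict;
        }
    }
    Verdict::Prompt
}

fn main() {
    let rules = [
        // Relaxed about plain-text snippets like blog comments...
        Rule { mime_prefix: Some("text/plain"), folder_prefix: None, verdict: Verdict::Allow },
        // ...strict about anything stored under the wallets folder.
        Rule { mime_prefix: None, folder_prefix: Some("/private/wallets"), verdict: Verdict::Block },
    ];
    let comment = Outgoing { mime: "text/plain", folder: "/public/blog" };
    println!("sharing a comment: {:?}", check(&rules, &comment));
    let wallet = Outgoing { mime: "application/json", folder: "/private/wallets/main" };
    println!("sharing a wallet: {:?}", check(&rules, &wallet));
}
```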

We had some discussion on this which established that it is feasible if implemented at the network level, but we have not yet discussed ways of making a good UX.

I think this is worth trying, because it could address the issues raised above by @piluso, and be generally more secure, avoiding the need for most apps to have any permissions settings at all - so long as all they do is read user or public data, and write data to the user’s storage.

Here’s the original post with discussion on implementation, but so far not on UX patterns:

4 Likes

My only concern is that there should be classes of information where permission rules are still needed.

Take the case where a useful but malicious APP scans your data (no permissions needed, so you don’t know) and then asks to write your config/preferences to “your data” (you own the MD and it’s encrypted). But it uses its own encryption, to which the app writer knows the keys. Then the config is written to an MD at an address that can be found quickly by scanning a range of addresses.

Thus your personal data can be leaked, yet as far as you were concerned it was all safe and secure.

For this reason, if say the APP above is a graphics manipulation app, then any access to my wallets should be forbidden by rules, or require asking.

1 Like

Either that, or apps do it anyway and users become desensitised to such popups. I think we’d need some UX way to nudge things towards the former.

I like the idea of a semantic firewall - it is definitely a step in the right direction. However, how do we ensure that apps don’t hide sensitive data inside something that looks innocent enough that the user grants sharing rights? There are known methods that can hide plain text in any image, and no human would notice the difference. And checking hashes isn’t viable since the app may actually have had a good reason to modify the image (applying a filter or whatever); it just modified it a bit more than the user wanted.

1 Like

I agree those are valid concerns, though perhaps they can be mitigated through reputation. Meanwhile, we can see other issues with the access control model. Or maybe there will be a case for both? A semantic firewall could ease both sides of the access and sharing permission models.

For now though I’m more interested in exploring how sharing control could be done and how it might work, rather than how it might fail, because we already have this data for the access control model (cf. Android app permissions).

1 Like

Only if the malicious program is found out. It is these sorts of tricks that can stay hidden for years. And if the graphics manipulation program is really good, then it’s going to get a good rating.

In my opinion, permissionless reading of your private data will have problems.

Having a set of rules set up that controls individual APPs and what types of private data they can read would be better.

Thus the first time I run the great-but-secretly-malicious APP, it would ask to read my private data, and the specific types. E.g. ask to read my pictures, ask to read my wallets, ask to read my documents, and so on. And then it is found out.

For a genuinely good APP, the first time it runs it asks to read my pictures. Since it’s a graphics program, I let it, and that choice is added to my permission rules, so the next time it runs it gets access to my pictures without having to ask me. That saves on the annoying popups every time I run it.

1 Like

Why so few mentions of capability based access control when that is the Single Right Way to go about it?

ACLs are not only broken by design, they also don’t allow for delegation, and they become overly complicated real fast. ACLs became the status quo only because people who had no idea about security design thought: why not just wing security, because what could go wrong, right?

I started a thread before about a particular implementation of capabilities called “macaroons” but it didn’t get much attention. They have really nice properties, such as delegation while restricting access.

For example, you as the owner of your photo collection have a token that gives you full access to them. You can take this token and append restrictions to it like “read only”, “until next Sunday”, and “to app with signature F393…AA01” (let’s say apps generate a key for themselves upon install), then give it to your app. When the network sees this token, it applies the restrictions one by one: checks the base token, checks if the request is for reading, checks if the time limit is valid, and checks if it’s signed by the right app. Iff everything matches, the data is returned.
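
For the curious, here’s a toy sketch of that layering idea. It follows the macaroon construction (each caveat re-chains the signature, so caveats can be added but never stripped), but uses std’s non-cryptographic hasher as a stand-in for HMAC, so it is illustrative only:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for HMAC: chain the previous signature with new text.
/// A real macaroon uses HMAC-SHA256 here; DefaultHasher is NOT secure.
fn chain(prev: u64, text: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    text.hash(&mut h);
    h.finish()
}

struct Macaroon {
    id: String,           // public identifier of the base token
    caveats: Vec<String>, // layered restrictions, oldest first
    sig: u64,             // chained over the root key, id, and caveats
}

impl Macaroon {
    /// The owner mints the base token from a secret root key.
    fn mint(root_key: u64, id: &str) -> Self {
        Macaroon { id: id.to_string(), caveats: Vec::new(), sig: chain(root_key, id) }
    }

    /// Any holder can only *restrict* the token further: appending a
    /// caveat re-chains the signature, so caveats can't be removed.
    fn attenuate(mut self, caveat: &str) -> Self {
        self.sig = chain(self.sig, caveat);
        self.caveats.push(caveat.to_string());
        self
    }
}

/// The verifier re-derives the signature from the root key it shares
/// with the owner, checking every caveat against the incoming request.
fn verify(root_key: u64, m: &Macaroon, caveat_holds: impl Fn(&str) -> bool) -> bool {
    let mut sig = chain(root_key, &m.id);
    for c in &m.caveats {
        if !caveat_holds(c) {
            return false;
        }
        sig = chain(sig, c);
    }
    sig == m.sig
}

fn main() {
    let root = 0x5eed_f00d;
    let token = Macaroon::mint(root, "photo-collection")
        .attenuate("action = read")
        .attenuate("expires = next Sunday")
        .attenuate("bearer = F393...AA01");
    // A real verifier would evaluate each predicate against the actual
    // request and clock; here every caveat is simply accepted.
    assert!(verify(root, &token, |_| true));
    println!("token valid with {} caveats", token.caveats.len());
}
```

The key property is that attenuation is one-way: anyone holding a token can add caveats and pass it on, but only the root key holder can mint or verify one.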

Here’s a video that explains this better:

4 Likes

That can actually be the hash of the immutable chunk addresses of the APP itself. Since chunks cannot be changed, this hash ensures the APP has not been modified later.

Yes, I’ve always liked time limits on rules, be they access, firewall, or capability rules.

It could, but I thought it would make sense to use a random id to avoid leaking information about which app is requesting access.

The cool thing about the “macaroon” implementation of capabilities is that it works with a layered set of restrictions (caveats, as they call them), and that makes not only obvious things like that easy; delegating access to apps or to friends or employees is also very straightforward and clean. Not so with ACLs.

Leaking to yourself? It is you accessing the APP, and you want it remembered that you set these rules or whatever. There is no one else to see it, is there, unless you send the info off somewhere.

Oh, and wouldn’t you want to know which APP is requesting access? I thought that was a basic point in the matter.

I guess I am missing what you are trying to say here.

That does sound good.

The certificate would have to contain both the owner’s id and the app’s id, and they need to be readable by the recipient so they can validate the certificate. Unless all our resources are owned by single-use keys (not a bad idea, by the way), using the app’s hash would leak information about who’s using what.

I think I figured out what you got stuck on. Capabilities are not concerned with identifying you or the app; they are concerned with checking whether you have the right document. It’s very different from ACLs in this regard.

From the network’s point of view, it doesn’t matter who it is that hands in that certificate with the request, as long as they signed the request with a valid signature from the specified key. It’s only about authorizing the request, so we don’t need to authenticate you or the app beyond checking that signature. Had we left out this restriction, anybody who saw this certificate could just use it to look at your pictures (there may also be cases when that’s exactly what we’d want).


The certificate looks like this:

  • your id (public key)
  • which collection
  • read only
  • only until Sunday
  • valid bearer’s (the app’s) public key
  • owner’s signature of all the above

The request by the app would go something like:

  • the above certificate
  • request: list the collection
  • signed by the app

When checking, the server verifies that everything matches: first whether the collection in the certificate is actually owned by the given public key and whether the signature on the certificate is valid, then the time limit, then whether the request fits the “read only” restriction from the certificate, and finally whether the request was signed by the key the certificate was restricted to (that is, the app).

The network doesn’t even need to store anything about access rights other than who the owner is.
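
Putting the pieces above together, here is a sketch of that check order. The types are hypothetical and the boolean fields stand in for real cryptographic signature verification:

```rust
/// The certificate fields, as listed above (hypothetical shapes).
struct Certificate {
    owner_pk: [u8; 32],  // your id (public key)
    collection: String,  // which collection
    read_only: bool,
    valid_until: u64,    // e.g. Unix time standing in for "only until Sunday"
    bearer_pk: [u8; 32], // the app's public key
}

enum Action {
    List,
    Write,
}

/// The app's request: the certificate plus an action, signed by the app.
struct Request<'a> {
    cert: &'a Certificate,
    action: Action,
    signer_pk: [u8; 32],
    sig_valid: bool, // stands in for real signature verification
}

/// Mirrors the check order described above. The network only needs to
/// know who owns the collection; everything else comes from the request.
fn authorize(req: &Request, collection_owner: &[u8; 32], now: u64) -> bool {
    let c = req.cert;
    collection_owner == &c.owner_pk          // cert issued by the real owner
        && owner_sig_valid(c)                // cert itself untampered
        && now <= c.valid_until              // time limit still holds
        && (!c.read_only || matches!(req.action, Action::List))
        && req.signer_pk == c.bearer_pk      // only the named app may use it
        && req.sig_valid                     // request signed by that app
}

/// Placeholder: a real network would verify the owner's signature
/// cryptographically over all certificate fields.
fn owner_sig_valid(_c: &Certificate) -> bool {
    true
}

fn main() {
    let cert = Certificate {
        owner_pk: [7; 32],
        collection: "photos".to_string(),
        read_only: true,
        valid_until: 1_700_000_000,
        bearer_pk: [3; 32],
    };
    let req = Request { cert: &cert, action: Action::List, signer_pk: [3; 32], sig_valid: true };
    assert!(authorize(&req, &[7; 32], 1_600_000_000));
    println!("read-only listing of {:?} authorized", cert.collection);
}
```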

1 Like

I find the idea of capability based access control quite interesting :smiley:
It’s worthwhile to mention that this is apparently also being looked at by the big players: Google is currently developing a new (mobile?) OS called Fuchsia, which is supposedly built around a capability-based model as well.

That being said, I dunno about this from the user perspective. Either apps would have to request these capability tokens when they need them, in which case it would look pretty much the same as the permission-based model from a user’s perspective, or users have to “manually” create capability tokens and give them to an app, which adds lots more steps to the process and makes it more difficult for many users to understand.
Furthermore, capabilities still sit between the data and the app, which would discard the idea of prohibiting the sharing of data as opposed to prohibiting access to data.

2 Likes