Web Apps and access control

I would just add that keeping the door open for WebAssembly web apps is important for future-proofing. Similar security conversations are going on for wasm, and lots of exciting progress is being made in the Rust wasm space and elsewhere…

4 Likes

Technically you don’t need shared accounts for many of the applications discussed, because it will be possible to have multiple owners for a MutableData object on the network. In fact, it’s already defined this way, but for now it’s not possible to have more than one owner.
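For illustration, here’s a rough sketch of the multi-owner idea. The field names are invented for this example and don’t match the actual safe_client_libs types:

```rust
use std::collections::{BTreeMap, BTreeSet};

// Illustrative only: field names are made up for this sketch and do not
// match the real safe_client_libs definitions.
struct MutableData {
    name: [u8; 32],                      // network address of the object
    tag: u64,                            // type tag
    entries: BTreeMap<Vec<u8>, Vec<u8>>, // key/value data
    owners: BTreeSet<Vec<u8>>,           // owner public keys: defined as a set,
                                         // though for now the network accepts
                                         // only one entry
}

impl MutableData {
    // A mutation request would be accepted if its signer is one of the owners.
    fn is_owner(&self, public_key: &[u8]) -> bool {
        self.owners.contains(public_key)
    }
}
```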

We had this before with pre-Authenticator apps: all writes and requests were proxied through SAFE Launcher, which served as a gateway that could manage permissions. However, this approach was deemed inefficient (you can read more about the reasoning behind the change here), and it’s not really necessary for fine-grained permission control because all writes from apps will go through MaidManagers (the vault persona handling users, apps, and permissions).

Consequently, MaidManagers have a wealth of information about app activity. So, for example, if an app creates a MutableData object without a user’s knowledge, the user’s MaidManager will still know that the app did that – and can pass that information on to the app’s owner (the user).

So really there are several possible levels of control here: the client side (an app requesting permissions from the Authenticator), the network level (MaidManagers + MutableData permissions), the encryption level (e.g. when you decide to share a data map with someone else), and the browser/web app level (considering that web apps run in the browser’s sandbox environment).

The same idea about MaidManagers applies here: it’s totally possible to impose fine-grained control over what an app can or cannot do on the network. It’s just that the set of rules we can apply now is a bit limited by the permission controls of MutableData.

It certainly is: all DOM API function calls ultimately pass through the SAFE Browser, and we have many options for handling or controlling them on the browser side. I think it should even be seen more from the standpoint of UX than of technical ability :slight_smile:

It’s more challenging with desktop apps, but I think that containerisation will help tremendously: with technologies like Flatpak or Snap on Linux we can virtualise an app’s environment and disallow access to the clearnet or to a user’s disk entirely.

16 Likes

Woah, this is brilliant!

[quote=“nbaksalyar, post:82, topic:26023”]
The same idea about MaidManagers applies here: it’s totally possible to impose fine-grained control over what an app can or cannot do on the network. It’s just that the set of rules we can apply now is a bit limited by the permission controls of MutableData.
[/quote]

Thanks @nbaksalyar, you’ve maid :wink: my day.

So the answer to my key question is actually better than I had hoped, because this means that a user can limit or monitor what data a Web app can send, and where.

Plus, this works for desktop apps too as far as SAFE traffic goes, but obviously not if an app sends data out of band, via HTTP or raw IP sockets, for example.

This is BIG news. :slight_smile:

It means that we can now think about the usefulness of this facility and try to design suitable UX. Again, I suggest we keep this topic just to web apps for that discussion.

9 Likes

I think UX, and the associated concerns above re: permission fatigue, are probably the biggest hurdles (outside of enabling the technical side of things). The challenge is how to keep things clean and clear, given a murkiness that (I’d wager) most people are not interested in.

There are other options that could be available re: vetting of apps and permissions. WoT (as Paul Frazee is proposing for Beaker permissions: https://twitter.com/pfrazee/status/1058830217798660096?s=19) could help simplify which apps to trust, outside of any AppStore setup. So you don’t necessarily need one single authority to say ‘this is okay’ as with an app store. But you could opt to automatically trust things if both @nbaksalyar and @happybeing do (for example :stuck_out_tongue: ).

(though that may be a bit down the line).

10 Likes

We can tell that an app wrote data somewhere, but we can’t tell what that data was, can we? So the earlier example still holds: an app could encrypt or obscure data it should only work with locally and then write it somewhere else, and we can’t implement an “Allow sending X data to Y” with just this.
I mean, I really would like to see a “Send elsewhere” permission model instead of a “Read” one, too, but as of now I still don’t see how it could be implemented.

To be effective, this would need to be enforced for all apps trying to access the SAFE Network, right? And there may be cases where the user actually wants the app to have access to local storage. So there’d probably be a need to have an interface that manages that, too.

On that note, I think web apps and desktop apps should be treated the same from a user perspective; the only technical difference is the environment: one lives inside the browser, the other within a container. Both should have the same level of access by default (as little as possible, including limitations on obtaining user identifiers like hardware information, etc.).

I think these should only be an “implementation detail” and not something the average user can see. Having more than one place to manage app permissions will be confusing to anyone who isn’t willing or able to learn about the system in more depth. One exception might be keeping separate lists for applications that have access to something and for other people who have access.

Another question I have: how fine-grained should access be? “Give access to private storage” sounds too broad.
My suggestion here would be to grant access only per folder. Any app has exactly one folder it can access by default: its installation directory. Access to a folder obviously also means access to all of its subfolders, so there should probably be some sort of incentive for apps to request folders as specific as possible – only “Images”, or even only “Images from vacation 2016”, instead of the user’s whole home/private folder.
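To make that concrete, here’s a minimal sketch of what folder-scoped grants could look like. All names are hypothetical, not an actual SAFE API:

```rust
use std::path::{Path, PathBuf};

/// Hypothetical per-app grant: the app may touch anything under these folders.
struct AppGrants {
    allowed_roots: Vec<PathBuf>,
}

impl AppGrants {
    /// Access to a folder implies access to all of its subfolders, so a
    /// request is allowed if it sits under any granted root.
    fn allows(&self, requested: &Path) -> bool {
        self.allowed_roots
            .iter()
            .any(|root| requested.starts_with(root))
    }
}

fn main() {
    let grants = AppGrants {
        // The app asked for a narrow grant, not the whole home folder.
        allowed_roots: vec![PathBuf::from("/home/alice/Images/vacation-2016")],
    };
    assert!(grants.allows(Path::new("/home/alice/Images/vacation-2016/beach.jpg")));
    assert!(!grants.allows(Path::new("/home/alice/Documents/wallet.dat")));
}
```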

2 Likes

Revisiting this in my response to @piluso’s post, a couple of thoughts on the UX of a sharing-based permissions model, rather than the Android-style access-based permissions model currently adopted by SAFE Browser.

One sharing permissions model could be a…

Semantic Firewall

[cross-posted from my response to @piluso’s post]…

One way is to request permissions only when sharing data, rather than before allowing access. I think this is worth exploring because it can avoid the need for apps to have any restriction on access, so no permission requests are needed until data is exposed. This abandons the Android app permissions model in favour of something more like a firewall – perhaps only one app is a ‘sharing / publishing’ app, which others invoke when sharing, or which the user can use directly to publish files, folders, etc.

With this in place there’s an incentive for apps not to expose data unless necessary because asking permission unnecessarily will annoy the user and make the app less attractive. Many apps will be able to avoid asking permissions altogether, because they can do their work without exposing user data. This is a good thing!

We can then imagine that most apps will be just data analysis and mashing, and that a different app will be activated when the user wants to publish or share data.

This app could work like a firewall that prompts when needed, but it could also have rules set, with suitable defaults depending on the data: its MIME type; for RDF, the kinds of ontology it uses, even the semantic properties it contains; how much data is involved; which folder it is stored in; etc. So the firewall might be relaxed about blog comments, cautious about spreadsheets, and strict about wallets, money, diaries and the like.
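As a rough illustration of how such rules could be expressed (everything here is hypothetical, just to show the shape of the idea):

```rust
/// Hypothetical rule set for a "semantic firewall": each rule matches on
/// properties of the outgoing data and picks a default action.
enum Action {
    Allow,   // relaxed: let it through silently (e.g. blog comments)
    Prompt,  // cautious: ask the user first (e.g. spreadsheets)
    Block,   // strict: never share without explicit override (e.g. wallets)
}

struct Rule {
    mime_prefix: Option<&'static str>,  // e.g. "image/"
    rdf_ontology: Option<&'static str>, // e.g. a finance or diary vocabulary
    max_bytes: Option<u64>,             // how much data is involved
    action: Action,
}

/// First matching rule wins; the safe fallback is to prompt the user.
fn decide(rules: &[Rule], mime: &str, ontology: Option<&str>, size: u64) -> &Action {
    rules
        .iter()
        .find(|r| {
            r.mime_prefix.map_or(true, |p| mime.starts_with(p))
                && r.rdf_ontology.map_or(true, |o| ontology == Some(o))
                && r.max_bytes.map_or(true, |m| size <= m)
        })
        .map(|r| &r.action)
        .unwrap_or(&Action::Prompt)
}
```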

I just invented the semantic firewall! :wink: Oh wait… Semantic Firewall

We had some discussion on this which established that it is feasible if implemented at the network level, but we have not yet discussed ways of making a good UX.

I think this is worth trying, because it could address the issues raised above by @piluso, and be generally more secure, avoiding the need for most apps to have any permissions settings at all - so long as all they do is read user or public data, and write data to the user’s storage.

Here’s the original post with discussion on implementation, but so far not on UX patterns:

4 Likes

My only concern is that there should be classes of information for which permission rules are still needed.

Consider the case where a useful but malicious APP scans your data (no permissions needed, so you don’t know) and then asks to write your config/preferences to “your data” (you own the MD and it’s encrypted). But it uses its own encryption, to which the app writer knows the keys. Then the config is written to an MD at an address that can be found quickly by scanning a range of addresses.

Thus your personal data can be leaked, yet as far as you were concerned it was all safe and secure.

For this reason, if the APP above is, say, a graphics manipulation app, then any access to my wallets should be forbidden by rules, or should at least require asking.

1 Like

Either that, or apps do it anyway and users become desensitised to such popups. I think we’d need some UX way to nudge things towards the former.

I like the idea of a semantic firewall – it is definitely a step in the right direction. However, how do we ensure that apps don’t hide sensitive data inside something that looks innocent enough that the user grants the rights for sharing? There are known methods that can hide plain text in any image, and no human would notice the difference. And checking hashes isn’t viable, since the app may actually have had a good reason to modify the image (applying a filter or whatever) – it just modified it a bit more than the user wanted it to.
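To show why this is hard to detect, here’s a minimal sketch of the classic least-significant-bit trick: the hidden payload changes each pixel byte by at most 1, so the image looks identical to a human, and a legitimate filter gives the app a plausible reason for the pixels to differ at all:

```rust
/// Hide `payload` in the lowest bit of each pixel byte. Each byte changes
/// by at most 1, which no viewer will notice.
fn embed_lsb(pixels: &mut [u8], payload: &[u8]) {
    assert!(pixels.len() >= payload.len() * 8, "cover image too small");
    for i in 0..payload.len() * 8 {
        let bit = (payload[i / 8] >> (i % 8)) & 1; // i-th bit of the payload
        pixels[i] = (pixels[i] & 0xFE) | bit;      // overwrite the low bit
    }
}
```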

1 Like

I agree those are valid concerns, though perhaps they can be mitigated through reputation, while we can see other issues with the access control model. Or maybe there will be a case for both? A semantic firewall could ease both the access and the sharing permissions models.

For now though I’m more interested in exploring how sharing control could be done and how it might work, rather than how it might fail, because we already have this data for the access control model (cf. Android app permissions).

1 Like

Only if the malicious program is found out. These are the sorts of tricks that can stay hidden for years. And if the graphics manipulation program is really good, then it’s going to get a good rating.

In my opinion, permissionless reading of your private data will cause problems.

Having a set of rules that controls individual APPs, and what types of private data they can read, would be better.

Thus the first time I run the great but secretly malicious APP, it would ask to read my private data, and the specific types: ask to read my pictures, ask to read my wallets, ask to read my documents, and so on. And then it is found out.

For a genuinely good APP, the first time it runs it asks to read my pictures; since it’s a graphics program, I let it, and this is added to my permission rules, so the next time it runs it gets access to my pictures without having to ask me. That saves on the annoying popups every time I run it.

1 Like

Why so few mentions of capability-based access control, when that is the Single Right Way to go about it?

ACLs are not only broken by design, but they also don’t allow for delegation, and they become overly complicated really fast. ACLs became the status quo only because people who had no idea about security design thought, why not just wing security, because what could go wrong, right?

I started a thread before about a particular implementation of capabilities called “macaroons”, but it didn’t get much attention. They have really nice properties, such as delegation while restricting access.

For example, you as the owner of your photo collection have a token that gives you full access to it. You can take this token and append restrictions to it like “read only”, “until next Sunday”, and “to app with signature F393…AA01” (let’s say apps generate a key for themselves upon install), then give it to your app. When the network sees this token, it applies the restrictions one by one: it checks the base token, checks if the request is for reading, checks if the time limit is valid, and checks if it’s signed by the right app. Iff everything matches, the data is returned.
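For anyone curious, the construction behind this is surprisingly small. Here’s a toy sketch of the chained-HMAC idea using the Rust `hmac` and `sha2` crates (not a full macaroon library):

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// One step of the chain: sig = HMAC(key, msg).
fn chain(key: &[u8], msg: &[u8]) -> Vec<u8> {
    let mut mac = HmacSha256::new_from_slice(key).expect("HMAC accepts any key length");
    mac.update(msg);
    mac.finalize().into_bytes().to_vec()
}

struct Macaroon {
    identifier: Vec<u8>,   // e.g. "alice's photo collection"
    caveats: Vec<Vec<u8>>, // e.g. "read only", "until next Sunday"
    signature: Vec<u8>,
}

impl Macaroon {
    fn mint(root_key: &[u8], identifier: &[u8]) -> Self {
        Macaroon {
            identifier: identifier.to_vec(),
            caveats: Vec::new(),
            signature: chain(root_key, identifier),
        }
    }

    /// Anyone holding a macaroon can restrict it further -- no root key needed.
    fn add_caveat(&mut self, caveat: &[u8]) {
        self.caveats.push(caveat.to_vec());
        self.signature = chain(&self.signature, caveat);
    }

    /// The verifier (who knows the root key) replays the chain; it would then
    /// also check each caveat against the incoming request. A real
    /// implementation would compare signatures in constant time.
    fn verify(&self, root_key: &[u8]) -> bool {
        let mut sig = chain(root_key, &self.identifier);
        for caveat in &self.caveats {
            sig = chain(&sig, caveat);
        }
        sig == self.signature
    }
}
```

The key property shows up in `add_caveat`: anyone can narrow a token, but stripping a caveat off would mean forging the HMAC chain without the root key.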

Here’s a video that explains this better:

4 Likes

That can actually be the hash of the immutable chunk addresses of the APP itself. Since chunks cannot be changed, this hash ensures the APP has not been modified later.

Yes, I’ve always liked time limits on rules, be they access, firewall, or capability rules.

It could, but I thought it would make sense to use a random id to avoid leaking information about which app is requesting access.

The cool thing about the “macaroon” implementation of capabilities is that it works with a layered set of restrictions (caveats, as they call them), and that makes not only obvious things like that easy, but delegating access to apps or to friends or employees is also very straightforward and clean. Not so with ACLs.

Leaking to yourself? It is you accessing the APP, and you want it remembered that you set these rules or whatever. There is no one else to see it, is there, unless you send the info off somewhere?

Oh, and wouldn’t you want to know which APP is requesting access? I thought that was a basic point in the matter.

I guess I am missing what you are trying to say here.

That does sound good.

The certificate would have to contain both the owner’s id and the app’s id, and they need to be readable by the recipient so they can validate the certificate. Unless all our resources are owned by single-use keys (not a bad idea, by the way), using the app’s hash would leak information about who’s using what.

I think I figured out what you got stuck on. Capabilities are not concerned with identifying you or the app; they are concerned with checking that you have the right document. It’s very different from ACLs in this regard.

From the network’s point of view, it doesn’t matter who hands in that certificate with the request, as long as the request is signed with a valid signature by the specified key. It’s only about authorizing the request, so we don’t need to authenticate you or the app beyond checking that signature. Had we left out this restriction, anybody who saw this certificate could just use it to look at your pictures (there may also be cases where that’s exactly what we’d want).


The certificate looks like this:

  • your id (public key)
  • which collection
  • read only
  • only until Sunday
  • valid bearer’s (the app’s) public key
  • owner’s signature of all the above

The request by the app would go something like:

  • the above certificate
  • request: list the collection
  • signed by the app

When checking, the server verifies that everything matches: whether the collection from the certificate is actually owned by the given public key, whether the signature on the certificate is valid, then the time, then whether the request fits the “read only” restriction from the certificate, and also whether the request was signed by the key that the certificate was restricted to (that is, the app’s).

The network doesn’t even need to store anything about access rights other than who’s the owner.
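For illustration, here’s how those checks could line up in code. The signature scheme is stubbed out, since nothing above pins down which one would be used, and all names are invented for the sketch:

```rust
struct Certificate {
    owner_pk: Vec<u8>,  // your id (public key)
    collection: String, // which collection
    read_only: bool,
    expires_at: u64,    // "only until Sunday", as a timestamp
    bearer_pk: Vec<u8>, // valid bearer's (the app's) public key
    owner_sig: Vec<u8>, // owner's signature of all the above
}

struct Request<'a> {
    cert: &'a Certificate,
    operation: &'a str, // e.g. "list"
    app_sig: Vec<u8>,   // signed by the app
}

// Stub: a real vault would verify with an actual signature scheme here.
fn verify_sig(_pk: &[u8], _msg: &[u8], _sig: &[u8]) -> bool {
    true
}

fn is_read(op: &str) -> bool {
    matches!(op, "list" | "get")
}

/// The vault replays the checks in order. Note it stores nothing about
/// access rights other than who owns the collection.
fn authorize(req: &Request, collection_owner: &[u8], now: u64) -> bool {
    let c = req.cert;
    c.owner_pk == collection_owner                // collection owned by this key?
        // (a real certificate's signature would cover all fields, not just the name)
        && verify_sig(&c.owner_pk, c.collection.as_bytes(), &c.owner_sig)
        && now <= c.expires_at                    // time limit still valid?
        && (!c.read_only || is_read(req.operation)) // fits "read only"?
        && verify_sig(&c.bearer_pk, req.operation.as_bytes(), &req.app_sig)
}
```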

1 Like

I find the idea of capability-based access control quite interesting :smiley:
It’s worth mentioning that this is apparently also being looked at by the big players: Google is currently developing a new (mobile?) OS called Fuchsia that is supposedly built around a capability-based model as well.

That being said, I dunno about this from the user perspective. Either apps would have to request these capability tokens when they need them, in which case it would look pretty much the same as the permission-based model from a user perspective; or users have to “manually” create capability tokens and give them to an app, which adds many more steps to the process and makes it more difficult for many users to understand.
Furthermore, capabilities still sit between the data and the app, which would discard the idea of prohibiting the sharing of data as opposed to prohibiting access to data.

2 Likes

It would look a lot more natural than an ACL-based model. Instead of worrying about weird stuff like groups and users and such, we could define access in a more natural way, such as “you can look at my photos until next Friday”. This makes a lot more sense than “let’s add user x to group z because I think group z has read access to photos; oh, by the way, let’s not forget to remove him next Friday.”

Let’s not forget we’re talking about security so it isn’t about how things look but what works.

Nevertheless, the GUI must be as frictionless as possible.

This is a permission-based model, but the permission is represented by a document given to an actor, instead of by a set of rules stored next to the object in some sort of database.

In other words, we not only get more control, we also avoid having to come up with a way to store the permissions, because users (well, their apps) will store them for themselves.

Yes. For example, your newly installed photo gallery app will have to request read access to your photo collection, which you will give it, potentially restricting it to a certain time period or something.

That would be silly. It has to cause as little friction as possible, and that’s easier than with an ACL type of permission model, because the everyday person is more familiar with capabilities: driver’s licenses, passports, bank cards, and the like.

We need to acknowledge that there is no way to prohibit sharing. A user could share their password or just send a copy of the data. Some battles can’t be won, it’s that simple.

Capabilities make sharing or delegation explicit, because you either add a caveat demanding a specific signature or you don’t. If you do, you expect (though can’t enforce) that only a certain entity (an app instance on your device, or a friend, or your boss) will use the certificate. If you don’t add such a caveat, anybody who sees the certificate will also be able to use it.

So, capabilities are not a bit less secure than ACLs in this regard, and they are both much stronger and much more “capable” in others, such as delegation, which is impossible with ACLs.

2 Likes

Revisiting this topic to think about the UX for share/publish controls (rather than the current model of access controls).

@dugcampbell says that Maidsafe are going to test out some UX ideas on this approach to see what we can come up with.

My first thought was to try to avoid confirmation dialogues when you want to publish or share, because while they are an obvious solution, they aren’t great UX, as we can see from the current Authenticator.

So my first question is: can we integrate the grant of permission within the application itself? Then there would be no context switch (Authenticator popup), and we could know that the user understood the consequences of the action they took. I don’t have a solution for this, and I suspect it isn’t possible, but I think it would be the ideal solution if it were.

Then I thought of a tricky problem which is not present in the current access control model: when the user grants permission to share or publish something, how can we know that the application doesn’t leak different data than the user intended? This seems especially tricky if by default the application has access to everything. This makes me wonder if the idea is feasible at all. Any bright ideas?

I guess we will always need trust to some degree, but I’m concerned this blows a big hole in the idea of controlling what an app can share/publish. We might as well remove the control mechanism altogether and rely solely on trust if there is no way to ensure an application only shares what the user intended.

2 Likes

I have a lot of different ideas, to be honest, with pros and cons for each. All are about capabilities, where apps would attach credentials to each request. Whether a request is granted would depend on whether the credentials are sufficient.

I strongly disagree. Access control must be unforgeable; that is, we need a way to give users assurance that they are accessing legitimate access control settings, and that calls for a context switch – a change from the app to something that’s visibly and obviously part of the “system” and not of the app. Anything else is an invitation for exploitation.

Users should never be expected to trust client software, apart from two pieces (which may be one): the one through which access control is managed (which may or may not be the Browser), and the one that checks that those access rights are enforced (the Browser).

So, I will assume apps will run in the SAFE Browser, which will act as a kind of operating system, enforcing access control restrictions between the apps and the device. The Browser will be the client side of access control; the vaults on the network will be the “server” side (excuse me for using swear words here).

It could be somewhat possible if we had data handlers chained after the apps. For example, a photo app could attach the GPS coordinates to an image, but they would still not reach the network if the photo uploader service were configured to remove GPS coordinates from the metadata of image files.
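A rough sketch of what such a chain could look like (the trait and type names are invented for the example):

```rust
/// Hypothetical handler chain: each stage may transform the outgoing data
/// before it reaches the network, independently of the app that produced it.
trait Handler {
    fn process(&self, data: Vec<u8>) -> Vec<u8>;
}

/// Stage configured by the user: strip location metadata from images.
struct StripGps;

impl Handler for StripGps {
    fn process(&self, data: Vec<u8>) -> Vec<u8> {
        // Placeholder: a real implementation would parse the EXIF metadata
        // and drop the GPS fields; here we just hand the bytes through.
        data
    }
}

/// The uploader runs the photo through every configured stage, so even if
/// the app attached GPS coordinates, they never reach the network.
fn upload(photo: Vec<u8>, chain: &[Box<dyn Handler>]) -> Vec<u8> {
    chain.iter().fold(photo, |data, stage| stage.process(data))
}
```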

The above example is admittedly contrived, because well-designed access control would already have stopped the app from gaining access to the GPS coordinates in the first place.

Imagine something like this:

  • You install an app. Let’s say it’s a chat/phone app.
  • The app asks for a bunch of permissions:
    • Access to your contacts: you select your “friends and family” group from your public identity.
    • A network location to store data, things like message logs, screenshots, files you sent and that were sent to you: you specify the network directory in the Authenticator but what the App will get is a credential to access that directory; this will be checked by the vaults. At this point, you may add additional restrictions, for example an expiry date that will be checked by the browser (that is, on the client side, not by the vaults).
    • Access to the camera: you select only the selfie camera (the app can’t access your main camera, and it doesn’t know if it exists) and specify that you want to get a confirmation popup whenever the App tries to access it.
    • Something similar for the mic, but let’s spice it up: What if we could also add “filters” to distort your voice? Not necessarily a good example for chat with friends and family, but things like that would be great for whistleblowers who could be sure the app would only receive a distorted version of their voice, nothing directly from the mic.
  • The app receives the necessary credentials that it will need to hand over to the Safe Browser whenever it requests access to hardware or network resources. The app doesn’t have to know anything about the exact content of those credentials or how many and what kind of restrictions they contain, only which one to present when it wants to access the camera or when it wants to send a message to a contact.

We would have an authorization system where the details of the access restrictions would no longer be a concern of the apps.
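To sketch the app’s side of that (all names here are invented for illustration): the app treats credentials as opaque blobs and merely picks which one to present; the Browser and the vaults do all the actual checking.

```rust
use std::collections::HashMap;

/// From the app's point of view a credential is just an opaque blob; the
/// app doesn't know which restrictions it carries.
type Credential = Vec<u8>;

/// Hypothetical resources the app may ask the Browser for.
#[derive(Hash, PartialEq, Eq)]
enum Resource {
    SelfieCamera,
    Microphone,
    StorageDir,
}

struct App {
    // "which one to present when it wants to access the camera"
    credentials: HashMap<Resource, Credential>,
}

impl App {
    /// The app only picks the right token; the Browser (client side) and
    /// the vaults (network side) enforce whatever restrictions it carries.
    fn request_access(&self, resource: Resource) -> Option<&Credential> {
        self.credentials.get(&resource)
    }
}
```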

On the other side, our authorization could be as detailed as we wanted, and everything would be in the user’s control.

Moreover, the UI would be a separate component that could be changed, refined, enriched, or streamlined independently of the apps whenever necessary, and there could be different versions for control freaks, tinfoil hatters, and regular users. Adding new types of controls would be a non-issue too.

5 Likes

I would just suggest that all apps/websites/etc. on the SAFE Network be open source, and that there be logs of every command and every operation, so the user can see them or send them to their IT security people to check.