Web Apps and access control

That has little to do with time-based access control in the Safe Browser, because that’s not necessarily about accessing the network but, e.g., accessing the camera. As long as apps can’t change the system clock, users don’t need to worry that apps could thwart the access restrictions, and that’s enough for this use case.

… and, while I’m at it, I’ll keep ranting about capability-based access control, because that’s the only solid foundation to build on, whatever access rights end up looking like on the surface :rofl:

But the authentication is happening at a network level, not at the client, right?

If I log in on another computer, what happens to the rights I’ve granted to that app? How and where is this duration stored and understood by the network?

No, not this one. Also, it’s not about authentication but authorization; identity doesn’t have to matter.

Let’s say you said “allow camera access for 2 weeks” when you authorized the app. This would translate into a credential saying that app 894398 has access to camera instance 42325092, signed by Safe Browser instance 9859234 – where the id of the Browser is randomly generated on each device and the camera id is assigned by the Browser. This credential would then be stored in the app’s private storage area (credentials to access this area would need to be handed over to the app when it’s started), so the app could retrieve it whenever it needed the camera. At that point it would hand the credential over to the Browser (that is, not the vaults), which would check it and allow access if everything’s fine, show an “expired, would you extend?” type of message if it’s expired, a stern warning if something seemed off, and so on.
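
A minimal sketch of what such a credential and the Browser-side check could look like, in Rust. Every name and field here is hypothetical, and an empty-signature test stands in for real public-key verification (a real implementation would use something like ed25519):

```rust
use std::time::SystemTime;

/// A capability credential along the lines described above: “app 894398
/// has access to camera instance 42325092, signed by Safe Browser
/// instance 9859234”. All field names are made up for this sketch.
struct CameraCapability {
    app_id: u64,        // the app the credential was issued to
    camera_id: u64,     // camera instance id, assigned by the Browser
    issuer_id: u64,     // per-device, randomly generated Browser id
    expires_at: SystemTime,
    signature: Vec<u8>, // in reality a public-key signature (e.g. ed25519)
}

/// What the Browser decides when the app presents the credential.
enum Verdict {
    Allowed,
    Expired,  // show the “expired, would you extend?” prompt
    Rejected, // something seems off: show a stern warning
}

impl CameraCapability {
    /// Client-side check, run by the Browser, not the vaults.
    fn check(&self, presenting_app: u64, browser_id: u64) -> Verdict {
        // Stand-in for real signature verification.
        let signature_ok = !self.signature.is_empty();
        if !signature_ok || self.app_id != presenting_app || self.issuer_id != browser_id {
            return Verdict::Rejected;
        }
        if SystemTime::now() > self.expires_at {
            return Verdict::Expired;
        }
        Verdict::Allowed
    }
}
```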

Basically, all of this (except the storage of the credentials) is done on the client side, so whether the network has a concept of time or not doesn’t matter. Also, access control could, where it made sense, depend on the device the app is running on.

Storage access is somewhat different, of course. I’m not sure time would be a problem there either, as expired credentials could be rejected well before the vault code is ever touched, similarly to how IP TTL works without having to ask the upper layers of the Safe Network.


As far as I understand, almost all the ACL settings JoeSmithJr proposed are just browser-based settings; basically, the network does not need to know anything. I can’t imagine how all those proposed ACL settings would work with untrusted custom client code. As far as I understand it, settings like camera access, time-based permissions, etc. are just a single config file stored on the network. This config file is loaded from the network on browser start, and all those settings are applied by the browser GUI (or any other custom GUI willing to comply with them). So the browser is like an operating system, managing permissions; if you use an untrusted browser, you lose that security. I imagine it like this: the user allows permission to read some data for 7 days, the browser asks the network for a reading permission without any time limit, and after 7 days it removes that permission from the network automatically. Access settings for the camera etc. have nothing to do with the network, so they are just settings that any browser clone can use, but only the official browser will honor them for sure.
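
To make the idea concrete, here is a rough sketch of the shape such a config file could take, written as Rust types. All names are invented for the sketch; the point is that this is plain client-side data the network itself never interprets or enforces:

```rust
use std::time::SystemTime;

/// Illustrative shape of the single settings file described above:
/// stored (encrypted) on the network, downloaded by every browser
/// instance the user signs into, enforced only by the browser.
struct AclSettings {
    entries: Vec<AppPermission>,
}

struct AppPermission {
    app_id: String,
    resource: Resource,          // what the permission is about
    granted_at: SystemTime,      // timestamp written by the browser
    valid_for_secs: Option<u64>, // None = no time limit
}

enum Resource {
    Camera,
    Microphone,
    ReadData { path: String },
    WriteData { path: String },
}

impl AppPermission {
    /// Applied by the browser GUI (or any client that chooses to comply);
    /// the network never sees this check.
    fn is_active(&self) -> bool {
        match self.valid_for_secs {
            None => true,
            Some(secs) => match self.granted_at.elapsed() {
                Ok(age) => age.as_secs() < secs,
                Err(_) => false, // clock went backwards: fail closed
            },
        }
    }
}
```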


Why would I want to do this? What’s the use case for only allowing myself to read my data via a certain app, for a limited time, on only one computer?

How and where is this duration stored and kept track of?

@dugcampbell I’m more bullish on the first question (integrating auth into the app UI), although that might be because I haven’t given it much thought yet :slight_smile: - so mainly intuitive optimism.

You are more bullish on the second (stopping apps leaking private data). The first is a nice-to-have though, while the second is crucial, so I want to explore that.

Let’s say I’m using an app to publish some initially unpublished data. For example, an unpublished draft blog post being published to the public blog. The app has permission to read my private data (not just draft blog posts), and when I publish a blog post I authorise it to create a public immutable file (the new public post) and modify other published files that link it into the website of the blog.

At this point the app can gather other data from everything it has access to (a lot, if we are not doing access control), so it is important that it can’t now leak that data for nefarious purposes.

I’m thinking it can create the blog post’s immutable data file, plus one or more other files we don’t know about. Maybe we can prevent that by requiring permission for creating every public immutable file? But what does the UX look like when publishing means an app creates more than one file, or many files - we can’t ask the user about every file in such a case. So one question is: can we prevent an app that is requesting auth to create public immutable data from using this auth to leak data it has access to, under cover of publishing something else?

If we can’t, then the question becomes: can we prevent an app from leaking the locations of public immutable data that it has secretly used to store copies of our private data?

I’ve not thought much about this question, but it seems like a tricky thing to achieve. For example, I think an app can easily embed a small piece of hidden data in some legitimately published web page, image, document file etc., in order to leak the address of something else (i.e. an illegitimately published immutable data object containing private information).

I think we need a solid solution to at least one of these questions in order to prevent ‘auth to publish’ from being abused. The first seems the least difficult, so maybe we should explore ways to prevent auth being abused to publish files that the user is not aware of.

[Above I’m thinking in terms of the existing data types, so maybe the new datatypes address some of these concerns?]


I would prefer time limits on credentials being honored by a “sanity check” type of access control layer, before any request is handed over to the actual vault code for processing.

Such a layer could check things like:

  • do the credentials apply to the object the request is about?
    • original issuer of the root credential is the same as the owner of the object
    • if the request is for writing, does the credential allow writing?
  • if the credential expects a signature from the presenter, does it match?
  • is the credential expired? – Note: This check is logically equivalent to a TLS timestamp check, something the Safe Network already depends on by using protocols like QUIC.

All the above are static checks about the request and the credentials, not something that needs information from the Safe Network (other than maybe looking up public keys, if they aren’t embedded in the credential itself for brevity).
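
Here is how that checklist might translate into code. This is a minimal sketch: the request and credential shapes are invented for the illustration, and a plain byte comparison stands in for real signature verification:

```rust
use std::time::SystemTime;

/// Hypothetical shapes for the “sanity check” layer described above;
/// none of this is actual vault code.
struct Credential {
    root_issuer: [u8; 32],           // public key of the original issuer
    allows_write: bool,
    presenter_key: Option<[u8; 32]>, // if set, presenter must prove this key
    expires_at: Option<SystemTime>,
}

struct Request {
    object_owner: [u8; 32], // owner of the object the request is about
    is_write: bool,
    presenter_proof: Option<[u8; 32]>, // stand-in for a real signature
}

/// All checks are static: nothing here asks the network anything.
fn sanity_check(req: &Request, cred: &Credential) -> Result<(), &'static str> {
    // Do the credentials apply to the object the request is about?
    if cred.root_issuer != req.object_owner {
        return Err("credential was not issued by the object's owner");
    }
    // If the request is for writing, does the credential allow writing?
    if req.is_write && !cred.allows_write {
        return Err("credential does not allow writing");
    }
    // If the credential expects a signature from the presenter, does it match?
    if let Some(expected) = cred.presenter_key {
        match req.presenter_proof {
            Some(proof) if proof == expected => {} // real code: verify signature
            _ => return Err("presenter signature missing or invalid"),
        }
    }
    // Is the credential expired? (the TLS-timestamp-like check)
    if let Some(deadline) = cred.expires_at {
        if SystemTime::now() > deadline {
            return Err("credential expired");
        }
    }
    Ok(())
}
```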

I could list multiple reasons, but instead let me just ask: what would the downsides be of the possibility of restricting access based on time?

Other than that, I didn’t mean restricting myself from accessing my data, but restricting the app from accessing my data or, even more, the hardware on my phone.

Let’s say I set up my app to have camera and microphone access for a few (say, 3) hours at a time. I go sightseeing, take photos, and go home. By the time I’m home, I know enough time has passed that the app can no longer spy on me, even if it’s a bad app.

The above example is one reason why integrating access control in the apps themselves is impossible, even if they were completely honest about this aspect of their functioning (an irrational expectation, but let’s roll with it). If all apps had their own idiosyncratic ways to ask for permissions, how would anything like the above be possible?

I wasn’t asking this to be facetious, but genuinely looking for use cases to get my brain around #uxdesignerlyfe :grinning:

Just for clarity, we are talking read only here. Not rights to share or publish.

I understand you have strong reservations about embedding auth in the app UI and you might be right, but I’m focusing on one issue at a time.


I see your edit now and this makes more sense…

This is where we are getting snagged up in the mental models of the old world, current world, and the new possibilities of safe.

When we are talking about an ‘app’ having access, what we are talking about is granting myself the ability to manipulate certain bits of data via some software. This is an important distinction.

It’s not that an app is a third party or server that I’m entrusting my data to (like current clearnet web apps). Think of my SAFE account as an air-gapped computer with no internet access: I would naturally feel quite comfortable installing most software on there and editing my data, without fear of leakage.

When it comes to connecting it to the internet, or hitting send on an email, that is where the risk starts to come in.

This is where the UX approach of Data Sharing Control in the first instance, vs Data Access Control, comes in. There is far less friction and permission fatigue (and therefore, I’d argue, less long-term risk) in allowing apps read/write access by default, but requiring explicit permission to publish or share. This is more akin to the air-gapped computer mental model.

Well, it’s demonstrably impossible to guarantee the app won’t lie (open source offers no practical solution for this). So yes, I do have strong reservations.

Haha no worries. I look at access control as a fundamentally technical problem with some serious user interface issues.

Firstly, we need a foundation that delivers on the promises and philosophy of the Safe Network: principle of least privilege, unprecedented privacy assurances, paranoia squared, and so on.

Then, it’s time to make it user friendly.

Any significantly complex app will need to access a number of different things, some read-only and some read-write. As long as it can’t write anything that’s publicly accessible, you are safe in theory. However, it’s not the app’s business what is private and what isn’t, and we humans tend to be awful at keeping track of such things. So, why not delegate it to a sufficiently capable access control system?

For example, what if I have a list of destination folders for my phone app, some of which are public, some of which are private? I could add a time limit to my private folder (“ask again if not accessed for an hour”) to make sure I don’t make silly mistakes.

Or, what if I use one of my apps to organize my photos, so it has RO access to my private photos folder (sounds safe, it can’t mess up anything, right?) but RW access to my public folder? It can start copying stuff from private to public, which may or may not be a good idea. To be honest, this use case requires a lot more thought than time limits and such. Basically, it’s the question of putting a virtual air gap between different facets of life, sort of like how the OS-facilitated copy/paste feature of Qubes OS does it.

It’s the problem of delegation, and the reason why capabilities are the way to go: ACLs have no way to deal with delegation, but it comes naturally with capabilities.
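
A toy sketch of what “comes naturally” means here (all names invented; a real system would sign every link of the delegation chain): the holder of a capability can locally mint a strictly weaker one, with no central list to update, whereas with ACLs the resource owner’s list would have to change for every new delegate.

```rust
/// A toy capability: a path plus the rights held over it.
#[derive(Clone)]
struct Capability {
    path: String,
    can_write: bool,
    expires_at_secs: u64, // unix time, kept as a plain number for brevity
}

impl Capability {
    /// Attenuate: the delegate may get less than I hold, never more.
    fn delegate(&self, write: bool, expires_at_secs: u64) -> Option<Capability> {
        if write && !self.can_write {
            return None; // can't grant rights I don't have
        }
        Some(Capability {
            path: self.path.clone(),
            can_write: write,
            // the delegated credential can never outlive mine
            expires_at_secs: expires_at_secs.min(self.expires_at_secs),
        })
    }
}

fn main() {
    let mine = Capability {
        path: "/photos/private".into(),
        can_write: true,
        expires_at_secs: 2_000_000_000,
    };
    // Hand the photo app a read-only, shorter-lived view of the same folder.
    let for_app = mine.delegate(false, 1_900_000_000).unwrap();
    assert!(!for_app.can_write);
}
```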

How do you differentiate between the two? Both storing and publishing mean “writing stuff to the network” from a technical point of view. Unless, of course, we come up with the concept of “dedicated app storage”, which sounds cute for a master’s degree final project but is a lot less sexy for the Internet of the Future™

I think we are arguing for the same thing really, just approaching it from different perspectives.

Yeah, this is where—as you’d probably expect—we differ.

Access control is a fundamentally human problem, with technical and human-to-computer-interface solutions.

I don’t mean either side can be compromised on (or that the UI is less important than the technical side for the success of the network), only that it’s the technical side that can be formalized, so it will always be easier to fit a UI onto a specific technical solution than to shoehorn a technical solution under a UI concept.

Moreover, there are not many fundamentally different technical solutions (it comes down to ACLs vs capabilities vs nothing, really), but there are almost infinite ways to turn them into user-friendly UIs.

Also, it is impossible to turn a bad technical solution (ACLs, lack of control in general, etc.) into a useful one with a good UI, but a working technical solution can be given newer and newer UIs until we get it right, or even different ones for different user bases or cultures.

No problem - we’re building ‘the impossible network’ :wink:

It’s easier, yes, but it’s not the best way to look at it. Ultimately, these aren’t computer science problems we are trying to solve; we are trying to build a system that meets human needs. Human-to-human communication goals.

No user will have a goal like “to make sure I have fine-grained access control”. They’ll have goals like “I wanna make sure I can pull off this surprise party for my friend, without a hitch”.

UI is only one layer of the whole pie.

FWIW, I think this is the right approach: to tackle one issue at a time and learn as we go. If we just set out “here are all the possible controls anyone might ever want” in one big lump, and then scale back from there, I don’t think we are really considering the overall user needs, nor the context, appropriately, and we might miss out on some elegant solutions.


That’s not what I meant though. Picking the right approach (philosophy, framework of thought, etc.) that allows for “all possible controls anyone might ever want” does not equal actually implementing all those things right from the get-go. All I’m talking about is avoiding shooting ourselves in the foot.

It’s the UI’s job to make sure the user never has to think about weird stuff like “fine-grained access control”.

It’s the access control framework’s job to ensure that unexpected things can’t happen. Arguably, “no surprises, ever” can also be thought of as a crucial component of the UI.

As I wrote before, that settings file is stored on the network, and every browser instance that I sign into will download it, so it is shared among all my computers. And time-limiting access for some period is an amazing feature because of the very nature of apps.

Imagine I am using Skype for texting only and I don’t want it to access my camera. Then, after a long time, my business partner wants to do a camera call, to verify that he is talking to me. I can allow the camera for, let’s say, 1 hour, and then the browser times that permission out. When I am using Android apps, I never allow them more than I really need to. But then, after some time, I want to upload my profile picture. It is a one-time action, and I really need to do it, so I allow it. But then that app has access to my disk forever, and I am forced to manually search for that permission to disable it. Since that is uncomfortable, I seldom do it, and I end up with an app that can be bought by someone else, who can update it with malicious code and harm my phone. So time-based permissions are a mechanism for avoiding future attacks. If I trust an app now, that does not mean I will trust it in a few months, or even in a few minutes.

A common example is the Signal app. For every conversation, you set a duration after which messages are automatically deleted. It is not exactly a permission example, more like per-user data access, but time-limited access to my conversations is a core feature this app is popular for.

Duration is tracked by the browser, so the config has a timestamp created by the browser. The browser has access to the computer’s clock, and the computer’s time is synced with common time servers, so it should be roughly correct. There is no sync with the Safe Network. Any instance of the browser on any of my computers can time out any permission I gave through my browser.
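
For example, the expiry sweep could be as simple as the following sketch. Here `revoke_on_network` is a made-up stand-in for whatever call actually removes the underlying network permission:

```rust
use std::time::{Duration, SystemTime};

struct Grant {
    app_id: String,
    granted_at: SystemTime, // timestamp written by the browser
    valid_for: Duration,
}

/// Hypothetical stand-in for the API call that removes the
/// permission from the network on the user's behalf.
fn revoke_on_network(app_id: &str) {
    println!("revoking network permission for {app_id}");
}

/// Run on browser start (and periodically): no network time involved,
/// just the local clock, assumed roughly correct via NTP.
fn sweep(grants: &mut Vec<Grant>) {
    grants.retain(|g| {
        let expired = g
            .granted_at
            .elapsed()
            .map(|age| age >= g.valid_for)
            .unwrap_or(true); // clock anomaly: fail closed
        if expired {
            revoke_on_network(&g.app_id);
        }
        !expired
    });
}
```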


I think we are arguing for the same thing. Just using different language :grinning:


Quite likely. It’s always a good thing to see the same thing from different angles :slight_smile:

As I mentioned above, it is perfectly possible to address this practically, in a manner similar to the sandboxed WebAssembly security model. If the Safe Browser includes a default security condition that apps and their dependencies must be open source, then on first use, when the Safe Browser interprets the app’s source code to cross-check it against the app’s binary signature, the browser can automatically catch and eliminate the majority of side-channel attack code. What is the point of the most advanced UI and access control system in the world if we must trust that closed source apps are not just establishing a side channel to leak data?
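
One way that cross-check could work, sketched under the assumption that the app can be built reproducibly from its published source (a big assumption in practice). The non-cryptographic `DefaultHasher` is purely a stand-in for something like SHA-256:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Digest some bytes; a real check would use a cryptographic hash.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// First-use check: `built_from_source` comes from a reproducible build
/// of the published open source code; `shipped_binary` is what the app
/// actually distributes. If the digests differ, the binary was not built
/// from the source it claims, and the browser can refuse to run it.
fn source_matches_binary(built_from_source: &[u8], shipped_binary: &[u8]) -> bool {
    digest(built_from_source) == digest(shipped_binary)
}
```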