Web Apps and access control

Logging and open source have nothing to do with security.

Why should people publish their code? How would you enforce it? If you can’t enforce it, what’s the point?

It’s possible without open source. The Safe Browser could log API calls but, if you had control of the OS, you could also just run the apps through strace/ltrace or similar and have a full log of everything they do. Would it help though?

For one, do you have any idea how much data those logs would be? Where would they be stored? Would you put up with the performance hit?

Who would want to go over “every command and every operation” their device executed? Who would have the time even if they wanted to? Going through the logs takes a lot more time than generating them, so you’d have about 5 minutes of action and the rest of the day would be spent on the logs.

So, is it either security or privacy now?


Those are just the minor problems with the idea.

There are many things that shouldn’t happen at all, not just shouldn’t happen unnoticed. If a bad app on a girl’s phone takes pictures of her and they end up on a bad site, she will eventually learn about it, with or without the log files. Will it help her?

No, the only sensible approach to security is the principle of least privilege. Firstly, it’s unacceptable for an app to have access to anything it’s not explicitly authorized for. Secondly, no app should be authorized for anything that isn’t necessary for its functioning.

The question is not if it will be “too inconvenient” or other bullshit like that but how to make it happen without making it too inconvenient. It’s the Safe Network, after all. What’s the point of a rock-solid foundation if we plan to build sandcastles on top of it???


Great ideas here, have enjoyed reading this thread and excited to see the solutions and UI as it evolves.

If Apps run in our “client side” browser as you put it, then we as a distributed community do have the power to “enforce” open source via incentives. We could incentivize all sorts of sandbox conditions on the code running in our Safe Browser “operating system”, and when done en masse we’d collectively be a force to be reckoned with. Apart from only allowing open source Apps to run in our Safe browsers, the community could also require that any App’s source code meets minimum license, presentation and formatting standards - no obfuscated or closed-license code etc. If any App wants to even be visible in our Browsers’ view of the distributed App store, then it also cannot contain any upstream packages/crates/code that can do an end run around the security and authentication mechanisms in the Safe browser. No TCP/IP code that can open direct or indirect communication channels, for example; only a restricted, sandboxed instruction set. Any and all communication can only be handed off and handled via Safe APIs, reducing a malicious App’s ability to game the system.

Of course this is an open, distributed, permissionless system; you can’t force anybody to do anything, nor prevent users turning to opaque closed source Apps if they want to, and that is OK - there will always be exceptions. However, if the default sandbox settings people get out of the box enforce things like I mentioned above, and all other Apps violating any of those conditions don’t even appear in our view of the distributed App store, then we have created a big incentive biased towards developing compliant Safe Apps, all without any central authority. This holds as long as the majority of individuals continue to agree that the default conditions are a good idea and serve to protect our individual interests. The increasingly popular supply chain attacks could be mitigated using this method as well, thanks to Safe’s immutable storage of the App code combined with checks on any and all App updates. The low hanging fruit would be automated checks (code formatting, no socket code etc), but automated code screening can only get you so far with security. Later on there is no reason why there couldn’t be more advanced conditions available, such as: “Only show apps that have passed security reviews by 3 of 5 of these selected security experts/companies” (that I trust/tip/donate to/subscribe to). Tick.
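
To make that a bit more concrete, here’s a very rough sketch of the kind of client-side visibility filter I mean. The listing fields and the review threshold are entirely made up for illustration; no such metadata or checks exist today:

```rust
// Hypothetical sketch of client-side App store filter conditions;
// none of these fields or checks are real Safe Network metadata.
struct AppListing {
    open_source: bool,       // source is published under an open license
    license_ok: bool,        // meets the minimum license/formatting standards
    uses_raw_sockets: bool,  // TCP/IP code found anywhere in the dependency tree
    trusted_reviews: u32,    // security reviews passed, counted from reviewers I trust
}

// Only Apps passing my default conditions even appear in my view of the store.
fn visible_in_my_store(app: &AppListing, required_reviews: u32) -> bool {
    app.open_source
        && app.license_ok
        && !app.uses_raw_sockets
        && app.trusted_reviews >= required_reviews
}
```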

Safe Browser coupled with WebAssembly could, I think, be the Safe Network’s “killer app” - stress free security for the end user. Run any App you see without having to be a security expert, trust the developer 100%, or stress that the latest App update is now collecting and selling your info.

Outside the Safe Browser Operating System, all bets are off; I do not see how this system would be feasible there.


What community? You’re talking about something that does not exist. It makes about as much sense to talk about “we as a community” as it does talking about “all internet users as a community” – it’s a cute phrase that describes something that isn’t real.

Just think about it. Not even our tiny forum community can agree. I, for one, certainly don’t agree that apps should be forced to be open source, even if it were possible, which it clearly isn’t.

Moreover, any attempt to impose any such rules on the users of the network goes against its fundamental principles, even if it were expressed as some default settings as you suggested.

Also, why is open source so important? Apps should be safe not because we could dissect them to see what they do but because we had absolute certainty that they couldn’t do anything we didn’t authorize them to do.

We will have to agree to disagree then. IMO this forum represents a community which shares a fairly specific vision for the future internet - a small, focused and very specific community. If this community decides that the Safe Browser Operating System should ship with default sandbox conditions that protect individual rights and interests, to uphold the security and freedom of end users, then App developers will have a very hard time going against that grain. They can go ahead and close source their code, load it with whatever security violating code they please - but not only will it not run by default on my (or any other) Safe Browser, it will not even be visible in the distributed App store. Unless of course each individual starts unticking default safety conditions and ignoring the red warning signs, that is.

Well, for one, there should be a “security program”, built by the MaidSafe company or the community, that can recognise malicious code, if what I describe (logs of every command plus open source) were in place.

Well, the Safe Network is a storage medium, and if something is important it is worth spending storage on it!

What I mean by open source is that the user, a security program, or a security IT person could see what the app/website does with the data, so they can determine whether it’s malicious or privacy-attacking.

Once you give access to something private of yours, a closed source program may log it on its servers, keep a file on all users, and sell the information - and we’re back to the old web, where anyone could take the data THAT YOU GAVE and sell it to advertisers.

With open source and logging of every command, an audit by a security IT person, a security program, or some other kind of auditing of the process will give you answers about where your data goes, where it gets stored, where it gets sent, etc.

edit1: with open source we ensure exactly what you say - to quote you: “absolute certainty that they couldn’t do anything we didn’t authorize them to do.”

I think this would certainly be the UX with the least friction, and the most natural to use (i.e. all software is driven by the user, the decisions/actions are within her control, no context switching required), but at the moment I’m not seeing a way this is possible. Although there may be some aspects of the way we approach the UI for this that might help get us a little closer, it’s still gonna remain a tough gig. Particularly on mobile, I’d think.

Well, I’m not the guy to give a definitive answer on this, but we are commencing our first tranche of UX in this area on the assumption that this is possible, and I don’t see why it wouldn’t be. If it is possible to give fine grained control in the ACL model, then the same should be feasible in the DSC model too.

For example, an email application has read and write access by default, so I can GET my inbox, and compose a reply. But when I’m ready to send the email, I’m authorising the app to share that bit of data—that particular email—not just opening up all the taps on all data.
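
Just to sketch that flow in code (invented names, not any real Safe API): reading the inbox and drafting happen under the app’s default access, and only sharing this specific email triggers a prompt.

```rust
// Illustrative only: the app works on my data under its default access,
// but sharing one specific email triggers an explicit authorisation prompt.
enum UserChoice {
    Approve,
    Deny,
}

fn send_email(email_body: &str, prompt: impl Fn(&str) -> UserChoice) -> Result<(), &'static str> {
    // Fetching the inbox and saving drafts needed no prompt at all;
    // only sharing *this* email does.
    let question = format!("You are about to send this email with App X. Allow?\n\n{email_body}");
    match prompt(&question) {
        UserChoice::Approve => Ok(()), // the data is handed off for sending
        UserChoice::Deny => Err("sharing declined by the user"),
    }
}
```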


This is what we have at the moment, and it’s not being proposed to change this. But just to clarify, the access control is managed through the authenticator which is currently in the Browser, but should really be thought of as a separate thing, a gatekeeper. The apps that are authenticated could be web apps, sovereign web apps, or native desktop apps, it wouldn’t matter.

Yeah, this is the model we currently have, and will have in the network. The question is really where, when, and how the user is put in control of these levers; how they manage risk; and how this affects the usability of the network, and therefore the usefulness of the system in helping people achieve their tasks.

It’s easy to conflate giving someone control over security, and actually giving them security. In the middle of those two things is the human’s interaction with the system, which is the important bit that we need to understand and design for.

For example: these GDPR cookie controls that have proliferated across the web (in Europe, but I presume other folk get them too). This is the hell you get on nearly every single website:

[Animated screenshot: a typical GDPR cookie consent dialog]

This is what being given fine grained control up-front looks like. And there is a reason why most people just click the Accept All button as standard; ostensibly making them less secure.

This is the sort of thing we should be aiming to avoid: every single app presenting users with these choices up front, whether or not they need to be made or affect the user’s security at all. This is the permissions fatigue we are talking about and what we should be trying to avoid. Otherwise, people will just get used to clicking Authorise all the time, and stop focusing their attention on the real risks. Malicious actors know the psychology of this very well, and use it to their advantage (e.g. behold the dark patterns that pervade the cookie consent and EULA domains).

This is why we are exploring the Data Sharing Control model in the first instance: focusing the user’s decision making on where the risk is. That’s not to say we are taking away control either; they’d still be able to have that fine grained control should they want it, we’re just not going to force those decisions up front where they aren’t necessary.

Take this as an example:

So, what’s the risk if an app has access to your camera in the DSC model? It can take a bunch of photos of you and save them to your private data, that only you have access to. Strange, and possibly inconvenient; but not so risky.

Now, should the app want to take a photo of you and publish it, then that’s where the risk is, so that’s where you are prompted to take action.

“You are about to publish this image with App X, do you agree?”

Well, thanks for demonstrating what I’m talking about: the broken state of internet security that we could finally fix.

If apps that run in the Safe Browser weren’t able to do anything but what they are explicitly authorized to, then there’s no such thing as “logging it in their servers” because they would have no access to their servers.

In other words, it’s completely safe to add read access for an app to your private data if you can be sure that app has no write access to stuff you don’t control.

No, you do not. You ensure there’s a possibility that, having the time and resources, you could check if it does what it says. It’s a nice thing, it’s just NOT SUFFICIENT. In other words, it’s just a slightly perverted version of “security through obscurity”.

There is no other way than provable and stringent access control.


Those two are different only in our eyes, not from a technical point of view. “My private data” is not a technical concept (or, it shouldn’t be), just an interpretation of the situation where only I have read and write access to a piece of information.

Such a thing can be checked and ensured with ACLs where access rights are stored with the objects, but that’s about where the benefits of ACLs end; they are a broken model of access control and they should be forgotten, abolished, and cast into the pits of hell.

Yes, this is what I meant before when I, somewhat angrily, wrote:

The question is not if it will be “too inconvenient” […] but how to make it happen without making it too inconvenient. It’s the Safe Network, after all. What’s the point of a rock-solid foundation if we plan to build sandcastles on top of it???

Doing access control right will always be inconvenient. However, it doesn’t have to be inconvenient to the users if the developers take upon themselves much of the inconvenience by putting the necessary amount of thought into how to make it better.

Examples:

  • If I set “allow camera access for 2 weeks” and it expires, then the authenticator app should suggest that I extend it by a month, then by two months or half a year, and so on.
  • If I’ve already set “allow camera access” for 2 apps from the same developer, it could suggest “always allow camera access to apps from this producer” on the 3rd.
  • Have settings like: “if video recording is allowed, the microphone is also allowed”, “always allow read access to public resources”, and similar.
  • Have a few access profiles to pick a default from: “paranoid” where everything must be set manually, “strict” where a few simple rules apply automatically and most things are asked about, “lazy” where only the really serious things are asked about.

And so on. There are so many possible ways to make these things more convenient without having to break security at a fundamental layer of the design.
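
To illustrate the “access profiles” idea from the list above, here’s a toy sketch of how a profile could decide whether to prompt at all. The names and rules are invented for illustration, nothing to do with the actual Authenticator:

```rust
// Toy illustration of default access profiles; invented names, not real Authenticator code.
enum Profile {
    Paranoid,
    Strict,
    Lazy,
}

enum Permission {
    Camera,
    Microphone,
    ReadPublicData,
    PublishData,
}

/// Does this permission need an explicit prompt, or can a default rule answer it?
fn needs_prompt(profile: Profile, perm: Permission) -> bool {
    match profile {
        // everything must be manually authorised
        Profile::Paranoid => true,
        // a few simple rules apply automatically, the rest is asked about
        Profile::Strict => !matches!(perm, Permission::ReadPublicData),
        // only the really serious things are asked about
        Profile::Lazy => matches!(perm, Permission::PublishData),
    }
}
```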


Minor point, but there is no time on the network, so this kind of thing will not be possible.

An alternative might be “Allow this app to take 10 photos”.

Yeah, this is what we are embarking on (and indeed discussing).

That has little to do with time-based access control in the Safe Browser because that’s not necessarily about accessing the network but e.g. accessing the camera. As long as apps can’t change the system clock, users don’t need to worry they could thwart the access restrictions, and that’s enough for this use case.

… and, while at it, I’ll keep ranting about capability based access control because that’s the only solid foundation to build on whatever access rights will look like on the surface in the end :rofl:

But the authentication is happening at a network level, not at the client, right?

If I log in on another computer, what’s happened to the rights I’ve granted to that app? How and where is this duration stored and understood by the network?

No, not this one. Also, it’s not about authentication but authorization; identity doesn’t have to matter.

Let’s say you said “allow camera access for 2 weeks” when you authorized the app. This would translate to a credential that says app 894398 has access to camera instance 42325092, signed by Safe Browser instance 9859234 – where the id of the Browser is randomly generated on each device and the camera id is assigned by the Browser. This credential would then be stored in the app’s private storage area (credentials to access this area would need to be handed over to the app when it’s started) so the app could retrieve it whenever it needed the camera, at which point it would hand it over to the Browser (that is, not the vaults), which would check it and allow access if everything’s fine, show an “expired, would you extend?” type of message if it has expired, or a stern warning if something seemed off, and so on.

Basically, all of this (except the storage of the credentials) is done on the client side, and whether or not the network has a concept of time doesn’t matter. Also, access control could, when it made sense, depend on the device the app is running on.

Storage access is somewhat different, of course. I’m not sure time would be a problem there either, as expired credentials could be rejected well before the vault code is ever touched, similarly to how IP TTL works without having to ask the upper layers of the Safe Network.
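
Purely as an illustration of the camera case above, something like this (made-up types and field names, nothing from the actual Safe codebase; signature verification is assumed to happen elsewhere):

```rust
// Hypothetical sketch of the credential described above, nothing more.
struct CameraCredential {
    app_id: u64,        // e.g. 894398
    camera_id: u64,     // e.g. 42325092, assigned by this Browser instance
    browser_id: u64,    // e.g. 9859234, randomly generated on each device
    expires_at: u64,    // device-local time; the vaults never see this
    signature: Vec<u8>, // signed by the Browser's device-local key (verified elsewhere)
}

enum Decision {
    Allow,
    Expired, // "expired, would you like to extend?"
    Reject,  // something seems off: stern warning
}

// The Browser (not the vaults) makes this decision when the app presents the credential.
fn check(cred: &CameraCredential, my_browser_id: u64, now: u64, signature_ok: bool) -> Decision {
    if !signature_ok || cred.browser_id != my_browser_id {
        return Decision::Reject;
    }
    if now > cred.expires_at {
        return Decision::Expired;
    }
    Decision::Allow
}
```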


Why would I want to do this? What’s the usecase for only allowing myself to read my data via a certain app for a limited time on only one computer?

How and where is this duration stored and kept track of?

@dugcampbell I’m more bullish on the first question (integrating auth into the app UI), although that might be because I haven’t given it much thought yet :slight_smile: - so mainly intuitive optimism.

You are more bullish on the second (stopping apps leaking private data). The first is a nice to have though, while the second is crucial so I want to explore that.

Let’s say I’m using an app to publish some initially unpublished data. For example, an unpublished draft blog post being published to the public blog. The app has permission to read my private data (not just draft blog posts), and when I publish a blog post I authorise it to create a public immutable file (the new public post) and modify other published files that link this into the website of the blog.

At this point the app can gather other data from everything it has access to (a lot, if we are not doing access control), so it is important that it can’t now leak that data for nefarious purposes.

I’m thinking it can create the blog post immutable data file and one or more other files we don’t know about. Maybe we can address that by requiring permission for creating every public immutable file? But what does this UX look like when publishing means an app creates more than one file, or many files - we can’t ask the user about every file in such a case. So one question is: can we prevent an app that is requesting auth to create public immutable data from using this auth to leak data it has access to, under cover of publishing something else?

If we can’t, then the question becomes: can we prevent an app from leaking the locations of public immutable data that it has secretly used to store copies of our private data?

I’ve not thought much about this question but it seems like a tricky thing to achieve. For example, I think an app could easily embed a small piece of hidden data in some legitimately published web page, image, document file etc., in order to leak the address of something else (i.e. an illegitimately published piece of immutable data containing private information).

I think we need a solid solution to at least one of these questions in order to prevent ‘auth to publish’ from being abused. The first seems the least difficult, so maybe we should explore ways to prevent auth being abused to publish files that the user is not aware of.

[Above I’m thinking in terms of the existing data types, so maybe the new datatypes address some of these concerns?]


I would prefer time limits on credentials being honored by a “sanity check” type of access control layer, before any request is handed over to the actual vault code for processing.

Such a layer could check things like:

  • do the credentials apply to the object the request is about?
    • original issuer of the root credential is the same as the owner of the object
    • if the request is for writing, does the credential allow writing?
  • if the credential expects a signature from the presenter, does it match?
  • is the credential expired? – Note: This check is logically equivalent to a TLS timestamp check, something that the Safe Network already depends on through using protocols like QUIC.

All the above are static checks about the request and the credentials, not something that needs information from the Safe Network (other than maybe looking up public keys, if not embedded in the credential itself for brevity).
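
As a sketch (invented types, not Safe Network code), the whole layer boils down to a handful of purely local checks on the request and the credential:

```rust
// Sketch of the pre-vault sanity checks listed above; all names are invented.
struct Credential {
    issuer: [u8; 32],             // who issued the root credential
    object: [u8; 32],             // the object this credential covers
    allows_write: bool,
    holder_key: Option<[u8; 32]>, // if set, the presenter must hold this key
    expires_at: Option<u64>,      // logically the same as a TLS timestamp check
}

struct Request {
    object: [u8; 32],
    is_write: bool,
    presenter_key: Option<[u8; 32]>,
}

fn sanity_check(req: &Request, cred: &Credential, object_owner: &[u8; 32], now: u64) -> bool {
    // the credential applies to this object, and its root issuer owns the object
    if cred.object != req.object || &cred.issuer != object_owner {
        return false;
    }
    // writes require write permission
    if req.is_write && !cred.allows_write {
        return false;
    }
    // if a presenter key is expected, it must match (signature verification itself omitted)
    if cred.holder_key.is_some() && cred.holder_key != req.presenter_key {
        return false;
    }
    // expired credentials are rejected before the vault code is ever touched
    if let Some(expiry) = cred.expires_at {
        if now > expiry {
            return false;
        }
    }
    true // only now would the request be handed over to the vault code
}
```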

I could list multiple reasons but instead let me just ask: what would the downsides be of the possibility of restricting access based on time?

Other than that, I didn’t mean restricting myself from accessing my data, but restricting the app from accessing my data or, even more, hardware on my phone.

Let’s say I set up my app to have camera and microphone access for a few (say, 3) hours at a time. I go sightseeing, take photos, go home. When I go home, I know enough time has passed that the app can no longer spy on me even if it’s a bad app.

The above example is one reason why integrating access control in the apps themselves is impossible, even if they were completely honest about this aspect of their functioning (irrational expectation, but let’s roll with it). If all apps have their own idiosyncratic ways to ask for permissions, how would anything like the above be possible?

I wasn’t asking this to be facetious, but genuinely looking for usecases to get my brain around #uxdesignerlyfe :grinning:

Just for clarity, we are talking read only here. Not rights to share or publish.

I understand you have strong reservations about embedding auth in the app UI and you might be right, but I’m focusing on one issue at a time.


I see your edit now and this makes more sense…

This is where we are getting snagged up in the mental models of the old world, current world, and the new possibilities of safe.

When we are talking about an ‘app’ having access, what we are talking about is granting myself the ability to manipulate certain bits of data via some software. This is an important distinction.

It’s not that an app is a 3rd party, or a server that I’m entrusting my data to (like current clearnet webapps). If we think of my SAFE account as an air-gapped computer with no internet access, I would naturally feel quite comfortable installing most software on there, and editing my data, without fear of leakage.

When it comes to connecting it to the internet, or hitting send on an email, that is where the risk starts to come in.

This is where the UX approach of Data Sharing Control in the first instance, vs Data Access Control, comes in. There is far less friction and permission fatigue, and therefore, I’d argue, less long term risk, in allowing apps read/write access by default, but having to grant explicit permission to publish or share. This is more akin to the air-gapped computer mental model.