Cool cool, sounds great! Let me break down some of my thinking then, and try to explain some of the differences in approach we’ve had, so we can compare notes…
Data Access Controls vs Data Sharing Controls
The main thing that occurred to me when digesting the solutions in your thesis was the difference in the approaches we’re taking to controls over access to data. Certainly worth some discussion. There is some subtlety here, so let me try to lay out the solution we’re proposing.
While it’s often touted as a selling point of the decentralised web that users will have fine-grained control over access to their data, this is very much a double-edged sword. If we don’t take advantage of some of the inherent data security features of something like SAFE, and simply provide more controls over existing clearnet mental models, then we will end up with quite a painful experience for users: a continuous stream of authorisation requests, decisions, and alerts, and ultimately permission fatigue resulting in increased risk; the opposite of what we want.
It’s for this reason, after a great deal of discussion, that our starting point for how to tackle these controls was to put deliberate checks in the places where there is most risk to a user’s data: when it is to be shared, sent, or published outside their private space.
I noted that you haven’t dealt with granting permissions to Apps within your solution, only access controls for other humans/organisations. Is that to do with a Solid implementation detail, or some other reason?
While it is absolutely true that apps are much more like software of old (a user is simply using software to manipulate her own data; an app is merely a view, or window, over a set of data), there still remains some potential for malicious apps to do malicious things, like stealing or leaking personal information, or publishing things when I don’t want them to be published. Hence the need for some controls over what an app can do, and when.
We’ve taken the view that by default (I’ll come on to some of the nuance of this later), a user should be able to have an app freely create, view, and edit private data, without any intervention required, but explicit authorisation would be needed before the app could be used to share data with other people (or send it anywhere outside of the user’s private space), or publish it.
So this means that if a user wants to Publish any data, share any data with a third party (e.g. allowing a friend to access a photo I’ve uploaded), or ‘send’ any data anywhere (e.g. sending an email), then they would need to specifically allow this via the Authenticator.
You could see a user’s Private, Unpublished data space on the network as being like the hard drive on their old-skool air-gapped desktop computer. They can insert a floppy disk, install some software, and use that software happily to read, create, and edit any of their data on the machine, without ‘granting the app permission’ to do so. It’s a given.
The risk comes (and therefore the explicit authorisation) when they connect that computer to the Internet and want to send that data somewhere, or publish it, or share it with someone else. That’s opt-in, and driven through deliberate consent.
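To make that default policy concrete, here’s a minimal sketch. All of the names here are hypothetical, just illustrating the rule described above: actions inside the user’s private space proceed freely, while anything that moves data outside it requires explicit consent.

```python
# Hypothetical sketch of the default policy: private create/read/edit
# is a given; share/send/publish needs explicit authorisation.

PRIVATE_ACTIONS = {"create", "read", "edit", "delete"}
EXPOSING_ACTIONS = {"share", "send", "publish"}

def requires_authorisation(action: str) -> bool:
    """Return True if the action moves data outside the user's
    private space and so needs deliberate, explicit consent."""
    if action in PRIVATE_ACTIONS:
        return False  # like using software on an air-gapped machine
    if action in EXPOSING_ACTIONS:
        return True   # opt-in, driven through deliberate consent
    raise ValueError(f"unknown action: {action}")
```

The point of the sketch is that the boundary is drawn around data exposure, not around app access in general.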
Varying degrees of intervention
So, in order to give a low-friction, understandable experience when placing permission controls where the risks are presented, we’re proposing varying degrees of intervention into the user’s journey, or flow. This is all just what we see as a sensible default (a user could choose to configure these levels of intervention to suit their needs); a balance between control and usability. Let me lay them out:
No Intervention: Authorisation granted without notifying the user.
This would be used for actions that carry low, or no risk to the user’s security, or privacy.
For example, an App could be granted Read and Write access to private data that the user has created via that app, without the user being notified in real time, nor providing up-front consent. (This is enabled via ‘data labels’ which I’ll explain more on later, as it’s handy for other things too.)
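As a rough illustration of how those ‘data labels’ might work in this no-intervention case, here’s a hypothetical sketch (the label scheme and names are my assumptions, not a spec): data created via an app carries that app’s label, and access to matching data is granted silently.

```python
# Hypothetical 'data label' check: an app may freely read and write
# data carrying its own label, i.e. data the user created via that app.

def app_can_access_silently(app_id: str, data_labels: set[str]) -> bool:
    """No intervention needed when the data was created via this app."""
    return app_id in data_labels

# Label stamped on the data at creation time (illustrative format).
photo_labels = {"app:photo-editor"}
```

So the photo editor reads its own photos with no prompt at all, while a different app touching the same data would fall through to one of the other intervention levels.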
Passive Intervention: A user is alerted or otherwise made aware that an authorisation has been granted, but does not need to take a specific action.
These interventions can be used in circumstances where the risk to data security is low, but there is still the possibility of a bad or malicious app causing inconvenience or moderate expense.
For example, writing private data somewhere is low risk to the user’s security, but incurs a data storage cost.
So if, for example, there are per-App and Global spending limits, then giving a soft, self-dismissing alert when a new App begins incurring this data storage cost keeps the user informed and able to make a direct intervention, without the need to interrupt their flow. Like this…
A passive intervention such as this would be expected, and cause no alarm, if you’ve just hit the edit button in a new app; but if you’re working away on something else and suddenly a dozen new app notifications flash before you, then you know you have a problem.
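The spending-limit example above could be sketched something like this. The class and method names are hypothetical; the idea is just that a new app incurring storage cost triggers a passive alert rather than a blocking prompt, and only an exceeded limit stops the write.

```python
# Sketch of the passive-intervention example: per-app and global
# storage-spend limits, with a soft alert the first time a new app
# starts incurring storage cost. All names are assumptions.

class SpendTracker:
    def __init__(self, per_app_limit: int, global_limit: int):
        self.per_app_limit = per_app_limit
        self.global_limit = global_limit
        self.spent: dict[str, int] = {}
        self.alerts: list[str] = []

    def record_write(self, app_id: str, cost: int) -> bool:
        """Record a storage cost; return True if the write may proceed.
        Emits a passive alert when an app first incurs cost, and blocks
        only once a hard limit would be exceeded."""
        if app_id not in self.spent:
            # Passive: notify, but don't interrupt the user's flow.
            self.alerts.append(f"{app_id} has started using paid storage")
        new_app_total = self.spent.get(app_id, 0) + cost
        new_global_total = sum(self.spent.values()) + cost
        if new_app_total > self.per_app_limit or new_global_total > self.global_limit:
            return False  # limit hit: now a direct intervention is needed
        self.spent[app_id] = new_app_total
        return True
```

The user stays in their flow for normal writes; the alert stream is what lets them spot a dozen unexpected notifications and step in.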
Explicit, Just In Time Interventions: App usage is blocked until the user takes action.
This is for the actions where the risk is high, such as sharing data, or when an action might incur a significant expense.
A user is interrupted and asked to make a decision before they can proceed.
Here are some screens to give you an idea of how that is intended to work:
What you see above is a device we’re calling (internally, as it’s a useful metaphor) a ‘permissions manifest’. It’s something that’ll reappear a fair amount, and it’s also the same device we’re exploring for granting access to other users.
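In data terms, I imagine a permissions manifest shaped roughly like this; a hedged sketch, with all field and type names assumed for illustration: a grantee (an app, or another user), the data it covers, and an intervention level per action.

```python
# Hypothetical shape of a 'permissions manifest': the actions a grantee
# may perform over some labelled data, each with an intervention level.
from dataclasses import dataclass, field
from enum import Enum

class Intervention(Enum):
    NONE = "no intervention"   # silently granted
    PASSIVE = "passive"        # soft alert, no action required
    EXPLICIT = "just-in-time"  # blocks until the user decides

@dataclass
class PermissionsManifest:
    grantee: str                 # app id, or another user
    data_labels: set            # which data the manifest covers
    actions: dict = field(default_factory=dict)

# Example: the app may edit my photos freely, but publishing any of
# them triggers an explicit, just-in-time request.
manifest = PermissionsManifest(
    grantee="app:photo-editor",
    data_labels={"label:my-photos"},
    actions={"edit": Intervention.NONE, "publish": Intervention.EXPLICIT},
)
```

The same structure works whether the grantee is an app or a friend, which is why the manifest metaphor keeps reappearing.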
I have actually made a YouTube video walk-through on this that might be helpful, if you can put up with my monotone ramblings that is!
Upfront Permissions: Allowing a user to make decisions on what permissions an app has, ahead of time.
As I mentioned earlier, in the interests of balancing total control with experience friction, we think a sensible approach is a mix of passive notifications and Just-in-time consent. However, some users might prefer (and indeed have requested) to be able to set, or just check, permissions for an app upfront, before it’s used for the first time.
Or indeed a user might want to do this only for a specific set of data.
So that’s where a user could opt in to upfront permissions. And then it would work thusly:
As I said, this sort of upfront intervention is unlikely to form part of the default experience, as giving users lots of choices upfront may appear to give them more control, when in reality it may reduce it, due to ‘pop-up’ fatigue and choice paralysis. We should be aiming for a secure, low-friction experience that gives users meaningful choices; so while upfront permissions could be part of the overall suite of tools available, they need to be deployed with care.
Setting Rules for Permissions
Something I go into in that video, but I notice is not something you’ve proposed in your thesis (unless I missed it), is attaching specific rules to permissions; for example, a duration. Is that down to a limitation of Solid, or out of scope?
Let me explain what I mean, and we can discuss its merits, and if it might also be useful when considering access for other users as well as apps.
When I’m granting a set of permissions for some data, naturally I can choose what actions the other user, or an app, is able to perform over a specific set of data.
But at the same time for each, I’m able to specify rules, like so:
Here, I’m about to add a permission to Publish to this manifest, but I’m including the rule Ask Every Time, which means I’ll be given a Just In Time permission request each time this app is used to publish any of my data.
Which would then look like this:
The rules we are working with for the moment are:
Ask every time (as above)
Check first time
Until I log out
Or a specific duration, like ‘3 hours’ etc.
This is for a user’s own app access, but the same kinds of patterns and rules could be extended to access for other users. For example, I give a friend permission to edit a file, but I have to give explicit consent each time they want to publish the revisions they’ve made.
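Those rules boil down to a small decision: given a granted permission and its rule, do we need a fresh Just In Time prompt right now? Here’s a hedged sketch; the rule names mirror the list above, while the function signature and duration encoding are my own assumptions.

```python
# Sketch of rule evaluation for a granted permission: decide whether
# a fresh Just-In-Time prompt is needed before the action proceeds.

def needs_prompt(rule: str, already_asked: bool,
                 granted_at: float, now: float, logged_out: bool) -> bool:
    if rule == "ask_every_time":
        return True                       # prompt on every use
    if rule == "check_first_time":
        return not already_asked          # prompt once, then remember
    if rule == "until_logout":
        return logged_out                 # grant lapses at logout
    if rule.startswith("duration:"):      # e.g. "duration:10800" = 3 hours
        return now - granted_at > float(rule.split(":", 1)[1])
    raise ValueError(f"unknown rule: {rule}")
```

The friend-editing example would then be the ‘edit’ action with a lasting grant, plus ‘publish’ carrying the ask-every-time rule.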
Righty, I’m getting close to the character limit on replies here… so let me break things up into separate posts, with more to come on user permissions and data structures.
I’d love to hear your thoughts so far though. Did you consider any of these kinds of ideas, or test any?