“I have nothing to hide, why would I deny?” People don’t change easily…
You’re right, but a lot can be achieved through design features. On the current web, most permission screens are designed simply to get you to click through and accept the default (privacy-invading) options.
Of course, most people don’t care and will click through regardless, but a sizeable and growing minority now actively try to minimise how much they give away.
If Safe inverts the current practice by insisting that apps declare up front how data will be used, and by making opting out the default, this will at the very least represent a big improvement and should, over time, reinforce the behavioural change. This may require a standard Safe agreement screen that all apps must use, though. Not sure if that’s in the plans.
But again, by default, the data doesn’t go anywhere. This is quite a bit different to the clear net.
Shoehorning clearnet-style business practices into the Safe Network, as posited by @loziniak, would be akin to hitting a ‘send’ button each time.
As such, there shouldn’t need to be data sharing agreement screens at all for adding capabilities, as no data is being shared.
Should data sharing specifically be requested, that’s a separate flow, and one that would warrant such a screen.
I’m sure many people will continue giving their data to Megacorp, just like some will continue sending their money to televangelists and drinking bleach. Humans are weird that way. The point of the Safe Network is giving informed individuals the possibility not to.
From @JimCollinson 's post I discern two technical aspects of privacy that are under discussion.
First, the explicit act of ‘sharing’ data by the consumer with third parties. I accept this is private by default in the Safe Network.
Second, the explicit granting of application permissions or ‘capabilities’, which implicitly erodes the consumer’s privacy.
It is the functionality of this second aspect that I wish to understand.
I reference the document section ‘Capabilities & Permission: Screens & Flows’ above; it is a useful visualization of the Safe Network App (thanks to @Nigel).
In the flow example ‘App has access to all data’, the Safe Browser is granted access to ‘All Data’, giving it the capability to View, Create, Edit and Share files without further prompts.
What stops current clearnet-type apps from exploiting the ‘View’ and ‘Share’ capabilities to profit from consumer data? Is it inefficient to do so at scale?
Currently, where a clearnet app is denied a capability because it compromises the user’s privacy, it can refuse to load, even though the requested capability has no bearing on the functionality sought by the consumer.
This is why I advocate capability spoofing: it disincentivizes mass data access and protects the consumer. Megacorp requests unnecessary access to all capabilities for ‘All Data’ and gets returned random pictures of cats, while the consumer gets functionality.
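To make the spoofing idea concrete, here’s a minimal sketch of how a client might serve decoys instead of refusing overreaching requests. All names here (the function, the capability labels, the decoy list) are my own illustration, not any Safe API:

```python
import random

# Hypothetical decoy responses served in place of real user data.
DECOY_FILES = ["cat_001.jpg", "cat_002.jpg", "cat_003.jpg"]

# Capabilities the user deems necessary for the functionality they want.
GRANTED = {"View": {"scope": "Individual App Data"}}

def handle_request(app_name, capability, scope):
    """Serve real data only for granted capabilities; spoof the rest.

    Instead of refusing (which lets the app refuse to load in protest),
    over-broad requests get plausible-looking junk back.
    """
    grant = GRANTED.get(capability)
    if grant and grant["scope"] == scope:
        return {"status": "real", "data": f"<{app_name}'s own data>"}
    # Overreach: return random cat pictures rather than an error.
    return {"status": "spoofed", "data": random.choice(DECOY_FILES)}

# Megacorp asks for 'Share' on 'All Data' it doesn't need...
print(handle_request("MegacorpApp", "Share", "All Data"))
# ...but still gets 'View' on its own ring-fenced data, so it works.
print(handle_request("MegacorpApp", "View", "Individual App Data"))
```

The key design point is that the app can’t distinguish a spoofed response from a real one, so it has no denial to detect and protest against.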
I like this idea of poisoning the data that is mass-collected. Google hates me because I do it to them often. Maps often thinks I am living in another state because I spoof GPS, etc.
The issue is still: how does one identify when an app is overreaching? And, harder, how does the client identify a situation of overreach?
The user does not know what the app is doing with their data if they give permission for the app to store to data objects not owned by them.
The client does not know what the app is actually doing, and also doesn’t know what your data is that is being written to a data object not owned by you. It could be your address or bank details, or it could be moves in a chess game.
This is why apps are encouraged to be open-sourced, and thus can be audited by knowledgeable individuals. It also allows a rating system to be built into app stores that describes what is done with any data you allow to be stored in objects not owned by you.
A shopping app can then be checked to make sure the data is going where it should. Or a maps app can be checked to confirm it only submits anonymous traffic data (data with no ID, or an ID that is specific to the trip, and only for main roads, say).
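The “anon traffic data” idea above could look something like this minimal sketch. The road whitelist, field names and trip-token scheme are my own assumptions, purely for illustration:

```python
import uuid

# Illustrative whitelist of main roads; anything else is dropped.
MAIN_ROADS = {"A1", "M25"}

def anonymise_trip(trip):
    """Strip the persistent account ID and minor-road points from a trip.

    A fresh random token links points within one trip only, so separate
    journeys cannot be correlated back to the same user.
    """
    trip_token = uuid.uuid4().hex  # one-off ID, valid for this trip only
    return [
        {"trip": trip_token, "road": p["road"], "speed": p["speed"]}
        for p in trip["points"]
        if p["road"] in MAIN_ROADS  # never report residential streets
    ]

raw = {
    "account_id": "alice@safe",  # never leaves the client
    "points": [
        {"road": "M25", "speed": 40},
        {"road": "Quiet Close", "speed": 20},  # dropped: not a main road
    ],
}
print(anonymise_trip(raw))
```

An auditor reading open-sourced code like this can verify at a glance that the account ID is never serialized into the outgoing records.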
Maybe there are some words other than ‘capabilities’ and ‘permissions’ that can set the marketing/educational tone? For example : ‘exploits’ and ‘manipulations’.
This will help newbs understand that apps shouldn’t be asking for so much from them.
User education is a factor. Though I would say that consumers already understand that apps shouldn’t be asking for so much from them, trading privacy for functionality. Megacorp hides behind this, referring to it as consumer ‘choice’.
Does the Safe Network address this aspect of consumer privacy?
I looked at the Safe Network App MVE screens again (sections ‘Setting default capabilities’, ‘Adding exceptions to Defaults’ and ‘User wants to restrict capabilities for certain data’). Though based on my own assumptions, I am beginning to see how this is at least partially addressed in the Safe Network app.
Sure, when the choices are either becoming the next Richard Stallman or selling your soul to the app, it isn’t really a choice. Normal people don’t want to become hermits, so there is only one realistic choice: giving up control of your personal data to access the service.
So by definition there is zero control of personal data because you agree to lose control over your personal data.
But it doesn’t need to be like that: the Safe Network breaks that dichotomy of being either a hermit or a data whore; you don’t need to be either.
Data portability is part of the basic design of the network; you would be able to revoke permissions at any time from any app, and at that moment the app would cease to have any data about you. That is an impossibility on the current internet.
If you decide to close your Facebook account and request deletion of your data, they will laugh in your face. You would get your account closed, but instead of deleting your information they would flag your data as “deleted”, and there is zero recourse to have your data wiped out.
So even if you reject their terms and conditions after they update them, they will hold on to your data. Where is the control there? It is completely out of your hands.
On the Safe Network, if you detected an abuse or something weird with an app, you revoke permissions to your data and it is over. That is control.
The Safe Network app restricts an app’s capabilities on data by default.
It does this by virtue of the default ‘Standard’ setting (1. in pic below), which allows an app ‘Create’, ‘View’ and ‘Edit’ capabilities on ‘All Data’, but does not allow ‘Share’ or ‘Publish’ capabilities on ‘All Data’ (2. in pic below).
An app can only ‘Share’ or ‘Publish’ its own ‘Individual App Data’ if the user authorizes it. When the default ‘Standard’ setting is used, these capabilities are set to ‘Ask me everytime’ for ‘Individual App Data’ (3. in pic below).
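My reading of those ‘Standard’ defaults, sketched as a simple lookup table. The capability and scope labels come from the screens; the code structure is entirely my own illustration, not how the network actually stores rules:

```python
# (data scope, capability) -> rule, per my reading of the Standard profile.
STANDARD_PROFILE = {
    ("All Data", "Create"): "allow",
    ("All Data", "View"): "allow",
    ("All Data", "Edit"): "allow",
    ("All Data", "Share"): "deny",
    ("All Data", "Publish"): "deny",
    ("Individual App Data", "Share"): "ask me every time",
    ("Individual App Data", "Publish"): "ask me every time",
}

def check(scope, capability):
    """Look up the rule for a request, falling back to the 'All Data' rule."""
    rule = STANDARD_PROFILE.get((scope, capability))
    if rule is None:
        rule = STANDARD_PROFILE.get(("All Data", capability), "deny")
    return rule

print(check("All Data", "Edit"))              # allow
print(check("Individual App Data", "Share"))  # ask me every time
print(check("All Data", "Publish"))           # deny
```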
Having answered my own question, I welcome correction.
unless they’ve copied it.
This is close, but not quite what the screens are detailing.
There are a number of nuanced options that give control over what capabilities apps are granted up front, by default, at what time, and how often the user is given the opportunity to intervene. We give people a bit of a helping hand by pre-configuring these settings during onboarding; that’s what this first screen shows:
It allows people to choose from a couple of presets for new apps.
The standard setting means that a user can just start using a new app to create data as well as view and edit data, but only data which has been created via that app: that app’s data is ring-fenced by virtue of the unique data label that is applied to any data that passes through that app’s hands.
The user can do all this uninterrupted, and without being prompted via a dialogue to confirm these capabilities.
They will, however, be prompted to confirm each time an app is used to publish data, or share this data in any way (including to the app developer themselves).
This is detailed in the capability manifest screen below. The user can edit and make changes to these capabilities, or the rules applied to them for this individual app, or indeed all new apps, but these are the ones set up for them for new apps as the default in the Standard profile.
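The ring-fencing by data label described above can be illustrated with a toy model. This is purely my own sketch; in reality the network enforces this, not app-side code:

```python
# Toy model of per-app data labels: every piece of data an app touches
# is tagged with that app's unique label, and an app can only see data
# carrying its own label -- its ring-fence.
store = []  # each item: (label, content)

def create(app_label, content):
    # Data created via an app is stamped with that app's label.
    store.append((app_label, content))

def view(app_label):
    # An app's view is filtered down to its own labelled data.
    return [content for (label, content) in store if label == app_label]

create("app:editor", "draft.txt")
create("app:photos", "holiday.jpg")
print(view("app:editor"))   # ['draft.txt'] -- the editor can't see photos
```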
However, the capabilities granted to new apps over existing data are slightly different, as we can see here:
New apps aren’t granted any Share or Publish capabilities over existing data, and in order to use an app to create, edit, or view existing data, you’ll be prompted the first time to confirm, at which point you can tune these settings to your liking.
The full control profile means upfront permission is required for all capabilities for all apps, regardless. Ostensibly this gives more control, and therefore more security (and it has been requested), but in reality we feel it may actually prove more risky for users due to permission fatigue. The standard profile provides the best balance, getting the user’s attention when risks present themselves: i.e. when an app is being used to share data or to allow others to access it.
So just to reiterate, the Safe Network isn’t just giving more fine-grained permissions over existing clearnet data models; it’s rebuilding the model of what an app is in relation to data. That’s one reason we chose to call them capabilities, not permissions: they are subtly different. The ClientManagers give control at a network level over what is going on with a user’s data (and web-app API calls ultimately have to pass through the Safe Browser as well, so we have visibility and control there too), and that gives us the potential to create an environment much more akin to an air-gapped computer than simply the existing model with more permission switches.
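As a toy illustration of that mediated model, every app request funnels through a single client-side choke point that applies the user’s capability rules before anything reaches the network. None of these names are real Safe APIs; this is just my sketch of the idea:

```python
# The user's capability rules for this app (illustrative values only).
USER_RULES = {"View": True, "Create": True, "Edit": True,
              "Share": False, "Publish": False}

def client_manager(app, capability, payload):
    """Single choke point: if no capability rule allows it, no network call.

    Apps never talk to the network directly; this mediator is the only
    route out, which is what makes the setup feel closer to an
    air-gapped machine than a pile of permission switches.
    """
    if not USER_RULES.get(capability, False):
        return f"blocked: {app} lacks '{capability}'"
    return f"network <- {app}:{capability}({payload})"

print(client_manager("web-app", "View", "mydoc"))   # allowed through
print(client_manager("web-app", "Share", "mydoc"))  # stopped at the gate
```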