My question is:
Why are apps allowed to disguise themselves as others and how is this not a security breach?
Thanks for posting that @jm5, it's a discussion I've been wanting to start since I coded the SafeEditor. I'll answer with a question of my own: why does an app need to disguise itself in order to access the data of another app?
I agree with you that the way it is currently done doesn't feel right. I like the proposal @happybeing made in the other thread: apps don't get a folder dedicated to them; instead, they just ask permission to use any folder they need.
If you tried the SafeEditor to edit the files uploaded through the demo app I think you would see just how unique and powerful that is. I think this is something Safe should fully embrace and work toward a security scheme that truly supports it.
Thanks for replying. I feel there must be something I am missing here. You wrote the SafeEditor, which runs through a browser. At the top left there are fields already filled out [which can be retyped] with an Authorize button underneath. Once I click that Authorize button, the app described in those fields is allowed to access whatever files are permitted to the app yours is disguising itself as.
I see this as opening up vulnerabilities.
How hard would it be for you to put "hidden" fields within the SafeEditor, already filled out, that describe for example my SafeStore? I click Authorise, giving both apps permission; in my Safe Launcher I now have 2 SafeStore apps but no idea why. I can remove both apps and reauthorise my original store, but perhaps the "hidden" one has already emptied my funds.
I must be wrong about this… what am I missing?
The current WIP Project Decorum app uses multiple authorisations: one for general app data, and then one more authorisation for each publicID you choose to use. This way each publicID has its own private addressbook, conversation history, wallet, etc. You're prompted for such an authorisation whenever you "login" with (select) the corresponding ID. The idea behind this scheme is that other apps can also ask authorisation to access the data of a particular publicID, since Project Decorum is meant to be a platform, not one app.
So I consider this possibility a feature, but I agree that it should be more clear to the user which app is asking for permission. However we do it, we should try not to spam the user with authorisation requests to avoid making it a habit to accept without reading anything at all.
Actually, I could send an authorization request without asking you to press a button in my app. Theoretically I could have a list of app credentials and send a request at random intervals, hoping you would grant me access by mistake. And the situation is worse than that: with the new low level API, which is awesome, I can ask your permission simply to post a comment on a blog and then totally wreak havoc on all your data, which is much less awesome.
So right now you need to be very careful about which app / safesite you use. It's a good thing we are just in Alpha 1 and there isn't much point yet for a bad actor to try to compromise your data.
It does feel like crossing a minefield, and the launcher will have to step up its game to make it feel safer to navigate on Safe. I just don't think sandboxing apps' data is the solution.
Here’s a few things the launcher could do.
In short, I agree with you but I think people need to learn to become diligent and the Safe launcher needs to help them avoid costly mistakes.
OK, I feel better that I am actually seeing vulnerabilities here and am not mistaken. From some of the reactions to my comments I thought I was being ignorant. It seems the vulnerabilities are bigger than I thought.
If this "disguising" is a [quote="Seneca, post:4, topic:11419"]
feature
[/quote]
without the vulnerabilities, then I would feel more safe and actually happy about it. But I must say calling it "disguising" creates all kinds of questions in my head. From @digipl's [comment](SafeEditor MVP, edit your safe files directly from your browser) it seems (as I would have thought) that MaidSafe is aware of this issue (which is way more technical than I understand).
Perhaps @DavidMtl could explain why he thinks sandboxing app’s data isn’t the best solution and what other possibilities are available.
It is exactly this: the Safe Launcher needs to educate people if it wants to be successful. But as @Seneca says, [quote="Seneca, post:4, topic:11419"]
we should try not to spam the user
[/quote]
The original idea was that apps would usually be stored on the Safe network as immutable data. In this way we ensure that the app has not been modified. That would allow us to grant general permissions and only request authorization when we run a new version.
I think this has to be improved to help the user consciously manage their apps and data, but I believe it is not totally undesirable.
It's similar to your phone's permissions policy: you have several apps capable of making calls and accessing your contacts list. In Android, for example, you explicitly authorize each of them to access different types of data when you install the app (I'm sure many users don't even know they do that). This type of data is yours, and you need to decide which app you want to manage it with, as there are/will be many apps/tools to do it.
I do agree that this needs to be made very clear to the user, perhaps even more than how it currently works in cell phones.
Yeah, it's just like on a PC. It seems the original vision of an app is out of phase with what it became.
But on a PC or on a phone, apps cannot disguise themselves as others, can they? This keeps reminding me of a popup disguised to look like a Flash update that installs a virus when clicked.
It seems clear to me that the current launcher proves the concept, but there is more work to be done in this area.
Keeping a record of which apps have requested an authorisation token, then assuming this will persist for the lifetime of the app, may prove useful. This way, if something else attempts to re-auth, it will be flagged as unusual activity and the user warned thoroughly. It would also mean auth requests would be rare and would therefore demand more attention from the user.
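The re-auth flagging idea above can be sketched as a small registry. This is a toy illustration only, not the actual launcher's design; all names (`AuthRegistry`, the credential fields) are hypothetical, with a SHA-256 hash of the declared credentials standing in for a real fingerprint:

```python
import hashlib

class AuthRegistry:
    """Toy registry of app authorisations.

    The first time an app name is seen, we record a fingerprint of its
    declared credentials and prompt the user. A later request under the
    same name with a *different* fingerprint is flagged as unusual.
    """

    def __init__(self):
        self._records = {}  # app name -> credential fingerprint

    @staticmethod
    def _fingerprint(name, vendor, version):
        blob = f"{name}|{vendor}|{version}".encode()
        return hashlib.sha256(blob).hexdigest()

    def request_auth(self, name, vendor, version):
        fp = self._fingerprint(name, vendor, version)
        known = self._records.get(name)
        if known is None:
            self._records[name] = fp
            return "prompt_user"   # first sighting: normal prompt
        if known == fp:
            return "granted"       # same credentials: quiet re-auth
        return "warn_user"         # same name, different identity!
```

The point of the sketch is the third branch: an impersonator reusing a known app's name but with different credentials would trip the warning instead of looking like a routine request.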
I also think that the structured data access needs careful consideration. If all security can be circumvented because an app needs some sort of low level feature, then it severely limits security. It would seem to me that some sort of guard needs to wrap these structured types to prevent meddling.
I am sure all of this is in hand and the current openness is purely to allow people to see what the safe network is capable of.
This was my understanding too. Even if the "meta data" associated with the APP doesn't hold a key, a hash could be taken to ensure that the APP is what it claims to be, and the hash + (other info) is what the permissions are associated with.
Even a version change of an APP should require an update of permissions, even if it still uses the same APP keys.
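A minimal sketch of the "hash + other info" idea above, assuming we hash the app's binary together with its declared name and version (function names here are made up for illustration): permissions get bound to the resulting identity, so either a tampered binary or a version bump produces a different identity and forces a fresh grant.

```python
import hashlib

def app_identity(binary: bytes, name: str, version: str) -> str:
    """Derive an identity from a hash of the app's content plus
    metadata, rather than from a self-declared name alone."""
    h = hashlib.sha256()
    h.update(binary)
    h.update(name.encode())
    h.update(version.encode())
    return h.hexdigest()

def verify_app(binary: bytes, name: str, version: str,
               stored_identity: str) -> bool:
    # Any change to the binary, name, or version changes the
    # identity, so stored permissions no longer apply.
    return app_identity(binary, name, version) == stored_identity
```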
Also, if an APP has its own version of the launcher built in, what is to prevent it from accessing the network without asking the user's permission? So whatever mechanism is used to start an APP has to remember which APPs the user trusts (permissions) before it runs the APP, so a new APP can be flagged and/or sandboxed.
Remember though that at the moment this sort of security is not expected to be built in as we’ve been told in the earlier tests.
For versioning of apps as immutable data to work (e.g. with web apps, which are multiple files) I think we would need versioning of directories, which was recently rejected in the RFC process in favour of versioning of files.
Also, @Viv - not sure if you are aware of this topic which is relevant to our ongoing github discussion about launcher, sandboxing, impersonation etc.
Retrieving ImmutData from the network to validate the app's binaries/resources can occur even without versioned dirs or versioned files. We just need ImmutData for it, or if it's SD, the hash check would have to be kept elsewhere to confirm the retrieved content hasn't changed. Versioning can almost be seen as a convenience at that point and shouldn't be a requirement.
This certainly is an interesting thread and I think there are quite a few points to take and consider in detail from this as a combination rather than 1 answer fixes all.
Some general points mentioned, such as "the launcher highlighting to / educating the user on the importance of allowing an app access" vs "not overloading the user to the point where they resort to just spam-approving stuff", present a tricky but essential balance.
In the current scenario the launcher does barely any fingerprinting and leaves it all up to the user to decide, which raises the question of what happens if the user was misled by any means. This certainly needs looking at too.
App-based sandboxing I see as an important feature, just my personal pov. Without it I don't see a decent way for one app to store info which is private to itself and the user. Of course the user has the ultimate authority to remove this app and its data too, so I'm not talking about not being able to do that, just about one app impersonating another.
Even with the low level API, apps use their own specific encryption keys, so content stored via one app cannot be "understood" by another app. However, since all apps use the same signing key of the user, a second app can still mutate the data and thereby cause the problem. To fix this (as an example), the network could allow an account to have sub-signing keys of sorts, which the vaults will allow to mutate data, while the user's main signing key can of course ultimately override. This way even via the low level API, apps can neither understand nor mutate data stored by a different app. This is an approach that has been raised and discussed; it just hasn't been finalised and made its way to implementation yet.
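The sub-signing-key idea above can be illustrated with a toy model. Nothing here reflects the real network: `Vault` and `derive_app_key` are invented names, and HMAC stands in for real key derivation and signatures. The sketch only shows the access rule being proposed, namely that data can be mutated by the sub-key that stored it, or by the user's master key, but not by another app's sub-key:

```python
import hashlib
import hmac

def derive_app_key(master: bytes, app_id: str) -> bytes:
    # Each app gets its own sub-key derived from the user's
    # master key (HMAC as a stand-in for real key derivation).
    return hmac.new(master, app_id.encode(), hashlib.sha256).digest()

class Vault:
    """Toy vault: each item remembers the sub-key that stored it.
    Only that sub-key, or the master key, may mutate the item."""

    def __init__(self, master: bytes):
        self.master = master
        self.store = {}  # name -> (owner_key, value)

    def put(self, key: bytes, name: str, value: str):
        self.store[name] = (key, value)

    def mutate(self, key: bytes, name: str, value: str) -> bool:
        owner, _ = self.store[name]
        if key == owner or key == self.master:
            self.store[name] = (owner, value)
            return True
        return False  # a different app's sub-key is rejected
```

Under this rule, even an app authorised via the low level API cannot overwrite another app's data, while the user keeps the override described above.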
Secondly, one app impersonating another. Previously this had limited scope, when each app had a dedicated channel via which alone the launcher would communicate, and that way the launcher also knew when that app, say, went offline. With the client-server model, detecting presence gets a bit more tricky unless websockets or something similar were used, or heavy pings with challenges to detect presence (cumbersome).
The launcher could fingerprint in a more verbose manner, say via token validations. So the app asks the launcher for approval, and the user chooses to provide access. Now the launcher shows the user a token, say "ABCD", and asks the user to input it back into the app; when the launcher receives that token from the app, it trusts its validity. This of course plays a lot better with a dedicated channel allowing the launcher to detect when the app has terminated and remove this token / clean up accordingly. The negatives of such an approach tie back to that general balance point: the user now needs to shuffle between these two things to approve an app, and so on, and how well/securely does it work when there isn't a dedicated channel (maybe it's OK without one, but that needs to be confirmed).
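The token round-trip described above could look something like this. A speculative sketch, not the launcher's actual code: the `Launcher` class and its methods are invented for illustration, with a short random code shown only to the user, who relays it to the app; the app proves it is the one the user approved by echoing the code back.

```python
import secrets

class Launcher:
    """Toy sketch of token-based approval: the launcher displays a
    short code to the user (not to the app), the user types it into
    the app, and the app presents it back to prove its identity."""

    def __init__(self):
        self.pending = {}  # app name -> token shown to the user

    def approve(self, app_name: str) -> str:
        token = secrets.token_hex(2).upper()  # short code, e.g. "A3F9"
        self.pending[app_name] = token
        return token  # displayed to the user only

    def validate(self, app_name: str, token_from_app: str) -> bool:
        # Single-use: the token is consumed whether or not it matches.
        expected = self.pending.pop(app_name, None)
        return expected is not None and secrets.compare_digest(
            expected, token_from_app)
```

Making the token single-use is one way to limit the damage if it leaks; as noted above, cleanup gets easier if a dedicated channel tells the launcher when the app terminates.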
Now while these points (extended fingerprinting, dedicated comms channels, low level API, general apps using their own sign keys as sub-keys of the user's sign keys, the launcher educating the user about the importance of app authorisations, …) are all valid, and I'd certainly vote for some/all of them and maybe more too after consideration, in this specific case I'd think none of this would have helped what occurred here.
Even with all these measures, the app here, SafeEditor (neat app btw, lol sorry I'm using this to explain what I mean), made it clear to the user that it intends to impersonate another app AND the user was "ok with that". In SAFE the user always becomes the ultimate authority, and if the user chooses to be OK with one app impersonating another, there's not much I see standing in the way after that.
Now of course, the user was OK with this app impersonating another because it promised functionality the user did not otherwise have, and the user chose to trust it because, well, @DavidMtl ain't a bad guy and it's just impersonating the demo app, with not that much secure info or anything. Is that always going to be the case though?
So say I built an app and asked users to allow my app to impersonate their safecoin wallet app with the promise of better performance; users might not be so willing. If I promised free safecoin, some probably would take the gamble, and if they lose what they have, they'd rage quite hard. Again, I'm not saying this to indicate we don't need any fingerprinting; I think we need quite a few bits as mentioned above. But there probably is another dynamic here too, in terms of whether the user should be the ultimate authority. Personally I feel the user should be, because if not, just as mentioned in this thread, I can build an app that pretty much takes the user's credentials from them, and in that case, since the user gives them to me willingly, I've got access to everything and game over.
What? Take the authority away from the user?
What are you saying?
err the opposite is what I’m saying. That ultimately the user decides.
I guess I wasn't too clear. What I was trying to get at was "we need better fingerprinting and approaches such as":
but that's not going to answer this specific case where the "user chose to intentionally allow one app to impersonate another". I think the user should be the ultimate authority, and on that principle, regardless of the checks in place, the user can always choose to override and go around them.
Sorry, I think there was just one line in there somewhere that confused me
Ah, no problem at all. It's probably just me typing an essay into a post and mucking stuff up, lol. Now hopefully it's clear enough for future readers.
This was the one.
Maybe I just read it out of context?
Thanks @Viv, a lot to take in. I hope I’m going to summarise accurately here:
I agree with this, except that you have skipped over the issue which kicked off this whole discussion. It wasn’t SafeEditor being a nice App that the user knowingly decides to authorise as SAFE Demo App - you do address this!
No, it was SafeEditor that alerted @jm5 to the possibility of any App impersonating any other App, and the user accidentally authorising the fake app. And it was to address this that I made my suggestion: by switching from authorising an app to authorising access to particular data buckets, we could make it more apparent to the user what they are granting access to, and much harder for them to do something they don't want, or don't understand the implications of. This wouldn't address repetitive authorisation fatigue though, see below.
So in your wallet thief App example, it is not going to be obvious to even an attentive user, that Fake-Wallet-Thief-App is asking for authorisation. Because Fake-Wallet-Thief-App will not be displayed by launcher. Launcher will say Trusty-Old-Friend-Wallet wants access, and the user only has to click “Allow” as a reflex - just once ever - to lose the content of that wallet.
My data-bucket suggestion isn’t perfect, so I’m not arguing for it as is. I’m looking for a good overall solution to this and that was just one first thought. I’ve never been comfortable with the UX of launcher, but I think it is a hard problem as you’ve noted, and I really don’t know what is going to work best.
Another issue I would like to address is having to authorise lots of apps, every time they are run. That is both tedious and causes the user to authorise by reflex, not read or understand, and not make conscious choices, which would be terrible for a security model that relies on the user being the authority (which is of course our aim). I don't want to derail this topic, but I note the issue because I think it ties together with any discussion of launcher UX. Perhaps some kind of persistent app authorisation can be devised, which apps cannot spy on, like a password manager inside SAFE Beaker, or one per desktop; that is one option, but I don't know enough to say how it would be implemented (or if it is feasible).
My suggestion at this point is that by storing authorisation, so apps only need to request access when first used on a particular desktop/device, it becomes much clearer to the user if an app attempts to fake a request for access, which app is doing that, and to what. If we can't achieve this, I think we're going to find it hard to come up with a launcher UX that is either secure (helps the user maintain conscious authority) or not a pain in the ass (nagging for authorisation).