Apps disguising themselves as other apps

I thought we were here to create a free and secure new system, not to fall into the same old mistakes. It is useless for data to be encrypted and dispersed if any application can access it through a simple mis-click.

In the end, any developer should know who their product is addressed to, and if we want a Safe network for everyone we must follow two rules: be simple and be safe.


I agree, and you and I share the same aims in this respect I believe. My comment was a response to the extreme language you used. I think “suicide” is too strong. Something we want to improve on as best we can, certainly.

As a developer, I can tell you there are plenty of places where an app writes and stores data that, if they weren't protected, would force me to consider my app compromised. Settings/configuration are the obvious starting area, but anything that is "app internal" also falls under that: an internal cache, or an sqlite database with an index of, or a copy of, data I have downloaded. Those all also exist on mobile and are used heavily. And were those apps not sandboxed by the system, this would all fall apart. (Also simply because you can't safely access an sqlite database from more than one thread at a time.)

I do think there is a very reasonable case for giving apps at least a configuration-type folder to store this kind of information, and for having the system protect it, especially from accidental access, so that one app doesn't f*-up another app by messing with its internal data. That would only work if you can safely distinguish apps and app-disguising can be largely prevented.

Of course all this data is still accessible to the user themselves outside of the sandboxing mechanism (also because you sometimes just need to delete it – ehem). But it shouldn’t be shared between different apps. There is a reason we have different databases for different apps.

This is mostly about internal data, like metadata, indexes for performance and the like. And I do agree we need to find a mechanism to allow easy access to the actual content across apps. I like @Viv's idea of "containers" here. Think of how many modern operating systems organise all pictures in the "Photos" folder (which some then even localise, like Mac) and all photo-related apps "know" about that. On sandboxed mobile systems, apps even have to explicitly request permission for these on a case-by-case basis.

I’ve argued before (and will here again) that permissions need to be more granular. Giving an app access “to a folder” still means it can delete all files in there without asking – or re-arrange the structure so that I can’t find anything anymore (and thus render the indexes of other apps incompatible). I think we are limiting ourselves by thinking of these in terms of ‘files’ and ‘folders’ when there is no need to. What if we instead think of these as “containers” (as a specific data type) with certain features, sub-items and protections? As an example, the “Photos Container” could have a feature allowing an app to “request picture” – which the user triggers in the email app by saying they want to attach a photo. The user then selects the photo from within the “launcher” (or whatever) and the “Photo Container” delivers the selected photos to the app. The app never had any read/write/whatsoever permissions on the container itself or the data it contains.
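The mediated “request picture” flow above can be sketched in a few lines. This is purely illustrative: every class and method name here is invented, and a real SAFE launcher API would look different. The point is only that the container, not the app, performs the read, and the app receives nothing but the user-selected bytes.

```python
# Hypothetical sketch of the mediated "request picture" flow.
# All names are invented for illustration.

class PhotoContainer:
    """Holds photos; apps never get direct read/write access."""
    def __init__(self):
        self._photos = {}  # name -> bytes, private to the container

    def add(self, name, data):
        self._photos[name] = data

    def request_picture(self, chooser):
        # `chooser` stands in for the user picking a photo in the
        # launcher UI; the container performs the read on their behalf.
        name = chooser(sorted(self._photos))
        return self._photos[name]

class EmailApp:
    def attach_photo(self, container, chooser):
        # The app only receives the bytes the user selected; it holds
        # no permissions on the container itself.
        return container.request_picture(chooser)

photos = PhotoContainer()
photos.add("cat.jpg", b"\xff\xd8cat")
photos.add("dog.jpg", b"\xff\xd8dog")

mail = EmailApp()
attachment = mail.attach_photo(photos, chooser=lambda names: names[0])
```

Note that `EmailApp` never touches `_photos`: if the container lived in another process (as it would in a real launcher), the app simply couldn’t.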

This is a very common use case on mobile phones today. MANY of them don’t ever expose any file system operations to apps, NOR TO THE USER. That there is a filesystem underneath is more a relic of the history of organising hard drives than something needed or appropriate for the use cases. Platform vendors have been trying to get rid of these structures and their constraints for a while; why would we bind ourselves to them when we don’t even have them in the first place?


Hi @lightyear. I agree in large part with what you are saying.

I agree that we need to find the “Safe way” of doing things and not just settle for what we are used to. Folder hierarchies, containers, buckets: let’s find the one that best fits the unique nature of Safe.

I agree that permissions need to be granular and sophisticated enough to put all the power in the hands of the user.

I also agree that the launcher needs to provide robust protection against accidental or unsolicited tampering with any files: config files, indexes, databases, secret sauce recipes, etc.

Here I disagree. Developers should be allowed to wish, hope and pray that their users don’t explicitly grant access to other apps, but they should never be allowed to forbid them from doing so.

I understand iOS went that way. But iOS is not an open ecosystem, Safe is.

In short: protect users against themselves, but give them all the power they wish for if they explicitly ask for it.


Don’t get me wrong, I am totally in favour of ensuring that the user can always access all data stored within or under their credentials. And I am also very much in favour of having some kind of all-access-style browser that allows the user to do that. And with that, heck, if the user wanted to copy data out of one app’s sandboxed internal storage into that of another app, sure, let them do that. What I am arguing against, though, is making that a general pattern for all apps.

Let’s take the assumed case that all data an app writes is by default encrypted with a local key for that app (accessible to the user, and through that browser with special permissions, but not to other apps nor the public). If that area never allowed unencrypted data to be saved, none of the most common use cases for what I’d consider “shared” data would make sense to host in there: no DNS, no shared folders or images. The main purpose of this area would be to give an app a place to write its own data, while everything that is shared between apps or needs to be publicly accessible (like my images folder or the website) would live in a “commons” area, separate from the app (similar to the SAFE Drive now). So then, what’s the proposed use case for another app to go into an app’s local area and write stuff?

I am an app developer. I’ve been developing systems on servers and client computers for over a decade now. I, like any other sane developer or devops person, have always secured our postgres databases with an app-unique user and password – as does WordPress – for one simple reason: every app relies on specific structures and data being of specific integrity. From WhatsApp’s/Telegram’s/Signal’s sqlite database on the phone (the local cache of your messages) to Firefox profiles on the desktop, the structure of the data saved there is “sacred”. It’s hard enough to ensure that you, with your own app, aren’t messing this up (google “database migrations” to learn horror stories), because if you mess with that you may very well not only render the app useless at the next start but could easily destroy all of the user’s data permanently. This core separation is often referred to as “multi tenancy” and is a default feature of any database (from mysql and postgresql to couchdb).
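The same isolation idea can be shown in miniature with sqlite: give every app its own database file, keyed by an app id that the launcher (not the app) controls, and one app can never open another app’s tables by accident. The store path and naming scheme here are made up for the example.

```python
# Toy illustration of per-app isolation ("multi tenancy"): each app id
# maps to its own sqlite file. Paths and naming are invented.
import os
import sqlite3
import tempfile

STORE = tempfile.mkdtemp()

def open_app_db(app_id):
    # One database file per app; the launcher, not the app, picks the path.
    path = os.path.join(STORE, f"{app_id}.sqlite")
    return sqlite3.connect(path)

db_a = open_app_db("safenote")
db_a.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
db_a.execute("INSERT INTO notes VALUES ('hello')")
db_a.commit()

db_b = open_app_db("safeeditor")
# SafeEditor's database knows nothing about SafeNote's tables:
tables = db_b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
```

A server database achieves the same thing with per-app users and `GRANT`s; the file-per-app version is just the smallest possible demonstration of the principle.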

If, as an app developer, I can’t reasonably trust that the database I have access to has been written and curated by my app, I can give you zero guarantee about what this might do to anything in your system. If you’ve ever accidentally installed two different apps into the same database (or the same app in two different versions), you know the mess this makes and that it is near impossible to recover from. There is an incredible amount of implied state that makes apps work, like doing things in a certain order, or relying on the fact that if a certain file is there, this app (and not any other) has been started before.

The idea I propose here is to prevent this pattern – like any reasonable database does: to prevent apps accidentally messing with one another, and even more than that, to protect users from accidentally doing that to themselves. Again, I am not arguing that the user shouldn’t have full control over that data, including deleting it. But having apps explicitly peek into another app’s internal data and mess with it is certainly a recipe for disaster. I wouldn’t develop for a platform that encourages that type of action, because my app would constantly be blamed for doing things I have no control over – other than by erasing everything at every startup.

In fact, should I really want to protect my data (for example because I am trying to hide something from the user), it is very likely I’d ship my own crypto and secure the data before saving anything to the network – including signatures so I can verify that the files haven’t been touched by anything other than my own app. Some apps (like Signal) most certainly already do that, and no underlying system will ever be able to prevent it. So if the system doesn’t provide integrity, this would be the only way to be sure. I’d rather have a system I can trust, though.
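The app-level integrity layer described above can be sketched with a keyed MAC: tag data with an app-held key before saving, verify the tag on load. This uses stdlib HMAC purely for brevity; the key name is hypothetical, and a real app hiding data from the network would more likely use asymmetric signatures plus encryption.

```python
# Sketch: an app shipping its own integrity check, independent of any
# guarantees the storage system provides. Assumes an app-held secret.
import hashlib
import hmac

APP_KEY = b"app-secret-key"  # hypothetical key held only by this app

def seal(payload: bytes) -> bytes:
    # Prepend a 32-byte SHA-256 HMAC tag to the payload before storing.
    tag = hmac.new(APP_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def open_sealed(blob: bytes) -> bytes:
    # Recompute the tag on load; refuse data touched by anything else.
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(APP_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("data was touched by something other than this app")
    return payload

blob = seal(b"my settings")
tampered = blob[:-1] + b"!"   # another app flipping one byte
```

Loading `blob` round-trips cleanly; loading `tampered` raises, which is exactly the “verify the files haven’t been touched” behaviour the post describes.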

Coming back to the actual use case discussed here: the SafeEditor and the SAFEDemo App. I’d argue that the SAFEDemo App storing a publicly available website (through DNS) in its own sandbox is the actual problem – an antipattern. It should be stored in the public area, so that any other app (including the SAFEDemo App) could gain legitimate access and change it (on the user’s behalf). So, if the system prevented this – an app storing any unencrypted information inside its sandbox – wouldn’t the problem be solved?


I put that point earlier to @Viv and he explained…

So the reason for sandboxing was precisely to stop users messing with the data and accidentally breaking the service they created. This both makes sense and would annoy those users who are aware of the risks, want the extra control, and would take responsibility if they mess up – as opposed to users who might not realise, and would be unhappy that the app allowed them the freedom to mess up.

I think the Demo App has been very helpful here. It has both helped us recognise the importance of these issues, and provided a really good edge case where what would best serve the user depends on the user’s understanding and competence. Perhaps it’s one of those where the user has to ask for the upload to be in a shared space as an “Advanced setting”, while the default would be to play “safe” :slight_smile:


For the problem of identifying if an app is legit, we’ll need a more reliable method than asking the user to okay auth requests for installs and updates every single time. People (including myself) will end up clicking yes without thinking, and victim blaming is not a solution for that.

How about delegating the task of validating apps to a set of trusted peers? Over time, as in any social network, there will be people whose opinion becomes more trusted; people will know that if they say something is okay, then it’s probably okay. Again, I’m not talking about a top-down appointment of who’s trustworthy and who isn’t, but something that will inevitably and informally happen over time. So why not use it?

Users could specify among their various settings (which the Launcher would load upon login) that they trust certain others (who they should be is their choice to make) to decide which apps (app versions, really) are trustworthy and which are not. It would mean no more “do you authorize” questions for most users, and a “X, Y, and Z endorsed this app; would you like to authorize it?” kind of question for the rest, who are more paranoid.
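A launcher-side policy for this delegation could be as simple as a threshold rule: authorize if enough of my chosen endorsers vouch for the app and none of them mistrusts it, otherwise fall back to asking me. The function and data shapes below are entirely hypothetical; they just make the rule concrete.

```python
# Toy version of delegated trust: authorize an app if at least
# `threshold` of my endorsers vouch for it and none mistrusts it.
# All names and structures are invented for illustration.

def decide(app, my_endorsers, endorsements, mistrusts, threshold=2):
    yes = sum(1 for p in my_endorsers if app in endorsements.get(p, ()))
    vetoed = any(app in mistrusts.get(p, ()) for p in my_endorsers)
    if vetoed:
        return "reject"            # one mistrust overrides endorsements
    if yes >= threshold:
        return "authorize"         # enough trusted peers vouched
    return "ask me"                # fall back to the usual auth dialog

endorsements = {"DavidMtl": {"SafeEditor"}, "Viv": {"SafeEditor"}}
mistrusts = {"piluso": {"TMC2Q"}}
peers = ["DavidMtl", "Viv", "piluso"]

decide("SafeEditor", peers, endorsements, mistrusts)  # two endorsers
decide("TMC2Q", peers, endorsements, mistrusts)       # vetoed
```

The “silently ignore otherwise” variant mentioned later in the thread would just replace the `"ask me"` branch with `"reject"`.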


I agree with you. I was looking earlier at my safenotes through the SafeEditor and realized that the app (SafeNote) would probably crash if I were to change the data of a safenote without being careful because its content is written in json. So I understand the risk. I should add a few checks in my app to gracefully fail instead of crashing… But I do get it.

It’s a potential recipe for disaster but not a certainty. I didn’t break my app because I knew what I was doing. I was careful and everything was fine. I understand another user with less experience could screw this up. I agree we should not allow this to happen as a default behavior.

The problem with this is that most developers will store all their data inside the private folder, like I did with SafeNote and like they did for the demo app (private subfolder). And this is even more true considering the incoming app rewards. That’s a big incentive to lock your data away from other apps. And then we are back to square one: apps will need to disguise themselves when the user wants to grant them access to another app’s data.

That’s what I’m trying to get at with the “hidden” feature. All apps are hidden by default. Another app cannot know of the existence of a hidden app, or ask permission to access its content while it’s hidden. Place the checkbox inside an advanced tab with all the proper warnings and safeguards.

Of course some people will find a way to mess up. But it’s no different than the projects you describe. As long as the database runs locally, protecting it with a password doesn’t prevent a user or another app from tampering and messing around with the file itself.

I think all in all it gives a much more interesting ecosystem. There’s gonna be the occasional screw ups but I think the pros are well worth the cons.


Do you guys know that Java-based trojans are still a plague out there? Getting infected requires the user to manually accept the plugin, and guess what, millions are still getting infected, with full interaction from the user.
If you are in doubt and don’t have a specific exploit for a browser, just add a Java-based exploit and boom, you are in. Just give them some incentive to click it: the picture of a naked celebrity, a funny video of a cat dancing, or the leaked payroll spreadsheet from the HR department. It doesn’t matter what it is, “common people” will ALWAYS fall for it. I often do this in my pentests to see what the lowest common denominator is, and even MS Office macros get executed, even though you have to go through two or three warning screens from Microsoft Office.

What about the permissions you have to authorize on Android? Does anybody read the list of permissions the app needs? Does anybody notice the pervasive abuse of permissions that a simple game asks for? No, nobody cares, nobody understands, and Android-based botnets are all the rage… delivered from the Play Store.

So the message is simple: if you make social engineering attacks easy to execute, THEY WILL BE EXECUTED.
I am happy that @happybeing mentioned “authorisation fatigue”; this is crucial. Even if it isn’t desensitization from repetitive exposure, people don’t read, period.

So, people, if we are going to build the new internet, we had better build it to keep people SAFE FROM THEMSELVES.
How can we achieve that? Well, let’s look at an example: invalid SSL certificates. Guess what? People accepted any certificate because they didn’t understand the error itself and simply wanted to get back to the website they were browsing. SSL MITM attacks were a freaking joke; they almost always worked… until… well, let’s see what Chrome and Firefox did.

They went from this: [screenshot: the original certificate-warning dialog]

To this: [screenshot: a scarier interstitial with a “proceed anyway” button]

And from that to this: [screenshot: the current page with a “Back to safety” button]

So what changed?

  1. The first one was completely, absolutely ineffective. NOBODY EVER READ OR UNDERSTOOD THE POPPING DIALOG, and everyone clicked YES.
  2. Then we evolved to the second one, which is scarier, and yet there was a button that said “proceed anyway” and people naturally pressed it, because, again, nobody cared.
  3. Then lastly, they hid the “proceed anyway” option behind an ADVANCED link; the commoners click the only apparent option out there, which is not “cancel” or “ok” but a comfortingly colored button that says “BACK TO SAFETY”.

Another thing to notice is the evolution of the error messages. The third iteration was straight to the point, with no verbosity at all – a single sentence: “Attackers might be stealing your information”, PERIOD. Now that is something I understand! I want to go back to safety now!!
Yes, we techies know that it MIGHT not necessarily be the case; it could be a misconfigured server, an expired certificate, a DNS change, whatever. But you have to consider that your audience is grandmas – that’s the lowest common denominator – and when you simplify the messages you have to err on the side of caution BY DEFAULT.

Oh BTW, on Firefox it is even better.
SSL Error Screen on Firefox: [screenshot: Firefox’s SSL error page]

Okay, so the common user will definitely click on “back to safety” because it is the only option available. But those who are curious might click on “Advanced” just to see if there is a “Proceed anyway” option hidden…

By now, normal “curious” power users are scared away and have left, but those who really know their stuff would click Add Exception and review the certificate.

And it forces you to add a security exception, which for the common user is far less explicit than “proceed anyway”. Then you have to review the certificate AND THEN “Confirm the Security Exception”… which is enough technicality to scare away an average user. You also have to make three clicks to add the exception and reach the site anyway, which further increases the effort and cost of the action.

My point is: blocking suspicious or risky activities must be the default, and for common users there shouldn’t be an option to override it at all. Only the “Advanced” users should have a special, slightly cumbersome menu to override it, to make sure that if they go through the trouble they know exactly what they are doing. A simple dialog warning will never be enough. If it were up to me, I wouldn’t even enable the “Add exception” button until the user had clicked and scrolled through the entire certificate.


I couldn’t agree more: Advanced (or potentially harmful) features should be hard to reach. If they are easy to reach anyone will (ab)-use them. But if we make you jump through hoops (like having to open the safe-explorer-app and copy-paste the content from one app directory into another, then do your changes from within the other and copy them back), it is less likely that you do it at all.

I was asking for the use case because if something is needed often and is commonplace, then we need to find a good API/mechanism to allow for it. In that case (only!) I could see a highly restricted permission that opens a special big red “App SafeEditor wants to access the data from SAFEDemo-App” dialog, giving the app one-time read access to the data, to export it for example. But I am arguing that this shouldn’t be the default or expected behaviour; rather, apps should share on their own terms in the “common area”. So this isn’t something we need to (or should) solve at the architectural level, but rather through features added later IF they turn out to be needed.


I think it’s gonna be quite common. We already have two use cases (SafeEditor+DemoApp, SafeEditor+SafeNote). By default app devs will want to lock all their data. The common folder won’t be used. Saying it’s an antipattern won’t matter.

Also keep in mind that if it is too restrictive or too complicated, developers will go back to disguising their apps as the default pattern. That is a much worse situation. So a balance needs to be found.

How about adding a command line parameter to unlock it: Safe_launcher -advanced. Running the launcher in default mode would simply hide all apps by default and ignore any request to access other apps’ data. Or maybe it’s an option in an advanced tab to activate the feature.

Maybe allow data to be read (if app is not hidden, etc.) and allow apps to ask for write permission only in advanced mode.


How about adding a setting where I could say something like “if at least 2 of @DavidMtl, @viv, @lightyear, and @piluso trust this app, and none of them mistrusts it, then trust and authorize it”, with “ask me otherwise” or “silently ignore the request otherwise”?


Agreed, we could have some kind of list of trusted apps. MaidSafe could even run their own list as a service, and developers would pay them to audit and approve their code.


Reputation based whitelisting? Interesting


I’m thinking more along the lines of something less centralized, which I already explained above: your list of trusted users depends on you, but of course there would be well-known trustworthy people (review sites, developers, testers) that most people would pick from.

To clarify that, I mean reputation as a human concept, not something measured by technological means: you pick whom you trust, and nobody can argue with that.

I believe it would be wrong to diss private app space just because some apps could use it to lock in data. Why not use a low quota to discourage that use, and let users decide if they are bothered by an app’s behavior by electing to not use it?

Authorizing access to another app’s private space should be as complicated as @piluso described it above, but maybe even “impossible” without having to deal with gory details like raw block addresses and keys and such.


Yeah that’s what I meant.

I’m not against the idea of private app space. My point is that ultimately the decision should be in the hand of the user.

How hard it should be can be seen as a spectrum: on one side is free access to everything, on the other making it impossible. Both extremes are problematic; the solution is somewhere in the middle.


I believe accessing another app’s settings should belong to the “near impossible” category: you would need to copy/paste raw block addresses and keys and such.

However, we could (I think should) have app specific document folders as well:

  • CameraAppA requests write access to Photos, so it gets a writable folder within there.
  • CameraAppB, similarly.

CameraAppA can’t access pictures taken by CameraAppB, and vice versa, but they both can save pictures.

  • ViewerApp could not see anything by default, so it needs to request read access to the main Photos folder. We say that’s cool, and now it can read pictures taken by both of the photo apps.
  • PhotoEditorApp does similarly, but it also requests a writeable folder so it can save the edited pictures.

To avoid @happybeing’s Repetitive Authorisation Fatigue, or just simple mistakes, these permission requests would be taken care of by the reputation based delegation of the authorization. If enough of my favorite power users said “this app is cool” and then listed the accesses it should be given, then my launcher would do so without asking me.
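The CameraAppA/CameraAppB/ViewerApp scheme above can be sketched as a container that hands each writing app its own sub-folder and treats container-wide read as a separate, explicit grant. All class, method, and app names here are hypothetical; this is just the permission model made concrete, not any real SAFE API.

```python
# Sketch of per-app document folders inside a shared "Photos" container.
# Write access = your own sub-folder; read access = a separate grant.

class PhotosContainer:
    def __init__(self):
        self._folders = {}   # app -> {filename: data}
        self._readers = set()  # apps granted container-wide read

    def grant_write(self, app):
        self._folders.setdefault(app, {})

    def grant_read(self, app):
        self._readers.add(app)

    def save(self, app, name, data):
        if app not in self._folders:
            raise PermissionError(f"{app} has no writable folder")
        self._folders[app][name] = data

    def list_all(self, app):
        if app not in self._readers:
            raise PermissionError(f"{app} cannot read the container")
        return sorted(n for folder in self._folders.values() for n in folder)

photos = PhotosContainer()
photos.grant_write("CameraAppA")
photos.grant_write("CameraAppB")
photos.save("CameraAppA", "a1.jpg", b"...")
photos.save("CameraAppB", "b1.jpg", b"...")

photos.grant_read("ViewerApp")
photos.list_all("ViewerApp")  # sees pictures from both camera apps
```

Note the asymmetry: each camera app can write only inside its own folder and cannot enumerate the container, while ViewerApp can read everything but write nothing, matching the bullet points above.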

On the contrary, I believe accessing another app’s settings should be a perfectly valid use case.

Let me introduce you to “The Master Configurator 2000”, known as TMC2K. TMC2K allows you to manipulate other apps’ settings, streamlined under a very elegant and user-friendly UI. TMC2K also has cool features like making backups of your settings, restoring old ones, and even the ability to share them with other users. The super neat feature of TMC2K is that you can also automatically sync your settings with those of another user, making the job of a system admin much easier.

TMC2K is of course a fictional app, I came up with it 5 minutes ago but I think it shows well why you can’t take for granted that some data should never be accessible to other apps.

What’s great is that the other developers don’t even need to be aware that TMC2K exists. What’s also great for the TMC2K developers is that most apps on Safe are open source, which means it’s fairly easy for them to keep their code in sync with other apps so they don’t break them.

Why not also trust them to list apps that manipulate other apps’ data? If your favorite power user says TMC2K is good, why not trust their judgement on this kind of app too?


You certainly make a good point here.


Because, unfortunately, some users will make a mistake and download TMC2Q instead of TMC2K, and their data and/or safecoin will be corrupted or disappear. Of course these users will tell the world, with some truth, that the Safe network is unsafe s**t.

I’m not against this possibility but, as I said above, it must be exceptional and extremely difficult to do.
