When I first got interested in SAFE (2015ish?), apps ran as web apps on the network, in a browser that had no access to the Clearnet. It was positioned as an overlay network providing a secure sandbox, which included limiting the applications running on it to accessing only the SAFE network. This simple technical constraint on apps was important to me, as it explicitly forced application developers to operate within the security and privacy restrictions intrinsic to the SAFE network.
It was shortly after this original alpha that Maidsafe re-architected the network into an API to support more platforms, reorganized to focus on the core technology, and scaled back outreach to the community in terms of documentation, samples, etc.
Based on the status updates, which I have still followed all these years, it seems the SAFE network is now an API that programmers use when building applications. By virtue of supporting mobile and desktop applications (not just web apps loaded off the network), it no longer restricts application code to accessing only the SAFE network, and applications will no longer be explicitly constrained to the sandbox of the network?
Is this correct?
I don’t think it was ever only sandboxed browser apps.
AFAIK it was always going to also have a general API - you can’t stop this actually because the network is a protocol and anything using the protocol can access the network.
The point, though, was that if you used Safe Browser, anything running inside it would, as you said, be unable to access the clearweb.
So AFAIK nothing has changed in this regard.
Well, at the time I remember Maidsafe was only supporting desktop due to the use of the browser (and I seem to remember a tray icon - it might have been a proxy app). A lot of the discussion in the forums at the time was around the need for, and how to achieve, mobile support: the difficulties of developing a Safe Browser for mobile, and how to integrate with mobile hardware features like cross-linking, notifications, etc.
At least from my recollection, it was only after that initial release that the status updates moved towards refactoring the core into Rust, using an external authenticator module (moving the API out of the browser), etc. At least judging by what was presented on the website, the early sample apps, and the promotional messaging, things have changed a lot.
While there is no disputing that the technical accomplishments the SAFE network will deliver will be a leap forward in distributed computing and decentralized data security and storage, I no longer believe the SAFE network will live up to its promise of providing a safe and secure overlay network, and that was something I personally found very attractive about this project. At least for me, something important has been lost along the way.
It’s still an overlay network, that hasn’t changed. The apps you mention including the browser and the authenticator were tied to the previous iteration of the API which has since been completely overhauled. Once the network is stable they will be refactored.
I repeat, nothing has changed in terms of what you mention. Perhaps you were under a misapprehension. You can’t prevent an app from accessing a protocol, and that was always the case with SAFE.
When I first discovered Maidsafe, it seemed pretty clear from the messaging that applications were intended to run in a sandbox, ensuring that every application on the SAFE network would require explicit user permission for ALL access to user identity properties, data access, and storage (including being restricted from accessing the Clearnet). The messaging was pretty explicit that it would put power back in the hands of users instead of applications.
A protocol is by definition a set of rules, and those rules could include ensuring that applications run in some sort of runtime container/context, so saying the SAFE network is a protocol doesn't address my inquiry at all.
If the SAFE network is not going to impose sandbox restrictions on apps, requiring them to run in a context/container that limits their access to user identity properties and data according to the permissions granted, then I will reiterate my impression that something has changed along the way.
Once the SAFE network is complete, will all applications accessing the network be required to run in a secure sandboxed container or runtime context?
If the answer is now NO, applications will not be restricted to a sandbox environment, then the SAFE Network imho is no longer an overlay network, at least not in the way it was presented in the original messaging; it is just a decentralized data storage system. The difference matters!
I get that for you this is a difference, but what you thought was the case was never, as far as I can see, a possibility, so nothing has changed wrt Safe. You can't set rules in the way you imagine, and certainly not in the way Safe was designed from the start. I know that because I was building apps early on.
I’m not saying what you imagine isn’t possible, but it wasn’t the case here so I’ve not thought much about it.
Sorry it looks like you imagined wrong. As @happybeing says this was never the case and you evidently misunderstood.
Thank you @happybeing @Southside and @JPL - this is the specific clarification I was looking for.
At the time I was referring to, SAFE applications could only run in the SAFE browser on desktop. The browser had been modified so it could only access SAFE endpoints, and it restricted access to the Clearnet. It is entirely possible that I inferred from the demo videos/materials demonstrating how apps worked in the browser that this was a design principle of the network, rather than just the technical capability at the time.
This is kind of the genesis of why I submitted this post. As the project continued to evolve, I was never sure whether this was or was not going to be a principle moving forward.
From a technical standpoint, I suppose I was envisioning that the SAFE SDK would include an app container runtime similar to Cordova, React Native, or Ionic Capacitor: SAFE applications would be bootstrapped into a runtime container that limited their access to the SAFE API.
[EDIT: seems I responded too early. Oh well maybe this will help or not]
Yes, always. The initial network was MAID (Massive Array of Internet Disks), and it wasn't even about apps or browsers etc.
But when you looked at the Internet, we saw many applications on it, such as eMail (SMTP), Newsgroups (NNTP), the time standard (NTP) and so on. Web browsing came after, and apps on browsers came after that.
In the same way, we see that web pages are essentially files (or files on the fly, i.e. dynamic), and people saw that Safe (nee MAID), being storage, could also store browser pages and run applications.
Before that, applications were going to run natively on the PC using the APIs. Now the browser is seen as a simple way to have a certain style of application: easier and quicker to develop. Now we have native applications and browser applications.
@arsnebula It would be better to say that the Safe Browser is an overlay on the Safe Network which uses the TCP/IP & UDP protocols for packet delivery.
Well, I’m a bit thrown off. Shouldn’t self-encryption on each chunk mean that, if the data is private but permission is given to read or write it, then even if a third party were to re-upload it for themselves, self-encryption’s deduplication feature should prevent them from having any kind of ownership or control of those SE chunks? Even if unique data was added, they should only have control over the unique data added.
Therefore, if permission is revoked, the third party cannot see any new mutations etc., and the data should still not be available to be uploaded or controlled by them on SN.
Of course said data could be stored and used as they please outside of SN.
Maybe in a future update to the network all data could be homomorphically encrypted, so that the third-party apps/companies dealing with the data don’t actually get to see it? Caveat that only private data would ever get this treatment; public data is public data, perpetual, and fair game for collection and utilization by others.
I would think that when Fully Homomorphic Encryption is available it might even simplify the network structure a bit. May be wrong on that but it seems like hope is not entirely lost.
If there were ever a project that would vow to bring it into reality, there is none more aligned with those ideals than SN or Maidsafe.
That is my understanding for private data since the App is running on your device and using your account. Public data does not have ownership.
The App would need permissions to post public or private chunks. So we still have control over the actions of an App.
If it’s a native App, then as you say the App could store data elsewhere, and I can see the legit use for some of these apps, like copying a Safe file to your local storage.
Iirc it doesn’t work this way for private data. When they re-upload it for themselves there would be no dedup, because the encryption is different; they would now have their own unique copy.
We are still sorting this but it’s the data_map visibility that should matter IMO. So even private chunks are public. I am still on the fence over deleting in a CRDT type network. But we can have private data by just removing the visibility of or encrypting the data map.
I feel delete is one of those I need it things that we don’t need, but it’s a debate for sure. Different world, different requirements.
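To illustrate the data_map idea in a toy Python sketch (names and structure simplified; a real data map also carries per-chunk decryption keys, which this sketch omits): the chunks stored on the network are opaque blobs addressed by content hash, and privacy comes entirely from who can read the map.

```python
import hashlib

def store(chunks, network):
    """Store opaque encrypted chunks on a content-addressed network
    (modelled here as a plain dict) and return the data map."""
    data_map = []
    for c in chunks:
        name = hashlib.sha3_256(c).hexdigest()  # chunk named by its hash
        network[name] = c
        data_map.append(name)
    return data_map  # whoever holds this can reassemble the file

network = {}
secret = [b"encrypted-chunk-1", b"encrypted-chunk-2"]
dmap = store(secret, network)

# The chunks are "public" on the network, but useless without the map;
# hiding or encrypting the data map is what makes the data private.
recovered = b"".join(network[name] for name in dmap)
assert recovered == b"".join(secret)
```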
Okay, I see what you mean. Any addition or slight alteration changes the chunk’s uniqueness, in the cascading fashion SE uses when encrypting each chunk with the previous one.
Seems like that’d be a great problem to have solved.
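Roughly, the cascade could be sketched like this (toy Python, not the real self_encryption implementation, which mixes in the hashes of two neighbouring chunks): each chunk’s key depends on the previous chunk’s plaintext hash, so editing one chunk also changes the encrypted form of the chunk after it.

```python
import hashlib

def chunk_keys(chunks):
    """Toy illustration of self-encryption chaining: each chunk's key
    mixes in the previous chunk's plaintext hash (wrapping around for
    the first chunk), so an edit cascades into the next chunk's key."""
    hashes = [hashlib.sha3_256(c).digest() for c in chunks]
    keys = []
    for i, h in enumerate(hashes):
        prev = hashes[i - 1]  # index -1 wraps to the last chunk
        keys.append(hashlib.sha3_256(prev + h).hexdigest())
    return keys

a = chunk_keys([b"chunk0", b"chunk1", b"chunk2"])
b = chunk_keys([b"chunk0!", b"chunk1", b"chunk2"])  # one-byte edit in chunk 0

assert a[0] != b[0]  # the edited chunk's key changes
assert a[1] != b[1]  # and the change cascades into its neighbour
assert a[2] == b[2]  # chunks outside the chain link are unaffected here
```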
One method for private data is simply encrypting with a modified self-encryption, where the user’s ID key(s) are added to the encryption key for each chunk, thus making the resulting hash of the encrypted chunk different. Deleting it is then simple, because only one ID is known to have created the chunk and can therefore be allowed full control over its “life”.
[EDIT] Actually, if memory serves correctly, the network added the ID key to the hash using some function when the user was storing private data.
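A toy sketch of that idea (hypothetical naming function, not the network’s actual scheme): public chunks are named by content hash alone, so identical uploads deduplicate, while mixing the owner’s ID into the name gives each owner a distinct chunk that only they are known to control.

```python
import hashlib

def chunk_name(content, owner_id=None):
    """Toy naming scheme: public chunks are named by content hash alone
    (identical uploads deduplicate); private chunks mix in the owner's
    ID key, yielding a unique chunk per owner that can be safely deleted."""
    h = hashlib.sha3_256(content)
    if owner_id is not None:
        h.update(owner_id)  # salt the name with the owner's ID key
    return h.hexdigest()

data = b"same file contents"

# Public: two users uploading identical data get the same chunk -> dedup
assert chunk_name(data) == chunk_name(data)

# Private: each owner gets a unique chunk, so delete can be scoped to them
alice = chunk_name(data, owner_id=b"alice-key")
bob = chunk_name(data, owner_id=b"bob-key")
assert alice != bob and alice != chunk_name(data)
```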
@dirvine I still firmly believe that if people are to truly have a device-independent Safe experience, then temporary files have to be able to be stored on, and removed from, the Safe network. To not allow this means that a person’s documents and files (including multimedia) will exist in one form or another many times over, forever. It is not unreasonable for multimedia temp files (if never deleted) to add up to multiple times the size of the final file.
If we cannot delete private temporary files, then the model of use of the Safe network will require that temporary files of non-trivial size be stored on the device. Otherwise, rather than being reduced, the total disk space required for the world’s data runs the risk of being greater on Safe, even with dedup. (Temp files often do not keep the same byte positions for stored data as edits are made.)