SAFE URL: "safe://" cross browser support revisited!

why do you think this would be ‘Firefox’ only? plugins and extensions can be developed cross-browser.

1 Like

because Chrome doesn’t support custom protocols (so safe: isn’t possible - the only way Chrome would work is with an https://www.* link)

I think that is incorrect. See here:

Basically, the plugin just registers the protocol on install so the link is forwarded to it … just as “torrent:” is forwarded to your torrent application.

Chrome 13 now supports the navigator.registerProtocolHandler API. For example,

navigator.registerProtocolHandler(
  'web+custom', 'http://example.com/rph?q=%s', 'My App');

Note that your protocol name has to start with web+, with a few exceptions for common ones (like mailto, etc.). For more details, see: Registering a custom protocol handler | Articles | web.dev

I think (not positive) that the limitation of having to use ‘web+’ isn’t applicable if the link is being passed to a local app.
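For instance (untested, port made up), a page served from the local server could register that server as the handler itself - though through the web API, Chrome would insist on the web+ prefix:

    navigator.registerProtocolHandler(
      'web+safe', 'http://localhost:8101/SAFE/%s', 'SAFE Handler');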

EDIT: see also: ajax - Is it possible to open custom URL scheme with Google Chrome? - Stack Overflow

7 Likes

In further thinking about this, a plugin isn’t needed at all, just a web-server app that registers its custom protocol handler with all installed web-browsers (e.g. just as a torrent program registers itself to handle ‘torrent:’ protocols). Once the app has pulled the relevant information from SAFEnet, it then forwards the data back to the browser, just as a web-server does.

This would be nice as it could be packaged with the overall SAFE package, and on install it would automatically register with the installed browsers. So nothing more for the end-user to do. Also only one app to develop, and no cross-browser coding.

EDIT: To further this idea, here is an example of how to add a protocol handler for an external app in Chrome on Windows:

This will force Chrome to handle your protocol and it won’t prompt the user
for confirmation (I don’t know how to allow the confirmation dialog)

Windows Paths:
XP: C:\Documents and Settings\<USERNAME>\Local Settings\Application Data\Google\Chrome\User Data
Vista/7: C:\Users\<USERNAME>\AppData\Local\Google\Chrome\User Data

Add your protocol to the ‘Local State’ file under the ‘protocol_handler’ section, as such:

"protocol_handler": {
    "excluded_schemes": {
        "YOUR_PROTOCOL_NAME": false
    }
}

Source: Google Product Forums

It’s all easily doable IMO; if this is the preferred route, however, then links would have the SAFE:// nomenclature in front and wouldn’t be usable as the OP desires … so it really is an either/or proposition. Personally, as previously stated, I think SAFE:// is safer.

EDIT: While I’m thinking about all of this … how the local SAFE web-server idea might work:

  1. The user inputs “SAFE:my-safe-site” (some SAFE address). The browser then hands off to the locally installed SAFE web-server.

  2. The SAFE web-server queries SAFEnet. Whether the response is positive or negative, it creates a web-app containing the relevant information in the web-server directory (e.g. localhost/SAFE/). The server then sends a normal http:// address back to the active browser pointing at the web-server’s app - e.g. http://localhost/SAFE/safe-web-app

  3. The web-app has an outer shell with some javascript to push the browser state and show the proper URL (“SAFE:my-safe-site”), and an iframe that contains the requested data (website) from SAFEnet. If a negative response was received from SAFEnet, the web-app just shows a page-not-found 404 error. (A rough sketch of this follows below.)
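To make that concrete, here is a rough Node.js sketch of the server loop - everything in it (the port, the paths, the fetchFromSafeNet helper) is made up for illustration. I’ve left the step-3 address-bar trick out, since history.replaceState only accepts same-origin URLs, so showing “SAFE:my-safe-site” in the address bar would need some other workaround:

    const http = require('http');

    http.createServer(async (req, res) => {
      if (req.url.startsWith('/SAFE/content/')) {
        // the iframe's request: pull the actual site data out of SAFEnet
        const site = req.url.slice('/SAFE/content/'.length);
        const content = await fetchFromSafeNet(site); // made-up helper
        if (content === null) {
          // negative response from SAFEnet: serve a normal 404
          res.writeHead(404, { 'Content-Type': 'text/html' });
          res.end('<h1>404 - not found on SAFEnet</h1>');
        } else {
          res.writeHead(200, { 'Content-Type': 'text/html' });
          res.end(content);
        }
        return;
      }

      // the hand-off from the protocol handler, e.g. /SAFE/my-safe-site:
      // reply with an outer shell that frames the requested site
      const site = req.url.replace(/^\/SAFE\//, '');
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<html><body><iframe src="/SAFE/content/' + site +
              '" style="width:100%;height:100%"></iframe></body></html>');
    }).listen(8101);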

6 Likes

Last week, I also had a similar thought :smiley:

The web pages should be supported on all platforms. From mobile to desktop, we would expect web pages to work as they do on the present web.

My motivation was to try a simpler solution that would make it easy for the end users and also for the application developers.

If we can build a simpler solution on top of the existing standards and still serve the content securely, that would be ideal.

As you said, a simple web server with RESTful endpoints would make it easier for the devs to use. So the desktop applications can consume the RESTful APIs from the local server. In addition to this, including a local http proxy for requests ending with the .safenet TLD would make it easy for all browsers to use the same proxy server to serve the content. I worked on a small POC to check how things could work this way, and it was promising indeed (personal opinion :wink: ). As a developer I could use the same set of tools for development and debugging, and as an end user the configuration is very minimal.
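A proxy of that sort could look something like this (a Node.js sketch, not my actual POC; the port is invented and the SAFEnet lookup is stubbed out) - the browser’s HTTP proxy is pointed at the local server, which answers .safenet hosts itself and forwards everything else untouched:

    const http = require('http');

    http.createServer((req, res) => {
      // a browser configured to use this proxy sends absolute-form URLs,
      // e.g. "GET http://my-site.safenet/page HTTP/1.1"
      const url = new URL(req.url);

      if (url.hostname.endsWith('.safenet')) {
        // requests for the .safenet TLD are answered locally from SAFEnet
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<h1>Fetched from SAFEnet: ' + url.hostname + url.pathname + '</h1>');
        return;
      }

      // normal web traffic: pass the request through unchanged
      const upstream = http.request(
        { hostname: url.hostname, port: url.port || 80,
          path: url.pathname + url.search, method: req.method, headers: req.headers },
        up => { res.writeHead(up.statusCode, up.headers); up.pipe(res); });
      req.pipe(upstream);
    }).listen(8101); // browser HTTP proxy setting: localhost:8101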

I have detailed the approach here.
All your inputs would definitely help in getting it implemented in the best possible way. The final RFC should hopefully be open for discussion soon.

14 Likes

The approach is a good one; however, I do not believe - as you state in the PR - that it should be combined with the launcher. Rather, this would be a great application to stand separate from the launcher, being - as it is - able to pipe http pages into the browser.

If we limit the development of public content to http-compatible webpages, then using the launcher solely as a proxy server would work marvelously. However, does the Network not allow different applications - run natively on the desktop outside of the browser - to function in ways beyond what an http webpage - even with dynamic javascript - might be able to deliver?

2 Likes

Glad you liked the approach :smiley:

The reason for having it in the same application is to make it easier to get started. We can also provide an option in the application to start and stop the proxy as desired (scale it as and when needed). I thought that managing the proxy and also managing the connected application sessions from a single place would be better. Install one application and good to go! At this point in time, I see the proxy as a lighter component.

Certainly, yes. Desktop applications can directly invoke the REST endpoints, bypassing the proxy. Dynamic JavaScript can consume the REST APIs just like the desktop applications.

But for invoking REST APIs from browser-based applications, the hostname api.safenet is preferred over localhost:PORT. This would enable us to cater for dynamic JavaScript API calls on other platforms like mobile.
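For example, a page script could call the API like this (the endpoint and auth header are only placeholders; the actual routes will be in the upcoming documentation):

    // same code whether api.safenet is resolved by the local proxy or
    // intercepted natively on another platform
    fetch('http://api.safenet/nfs/directory', {           // placeholder endpoint
      headers: { Authorization: 'Bearer <session-token>' } // placeholder auth
    })
      .then(response => response.json())
      .then(dir => console.log(dir));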

The launcher on mobile (or some other platform) might not be a server-based implementation. In that case, based on the hostname, the request can be intercepted and the data served by a platform-specific implementation.

I am working on the REST API documentation for authorisation, NFS and DNS. Once that is completed, I believe it will be easier to relate to.

10 Likes

As we agreed, the proxy is but one use-case of the Network, and it may not be the only implementation of a browser add-on. Would you seek to create a monopoly on the http proxy by irrevocably building this particular one into the launcher?

And using your same logic, would it follow to have to bundle into the launcher a VoIP protocol handler, or a SSH protocol handler? (really any existing or future protocol requiring daemon/server-like functionality)

Couldn’t external applications just make their own connections to the launcher as a service when necessary to start and stop their own daemons as desired?

2 Likes

hahaaaa sorry @TylerAbeoJordan I should have said AFAIK :smiley:

ok then! easier mobile support and one single standard sounds awesome! This convinces me!
@smacz I’d vote pro integration - maybe a “disable” button in the preferences category - but installing a single app where everything just works would be amazing :slight_smile:

this only leaves the security issue … people would need to be careful then …

1 Like

I’m super excited by this!!

4 Likes

I could imagine the web-server having data pre-processing plugins and post-processing javascript ‘plugins’, allowing for different functionality within the browser and possibly subsequent modification/processing of data. Perhaps that would give the flexibility that @smacz desires?
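As a napkin sketch of what I mean (every name here is invented), the server could keep two plugin chains and thread each response through them:

    const prePlugins = [];   // transform raw SAFEnet data before rendering
    const postPlugins = [];  // transform the rendered HTML before the browser sees it

    function toHtml(data) {  // stand-in for whatever base rendering the server does
      return '<html><body>' + data + '</body></html>';
    }

    function render(rawData) {
      const data = prePlugins.reduce((d, plugin) => plugin(d), rawData);
      const html = toHtml(data);
      return postPlugins.reduce((h, plugin) => plugin(h), html);
    }

    // example post-processing plugin: stamp pages so users can see the source
    postPlugins.push(html =>
      html.replace('</body>', '<footer>served via SAFEnet</footer></body>'));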

I suppose there would be a security issue, but using something like PHP with the server would allow the use of some traditional web technology … although I don’t know if there would be any advantage to that, and the complexity goes up quite a bit. Anyway, fun things to contemplate.

EDIT-ADD:

I read through your GitHub link …

Though the development and initial configurations are made easier, the user experience might not be great because the user has to manually click on the prompt to authorise each and every time. An option to persist the authorisation can be a feature that can be added in future to improve the experience.

Why would the option to persist be added in the future? Is this a difficult thing to do?

I am still not sold on using http in links to safenet data. Even with a custom TLD, a browser not configured to use a proxy for that TLD will send the request through traditional insecure routes, thus exposing the user (who may have been given the link) to malicious meta-data collectors.

Why can’t we use protocol handlers for all devices and all browsers? Is it true that mobile device browsers do not support protocol handling?

what about this?: https://lists.macosforge.org/pipermail/webkit-qt/2010-April/000466.html

Being able to use a wide range of TLDs is helpful for categorizing the web, whether that web is the existing one or a new one being built in SAFEnet. It is also more recognizable to both web-developers and web-users.

Come use SAFEnet today, where you can have your creativity even more limited than with the existing net!

EDIT again: I must partially retract my view that “SAFE:” addresses will not end up being sent out to meta-data collectors (if an unrecognized protocol) … it turns out that most browsers by default now dump unrecognized strings to search engines!!! Holy moly, what a security breach. Anyway, this can be disabled (and should be!)

e.g. with Firefox: Search the web from the address bar | Firefox Help

Still, this leaves a security hole for most users who do not have a protocol handler for “SAFE:”. I guess the average person who isn’t security conscious will inevitably face the wrath of bad actors collecting data.

Still my argument in favor of “SAFE:” isn’t dead as, for me at least, being able to use any tld I like is a big bonus.

3 Likes

True. I suppose though that other apps can also be packaged with the installer, and I think that will be the case, so I’m unsure what the problem would be here. Basic web-functionality from the beginning would be great. If someone develops a better system later and installs it, then that separate app could deactivate this web-server with user permission.

With this included in the launcher, there will be little to no incentive to develop an http-rendering app.

Also, who gets the rewards for this App? The App (http proxy) should presumably be able to both PUT and GET data from the Network. So wouldn’t that deserve the same treatment as any other app?

But then, if that deserves the same treatment, wouldn’t other apps be dismayed that the incumbent app has such a significant advantage as to be included with one of the most critical aspects of the Network?

Now we have an App that is getting rewarded that is bundled together with a critical aspect of the Network? I can’t fathom where that’d be desirable long-term.

While I think a proxy server model would work excellently for the browser plugin, I am strongly opposed to irreversibly tying it to the “gateway to the network” - the launcher.

There is simply no need to do so.

Do one thing and do it well
Unix Philosophy - Wikipedia

That’s not to say, though, that the addon can’t include launcher, routing, etc. as an optional part of its runtime…

I think you are overstating what this does. It is not a high hurdle for others to leap over … in fact it makes sense as a tool for BASIC functionality for the network. There has always been talk of incorporating basic tools for the network to have some functionality from the start - this is clearly within the boundaries of what the community expects in terms of basic functionality.

A web-server such as this isn’t a complicated affair and would form the foundation for others to build upon and to build plugins for - plugins that could do both pre- and post-HTML-rendering processing. Such a pluggable server would open the door to many opportunities for external developers, much more so than any browser plugin would - hence your view that this is somehow locking out future development is, IMO, unmerited.

Basic functionality of the Network consists of PUTs and GETs. The launcher is the “gateway to the Network”, not the “interpreter of data”.

Any interpretation of data should be done outside of the gateway. I agree that it isn’t a complicated affair, but I don’t believe that it is required by all Apps that need to use the Network - as is the case with the launcher. In fact, it should hopefully only be useful to a select few!

Security-wise, interpreting data at the same location as the verification of credentials and authorization of application access is, frankly, quite frightening. Despite the developer’s best security practices, this type of design permits way too much contact surface area between the two for my liking.

No, let’s call this what it is - an application that functions as an http proxy server. It is not a basic tool, it is an application. And the way that it connects to the Network is through the launcher, like any other application. And what was the given reason for including this in the launcher?

“Easier for getting started” and “better”. Well, shit - it’s hard to argue with such thought-out logic. But I guess I’ll keep trying.

Computers are really good at automating start-up processes - surprising no one. Using a bit of imagination, I could envision the proxy being turned on and off by the addon button inside the browser.

Not only is this more intuitive, this also functions inside of the browser which the end user will be using anyways. There should be no need to access a running daemon separately in order to turn on and off functionality if the application itself can control this behavior.
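For instance, a napkin sketch of a WebExtension background script using Chrome’s chrome.proxy API (untested; needs the “proxy” permission; the port is invented), where the toolbar button toggles routing of .safenet traffic through the local proxy:

    let enabled = false;

    chrome.browserAction.onClicked.addListener(() => {
      enabled = !enabled;
      chrome.proxy.settings.set({
        value: enabled
          ? { mode: 'pac_script',
              pacScript: { data:
                'function FindProxyForURL(url, host) {' +
                '  if (dnsDomainIs(host, ".safenet")) return "PROXY localhost:8101";' +
                '  return "DIRECT";' +
                '}' } }
          : { mode: 'direct' }, // toggled off: all traffic goes direct
        scope: 'regular'
      });
    });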

The real question here then is: “How much hand-holding does the end user require?”

The answer is a lot.

However, without giving examples, or trying to brainstorm a solution in this non-technical post, I will refer to my earlier point - Computers are really good at automating start-up processes.

The really difficult part here is to get the end user to get everything up and running. Installing software is a much more intense job than installing an addon. The fact is that the launcher (and the rest of the code required to participate in the Network) will have to be downloaded anyway. As far as I’m concerned, any App worth its salt would include the ability to set up the Network’s code - along with its own config - if it is not already present.

And that’s not to say that there won’t be bundles that include both the Network’s code as well as multiple applications - browser addon, litestuff, etc. - much like Linux distros do now. But in Linux there is a clear separation between kernel operations and userspace. A separation that translates perfectly to this conversation regarding the launcher and applications.

Once it’s up and running, the expected workflow for controlling an App’s behavior is - seemingly redundantly - to control it from the App itself. In this case, this means through the addon inside of the browser.

Requiring any further knowledge and effort from the end user is not “better” and not “easier to get started”.

2 Likes

So are you going to code all of these plugins and maintain them? At the end of the day you can be as smart-arse as you want about what you think is better … but how much work will it involve and who is going to do all of it?

  1. I’m not attached to a web-server being part of the launcher (that’s Krishna’s idea), but overall a web-server approach seems a simpler/easier means than plugins for every browser floating around out there. My thought was that this server should be part of the installer, not necessarily the launcher. The ‘E’ in SAFE is for everyone, not just those with a particular browser … unless of course we are building a browser specifically for SAFE to be included in the overall package.
  2. Why add the complication for the user? I’m not in favor of the ‘proxy-server’ method (Krishna’s idea again), but rather a means of handing off the link to the server via registering a protocol handler ‘SAFE:’ for the server.

Exactly. I’m not sure whose point you are trying to make, yours or mine, but it feels like your arguments just validate my premises. Perhaps your debate is with Krishna.

Well, nothing has been decided tbh.

There are still two approaches open in the RFC PR: one using the older IPC, and the REST API approach. I thought of getting your ideas on these approaches to churn out the best one. It is indeed turning out to be great. I have added packaging to my list as a point to be thought about. Will definitely update once I have a plan for it.

At present, the reward mechanism for developers is not implemented. Once that is in place, the request source can be trusted better, and based on that the persisted data can be validated as coming from the same source.

Thanks a lot for providing very helpful insights, @TylerAbeoJordan, and @smacz

4 Likes

Important point, I hadn’t thought of this. We need to sit down and consider the different methods of browser support and their implications.

I think it’s just too valuable for ease of use and mass adoption to not support existing browsers, and I can’t see how we can mitigate this - except :slightly_smiling: to a small degree, by ensuring that all SAFE URLs behave like a normal URL when using a browser that doesn’t understand them! I refer of course to the following suggestion (maybe I should not have split the topics):

I think we should revisit this once we’ve thrashed out the options a bit more. Then look at the pros and cons again.

3 Likes

I think that the latest dev update has perhaps addressed this issue with a proxy rather than a local http server.

This means that safe URLs will always be satisfied as far as the browser is concerned and not be sent to a search engine to “help you out”.

1 Like

That wasn’t the point though. We understand that if you have the SAFEnet software installed, there won’t be an issue. It is when you don’t have it installed and someone sends you a link to something that your particular nation-state deems illegal, and you follow it unwittingly … then the meta-data collectors put you on a list.

Ultimately it seems there is nothing we can do in this case - which, IMO, is unfortunate.

Given that the above holds true, it no longer bothers me too much which method is used - except that I hope it supports all browsers by default and that it is flexible and extendable, i.e. that it will itself allow plugins.

I would also add, though, that it would be nice to be able to use many different TLDs …

plus “.safenet” is too long in any case, IMO. So even though “SAFE:” doesn’t offer any security advantage, I think I would still prefer to go with that.

2 Likes