SAFE URL: "safe://" cross browser support revisited!

With this included in launcher, there will be little to no incentive to develop a http-rendering app.

Also, who gets the rewards for this App? The App (http proxy) should presumably be able to both PUT data to and GET data from the Network. So wouldn’t that deserve the same treatment as any other app?

But then, if that deserves the same treatment, wouldn’t other apps be dismayed that the incumbent app has such a significant advantage as to be included with one of the most critical aspects of the Network?

Now we have an App that is getting rewarded and that is bundled together with a critical aspect of the Network? I can’t fathom how that’d be desirable long-term.

While I think a proxy server model would work excellently for the browser plugin, I am strongly opposed to irreversibly tying it to the “gateway to the network” - the launcher.

There is simply no need to do so.

Do one thing and do it well
Unix Philosophy - Wikipedia

That’s not to say, though, that the addon can’t include the launcher, routing, etc. as an optional part of its runtime…

I think you are overstating what this does. It is not a high hurdle for others to leap over … in fact it makes sense as a tool for BASIC functionality for the network. There has always been talk of incorporating basic tools for the network to have some functionality from the start - this is clearly within the boundaries of what the community expects in terms of basic functionality.

A web-server such as this isn’t a complicated affair and would form the foundation for others to build upon and to build plugins for – plugins that could do both pre and post html rendering. Such a pluggable server would open the door to many opportunities to external developers, much more so than any browser plugin would do - hence your view that this is somehow locking out future development is, IMO, unmerited.

Basic functionality of the Network consists of PUTs and GETs. The launcher is the “gateway to the Network”, not the “interpreter of data”.

Any interpretation of data should be done outside of the gateway. I agree that it isn’t a complicated affair, but I don’t believe that it is required by all Apps that need to use the Network - as is the case with the launcher. In fact, it should hopefully only be useful to a select few!

Security-wise, interpreting data in the same place as the verification of credentials and the authorization of application access is, frankly, quite frightening. Despite the developer’s best security practices, this type of design leaves far too much shared attack surface between the two for my liking.

No, let’s call this what it is - an application that functions as an http proxy server. It is not a basic tool, it is an application. And the way that it connects to the Network is through the launcher, like any other application. And what was the given reason for including this in the launcher?

“Easier for getting started” and “better”. Well, shit - it’s hard to argue with such thought-out logic. But I guess I’ll keep trying.

Computers are really good at automating start-up processes - surprising no one. Using a bit of imagination, I could envision the proxy being toggled on and off by the addon button inside of the browser.

Not only is this more intuitive, it also functions inside of the browser, which the end user will be using anyway. There should be no need to access a running daemon separately to turn functionality on and off if the application itself can control this behavior.
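To make that concrete, here is a minimal sketch of the toggle the addon button could drive. None of this is a real launcher or addon API - the class and method names are illustrative only:

```python
# Hypothetical sketch: the browser addon flips the local SAFE http proxy
# on or off through a small controller, instead of the user having to
# manage a separate daemon by hand. Names are made up for illustration.

class ProxyController:
    """Tracks whether the local SAFE http proxy should be serving requests."""

    def __init__(self):
        self.running = False

    def toggle(self) -> bool:
        """Called when the addon button is clicked; returns the new state."""
        self.running = not self.running
        return self.running


controller = ProxyController()
print(controller.toggle())  # addon button pressed: proxy turns on -> True
print(controller.toggle())  # pressed again: proxy turns off -> False
```

The point is only that the on/off state lives behind a single call the addon can make - no separate daemon interaction required of the user.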

The real question here then is: “How much hand-holding does the end user require?”

The answer is a lot.

However, without giving examples, or trying to brainstorm a solution in this non-technical post, I will refer to my earlier point - Computers are really good at automating start-up processes.

The really difficult part here is getting everything up and running for the end user. Installing software is a much more involved job than installing an addon. The fact is that the launcher (and the rest of the code required to participate in the Network) will have to be downloaded anyway. As far as I’m concerned, any App worth its salt would include the ability to set up the Network’s code - along with its own config - if it is not already present.

And that’s not to say that there won’t be bundles that include both the Network’s code as well as multiple applications - browser addon, litestuff, etc. - much like Linux distros do now. But in Linux there is a clear separation between kernel operations and userspace. A separation that translates perfectly to this conversation regarding the launcher and applications.

Once it’s up and running, the expected workflow for controlling an App’s behavior is - seemingly redundantly - to control it from the App itself. In this case, this means through the addon inside of the browser.

Requiring any further knowledge and effort from the end user is not “better” and not “easier to get started”.

2 Likes

So are you going to code all of these plugins and maintain them? At the end of the day you can be as smart-arse as you want about what you think is better … but how much work will it involve and who is going to do all of it?

  1. I’m not attached to a web-server being part of the launcher (that’s Krishna’s idea), but overall a web-server approach seems a simpler/easier means than plugins for every browser floating around out there. My thought was that this server should be part of the installer, not necessarily the launcher. The ‘E’ in SAFE is for everyone, not just those with a particular browser … unless of course we are building a browser specifically for SAFE to be included in the overall package.
  2. Why add the complication for the user? I’m not in favor of the ‘proxy-server’ method (Krishna’s idea again), but rather a means of handing off the link to the server via registering a protocol handler ‘SAFE:’ for the server.
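A rough sketch of that second option, assuming a local web-server is listening somewhere (the 127.0.0.1:8100 address and the path layout here are made up, not a spec): the registered ‘SAFE:’ protocol handler receives the raw URL and rewrites it into a request against that server.

```python
from urllib.parse import urlparse

# Assumed address of the local web-server the handler hands off to.
LOCAL_SERVER = "http://127.0.0.1:8100"


def handoff_url(safe_url: str) -> str:
    """Rewrite a safe: URL (as delivered by a registered OS protocol
    handler) into a request against the local web-server."""
    parts = urlparse(safe_url)
    if parts.scheme.lower() != "safe":
        raise ValueError("not a safe: URL")
    # Map the SAFE site name and path onto the local server's URL space.
    return f"{LOCAL_SERVER}/{parts.netloc}{parts.path}"


print(handoff_url("safe://blog.smacz/index.html"))
# -> http://127.0.0.1:8100/blog.smacz/index.html
```

The nice property is that the browser itself never needs a plugin - it only needs the OS to know which program handles the ‘SAFE:’ scheme.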

Exactly. I’m not sure whose point you are trying to make, yours or mine, but it feels like your arguments just validate my premises. Perhaps your debate is with Krishna.

Well, nothing has been decided tbh.

There are still two approaches open in the RFC PR: one using the older IPC and one using the REST API approach. I thought of getting your ideas on these approaches to churn out the best one. It is indeed turning out to be great. I have added the packaging as a point to be thought about to my list. Will definitely update once I have a plan for it.

At present, the reward mechanism for developers is not implemented. Once that is in place, the request source can be trusted better, and based on that the persisted data can be validated to be from the same source.

Thanks a lot for providing very helpful insights, @TylerAbeoJordan and @smacz

4 Likes

Important point, I hadn’t thought of this. We need to sit down and consider the different methods of browser support and their implications.

I think it’s just too valuable for ease of use and mass adoption to not support existing browsers, and I can’t see how we can mitigate this - except :slightly_smiling: to a small degree, by ensuring that all SAFE URLs behave like a normal URL when using a browser that doesn’t understand them! I refer of course to the following suggestion (maybe I should not have split the topics):

I think we should revisit this once we’ve thrashed out the options a bit more. Then look at the pros and cons again.

3 Likes

I think that the latest dev update has perhaps addressed this issue with a proxy rather than a local http server.

This means that safe URLs will always be satisfied as far as the browser is concerned and not be sent to a search engine to “help you out”.

1 Like

That wasn’t the point though. We understand that if you have the SAFEnet software installed, then there won’t be an issue. It is when you don’t have it installed and someone sends you a link to something that your particular nation-state deems illegal and you follow it unwittingly … then the meta-data collectors put you on a list.

Ultimately it seems there is nothing we can do in this case - which, IMO, is unfortunate.

Given that the above holds true, it no longer bothers me too much which method is used - except that I hope it supports all browsers by default and that it is flexible and extensible, i.e. that it will itself allow plugins.

I would though also add that it would be nice to use many different TLDs …

plus the “.safenet” is too long in any case IMO. So even though “SAFE:” doesn’t offer any security advantage, I think I still would prefer to go with that.

2 Likes

Problem is clashing with existing TLDs (if “.safe” isn’t already allocated, it seems likely to go soon, since it’s such a nice property in this climate - but didn’t someone say it was allocated?), or choosing something that subsequently gets allocated. Even “.safenet” risks that, I think.

Choosing something silly like “.onion” is some protection, but can we think up anything that we’d like to use? And it still risks attack by this method.

IMO this is another argument for using a real website address, because it would be much cheaper to own than a TLD - which also means we could easily use many rather than just one. So, maybe a :+1: for safenetwork.net “mirrors”.
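To illustrate the “mirrors” idea, here is a sketch of mapping one SAFE site onto several ordinary domains, any one of which could act as a fallback gateway. safenetwork.net comes from the post above; the second domain is a made-up placeholder, as is the path layout.

```python
# Hypothetical list of gateway domains "mirroring" SAFE content.
# safenetwork.net is from the discussion; the second entry is a placeholder.
MIRRORS = ["safenetwork.net", "example-mirror.org"]


def mirror_urls(site: str, path: str = "/") -> list:
    """Map a SAFE site name onto an equivalent URL at every mirror,
    so a link keeps working even if one gateway domain is lost."""
    return [f"https://{mirror}/{site}{path}" for mirror in MIRRORS]


print(mirror_urls("blog.smacz", "/index.html"))
# -> ['https://safenetwork.net/blog.smacz/index.html',
#     'https://example-mirror.org/blog.smacz/index.html']
```

Because ordinary domains are cheap, the list can grow over time - which is exactly the resilience argument against betting everything on a single TLD.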

1 Like

If we use SAFE: then we can use any tld without a technical clash.

1 Like

I would also add that SAFEnet is IMO a new internet and thus we need not be concerned with compatibility with the old … further and more importantly, those with existing brands “domain.tld” would be able to use their same brand with SAFE: — that’s a huge plus.

3 Likes

Yes, making a whole new internet protocol “safe:” instead of “http:” is very important, and that’s the project that I thought I was backing.

2 Likes

I think the point is to provide a way for existing browsers to access the SAFE protocol.

This is not defining the SAFE protocol but providing a bridge for existing browsers to access it - a way to tell the browser plugin that this old-style URL is to be translated into the SAFE protocol.

If a browser refuses to accept a new protocol method, then an alternative is needed. So we need a way to bridge the browser to the SAFE protocol.
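That bridging step boils down to a URL rewrite. A minimal sketch, assuming the “.safenet” TLD from this thread (the function name and the path handling are illustrative only):

```python
from urllib.parse import urlparse

# Reserved TLD from the discussion; the rest of this is an assumption.
BRIDGE_TLD = ".safenet"


def bridge_to_safe(old_url: str) -> str:
    """Rewrite an old-style http URL on the reserved TLD into a safe:// URL,
    the translation a plugin or proxy would perform for a legacy browser."""
    parts = urlparse(old_url)
    host = parts.netloc
    if not host.endswith(BRIDGE_TLD):
        raise ValueError("not a SAFE bridge URL")
    safe_host = host[: -len(BRIDGE_TLD)]
    return f"safe://{safe_host}{parts.path or '/'}"


print(bridge_to_safe("http://blog.smacz.safenet/posts/1"))
# -> safe://blog.smacz/posts/1
```

The browser believes it is fetching an ordinary http URL; the bridge quietly resolves it over the SAFE protocol instead.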

1 Like

This is true.

When the Network is not relying on FF, IE, Chrome, etc., there will be absolutely no reason to use safe: or any TLDs.

This is just because Chrome wanted to piss us off, and they’re doing quite a good job of it.

4 Likes

Read the fourth post in this thread. This problem has been addressed. “SAFE:” works.

2 Likes

Not according to the devs.

Which I argued as well (in the thread that this was split from). I would request that they revisit the matter in light of further Google search results.

Or else explain their reasoning if there’s a gaping hole that we both overlooked @TylerAbeoJordan

4 Likes

Maybe @dirvine would weigh in? Or whoever is ‘in charge’ of this. Would be nice to have a full discussion and resolution of this problem/issue, i.e.
[web-server + custom protocol handler (SAFE:)] --> see post four of this thread.
versus
[plugin + fixed tld (.safenet)]
versus
[custom browser]
versus
other?

2 Likes

Given the 19th of Jan dev update, it seems @Krishna_Kumar is integrating the proxy server idea, which, I believe, will intercept all requests using a particular TLD (e.g. .safenet).
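As I read it, the proxy’s core decision would be a simple host test: requests on the reserved TLD get resolved via the Network, and everything else passes through untouched. A hedged sketch (“.safenet” comes from the thread; the function and its behavior are assumed, not taken from the dev update):

```python
def should_intercept(host: str, reserved_tld: str = "safenet") -> bool:
    """True if the proxy should resolve this host via the SAFE Network
    instead of forwarding the request to the normal internet."""
    # Normalize: lowercase, drop a trailing root dot, split into labels.
    labels = host.lower().rstrip(".").split(".")
    return labels[-1] == reserved_tld


print(should_intercept("blog.smacz.safenet"))  # True: served from SAFE
print(should_intercept("example.com"))         # False: passed through
```

Everything else the proxy does - credentials, fetching, rendering - hangs off that one branch, which is why the choice of TLD matters so much.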

I suppose then that if we want to use a “SAFE:” protocol handler, we may have to build this separately with a separate web-server system … or our own customized browser.

1 Like

If we are focusing on a new browser, then it means we need to build an HTML/CSS parser and then a JavaScript engine or a new scripting engine. I found a blog that walks through building a new toy browser. It is a toy browser, but it could lead to something better.

Or… why not just use servo? It has android support. Servo supports https:// and file://. No doubt that safe:// and ipfs:// can work as well.

Or we could take the Redox approach: everything is a URL.

3 Likes

+1 for servo. That would be freakin’ sweet.

Still not-so-privately pissed about the integration. Bold move Maidsafe (the company)…bold move.

3 Likes