Introducing the SAFE API library for web-apps + ReactJS helpers + demo app

… which I should’ve mentioned is the better way to do it :scream_cat:

Thanks everyone - you’ve helped a lot - much clearer to me now :slight_smile:

Service Workers can alleviate much of that. Basically, a website can take up residence in your browser. This is how things like Google Drive can work while you’re offline.

I’m not sure this addresses the issue; I think it provides a similar effect to “offline first” applications (which allow apps to continue to function if the network is unavailable). This means replicating the data locally and syncing any changes when the network becomes available.
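
For illustration, an offline-first sync queue can be as simple as this sketch (the endpoint and storage key are made up):

// Queue changes locally while offline, flush when connectivity returns.
var queue = JSON.parse(localStorage.getItem('pending') || '[]'),
    sending = false;

function save(change) {
    queue.push(change);
    localStorage.setItem('pending', JSON.stringify(queue));
    flush();
}

function flush() {
    if (sending || !navigator.onLine || queue.length === 0) return;
    sending = true;
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'http://awesome.safenet/sync'); // made-up endpoint
    xhr.onload = () => {
        sending = false;
        queue.shift(); // delivered; drop it and send the next one
        localStorage.setItem('pending', JSON.stringify(queue));
        flush();
    };
    xhr.send(JSON.stringify(queue[0]));
}

window.addEventListener('online', flush);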

That’s one thing you can do with it, but nobody forces you to go that far. You can just use it to cache some of the app code to save a couple roundtrips on page load.
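
For example, a minimal Service Worker that just caches the app shell might look like this (file names invented; a sketch rather than a recipe):

// sw.js: pre-cache the app shell on install, then answer fetches
// from the cache, falling back to the network.
var CACHE = 'app-shell-v1',
    ASSETS = ['/', '/app.js', '/style.css']; // made-up asset list

self.addEventListener('install', (event) => {
    event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(ASSETS))
    );
});

self.addEventListener('fetch', (event) => {
    event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
});

The page opts in with a single navigator.serviceWorker.register('/sw.js') call.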

1 Like

That doesn’t really address the issue. Caching is good, but the first load experience is important.

When you load a “regular” page, you load code and data together.

When you load a dynamic page, it first loads the code, then it loads the data. You don’t load more; it’s just not in one monolithic document. If you do it well, the loading of the data can be started even before all of the code is loaded.

In reality, not even regular pages are monolithic: first you load the HTML, then you load the referenced images, style sheets and scripts; it’s really not that different. Well, pipelining does affect it a lot, and HTTP/2 is for sure a different beast. But then SAFE gives you the ultimate pipelining: you retrieve your content from thousands of “servers” at the same time!

Well, with higher latency… But a sluggish app is for sure more annoying than one that took 5 seconds to load instead of just 2, so the caching can help.

What is different:

  • we expect data access latency to be significantly higher on SAFEnetwork than on the current internet
  • dynamic websites hosted on conventional servers can do their data access at the same location as the html, which reduces latency considerably
  • this is more of an issue for data/content than for html/css, because the latter are referenced in the HTML head and so start loading early, without requiring all javascript to be loaded and initialised first (whereas data referenced in a URL query string can only be accessed by javascript in this implementation; see the sketch below)
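
To illustrate that last point, here’s a hypothetical sketch (URL and parameter names invented): the browser’s preloader never sees a data reference in the query string, so only script can act on it.

<script>
    // Hypothetical: for a URL like http://awesome.safenet/page?post=42,
    // the browser won't prefetch the referenced data; script must ask.
    var postId = new URLSearchParams(document.location.search).get('post'),
        xhr = new XMLHttpRequest();
    xhr.onreadystatechange = () => {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // hand xhr.responseText off to the app here
        }
    };
    xhr.open('GET', 'http://awesome.safenet/data/' + postId);
    xhr.send(null);
</script>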

… which is why we’ll need to cache as much of the app as we can.

I believe you’re referring to the fact that additional requests can reuse the same connection, and that’s correct. SAFE, by its architecture, can deliver content in a highly parallel fashion, which may or may not offset the increased delay. We’ll see.

This is really just a design decision. The main app file could be a stub of just a few lines, which would bootstrap the loading of the actual code, but would also start loading the data.

Agreed, we don’t know what the performance will be, only that these are potential issues, and that caching can’t address the first load experience.

I am not sure this is necessarily a bad thing. We will know for sure as the network approaches performance testing.

If you have monolithic sites, you can’t render the page until you have both the html and the data, as produced sequentially by the server side code. However, both can be downloaded in parallel if you are using javascript, as long as you have the code. You get parallelism that you wouldn’t otherwise be getting.

Also, the fact that the code, data and html may be distant shouldn’t really matter, unless your code is very chatty with the data source. If you can get all the data you need in a handful of requests, it should be more about throughput than latency. It is almost always good practice to be less chatty with the data source anyway, even with monolithic pages.
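
A contrived example of the difference (endpoint and field names invented):

// Chatty: three round trips, each paying the full latency...
// GET /data/user, then GET /data/posts, then GET /data/comments.

// Less chatty: one request returning everything the page needs.
var xhr = new XMLHttpRequest();
xhr.onload = () => {
    var bundle = JSON.parse(xhr.responseText);
    // bundle.user, bundle.posts, bundle.comments (invented fields)
};
xhr.open('GET', 'http://awesome.safenet/data/page-bundle');
xhr.send(null);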

I suspect it will come down to how high the latency can get before it becomes annoying. There may be different optimisations to make regarding greater vs fewer client-side includes, for example. When a single host is serving all data, it makes sense to combine files to reduce requests, but the opposite may be desirable on the SAFE network.
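
If splitting does win on SAFE, the client could fan requests out, along these lines (chunk names invented):

// Fetch several small chunks in parallel; each may come from
// different nodes, so the downloads overlap instead of queueing.
Promise.all([
    fetch('http://awesome.safenet/chunks/a.js').then((r) => r.text()),
    fetch('http://awesome.safenet/chunks/b.js').then((r) => r.text()),
    fetch('http://awesome.safenet/chunks/c.js').then((r) => r.text())
]).then((parts) => {
    // assemble or evaluate the parts once they have all arrived
});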

If bigger works better, I can imagine people will start using data URIs more. One could pack all resources into a single gzip-compressed JSON file, or into a script tag inside the HTML itself, and there it is: a self-contained app in one request. Everything else is just loading the data.
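
Two flavours of that, for illustration (the JSON contents are invented; the image is the classic 1x1 transparent GIF):

<!-- an image inlined as a data URI, costing no extra request: -->
<img alt="" src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7">

<!-- or ship the data inside the HTML itself: -->
<script type="application/json" id="bundled-data">
    {"posts": [], "settings": {}}
</script>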

I wonder if caching in structured data might be a good way around this…

So: messaging already relies on this, since a single SD can be rewritten a lot. Say you have a homepage as an SD; it can reference posts, but those posts could also be written into the SD itself. Then one GET fetches the lot, reducing latency, request count, etc.

BUTTTT the original post’s owner doesn’t receive the GET. A problem? Probably. (And there’s nothing to stop this happening with public data in any scenario.)

SooOOooo: maybe reputable sites will cache the page (with the most up-to-date posts etc.) in the SD, but then also trigger GET requests for the actual content as well… These GETs won’t necessarily update the page; they are cursory requests whose only purpose is to reward the viewing, etc. (I suppose showing excerpts of a post would raise the same questions.)

It’s not the most efficient way of doing this. But it could be a way around long delays as a page is loaded with content from all over SAFE…
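
A rough sketch of that scheme, with every name and field invented:

// One GET fetched the SD, with the posts embedded in it:
var page = {
    posts: [
        { text: 'First post…', sourceUrl: 'http://blog.safenet/posts/1' },
        { text: 'Second post…', sourceUrl: 'http://blog.safenet/posts/2' }
    ]
};

// Render straight from the embedded copies, then fire the cursory
// GETs; their responses are ignored, they exist only so the original
// posts' owners are credited with the views.
page.posts.forEach((post) => {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', post.sourceUrl);
    xhr.send(null);
});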

@Traktion:

If you have monolithic sites, you can’t render the page until you have both the html and the data, as produced sequentially by the server side code. However, both can be downloaded in parallel if you are using javascript, as long as you have the code. You get parallelism that you wouldn’t otherwise be getting.

I proposed a design that would allow this, but as it stands, the data can’t be requested until the html has loaded and run the javascript that requests the data. Other data / CSS etc. can indeed be loaded in parallel, but that is not much different from the current internet, because the HTTP protocol is asynchronous (so all embedded file requests normally do proceed in parallel).

My reason for devising a way to allow the data request to proceed in parallel with the html (and other embedded files) was to maximise the advantage of SAFEnetwork’s parallelism, to mitigate its higher latency.

The impact of these issues will I agree vary depending on how the websites are built, and of course on the actual performance of SAFEnetwork. It will be very interesting to see :slight_smile:

SAFE web apps would be better off as immutable data; anything else is too risky compared to the “zero risk, because immutable” property of, well, immutability.

In fact, SD should not be used anywhere where it can be avoided; it’s just there to give us that tiny (but necessary) flexibility that we need to be able to live in a world of steel and concrete and unchangeness. Let’s use it responsibly. SAGE MODE OFF :kissing_cat:

@joshuef:

It’s not the most efficient way of doing this. But it could be a way around long delays as a page is loaded with content from all over SAFE…

I’ve been thinking of similar structures (kept me awake last night :slight_smile:) that could form part of a SAFEnetwork framework, but not for performance reasons. I agree that’s a neat idea, although there’s no way to measure popularity AFAIK.

I still see the main issue as first load response time, because accessing subsequent pages on the same site will indeed benefit from caching, or as others suggested, just loading data and not the html/css/frameworks.

I wonder if something like this would solve that. It’s just plain HTML5:

<!doctype html>
<meta charset=utf-8>
<title>My Lovely App :heart_eyes_cat:</title>
<!-- the app code loads asynchronously... -->
<script src="http://awesome.safenet/app.js" async></script>
<script>
    // ...while this inline stub requests the data in parallel;
    // app.js can pick the result up from `data` once it arrives
    var xhr = new XMLHttpRequest(),
        id = document.location.hash.substr(1),
        data = null;
    xhr.onreadystatechange = () => {
        if (xhr.readyState === 4 && xhr.status === 200) {
            data = xhr.responseText;
        }
    };
    xhr.open('GET', 'http://awesome.safenet/data/' + id);
    xhr.send(null);
</script>

Please note the cat emoji in the title. It’s there, it’s just hard to notice :scream_cat:

1 Like

SDs cannot be cached, precisely because they are mutable.

There is literally no way for collaborative apps to be created without using SDs. I’d say they will play a major role in most apps.

5 Likes

Which is exactly what I meant. Whenever stuff must be editable, SDs are necessary. Otherwise, let’s stick with immutables. Especially for code.

3 Likes