Paradigm shift for applications?

With safe net promising to deliver so many innovative approaches, my mind has started spinning over the implications of this. The more I think about it, the deeper I see the changes occurring.

I think we need to look back at the relatively recent history of Internet growth and the birth of web development. Client/server has become the default approach, with browsers becoming almost like an operating system in their own right; they have sandboxes and their own programming language and with the advent of HTML5, they have started to blur the line between application and website.

However, will what was born out of client/server still be fit for purpose in a world without servers? Even if it makes the transition, will it still be relevant?

Safe net will, for the first time, allow a distributed file store to follow users between different devices. These devices could be of various types and operating systems. While the browser has bridged this device gap thus far, will it persist when an app ceases to be deployed or installed, but is instead just always available, natively, via safe net? IMO, no, it will not.

We only need to look to mobile to see how much apps are still appreciated. What would happen when this line is blurred further, with such apps being accessible securely and directly from safe net? Why would you want to use a clunky browser-based application, when it is so easy to get more feature-rich apps directly? Why use a client/server technology, when the server is no longer needed?

Of course, there are challenges. Deploying different versions of apps for different systems is not a new problem and is a thorny issue. However, the web browser is not the only mechanism to resolve this issue and the alternatives have become better in the interim. Safe net could give these alternatives a fresh perspective.

What am I talking about? Virtualisation. While this is common on the server, it is relatively uncommon on the client. We have the likes of Docker sweeping through the server side, virtualising the OS too. I see safe net paving the way for this on the client. Moreover, I see a fresh future for JVM (Java Virtual Machine) applications, which apply a similar concept at the application level instead of the system level.

Why here, why now? Safe net lets us distribute data like never before. It allows us to access these technologies easily, where before it was difficult at the client end. Where previously the browser was the sane option, the browser becomes less compelling. Why run in a browser when you can break out of it?

There are interesting times ahead and this could be the next big shake up as data starts becoming fully distributed.


JVM was, is, and always will be a regression. Despite that, I really enjoyed your post, and see many of the same trends emerging.

A regression? Until all hardware is open sourced and becomes ubiquitous, there will be a place for virtualisation. I think the question is, what is the thinnest abstraction possible?

IMO, bytecode-compiled languages and their VMs are as close to the native system as possible and thus are a good fit here. Emulating a whole x86 system (on, say, ARM) is much less efficient; running x86 binaries on ARM via an emulator would be dog slow.
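To make the contrast concrete, here is a minimal sketch of what "bytecode-compiled" buys you: this source is compiled once (`javac Portable.java`) to architecture-neutral bytecode, and the resulting `.class` file runs unmodified on x86, ARM, or anything else with a JVM. (The class name and the printed properties are just for illustration.)

```java
// Compiled once with javac, the resulting Portable.class is
// architecture-neutral bytecode: the same file runs on any JVM,
// whatever the host CPU happens to be.
public class Portable {
    public static void main(String[] args) {
        // os.arch reports the *host* architecture the JVM runs on;
        // the bytecode itself never changes between hosts.
        System.out.println("Running on: " + System.getProperty("os.arch"));
        System.out.println("2 + 2 = " + (2 + 2)); // same answer everywhere
    }
}
```

No emulation of a foreign instruction set is happening here; the JVM JIT-compiles the bytecode to the host's native instructions at run time.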

Docker is interesting in terms of slimming down what must be virtualised, but you can’t run an x86 container on an ARM system without recompiling.

You can, however, run JVM classes on both ARM and x86, or any other architecture with VM support, and you get full performance on the chosen platform. I think with ARM being such a big player now, arguably the JVM is more relevant than ever. With the right distribution mechanism it could become the cross-platform target of choice - it certainly has more capability than a web browser.

Now that the JVM is open source and there are many languages with compilers for it, there is a lot of choice out there too and little vendor lock-in.



So you’re really touting the write once, run anywhere aspect more than others? That is a valid point. However, I think it’d be quite easy to just write the thing in Go and then compile it on the target systems, since we’re already OK with being constrained by the JVM (and thus, more or less, constrained by Java).

I do get what you mean about a universal virtual machine though, and appreciate the angle. Do you have other thoughts on how this could be useful?


What are the security implications here, i.e. browser versus JVM? Does the virtualization keep information compartmentalized? Thought-provoking post, thanks.

I believe the client side will continue to have different platforms for many years to come, such as Windows, OS X, Linux, iOS and Android.

However, a distributed virtual machine application platform on top of the SAFE network is something that can be developed much faster, within a year or so. The JVM is one candidate for such a platform. Another candidate is JavaScript, which is highly optimized for performance nowadays.

The reason for such an application platform is to make very large-scale applications practically possible. Think, for example, of a search engine crawler at Google scale. That’s way too big an application to run as a single SAFE application, while on a distributed application platform it’s easy to implement.


I think the WORA aspect is extremely useful when you have widely dispersed platforms. It has always been possible to compile for different targets, but the JVM was born out of the problems with this process. For example, different systems have different libraries/services (including those at the UI layer), which makes this process more complex than just compiling to an intermediary JVM-style language.

The latest breed of languages seems to be trying its best to avoid dependency issues. I gather the package management and compilation process in Golang is relatively simple. Whether that can scale out to support widely ranging platforms is unproven, though. It still requires builds to be maintained for all platforms, which is more effort for the developers (including QA and support).

I think the use case for the abstract virtual machine has been well made. Developing a virtual machine which deals with device/OS issues once, rather than every application development team doing it over and over, would seem most efficient. I can’t envisage this gap being bridged by more similar platforms and better compilers at this stage.

I think it is key to note a few things:

  1. The JVM is very fast these days, with little difference between it and native code. Indeed, immature native compilers often produce code which runs slower (Golang and Rust, for example).

  2. The reference JVM (OpenJDK) is now open source and uses the GPLv2 license. Big names also contribute to the project, including IBM and Google.

  3. The JVM runs compiled byte code, not Java code. Java compiles to this byte code, but so does any language which targets the JVM. Scala, Groovy, etc. all compile down to byte code without a line of Java being written.

  4. The JVM is becoming an open, universal, abstract virtual machine format. Embracing this with a shared, distributed, data store could lead to a great user experience.
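Point 3 can even be checked directly: every JVM class file, whichever compiler produced it (javac, scalac, groovyc, ...), begins with the same magic number, 0xCAFEBABE. This sketch (class name and structure are mine, purely for illustration) reads its own compiled .class file and verifies that header.

```java
import java.io.IOException;
import java.io.InputStream;

// Every valid JVM class file starts with the 4-byte magic number
// 0xCAFEBABE, regardless of the source language that produced it.
public class MagicCheck {
    // True if this class's own bytecode begins with 0xCAFEBABE.
    static boolean hasMagic() {
        try (InputStream in =
                 MagicCheck.class.getResourceAsStream("MagicCheck.class")) {
            byte[] h = new byte[4];
            if (in == null || in.read(h) != 4) return false;
            // Java bytes are signed, so mask to unsigned before comparing.
            return (h[0] & 0xFF) == 0xCA && (h[1] & 0xFF) == 0xFE
                && (h[2] & 0xFF) == 0xBA && (h[3] & 0xFF) == 0xBE;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("starts with 0xCAFEBABE: " + hasMagic());
    }
}
```

The point being: a distribution mechanism like safe net would only ever need to ship this one neutral format, not one binary per OS/architecture pair.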


Both a browser and a JVM are sandboxes. That isn’t to say exploits aren’t possible, but both mechanisms attempt to shield the OS from the application.

It is comparing apples with oranges somewhat though. JavaScript in a browser can do far less than a Java program in a JVM. You could perhaps compare Node.js with Java for features, but then Node.js would have a different set of security concerns compared to Java.

It seems like keeping JavaScript within the browser sandbox is safe, but you trade off functionality.


Interesting. Speed is also a big issue - and not speed itself, but the appearance of speed. The ability to use separate JS and HTML to break up execution into smaller bits as it loads gives the user the ability to interact quickly. The JVM would be faster than JS overall though – excepting perhaps certain math functions?

What about using both JS and the JVM together?
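Using JS and the JVM together is not hypothetical, for what it’s worth: the JVM has shipped a standard scripting API (javax.script) for years, which can host a JavaScript engine inside a Java process (Nashorn is bundled with JDK 8–14; newer JDKs need an engine such as GraalJS added). A minimal sketch, with the caveat that engine availability depends on the JDK in use:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

// Hosting JavaScript inside a JVM process via the standard
// javax.script API. Whether an engine is bundled depends on the JDK.
public class JsOnJvm {
    public static void main(String[] args) throws Exception {
        ScriptEngine js =
            new ScriptEngineManager().getEngineByName("javascript");
        if (js == null) {
            System.out.println("no JavaScript engine bundled with this JDK");
            return;
        }
        // Java and JS share one process and heap -- no IPC, no server.
        System.out.println("js says: " + js.eval("6 * 7"));
    }
}
```

So a hybrid is possible in principle: JS for the quick, incremental UI work described above, with the heavy lifting staying on the JVM side of the same process.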

I don’t know a lot about this stuff anymore; my coding brain melted down a couple of decades ago, although I still do occasional bursts of scripting. I follow some of this on a meta level and I like the sound of this idea, if the speed factor isn’t an issue. I’ve been worried for some time about latency on the SAFE network and how this would affect the user’s experience, so this idea compounds the uncertainty with respect to JVM apps. But perhaps it could be mitigated if good approaches to the problem (speed perception) are found.



What about WebAssembly? It’s sandboxed, cross-platform and fast.
Furthermore, you can use the language you want by compiling it to WebAssembly.