Assimilating the World Wide Web with the SAFE network

Here’s some speculation about the really big, long-term picture. It’s almost a delusions-of-grandeur scenario, but who knows! Maybe it can be done.

Imagine in the future that the SAFE network has become really big and starts to assimilate the World Wide Web. Instead of http(s)://www.google.com the SAFE version is safe://www.google.com. In this way the whole Web can be assimilated during a transition period where more and more websites are moved into the SAFE network.

This can be accomplished by a distributed web server layer on top of the SAFE network. Instead of apps only running locally, there is a separation between code running locally in the browsers/clients and code running on the distributed web servers. This makes porting existing (WWW) websites easy. And it’s also really useful for the Internet of Things (IoT).

10 Likes

And the route into that might initially be storage of big data - making that trivial will be key. Data first, then website content in parallel, giving users a choice of SAFE or clearnet; then perhaps, much later, a total commit to SAFE - at least for newer sites.

2 Likes

But SAFE sites still could not connect to clearnet sites, so I fail to see the point. And could you please define "distributed web server layer"? How do you intend to ensure security while pulling this all off? One of the reasons SAFE cannot connect to the clearnet is security: data would need to be decrypted in order to be uploaded to a server, and servers are inherently vulnerable not only to hacking but also to geopolitical law.

Ken Wilber recently had some, in my opinion, interesting criticism of the net and Google. It’s in the article Trump and Post Truth.

The decentralized web servers can run in farming software. So for example, if a user visits safe://www.apple.com, the website itself is centrally owned by Apple Inc., but the execution of the server code is pseudorandomly (and dynamically) distributed among the farmers. So Apple has no control over where on the SAFE network its data is stored or where its server code is run.

Some of the benefits of a decentralized server layer:

  • No centralized control over DNS needed such as ICANN.
  • No centralized control over domain name renewals such as GoDaddy.
  • No centralized authorities for SSL certificates needed.
  • No government can shut down or censor the web servers.

The SAFE web is totally separate from the ordinary web. Website owners have to port their content and code to the SAFE network.

The security model for the decentralized web servers is similar to how the SAFE network stores data. Just as data is stored at multiple locations, the decentralized web server layer executes code on several pseudorandomly selected farmers with automatic consensus checks.
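To make that concrete, here is a toy sketch of what pseudorandom farmer selection plus a consensus check could look like. The Farmer type, the XOR-distance selection and k=3 are my illustrative assumptions, not anything from MaidSafe’s actual design:

```python
import hashlib
import os
from collections import Counter
from dataclasses import dataclass
from typing import Callable

@dataclass
class Farmer:
    node_id: bytes                        # address in the network's XOR space
    run: Callable[[bytes, bytes], bytes]  # executes a code snippet on an input

def xor_distance(a: bytes, b: bytes) -> int:
    return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")

def execute_with_consensus(snippet: bytes, request: bytes, farmers, k: int = 3) -> bytes:
    """Run the same snippet on k pseudorandomly selected farmers and
    accept the result only if a majority of them agree."""
    # A fresh nonce means each invocation lands on a different, unpredictable
    # set of farmers, so nobody can camp on a particular snippet.
    seed = hashlib.sha256(snippet + request + os.urandom(16)).digest()
    chosen = sorted(farmers, key=lambda f: xor_distance(f.node_id, seed))[:k]

    results = [f.run(snippet, request) for f in chosen]

    # Majority vote: a single hacked farming node cannot forge the outcome.
    result, votes = Counter(results).most_common(1)[0]
    if votes > k // 2:
        return result
    raise RuntimeError("no consensus among the selected farmers; result rejected")
```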

Managing SAFE domain names might be tricky if there is no connection to the ordinary Web. One solution, if it’s possible to implement in practice, is to allow multiple domain name registrations. So for example anyone can register safe://www.apple.com (the naked domain apple.com) and the SAFE network automatically ranks the multiple registrations. The real Apple website will quickly become #1 in the ranking list and will be used by the distributed web server layer.
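As a sketch of how multiple registrations per name could be represented, assuming some popularity score whose exact computation is the open question here (the field names are placeholders, not a proposal):

```python
from dataclasses import dataclass, field

@dataclass
class Registration:
    owner_key: bytes     # public key of whoever registered this name
    site_address: bytes  # where the site's content/code lives on the network
    score: float = 0.0   # popularity signal; how it's measured is the open question

@dataclass
class NameRecord:
    name: str  # e.g. "apple.com"
    registrations: list = field(default_factory=list)

    def register(self, reg: Registration) -> None:
        # Anyone may register; nothing is rejected as a duplicate.
        self.registrations.append(reg)

    def resolve(self) -> Registration:
        # The #1-ranked entry is what safe://www.apple.com resolves to.
        return max(self.registrations, key=lambda r: r.score)
```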

It is worth noting that powerful forces are trying to push the clearnet towards serverless computing. See AWS Lambda and Google Cloud Functions for details.

IMO, this is great for safe net on a few levels:

  1. Amazon and Google are still managing servers for these products. It is only serverless from the consumer perspective. Safe net can build off the back of this marketing push to present ‘real’ serverless computing, i.e. self-managing infrastructure which provides these services without needing the corporate middleman.

  2. Amazon and Google pushing this technology eases people into the concept. They have deep pockets and strong influence in the IT industry. They can do a lot of heavy lifting for us.

8 Likes

I’d be careful about going down that confusing route. ‘Serverless’ is just the latest level of abstraction in the cloud, the confusing industry jargon term for ‘Function-as-a-Service’. It is ‘serverless’ from a developer’s point of view because you don’t need to spin up a VM or worry about configuration. You just upload a zipfile containing your JavaScript function and it runs. Very useful for IoT, microservices and event-driven processing, and for running simple programs quickly and cheaply, but it is not designed for anything else, particularly data storage, hosting websites, etc.
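For anyone who hasn’t seen FaaS up close, something like the sketch below is roughly the entire ‘application’ a developer deploys (Lambda also accepts Python, which I’ll use here; the greeting logic is obviously just a placeholder). Everything underneath the function (servers, scaling, patching) is Amazon’s problem:

```python
import json

# A complete AWS Lambda "application": no server, VM or web framework to
# manage. You upload this file and AWS runs the handler once per event.
def lambda_handler(event, context):
    # 'event' carries the trigger payload (an HTTP request, an IoT message...).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```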

It’s a very different thing from the decentralised model that MaidSafe and others are pursuing. In fact it’s arguably even more centralising than the cloud currently is, because you get locked into using other services by AWS, Google etc. If anything it will mean more physical servers, not fewer.

Annoyingly, since behemoths like Amazon and Google (also Microsoft and IBM) have latched onto the term it’s likely that ‘serverless’ will come to mean what they are doing (i.e. not really serverless at all) rather than a network run without central servers. So to avoid confusion I would steer clear of using the term to describe SAFE.

4 Likes

‘Call that a knife?! THIS is a knife!’

Sometimes you just have to build on the marketing of others, even if it isn’t a perfect fit, IMO.

5 Likes

On the SAFE network what looks like a local drive with private directories to the user is in reality encrypted chunks spread all across the network. Similarly, the idea with the decentralized web server layer is that what looks like a single web server to the website owner is in reality code run by farmers all over the network.

So instead of using the term serverless, it’s more like virtual servers that appear centralized to the developers while in reality being completely decentralized chunks of code, dynamically and pseudorandomly spread out among the farmers. The same code snippet can at one time be executed by farmers A, B and C, and the next time the same code is run by farmers D, E and F. It’s impossible to predict beforehand which farmers will run a particular code snippet.

But if it’s using a safe:// address, it’s already stored on SAFE and doesn’t connect to the clearnet. And we can already create websites on SAFE, so what’s the advantage of creating “decentralized web servers” or whatever? Why not have Apple create a website and upload it to SAFE like everyone else? And again, how does this connect to the clearnet?

But this would still require content to be retrieved from the clearnet and IPs to be exposed at some point, which would be a security breach.

The SAFE web servers don’t need the ordinary Web (neither clearnet nor darknet). The decentralized web server layer is completely separate from the traditional World Wide Web.

[quote=“Blindsite2k, post:11, topic:12788”]
But this would still require content to be retrieved from the clearnet and IPs to be exposed at some point, which would be a security breach.
[/quote]

Both the content and the code are on the SAFE network. As for your earlier question, why wouldn’t Apple just use the existing web functionality on the SAFE network? As a use case (basically a worst-case scenario), let’s say that Google wants to port its search engine to the SAFE network. How would Google be able to port its entire search engine to the SAFE network as a single local app? With the decentralized web server layer, on the other hand, it’s doable. Sure, it’s an absolutely massive amount of data and code to port, but they can start with a slim version and gradually build out the SAFE version of their search engine.

Even for small websites a distributed web server layer is useful. For example, for security reasons a website may not want to run certain code in local apps, since there it can be hacked and messed with. The distributed web server layer allows code snippets to be executed anonymously by scrubbing the information about which website the code belongs to before each snippet is sent to pseudorandomly selected farmers to run. And each snippet is run by, say, 3 separate farmers, and for security reasons there has to be consensus among those farmers, to prevent hacked farming software from corrupting the result. The cost of this computation redundancy can be compared to the cost of data storage redundancy (such as storing each chunk on 4 separate farmers).
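A rough sketch of that scrubbing step, which would sit in front of the consensus execution sketched earlier. The opaque job ticket is just my assumption for how a result could find its way back without the farmers learning which site the code belongs to:

```python
import hashlib
import os

def scrub_and_dispatch(site_id: bytes, snippet: bytes, payload: bytes, dispatch):
    """Strip any trace of which website a snippet belongs to before
    handing it to the farmers that will execute it."""
    # The farmers see only the code, its input and an opaque job ticket.
    job_ticket = hashlib.sha256(site_id + os.urandom(16)).digest()
    anonymous_job = {
        "ticket": job_ticket,  # lets the dispatching layer route the answer back
        "code": snippet,       # no site name, no owner key, no safe:// URL
        "input": payload,
    }
    return dispatch(anonymous_job)
```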

1 Like

How would this server layer be different to SAFE? You’ve yet to fully respond to that question. This request implies that there is some sort of disadvantage or difficulty in a direct transfer to SAFE. What are they? Would this server layer be free to users? How would abuse be prevented? Will farmers go without compensation? Will XOR addressing be in play?

So many questions. Please be as detailed as possible. Structure it similar to an RFC if possible. Explain the exact mechanics, its benefits and drawbacks. Then we can have a productive discussion. 13 posts into this and I’ve yet to see anything concrete. Try to consider it from the perspective of someone who isn’t in your head. :wink:

You realize that the search engine Google is just an algorithm, right? They index a great portion of the web and cache sites, but this would be of little benefit on SAFE. The index is relatively small, and the cache would be wasteful regardless of whether they used your server model or SAFE.

Take something as simple as a cron job. How would you implement that on the SAFE network? Or even a simple search engine crawler - how would you develop that as a single SAFE app?

And what prevents hackers from messing with the code if all the logic is run in local apps?

What stops hackers from hijacking your machine or serving you malware? Diligent security practice.

No need. The search engine will be retrieved when the user loads up the page safe://searchfoo. Since SAFE prevents tampering with files on the network you can be sure you were served a clean version of the search page/engine as the dev intended. Once you type in your query it will search its associated index and serve you the links. Done! Should be feasible. Though an integrated search engine/index would be far superior. @Tim87 suggested a data type that would work on the routing layer. Indexing as data is stored on the network. I hope they revisit that idea a few months from now.
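Something like this minimal sketch, where fetch stands in for whatever SAFE client API retrieves immutable data from the network (the safe://searchfoo/index address and the term-to-links index layout are made up for illustration):

```python
def search(query: str, fetch):
    """Client-side search: the page ships with (a pointer to) its index,
    and the query is evaluated locally rather than on any server."""
    # 'fetch' is a placeholder for the SAFE client API; assume it returns
    # a dict mapping each term to a list of links.
    index = fetch("safe://searchfoo/index")
    hits = [set(index.get(term, [])) for term in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []
```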

I meant that hackers can access the code directly on their own local computers. Let’s say that an app stores metadata on the SAFE network. Instead of storing the correct metadata, the hacker changes the code so that the metadata becomes corrupted or deleted. And if the metadata is used globally by all users, the hacker can destroy the entire app for everyone!

In a client-server model, such sensitive storage operations are handled by the server. In the current SAFE app model all operations are moved into the local app and thereby become vulnerable to attacks directly in the hackers’ local computers. Seems like a risky and shaky model to me.

I’m sorry, but that makes no sense to me. Whatever the hacker manipulates on his local machine does nothing to affect other users. The apps are retrieved from the network and validated using cryptographic means. If you believe that the app can be changed on the hacker’s machine and those changes then propagated throughout the network, then I think you may have an inaccurate understanding.

I don’t know the details of how it’s implemented but even with an app loader I can’t see how a hacker can be prevented from accessing the code once it’s loaded into the memory of the local computer.

It’s not prevented. It just doesn’t matter. He won’t have permission on the network to change the app itself. The app is the same as any data stored on the network: it’s immutable without permission. Without the developer’s private key nothing can be done. The data you can manipulate is your own, with regard to your personal profile.

For instance, if I upload a video onto the network and share it with random people, a user will be able to download it onto their machine and change a few frames, but he won’t be able to re-upload that changed version of the video in place of my own video (the original).

People accessing MY link will always receive MY video. Without this feature SAFEnet would not work as intended. The network acts as an autonomous high security administrator. When it comes to data security there is nothing like it. :sunglasses:
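The principle can be illustrated with ordinary signature verification; here Ed25519 via the PyNaCl library, purely as an analogy, since SAFE has its own data types and signing scheme:

```python
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# The developer/uploader signs the data when storing it on the network.
dev_key = SigningKey.generate()
original = b"the frames of my video"
signed = dev_key.sign(original)

# The network keeps the data alongside the owner's public key and only
# accepts an update that verifies against it.
verify_key = dev_key.verify_key

tampered = b"the frames with a few changed by a hacker"
try:
    # A hacker can change his local copy, but cannot produce a valid
    # signature for the changed bytes without the owner's private key.
    verify_key.verify(tampered, signed.signature)
except BadSignatureError:
    print("rejected: not signed by the owner, so the original stays in place")
```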

2 Likes

Why would we want virtual servers in the first place? Doesn’t this just mean more censorship and centralization? We’re creating a whole new internet to get rid of servers, and now you’re suggesting the creation of virtual ones. I say if Google can’t adapt to SAFE they should go bankrupt competing with the new paradigm.

3 Likes