You can’t serve dynamic content directly from Safe; you still need a classic web server for that. Maybe you’ll be able to connect to a classic server through Safe routing, but as far as I know we still need the normal web for a centralized dynamic server. Safe will be used to store decentralized data. Applications run locally and only interface with that data. I might be missing something, though, and can’t wait to see the first applications that use Safe in creative ways.
As it stands, the SAFE network is only serving up static bits and bytes.
A lot of the server-side processing will become defunct. SAFE is built as a better alternative to the client-server architecture, so generating a PHP page as you know it will not carry over to a decentralized website/application.
Eventually, in the very far future, once vaults can provide distributed computations as well, you’ll be able to tie those computations into sites to create truly dynamic content.
(There is a third option: having a dedicated server retrieve files from the SAFE network, alter them, reupload them for individual clients, and redirect the clients there - but that’s basically the server architecture of today, and we’re desperately trying to get away from that.)
I believe it’s possible with server-side uploading to a NoSQL DB served on SAFE …
Thanks for the replies.

[quote="DavidMtl, post:2, topic:7440"]
You can’t serve dynamic content directly from Safe
[/quote]

As I suspected from reading other posts on the forum.

[quote="DavidMtl, post:2, topic:7440"]
Safe will be used to store decentralized data. Applications are run locally and only interface with that data.
[/quote]

Great, it’s nice to have a little more clarity on this. As the network stands, it may prove useful for dynamic sites operating on the old web to use Safe as data storage / a backend. Or more simply:

[quote="Powersign, post:3, topic:7440"]
So I guess what I’m asking is, what are the options for the network in the future for serving dynamic content? Does structured data help solve part of the problem? And what is this about distributed computations?
[/quote]
The problem with that is you’re still bottlenecking the entire website/application.
One of the coolest features of SAFE is that it provides near-infinite scalability for applications, because SAFE will offer simple key/value store operations. This works because applications SHOULD be built completely decentralized, just as the network itself is.
That means that you’ll see a bunch of negatives caused by forcing decentralized SAFE clients to use the defunct server-client interface. For example, you’ll be stuck with modern scaling problems (you need more and more servers to keep up with demand). Additionally, the network overhead seen by the client will be too much compared to the speed of serving up an actually decentralized app.
It also seems more reasonable to just write your server-processing application (let’s call it a video encoder) as a client-side library, so the client does the work of encoding, and then verify its work (zk-SNARKs!).
I think a lot of this depends on the use case. But this is a new paradigm of application building that shouldn’t need a server to dynamically insert your username into the page if you’re logged in. That’s archaic.
I answered some of that above, but essentially it’s a new way of serving up data, and you’re not requesting it from a specific server (as you would when watching Netflix). Instead, files are stored in multiple places, and the network is trying to serve them up to you as fast as it can. This includes intermediate caching by vaults.
Structured data is a ballgame I’m not fully prepared for, although this might help.
Distributed computations = I ask you to solve a really hard math problem. You solve it and send back the solution with a special verification key (that you created as you solved the math problem). I can then verify your solution against the verification key. I used only a tiny amount of processing power myself to merely verify your solution, while you did the bulk of the work actually solving the problem.
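The solve-expensively, verify-cheaply asymmetry can be sketched with a toy hash puzzle. This is only a stand-in for real verifiable-computation schemes (zk-SNARKs prove arbitrary computations, not hash searches); the problem string, the difficulty knob, and the function names are all made up for illustration:

```python
import hashlib

DIFFICULTY = 4  # leading hex zeros required; a toy setting, not a SAFE parameter

def solve(problem: bytes) -> int:
    """The expensive part: the worker grinds through nonces until the hash
    meets the target. The winning nonce acts as the 'verification key'."""
    nonce = 0
    while True:
        digest = hashlib.sha256(problem + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(problem: bytes, nonce: int) -> bool:
    """The cheap part: a single hash checks the worker's solution."""
    digest = hashlib.sha256(problem + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = solve(b"really hard math problem")
print(verify(b"really hard math problem", nonce))  # → True
```

`solve` does tens of thousands of hashes on average at this difficulty; `verify` does exactly one - that cost gap is the whole point.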
This project aims to solve this issue - http://safepress.io/
Would be awesome if we could get things like this to work on .safenet sites :
The web is only as good as the current generation of tools for building it. The right tools come from the right goals. As I see it, the web will eventually work by fully reactive principles. This post lays out a map of unsolved problems and discusses possible approaches to them.
The modern web does a good job of bringing you live, real-time web applications. Or does it? This post looks at what is missing from current state-of-the-art web architectures, where they should be improved, and what tools we have at hand for that.
Server not required
The Quest for the Holy Grail of the Web
This is the high-level overview of the problem as I see it:
With a native SAFE browser, is this guy on the right path?
How immutability, functional programming, databases and reactivity change front-end
Continuing the discussion from NEO4J databases and other noSQL databases:
Continuing the discussion from SAFE Network + database-driven websites:
Continuing the discussion from APPs rewarding Content:
Continuing the discussion from What current NoSQL DB is most similar to what the SAFE API will look like?:
I may be wrong, but sometimes it will be necessary to build apps in a frontend/backend way. For example: a search engine like Google, an online classroom with monthly paid subscriptions (e.g. Code School), or a two-factor authentication engine (with SMS confirmation).
What I can think to solve this problem is:
- MPID messaging: your app can communicate with your server using Safe messages instead of traditional TCP/HTTP connections - no one will be able to identify the server’s or the client’s IP with this solution;
- StructuredData as communication: instead of MPID, you can create a communication channel between your client and server by using SDs - it’s a lazy choice;
- Safe decentralized computing: in the future, I believe, Safe will implement some kind of decentralized computing, but I think it will be for Map/Reduce operations rather than general computing;
- Develop your own decentralized computing network: if you need a back end but you don’t want to rely on a centralized server, you can also create your own decentralized computing network: create an altcoin + let users mine those coins + make the mining process actually be the solving of your backend workload. I believe the SafeSearch project will rely on that. But in most cases, this is too much work for too little gain;
Of course, I’m not recommending that you develop your apps with a back end in mind - you would be underusing the SAFE network if you did. But if you really need to, for example to build a platform that manages paid subscriptions and sell it as a service, you have the options above.
Also, I’m not a Safe expert, so I may be wrong in my assertions.
UPDATE: I removed the “real-time” word from “StructuredData as a real-time communication” to avoid confusion.
I believe “better” is a highly subjective concept. For many things, SAFE is definitely worse (including anything that requires low-latency communication). For other things, it’s vastly superior.
Define “real-time”. In XOR space, stuff gets bounced around the world first. That introduces a lot of latency.
I should have put this in quotation marks. I meant “real-time” as in a chat app. We are talking in a website context, not about low-latency apps like VoIP. Besides the expected high latency of a network like this, an SD messaging solution would be based on polling, which is even slower.
I could never figure out if there’s a way to get notified if a certain piece of Structured Data gets modified (pub/sub type of thing); I wouldn’t think so, but I may be wrong. Then we’re back to polling, and that adds up to a decent overhead if we have to do it for multiple pieces of content.
A “SD messaging” method applied to a search engine app could be implemented like this:
- we have a queue named “queue_n”, where “n” is a sequential number - all users’ queries go there;
- when a user wants to search for “cat videos”, he creates a new SD named “queue_m”, where m is n+1;
- inside this SD, there’s a random unique number (e.g. “sha1(microtime())”) and the message - in this case a keyword like “cat videos”;
- the search engine is constantly looking for the SD numbered one past the last parsed n (at 100 ms intervals, for example);
- when the new SD is created, the search engine processes it and saves its results in a new SD named “results_unique_number”;
- the client app polls for an SD with this unique_number at 100 ms intervals (for example);
You can use the half-interval method to search for the “n” value, your server can save the last parsed “n” value in a public SD to reduce client lookups, you can remove both client and server SDs after some time, you can encrypt the data content, etc.
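For the half-interval lookup, a minimal sketch - the `exists` callback is a stand-in for a SAFE GET on the SD named “queue_n”; nothing here is real SAFE API:

```python
def find_latest(exists, hi=1):
    """Largest n for which exists(n) is True, assuming queue_1..queue_n
    exist and nothing beyond. exists() stands in for a GET on 'queue_n'."""
    # Exponential probe: double until we overshoot the end of the queue.
    while exists(hi):
        hi *= 2
    lo = hi // 2  # last probe that hit (0 if the queue is empty)
    # Half-interval search between the last hit and the first miss.
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if exists(mid):
            lo = mid
        else:
            hi = mid
    return lo

store = {f"queue_{i}" for i in range(1, 1338)}  # pretend queue_1..queue_1337 exist
print(find_latest(lambda n: f"queue_{n}" in store))  # → 1337
```

This costs roughly 2·log₂(n) lookups instead of n, which matters when every lookup is a network round trip.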
Of course, this is not the-best-solution-for-all-kinds-of-communication-scenarios (I wouldn’t use it in an MMORPG), and I would myself prefer Safe messaging over this, but it can work well in some simpler scenarios, when building a Safe message server isn’t worthwhile and latency isn’t a problem.
EDIT: Just to clarify, the above example is just an example of intercommunication using SDs. You can use SDs to communicate between client and server in any way, as an IPC pipe (writable SD pipe vs readable SD pipe).
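Mocking the SDs with a plain dict, the whole request/response flow above looks roughly like this. All names are illustrative; `sd_store` stands in for the network, and a real client would sleep ~100 ms between polls while the engine runs independently, instead of being driven inline:

```python
import hashlib
import time

sd_store = {}  # stand-in for SAFE StructuredData: name -> content

def client_search(n, keyword):
    """Client: write the query as queue_{n+1}, then poll for the result SD."""
    uid = hashlib.sha1(str(time.time()).encode()).hexdigest()
    sd_store[f"queue_{n + 1}"] = {"uid": uid, "query": keyword}
    while f"results_{uid}" not in sd_store:
        server_step(n + 1)  # inlined for the demo; a real engine polls on its own
    return sd_store[f"results_{uid}"]

def server_step(n):
    """Search engine: if queue_n exists, process it and publish results_{uid}."""
    query = sd_store.get(f"queue_{n}")
    if query is not None:
        sd_store[f"results_{query['uid']}"] = f"results for '{query['query']}'"

print(client_search(0, "cat videos"))  # → results for 'cat videos'
```

The uid in the result SD name is what lets many clients share one queue without reading each other’s answers (encrypting the contents would go on top of this).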
[quote]
I could never figure out if there’s a way to get notified if a certain piece of Structured Data gets modified (pub/sub type of thing); I wouldn’t think so, but I may be wrong. Then we’re back to polling, and that adds up to a decent overhead if we have to do it for multiple pieces of content.
[/quote]
I can’t say for sure, but I recall a conversation with David Irvine in which he said notifications of changes in shared data would be supported. My memory is vague, but that’s my impression.
That’s not correct.
You can have a direct UDP connection between 2 or more people. It would be the exact same speed as a direct UDP connection between the same 2 people on the clearnet. There’s really no extra overhead.
Nothing within the SAFE design forces slow internet. In fact, in nearly all cases, your download speeds for everything (on the network) will be noticeably faster than the clearnet, because of the way you request one of several copies of a chunk from different vaults. The distributed nature of the routing system also means that files tend to be geographically distributed (based on network latencies).
I’ll agree and say that if you tried to move a static gaming server onto SAFE, multiplayer would be dodgy at best. New methodologies have to be designed to take advantage of the full decentralized capabilities of SAFE, and with a high-bandwidth low-latency application like gaming, care must be taken when designing the software that interprets bullet hits or player movement.
Very far future? Any sense of the range of that time scale?
Sorry but you can not. AFAIK the design makes sure that IP addresses are not discoverable. Yes, the nodes are directly connected to each other. But no, you can’t pick a direct communication channel between yourself and a specific node. (Unless you disclose your IP addresses through the SAFE network; but then you lose your privacy, obviously.)
Faster, but not faster. It’s the difference between latency (limited by the speed of light) and bandwidth (the proverbial truck full of SSDs).
Nah, speed is speed.
Speed = amount of data ÷ time to receive all that data.
Nobody cares about km/h or mph. It’s all about the data.
It’s a freedom tradeoff the user gets to make. They have the option to go as far into any realm of thought. They can choose to be totally private and anonymous, or they can choose to hand over their IP to friends/family to video chat, etc.
But the point remains: SAFE is just as fast as the old internet, albeit at some small cost. IP is kind of irrelevant anyway, in the sense that as long as the communication itself is encrypted, at least your recorded conversation won’t be intercepted.
I’ll assume you haven’t thought it through … or am I just being trolled?
A simple example of why what you’re saying is terribly incorrect: a phone conversation consumes hardly any bandwidth, but if the latency (the time for your voice to reach the other party) is more than a second or so, you’ll start cutting each other off because of the delay. Or think about lag in WoW or whatever you kids are playing these days (acting all wise and mature). I’m afraid SAFE will kinda suck for this; the routing overhead will introduce too much latency.
Opposite to that, if you’re downloading the latest Ubuntu installer with bittorrent, you really don’t care if the blocks are arriving in order, or if there are a few seconds between requesting a list of blocks and when you start receiving them. You only care about the bandwidth because what matters is getting the whole stuff as fast as possible. SAFE is absolutely the best for this.
A third case is when you have a stream like in the case of a phone conversation, but delay isn’t that important. That would be YouTube or Netflix. You need to deliver consecutive pieces of the data in order and at a reasonably constant rate. SAFE would be awesome for this.
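The three cases come down to one back-of-the-envelope formula - total time ≈ latency + size ÷ bandwidth. The numbers below are made up purely for illustration and ignore TCP handshakes, congestion control, and so on:

```python
def transfer_time(size_bytes, bandwidth_bps, latency_s):
    """One request: wait one trip for the first byte, then stream at full
    bandwidth (idealized; no handshakes or congestion control)."""
    return latency_s + size_bytes / bandwidth_bps

MBIT = 1_000_000 / 8  # bytes per second per Mbit/s of bandwidth

# Bulk download (1 GB at 100 Mbit/s): latency is noise, bandwidth dominates.
print(transfer_time(1e9, 100 * MBIT, 0.010))  # ≈ 80.01 s
print(transfer_time(1e9, 100 * MBIT, 0.500))  # ≈ 80.5 s, barely worse

# Voice packet (1 kB): bandwidth is noise, latency is everything.
print(transfer_time(1e3, 100 * MBIT, 0.010))  # ≈ 0.01 s, conversational
print(transfer_time(1e3, 100 * MBIT, 0.500))  # ≈ 0.5 s, people talk over each other
```

Same link, same “speed”, opposite outcomes - which is why bittorrent-style bulk transfer suits SAFE and VoIP doesn’t.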
What I meant though is that the SAFE network does not work in a way that makes real-time low-delay direct communication possible within the confines of the network itself. In other words: no, they do not have that freedom, unless they reach outside the SAFE network, using it only as a signalling channel.
Incorrect. It may be irrelevant for secrecy, but it is crucial for anonymity.
- Fast: No, you can’t confuse latency with bandwidth, because you can have both, either, or neither, and that determines the kinds of applications you can enjoy.
- Security: Privacy, anonymity, encryption, etc: they are all very different things, so just because they’re somewhat related, you still can’t substitute one for the other.