Even if the servers (special clients) belong to a unique logical group, their SAFE addresses would place them into various SAFE groups, where they wouldn't be able to make decisions based on their logical group's purpose (a.k.a. the application). They can only function in ways accepted by their local SAFE groups (and group leaders), so I doubt that this could work.
They can't have their own "app" mind. Each must behave according to the group it lands in, where the regular SAFE consensus rules apply. If some way of addressing and processing data isn't available to the network in general, it can't be available to a group of servers running an app either. It would be weird (the network would be dysfunctional) if someone could put together an app that let the servers read and write data to SAFE as they please.
Yes. Though, instead of random selection, one could pick the one with the address closest to their own in XOR space (for quicker lookup).
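A minimal sketch of that selection, assuming the client already has the list of advertised node addresses. Toy 8-bit integers stand in for real (much larger) SAFE addresses; all names here are hypothetical:

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia-style XOR metric: a smaller value means 'closer' in address space."""
    return a ^ b

def closest_node(own_address: int, advertised: list[int]) -> int:
    """Pick the advertised server node nearest to our own XOR address."""
    return min(advertised, key=lambda addr: xor_distance(own_address, addr))

# Toy 8-bit addresses; real SAFE addresses are far longer.
servers = [0b1010_0000, 0b0101_1111, 0b1011_0001]
me = 0b1010_0110
print(bin(closest_node(me, servers)))  # → 0b10100000 (shares the longest prefix with us)
```

The XOR metric makes "closest" equivalent to "longest shared address prefix", which is why this tends to shorten lookups.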
Exactly!!!
"Within the group" => I'm talking about organizing a "server farm" operated by a single owner (or a trusted group) who ALREADY KNOWS all the IP addresses. In that context, it is most certainly not important.
The SAFE network is trying to take the server out of the equation, which is very possible (and desirable) in the majority of cases. I started this thread about a simple workaround that could "bring back the server" in the few cases when we need it.
If by "system" and "network" you mean the SAFE network, then no: this architecture does not require anything new to be built into the system; it just uses existing components. Yes, using those components this way would indeed expose the service itself to attacks, but whether one is willing to take that risk is the personal choice of the developer. There may be things that are just not possible / feasible / simple enough to implement serverless.
But if you replace "developer" with "attacker", the idea doesn't sound so attractive; that's my point. I'm not concerned that a dev would screw up and lose their data or app, but that the same mechanism could be used to overwhelm specific groups (a Sybil attack) and rob everyone in them blind.
By design, the randomness of the XOR address space is a key security feature.
He's not talking about choosing your XOR address; he's talking about starting a few nodes that will be randomly spread across the XOR space and letting users direct their messages to the node closest to their own position in XOR space. The goal is to reduce the number of hops needed to talk to the server. There's no new feature.
Then the app dev uses a known SD to list the current XOR addresses for the users to choose from.
Maybe I wasn't clear. The service is an individual app, run by a single operator. You can run a DDoS against the nodes they operate. People won't be able to access it, but that's about it; it won't mess up anything for anybody else (i.e. for the general users of the network).
Again: nothing about how the network works is "redesigned" or "altered"; all I'm talking about is a pointer to a set of nodes…
Hmmm, how does that help? You still need to wait until at least one replica copy is made.
And if you do that, you have to give up caching, because no one except that one node has the latest data.
Actually, that wasn't the goal, but that would be an obvious implementation decision from the client's point of view.
This is the idea in a nutshell. A set of servers, running on the SAFE network, serving dynamic content.
No, we're not talking about blocks and copies and caching. We're talking about direct messages to a server, like how you're communicating with a web server. For the few cases when it's easier to use a "classic" client/server architecture compared to the awesomeness that the SAFE network provides.
I think what's confusing is that while there are plans for direct messaging between two nodes (through the XOR space), the feature isn't done yet, so when people think about data they think in terms of data saved on SAFE.
Yes, you use a structured data item to list the nodes where the server can be contacted. It's a phone book.
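A sketch of that "phone book". The well-known SD name, the JSON payload fields, and the dict standing in for SAFE's structured-data storage are all hypothetical; the real network API would differ:

```python
import json

# Hypothetical well-known name under which the app publishes its node list.
PHONE_BOOK_SD = "safe://myapp/servers"

def publish_servers(sd_store: dict, node_addresses: list[int]) -> None:
    """Operator overwrites the SD entry with the nodes currently serving the app."""
    sd_store[PHONE_BOOK_SD] = json.dumps({"version": 1, "nodes": node_addresses})

def lookup_servers(sd_store: dict) -> list[int]:
    """Client reads the SD entry to learn which nodes it can message."""
    return json.loads(sd_store[PHONE_BOOK_SD])["nodes"]

sd_store = {}  # stand-in for the network's structured-data storage
publish_servers(sd_store, [0x1A, 0x7F, 0xC3])
print(lookup_servers(sd_store))  # → [26, 127, 195]
```

Because the SD lives at a fixed, known name, the operator can rotate server nodes at will and clients always find the current list, just like updating a phone book entry.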
My concern with this, though, is the amount of bandwidth a node will have to deal with if this becomes a popular feature. On the clearnet, personal computers aren't used to transmit other people's data, but with this, everyone is transmitting everyone's data. It's a clear disadvantage of using XOR space instead of IP.
That too. If one client knew the IP, all clients would know the IP, so rate limiting, DDoS protection, firewalls, CDNs and everything else become a necessary part of this solution.
Isn't this the same thing that I face even now when my website, "hosted" from my laptop in my bedroom, gets popular? I'll just move the service to a server on AWS or Dreamhost or whatnot.
Again, none of it is new, and none of it has been a deal breaker for the internet: surprise surprise, people actually run servers there!
The only difference is that while it's the standard mode of operation on the current internet, it would be a minority use case for the SAFE network.
For the person running the server, yes, it's the same, but that's not what I meant. What I meant is the extra overhead on the network itself. When people start running servers behind an address in the XOR space, all the communication with those servers needs to travel through the XOR space. This means each node along the way will see an increase in traffic on its machine.
With IP we don't have this problem, since the data is routed around by specialized routers, but in XOR space there is "no" router; the nodes do the job. So by adding servers behind nodes we increase the load on the network. Imagine if tomorrow everybody used SAFE and all servers were only accessible through the XOR space. It would be highly inefficient, and it would drastically limit the type of device that can afford to run a node.
With that said: since this is not a new feature (well, besides the messaging, which isn't done yet), it's gonna happen. I'm just pointing out the obvious outcome.
As I understand it, the path (in XOR space) between two nodes depends on both of their addresses. In effect, every node will have a completely unique path to that one machine. In other words, there will be no "along the way", because there will be no common nodes in the routes (other than by accident).
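That intuition can be checked with a toy greedy-routing simulation. Everything here is a simplification (8-bit addresses, a Kademlia-like one-entry-per-bucket table, global knowledge of the node set); it only illustrates that a route depends on both endpoints. One caveat: in practice, routes from different clients can still converge on the last few hops near the destination.

```python
import random

random.seed(7)
NODES = random.sample(range(256), 32)  # toy 8-bit address space

def peers(node: int) -> list[int]:
    """Toy routing table: one known peer per XOR-distance bucket (bit length)."""
    table = {}
    for n in NODES:
        if n != node:
            table.setdefault((n ^ node).bit_length(), n)
    return list(table.values())

def route(src: int, dst: int) -> list[int]:
    """Greedy hop-by-hop forwarding to the known peer closest to dst."""
    path, cur = [src], src
    while cur != dst:
        nxt = min(peers(cur), key=lambda n: n ^ dst)
        if (nxt ^ dst) >= (cur ^ dst):  # safety guard; greedy always progresses here
            break
        path.append(nxt)
        cur = nxt
    return path

dst = NODES[0]
for src in NODES[1:3]:
    print(route(src, dst))  # different sources, different relay chains
```

Each hop strictly shrinks the XOR distance to the destination, so the walk always terminates at `dst`; which relays it passes through depends on where it started.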
I understand that; I was talking in the general sense. The more servers open up on the network, the more load there will be on individual nodes to pass the traffic around. I'm not trying to say it's a bad idea or anything, just thinking out loud about the implications for the network when more people start doing that.
Yea, I see what you mean now; yes, if this use case were to grow dominant, that could happen. Fortunately, it will almost never be necessary to use it, so I'm sure developers will try to avoid it in favor of the faster and more robust serverless model. It's only an "if all else fails" last resort.