Why shouldn't the SAFE Network be multi-network, as in several networks running in parallel? Then there could be multiple gameplans across those networks. Once one network starts to die, there is no problem; the other networks are there to welcome people.
It's about giving the network multiple gameplans: with a single gameplan, when it fails the network starts over; with multiple, there are many gameplans and the alternatives are already populated and available.
That's what I was going to say. So why multiple?
Again, for the availability of multiple gameplans, which may be better than a single one that may fall, after which the SAFE Network would need to reboot from scratch with a new gameplan.
So if I let that thought grow: is it better to have a single server farm, one unit, and we should all host our files on that one server farm?
I know you mean that the decentralized network should be one, and that in unity there is power. But then again, in unity there is one gameplan that may fall, and then you start over.
I would propose (in the scenario where we have a multi-network) a system that, if a network is detected to be dying in a big way, would migrate its data to all the other networks.
This got me thinking about a kind of federated system of multi-networks: networks that communicate with each other (so content hosted on one network can be accessed from another), governed by a system that manages these exchanges, so that if one network requests a file from another there is an exchange of some sort. Such a system could also be used for migration when a network is detected to be failing and data is in danger of being lost.
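To make the idea concrete, here is a toy model of that federation-plus-migration proposal. Everything in it is hypothetical: SAFE has no federation API, and all the names (`Network`, `Federation`, the health threshold) are invented purely to illustrate the mechanism being described, not any real implementation.

```python
# Toy model of the federated multi-network idea described above.
# All names and thresholds are hypothetical; this is an illustration
# of the proposal, not SAFE code.

HEALTH_THRESHOLD = 0.3  # below this, a network is considered to be dying


class Network:
    def __init__(self, name, health=1.0):
        self.name = name
        self.health = health  # 0.0 (dead) .. 1.0 (healthy)
        self.store = {}       # file_id -> bytes

    def put(self, file_id, data):
        self.store[file_id] = data

    def get(self, file_id):
        return self.store.get(file_id)


class Federation:
    """Governs cross-network fetches and emergency data migration."""

    def __init__(self, networks):
        self.networks = list(networks)

    def fetch(self, file_id):
        # A request made on one network can be answered by any federated peer.
        for net in self.networks:
            data = net.get(file_id)
            if data is not None:
                return data
        return None

    def migrate_failing(self):
        # If a network's health drops below the threshold, copy its data
        # to every healthy network and drop it from the federation.
        failing = [n for n in self.networks if n.health < HEALTH_THRESHOLD]
        healthy = [n for n in self.networks if n.health >= HEALTH_THRESHOLD]
        for dying in failing:
            for file_id, data in dying.store.items():
                for net in healthy:
                    net.put(file_id, data)
            self.networks.remove(dying)
        return [n.name for n in failing]


a, b, c = Network("A"), Network("B"), Network("C")
a.put("video1", b"...")
fed = Federation([a, b, c])
a.health = 0.1                    # network A starts dying
migrated = fed.migrate_failing()  # its data is copied to B and C
```

After migration, `fed.fetch("video1")` still succeeds even though network A is gone, which is the availability property the proposal is after.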
Of course not. To think this is to ignore the fundamental benefits of the SAFE Network.
“Gameplan” is an odd thing to be talking about.
Do you see multiple TCP/IP protocols, just in case one implementation of TCP/IP is better than the 20 other implementations, say by 0.5%?
The testnets are there to hone the best solutions to use, not to divide the implementations and the users.
SAFE is a set of protocols. We do not divide the internet protocols into separate implementations, so why should we do that to SAFE? The storage is simply storage.
The real power of SAFE is the applications written to run on SAFE, and that is the important part that needs dealing with. Do not make “apple”, “microsoft”, “android”, “nsa”, “blah” networks and divide the users into camps based on the underlying protocols.
It also makes me think about the view some people hold: the network will have a gameplan; if people don't like it they leave, and if they like it they stay, and through that choice of the people against the network's gameplan the system balances out.
In the multi-network scenario, the user just migrates their vault from one network to another instead of simply leaving.
But of course this is what would happen, just not under those names or along those particular lines. The reasons/causes would be different, but it would happen: network A has a better YouTube than network B, and network C has better movies stored on it.
TCP and IP are just two of the many protocols on the internet, and combined they make up the internet. SAFE is a combination of protocols running alongside those. You are suggesting having a safe-a set of protocols, a safe-b set of protocols, and so on.
This is just multiple forks of the SAFE code with variations. And as I pointed out, people will tend to use one network according to the data stored on it MORE than the other variations, and rarely for the actual differences in implementation. You would guarantee making SAFE meaningless and irrelevant.
I will wait to see whether the one network that will be released is going to be the best it can be; if problems arise, I will propose the multi-network idea again.
Now, my concern is that a single network with one gameplan would have problems in some specific use cases, or might not work optimally. Say there is a use case that requires low latency; that system could be better served by a network focused on latency rather than on storing everything. Or take a scenario where one doesn't need low latency but has lots of data to store, and would benefit from a network that stores and serves data slowly for a cheaper price.
Latency will be determined by the decentralisation and is basically not up to the network, since one of the fundamentals is for nodes to be scattered across the world in a non-deterministic fashion (near randomly). Inter-node latency will be far greater than any protocol-implementation latency. To reduce latency you would need to geo-locate nodes, removing several of the goals/fundamentals of the SAFE Network.
But always remember: splitting the user base across multiple networks reduces adoption of all of them. Also, people will gravitate to wherever the data they want most is. Social media is like this, video sites are like this; rather than having the diversity on one network with multiple applications, as on the current internet, multiple networks would have apps (and their datasets) separated onto different networks. It's human nature, and we see it on the current internet when looking at social/video groups and sites.
That post means nothing. This is about how people use data. Multiple networks split the data across the different networks, since people have to choose which one to store on. "Birds of a feather flock together" applies here.
The networks then become groupings of data, and people use the network holding most of the data they like.
It is not like hardware parts; no, nothing like it. I have designed actual computers (board level) in the past, and protocols exist specifically to make the specifics of the hardware as immaterial as possible. The same goes for the SAFE Network: it makes hardware, storage methods, etc. as immaterial as possible.
It is the data that is the important commodity, far less so the underlying implementation. SAFE has to work efficiently, and there is no good in splitting things up, as I said above.