Decentralised MMORPG on the SAFE network

The point is that all actions, processes, and their calculations (including physics) need to be handled by one system, which dictates the state of the game world. Those calculations cannot be spread over multiple systems: by the time those systems exchange their results (results that feed into the next round of calculations), communication delay (latency) will have caused inconsistencies between their states and calculations, essentially creating multiple different worlds with different states in which different things happen.

That one system that handles the calculations and dictates the state of the game world cannot be one of the players, because 1) that player's own actions wouldn't suffer from latency, which gives an unfair advantage, and 2) it would allow that player to cheat.

Picking a random system from the SAFE net is not an option either for several reasons.

First, in the case of a real-time game, that server system cannot be properly monitored by other systems, since monitoring would mean processing the same inputs as the server and comparing the results. But because the inputs arrive slightly differently at every monitoring system due to differing latencies, the results will never match. Nor can anyone verify exactly when each packet arrived at the server system, so requesting a log won't help either; the server can simply lie about the log. Because monitoring is not possible, this is not actually a decentralized architecture: we have merely used SAFE to pick a random system to serve as the trusted server in what is essentially a classic centralized architecture.

In addition, handling large numbers of players in the same game world location is a very resource-intensive task that most regular machines can't handle. The load grows roughly quadratically with the number of players in one location, since every player's actions need to be sent to all other players: with n players that is n × (n − 1) messages per update, so 100 players already means nearly 10,000 messages.

The only solution I currently see to this problem is to not process incoming events from players continuously, but at intervals greater than the time it takes a consensus group to reach a decision plus the time to sync the new game world state with the other participants. That way we resolve the problem of different systems holding different game world states due to differences in latency: an incoming action is either processed at interval X or at interval X+1. If this interval can be made sufficiently short (which depends on the SAFE benchmarks), it might still feel semi real-time. It won't be as twitch-based as a true real-time game though.
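A minimal TypeScript sketch of that interval-based loop, assuming hypothetical `collectConsensusEvents()` and `syncWorldState()` helpers on the SAFE side; the point is that events are only applied in per-tick batches, in a canonical order, so every participant computes the same next state:

```typescript
// A player action accepted by the consensus group for a given tick.
interface PlayerAction {
  playerId: string;
  payload: unknown; // e.g. { type: "move", dx: 1, dy: 0 }
}

// Minimal game state; the real structure depends on the game.
interface WorldState {
  tick: number;
  apply(actions: PlayerAction[]): void;
}

// Assumed SAFE-side helpers (hypothetical, not real APIs):
declare function collectConsensusEvents(tick: number): Promise<PlayerAction[]>;
declare function syncWorldState(world: WorldState): Promise<void>;

const TICK_MS = 500; // must exceed consensus time + state sync time

async function gameLoop(world: WorldState): Promise<never> {
  while (true) {
    const tickStart = Date.now();

    // 1. Collect every action the consensus group accepted for this tick.
    const actions = await collectConsensusEvents(world.tick);

    // 2. Apply them in a canonical order so all participants compute
    //    the identical next state (deterministic lockstep).
    actions.sort((a, b) => a.playerId.localeCompare(b.playerId));
    world.apply(actions);
    world.tick += 1;

    // 3. Sync the new game world state with the other participants.
    await syncWorldState(world);

    // Sleep out the remainder of the interval, so an action arriving
    // late is simply processed at tick X+1 instead of tick X.
    const elapsed = Date.now() - tickStart;
    await new Promise<void>((r) =>
      setTimeout(r, Math.max(0, TICK_MS - elapsed)),
    );
  }
}
```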

If we really want a real-time MMORPG, a hybrid model may be a solution. Use SAFE as the database, which handles trade, dynamic world data, etc., and use a centralized dedicated server for all real-time aspects (movement and combat). In terms of performance we'd have the best of both worlds.
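As a rough sketch of that split (`SafeStore` and `RealtimeServer` are assumed interfaces, not real APIs): latency-tolerant operations go to SAFE, latency-critical ones to the dedicated server:

```typescript
// Hypothetical clients for the two halves of the hybrid model.
interface SafeStore {
  put(key: string, value: unknown): Promise<void>; // persistent, slower
  get(key: string): Promise<unknown>;
}

interface RealtimeServer {
  send(channel: "movement" | "combat", msg: unknown): void; // low latency
}

class HybridGameClient {
  constructor(
    private safe: SafeStore,          // SAFE network: trade, world data
    private realtime: RealtimeServer, // dedicated server: twitch gameplay
  ) {}

  // Latency-tolerant: a trade can take a second without hurting gameplay.
  async buyItem(playerId: string, itemId: string): Promise<void> {
    await this.safe.put(`trade/${playerId}/${itemId}`, { at: Date.now() });
  }

  // Latency-critical: movement and combat go to the centralized server.
  move(dx: number, dy: number): void {
    this.realtime.send("movement", { dx, dy });
  }

  attack(targetId: string): void {
    this.realtime.send("combat", { targetId });
  }
}
```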

The real-time server would cost money though. Yet the usage of SAFE as the database already makes this hybrid model a lot cheaper than a fully centralized model. MMORPG development costs should not be underestimated. I wonder if something like this is best done by a for-profit group or as a community project where individual developers are rewarded for their contributions somehow? If it’s the latter, how could funds be allocated for maintaining the real-time server(s)?


Ok, this may be a crazy idea, but couldn't we create something like an action blockchain, or a series of blockchains? Or link actions to the spending of micropayments of safecoin, so that when you performed an action it was either performed or not, and you knew whom it was performed on, just like you know whom you're sending safecoin to and whether it was spent or not.


I like this idea, and obviously a for-profit corp could integrate a centralized dedicated server.

Another thought is that at some point, everyone has talked about integrating a pure computing power component into SAFE. One of the things that I like best about SAFE is the fact that things are set up so that the static data services, as you put them, are inherently decentralized and likely to be centralization-resistant, due to the nature of data storage as a resource.

But we have also talked about integrating arbitrary computing resources, and I don't see how that can be centralization resistant. So I suspect that once that integration takes place, we will see massively centralized and powerful servers on the SAFE network, because that is the most cost-effective way to provide computing cycles as a network resource.

So at launch, the hybrid model would be the best, but as we move on to integration of arbitrary computing power, that problem may solve itself.


@Blindsite2k Because we don't know how quickly SAFEcoin transactions will be confirmed, we don't know if this will work. Based on what @Seneca said above, realtime game latency CANNOT exceed a certain threshold without interfering with gameplay.

But we don't have benchmarks for SAFEcoin transfer confirmations yet. When those come out, if the confirmation speed is under 100 ms then we could build a realtime game structure on the SAFE architecture. If it's over 100 ms then we will need a hybrid solution, or move away from "twitch-based" realtime gaming, at least initially.


SAFEcoins are intended to be virtually instantaneous, so likely as fast as the fastest software being executed by the network.

The benchmarks to seek out are likely the frequency in order of occurrence by the Vaults.

So, if SAFEcoin is last in line to be processed consistently it will be slower than data of course.


Sorry @dallyshalla, I don't know what "frequency in order of occurrence" means or how it's different from a network confirmation of a SAFEcoin transaction?

If there are 100 data requests and 100 SAFEcoin transactions, they might not take place simultaneously; though I think they might be in the same pool of transmissions, so likely scratch this comment.

What I meant was that you might get data processed before SAFEcoin is processed, and therefore a lag could exist based on the order in which one is processing data relative to a SAFEcoin transaction; plus, the number of those transmissions would add to that lag.

Yet I do not think the design has a difference in processing between data and SAFEcoin.

I was talking about combat there; item and currency trading are not nearly as time-critical.

They most certainly are:

If I hit with a sword first, I want that hit registered first after I press the button. It is a matter of "life or death", of "conquering or being conquered" in battle.

In trading, if I hit the button to buy or sell first, my order had better be processed in the order it was actually transmitted, despite network latency.

I expect that whatever handles combat best will also handle transaction transmission best, and vice versa. Perhaps combat is usually more complex.


Yeah, won't this whole decentralized MMO work once SAFE has distributed computation?

Then it will have the massive storage capability to host all the world data, and the massive computation capability to run all the insanely demanding processing of real-time events like shooting, etc.

Won’t this work?

Remember that they ARE planning to add in distributed computation after a while!!


What I meant is that it’s okay if trades take a second to be handled by the network, at least as long as it’s about the same for everyone. It doesn’t really matter whether you receive your Sword of a Thousand Truths 20 ms or 2000 ms after clicking the “buy” button in an in-game auction house.

Most of the discussion of the past few days has been with distributed computation in mind. Feel free to read up on it if you want my view on the challenges that we face.

@Seneca, I think @dallyshalla means in real-world trading, like a boiler-room stock-exchange setup, where people set up bots to buy and sell according to pre-determined settings.

So say you are dealing with a stock which isn’t very liquid, and you want your bot to buy it, and then need to be able to turn around and sell it to 3rd parties very very quickly. Latency becomes an issue in these scenarios as well.


Anyway, it might be interesting to discuss development strategies and finances. I think usage of the SAFE network opens up new possibilities in those areas as well. Perhaps we can work out a new way in which a group without a lot of starting capital but with a solid game vision could be enabled to develop that game? Perhaps by selling shares or certain in-game assets (items or game currency), as opposed to a Kickstarter?


As far as development strategies go, my first thought is to really hone in on what the SAFE network does that other networks cannot. We were talking about static data earlier, and one of the things about the SAFE network is its ability to store truly massive amounts of unique static data.

So one of my thoughts would be to base things on unique data structures (compared by hashes). That is, the game world is defined in a decentralized way in terms of unique data. Every single block of gamespace, whether it's an acre or a square mile, is composed in a unique way. It may be very similar to other blocks, but cannot be identical, because then the hashes would be identical.
No two items can have EXACTLY the same effect or power.
No two monsters or characters can be identical, etc. And we could be more or less granular about which sections are hashed and therefore must be distinct.

The great thing about this is that it invites the community to be involved in world-building (again, at whatever granularity the devs desire), and allows the devs to reward people with items or skills or whatever, which by DEFINITION must be unique.
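A hedged TypeScript sketch of that uniqueness-by-hash idea (the item fields and the SHA-256 choice are illustrative assumptions, not anything from SAFE itself):

```typescript
import { createHash } from "node:crypto";

// Illustrative item definition; the real fields depend on the game design.
interface ItemDefinition {
  name: string;
  effect: string;
  power: number;
}

// Hash the canonical serialization of an item. Two items with exactly
// the same definition produce the same hash, so a registry of hashes
// enforces uniqueness by definition.
function itemHash(item: ItemDefinition): string {
  const canonical = JSON.stringify([item.name, item.effect, item.power]);
  return createHash("sha256").update(canonical).digest("hex");
}

// A simple in-memory registry; on SAFE this role would presumably be
// played by immutable data addressed by its hash.
const registry = new Set<string>();

function registerItem(item: ItemDefinition): boolean {
  const hash = itemHash(item);
  if (registry.has(hash)) return false; // identical item already exists
  registry.add(hash);
  return true;
}

// Usage: the second registration fails because the definitions are identical.
registerItem({ name: "Sword", effect: "fire", power: 10 }); // true
registerItem({ name: "Sword", effect: "fire", power: 10 }); // false
```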


Most important thing is the order of processing… So a board game is easy to implement, and so is an RTS game. Though with avatars running around, hacking and slashing here and there, this could get annoying if it is too slow :wink:

If I push my button at 12:01:01 and someone else does at 12:01:02, that ought to be processed in the correct order. How do we know that order with network lag, heh?
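One possible way to impose that order under the interval model above: buffer events for one tick, then sort by the timestamp assigned when each event reached consensus (client clocks can be forged, so consensus-assigned timestamps are an assumption here). A minimal TypeScript sketch:

```typescript
// An event stamped with the time at which the consensus group accepted
// it (client-supplied clocks can't be trusted).
interface StampedEvent {
  playerId: string;
  consensusTimeMs: number;
  action: string;
}

// Within one interval, apply events in consensus-time order, breaking
// ties deterministically by player id so all nodes agree.
function orderEvents(buffer: StampedEvent[]): StampedEvent[] {
  return [...buffer].sort(
    (a, b) =>
      a.consensusTimeMs - b.consensusTimeMs ||
      a.playerId.localeCompare(b.playerId),
  );
}

// Usage: the 12:01:01 press is applied before the 12:01:02 press,
// regardless of which packet happened to arrive first.
const ordered = orderEvents([
  { playerId: "bob", consensusTimeMs: 43_262_000, action: "slash" },
  { playerId: "alice", consensusTimeMs: 43_261_000, action: "parry" },
]);
console.log(ordered.map((e) => e.playerId)); // ["alice", "bob"]
```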


Are we stuck with SETI-level distributed computing? Why can't signals propagate through a mesh much faster than we are used to supposing? PCell gets it going one way, and that may be due in part to the nature of its wired back channel. I suspect that, conceptually, without a wired back channel, even with sub-1 ms latency nodes that can connect at a mile's distance, one might need 500 hops to cover 250 miles, and that might well add 250 ms of latency. This issue might be part of what drives the PCell design, but under such parameters a large city might still be covered with sub-100 ms latency.

The bigger problem is that without servers, under a distributed model, such distances yield glacial compute times. Could a distributed graphics engine run across these spans? Maybe a truly distributed compute system that we can control awaits instantaneous quantum communication. Short of this, for real-time purposes, we would still be stuck with servers. It's kind of hard to get rid of some centralization or channeling; look at the layout of neurons in the human central nervous system. Even jellyfish have some.


Porn and gaming: the driving forces of the technological industry.


http://n-o-d-e.net/post/105534139006/sanctuary-update-technical-outline-for

So this post made me think of this conversation again.

Imagine the VR world is on a grid, and we give each block of the grid a name composed of a prefix for the world's name and that grid point's x and y coordinates, e.g. name = "sanctuary", x = 1, y = 2 → "sanctuary 1,2". When your client appears on the grid, it looks up nearby grid locations on namecoin and fetches their values. These contain the models (which could be values, torrent refs, etc.) belonging in those locations, along with their position, orientation and scale data. The model data could also contain sets of small named JS programs for interaction, including a timer() for animation/simulation.
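A small TypeScript sketch of that lookup scheme (`lookup()` is a hypothetical name-service call standing in for namecoin, or a SAFE equivalent):

```typescript
// Hypothetical name-service lookup, standing in for namecoin or SAFE.
declare function lookup(name: string): Promise<GridCell | null>;

// Contents of one grid cell: model references plus placement data.
interface GridCell {
  models: {
    ref: string;                        // value, torrent ref, etc.
    position: [number, number, number];
    orientation: [number, number, number];
    scale: number;
  }[];
}

// Build the key for a grid point, e.g. gridKey("sanctuary", 1, 2)
// yields "sanctuary 1,2".
function gridKey(world: string, x: number, y: number): string {
  return `${world} ${x},${y}`;
}

// Fetch the cells within `radius` grid steps of the client's position.
async function fetchNearby(
  world: string,
  x: number,
  y: number,
  radius = 1,
): Promise<GridCell[]> {
  const lookups: Promise<GridCell | null>[] = [];
  for (let dx = -radius; dx <= radius; dx++) {
    for (let dy = -radius; dy <= radius; dy++) {
      lookups.push(lookup(gridKey(world, x + dx, y + dy)));
    }
  }
  return (await Promise.all(lookups)).filter((c): c is GridCell => c !== null);
}
```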

I just think that this sort of thing could be done so much more simply on SAFE and you would have no need to merge multiple systems to do it.


The world data is not the problem; it's not as latency-sensitive. It's the character ('avatar' in that article) updates that matter. The linked article plans to make use of an authoritative node, which essentially functions as a server for a smaller region:

> If a grid entry references an authoritative node to talk directly to for fast updates from around it's region, the client can connect to it for very low latency updates of avatars in that region.

This implies a trusted node, which SAFE does not make use of. My posts in this thread were mostly about pointing out the problems of monitoring a low-latency server node to prevent abuse of power (cheating). The article does not mention why this authoritative node is allowed to be authoritative. If it's a server owned by a trusted party like the development company, this is actually a hybrid model as well.

Also, using region server-nodes doesn’t increase the ability to handle lots of players in the same space.