If the future of the internet will be metaverse engines, then will Safe Network be able to run a decentralised metaverse engine?
It is able to host the code for one and store its data, for sure. It can't run one though, as there is no compute backend, and even if that comes down the line, it most likely would not be the most efficient use of that resource.
To have a decentralized metaverse I suspect running the code locally is going to be the only way to do it.
SAFE would be great for storing content for the metaverse.
Consensus on SAFE won't be the fastest though, so I'm not sure whether it can be used for consensus on the state of a game/VR world. Perhaps each client could calculate everything by itself, then synchronize its state with SAFE every second or so. Alternatively, SAFE could be more of a storage layer, while consensus on the state of a certain area would be done peer to peer by the clients in that area; but then again you'd need some "trusted" peers to prevent cheating in games.
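The "calculate locally, sync every second" idea above could look something like this minimal sketch. Everything here is hypothetical: `WorldState`, `apply_input`, and `maybe_sync` are illustrative names, and the `store` dict stands in for a real PUT to SAFE storage.

```python
import hashlib
import json
import time

class WorldState:
    """Minimal local game state that is snapshotted to storage periodically."""

    def __init__(self):
        self.players = {}   # player_id -> {"x": float, "y": float}
        self.last_sync = 0.0

    def apply_input(self, player_id, dx, dy):
        # All simulation happens locally, at frame rate; nothing here
        # touches the network.
        p = self.players.setdefault(player_id, {"x": 0.0, "y": 0.0})
        p["x"] += dx
        p["y"] += dy

    def snapshot(self):
        # Deterministic serialization so every client hashes the same
        # state to the same digest.
        blob = json.dumps(self.players, sort_keys=True).encode()
        return blob, hashlib.sha256(blob).hexdigest()

    def maybe_sync(self, store, interval=1.0, now=None):
        # Push a snapshot to the storage layer roughly once per
        # `interval` seconds; `store[digest] = blob` stands in for a
        # network PUT. Returns the digest when a sync happened.
        now = time.monotonic() if now is None else now
        if now - self.last_sync >= interval:
            blob, digest = self.snapshot()
            store[digest] = blob
            self.last_sync = now
            return digest
        return None
```

Content-addressing the snapshot by hash also means peers can cheaply compare digests to notice they have diverged before pulling the full state.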
It would be cool to have decentralized data streams: everybody has the game data and streams their position and actions in real time, while also receiving those of others. Then, once two or more streams from other peers agree on the same plausible data, it becomes the truth.
Real-time streaming will be a BIG thing imo. It requires the least amount of data transmitted, so the client side handles 99% of the work, validates incoming messages, then rebroadcasts them.
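The "two or more matching streams become truth" rule above amounts to a small quorum check on relayed updates. A sketch, with entirely hypothetical names; `reports` stands in for messages gathered from the real-time peer streams:

```python
from collections import defaultdict

def accept_updates(reports, quorum=2):
    """Accept a claimed (player, position) update only once `quorum`
    independent peers have relayed the same value.

    `reports` is an iterable of (peer_id, player_id, position) tuples,
    a stand-in for validated messages received off the wire.
    """
    votes = defaultdict(set)  # (player_id, position) -> set of witnesses
    accepted = {}
    for peer_id, player_id, position in reports:
        votes[(player_id, position)].add(peer_id)
        # Each peer counts once per claim; duplicates from the same
        # peer don't grow the set.
        if len(votes[(player_id, position)]) >= quorum:
            accepted[player_id] = position
    return accepted
```

A claim seen from only one peer (or two conflicting single-witness claims) is simply held back until corroborated, which is the cheating protection the posts above are reaching for, minus the question of who the "trusted" witnesses are.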
The possibilities here are wide open. There could be a whole range of interfaces built by anyone to represent the data (of the matrix!) lol … so you could pick the one that suits you.
Do you mean streaming as in a broadcast or youtube live stream? Or something else?
Live streaming of an event is as simple as:
- the generator writes the stream to Safe as if it were any video in that format.
- the receivers/viewers read each block, and if the next block is not there yet, the receiver/viewer waits. The viewer can get ahead of the generator if viewing at a faster speed, hence the need to wait on unwritten blocks.
- any video player will play/view the live stream, since it will appear like any other video; it only needs to know to retry when blocks are not yet written.
- viewers can rewind, skip, etc. through any part of the stream already written.
- massively watched live streams will be helped by caching, allowing better performance than on other platforms.
- it is possible that live streams may require another level of data, namely indexing, so that the viewer has a predictable place to look for the next block, i.e. a type of appendable data element where each entry is a block pointer, which the player can use before the data map is written.
BUT it is possible that this could simply be the data map as it is written.
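The viewer loop described in the list above (read the next block in order; if it isn't written yet, wait and retry) can be sketched in a few lines. Everything here is hypothetical: `fetch_block` stands in for a network GET against Safe, `None` means "not written yet", and `"END"` is an assumed end-of-stream marker.

```python
import time

def read_live_stream(fetch_block, on_block, retry_delay=0.5, sleep=time.sleep):
    """Read blocks 0, 1, 2, ... in order, waiting whenever the next
    block has not been written yet.

    `fetch_block(i)` is a stand-in for a Safe GET: it returns the
    block's bytes, None when block i is not written yet, or the
    sentinel "END" when the stream is finished. `on_block` receives
    each block as it arrives (e.g. it feeds the video player).
    """
    index = 0
    while True:
        block = fetch_block(index)
        if block == "END":
            break
        if block is None:
            # The viewer has caught up with the generator: this is the
            # live edge, so retry instead of failing.
            sleep(retry_delay)
            continue
        on_block(block)
        index += 1
```

Rewind and skip fall out for free: jumping to an earlier `index` just reads already-written blocks with no waiting.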
Check out JackTrip; they are now even selling hardware, and since Covid they have made such low-latency shared live performances a lot more user friendly.
The same needs to be done for video that JackTrip has done for audio: a hardware standard and protocol to minimize latency is what is needed. WebRTC was supported over the network at one time, where a peer could connect directly to a peer, and other such things can be supported too. Just blindly putting something so complex on top of SN probably wouldn't work well, I think. There is so much specialty in content delivery networks etc. that it's a complicated question. If there were a decentralized compute layer designed with latency taken into deep consideration, while also not sacrificing too much privacy/security, that would be a massive step in the direction of anything being possible.