The other day I was listening to a Q&A video with Mr. Irvine, and he mentioned something to the effect that the network becomes faster as it handles more requests, if I've understood him correctly.
At the same time, I've been thinking about how to let a SAFE client app listen to real-time broadcast data, similarly to how it would in a central server-client paradigm.
Especially now that MutableData is in effect: in a large chat room, for example, posted messages could be stored as entries under the room's MutableData handle.
If GET requests are not as expensive on SAFE as they normally are on central servers, perhaps clients could simply make perpetual GET requests to listen for changes to the room's MutableData.
- Maximum entries per MutableData is 100;
- No more than 5 simultaneous mutation requests are allowed for a (MutableData data identifier + type tag) pair.
Could it be as simple as that?
I definitely want to play around with this setup.
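To make the idea concrete, here is a minimal sketch of that polling loop. `FakeMutableData` and `poll_once` are stand-ins I made up for illustration; the real SAFE client API is different, and the point is only the pattern: remember how many entries you've already seen, and each poll returns just the new ones.

```python
class FakeMutableData:
    """Stand-in for a room's MutableData handle (hypothetical, not the SAFE API)."""
    def __init__(self):
        self._entries = []

    def insert(self, value):
        self._entries.append(value)

    def get_entries(self):
        return list(self._entries)


def poll_once(md, seen_count):
    """One polling pass: fetch all entries, return only the unseen ones,
    plus the updated count to remember for the next pass."""
    entries = md.get_entries()
    return entries[seen_count:], len(entries)


room = FakeMutableData()
room.insert("alice: hi everyone")
new, seen = poll_once(room, 0)        # first pass sees alice's message
room.insert("bob: hello alice")
more, seen = poll_once(room, seen)    # second pass sees only bob's
```

A real client would run `poll_once` on a timer (the interval question discussed below) rather than in a tight loop.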
Well, for live broadcasts using MutableData (MD) you would not need to "constantly" poll, as that would be wasteful. The principle is that the broadcast is written in blocks, rather like the packets sent for any stream. Then the player gets the next "block" from the MD when it's ready.
Basically the player will be a few blocks behind to give some buffering. Typically this would be some milliseconds of time; I just don't know what amount of time is currently used.
The player starts playing once it has enough blocks, and when each (set of) block(s) is played it prefetches another (set of) block(s) to keep the buffer full. The next (set of) block(s) should be there, since the broadcast is real time and being added to. If the block(s) are not there, it simply checks again after a preset time so as not to be "constantly" polling.
If the supplier of the stream is good, then the blocks will always be there and no extra "polling" GETs will be done. If not perfect, then there will be a certain percentage of "polling" GETs. But these polling GETs only occur when there are delays adding the blocks to the MDs, and since the polling GET is done at approximately the rate the blocks should have been added, the extra GETs will not be excessive.
The key is that a number of blocks are buffered before playing and the buffer is maintained. Modern phones work like this: if you call yourself from one phone to another the delay is quite large, yet you don't notice it when talking to the other party.
EDIT: It may prove to be a reasonably large delay on the SAFE Network because of the routing.
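The buffer-and-prefetch behaviour described above can be sketched as follows. `fill_buffer` and the `get_block` callback are illustrative names of my own, not any SAFE or player API; the point is that a missing block ends the pass early instead of triggering a busy-poll.

```python
def fill_buffer(get_block, next_index, buffer, target=4):
    """Prefetch blocks until `buffer` holds `target` of them. A missing
    block ends the pass early; the player would retry after roughly one
    block-duration instead of polling constantly."""
    while len(buffer) < target:
        block = get_block(next_index)
        if block is None:
            break  # not published yet; check again after a preset delay
        buffer.append(block)
        next_index += 1
    return next_index


# Simulated broadcast: blocks 0 and 1 are published, block 2 is still in flight.
published = {0: b"block-0", 1: b"block-1"}
buf = []
nxt = fill_buffer(published.get, 0, buf)    # fetches blocks 0 and 1, stops at 2
published[2] = b"block-2"                   # the broadcaster catches up
nxt = fill_buffer(published.get, nxt, buf)  # the next pass picks up block 2
```

When the supplier keeps up, every call finds its block on the first try, which matches the "no extra polling GETs" case described above.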
This sounds contradictory to me; it seems you think "constantly" implies high frequency.
@hunterlester, you could look at HLS format if you want to implement something like what @neo is explaining here, and you can find plenty of HLS movies out there to test with.
Caution: I have no idea what I am talking about. But I thought I read somewhere here a while back something about using Tokio and Futures in Core, and I just came upon this yesterday: https://tokio.rs/blog/tokio-0-1/ and there's some material about streaming and the like in there.
Really I'm just trying to draw a distinction between performing GETs when the data is expected to be there, and what I thought the OP might be suggesting, which was GETs much more often than that.
For now I've set the request interval to 5 seconds. Chat messages clear out and rerender upon fetching new data, because when a new message is appended to the core AppendableData it isn't simply pushed onto a vector. My client can't just fetch the ID handle at the index equal to the new data length, because the ID handle at that index is not the latest appended ImmutableData.
What about a system that simply pushes new messages to the front / displays them when they are entered?
I have not started using AD or SD yet, so this may not make sense, but what about just pushing the message to the "top of the queue" so it's displayed right after it's typed in?
That’s how my current SecretChat app works. Makes a GET request right after any POST.
The problem is building a history box, but I think it might be easy. I think I'll just change it to an AD that keeps track of all previous messages and displays each one in order, forever.
I'm now checking fetched AD indexes against the current array of messages and only pushing messages that don't exist in client state, so it no longer rerenders all messages.
Now I need to fiddle around with GET request interval.
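The diffing step described above can be sketched like this. The `id` field is a placeholder for whatever uniquely identifies a message (an AD index, for instance); `merge_new_messages` is my own illustrative name, not part of any SAFE client library.

```python
def merge_new_messages(state, fetched):
    """Append only messages whose ids aren't already in client state, so the
    UI doesn't have to clear and rerender everything on each fetch."""
    known = {m["id"] for m in state}
    for msg in fetched:
        if msg["id"] not in known:
            state.append(msg)
            known.add(msg["id"])
    return state


state = [{"id": 0, "text": "hi"}]
fetched = [{"id": 0, "text": "hi"}, {"id": 1, "text": "hello"}]
merge_new_messages(state, fetched)
# state now holds both messages, with no duplicate of id 0
```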
@whiteoutmashups If you have time, I’d like to see if this thing works and I can see newly posted messages from other users in my client and vice-versa.
I was under the impression that high-scale requests wouldn't be a burden to the network in the same way they would be for a central server. That impression came from a Q&A video in which an audience member asked about denial-of-service-type attacks.
Where can I learn more about how that works?
Is this the case? Given that impression, why would polling be a burden on the network?
While that is true, it is still an increased load if you do, say, 100 GETs for every required GET. Caching is one of the reasons for the network's resistance to DDoS, since every node is potentially a server, but multiplying the GETs just to get instant updates seems unnecessary. You wouldn't want something like the Olympics, where 100+ million people watch the opening ceremony, to become a live test of that resistance.
Static content would simply end up with the closest nodes resending the cached content they hold. But live streaming requires looking for updated content, where caching is not quite as effective, and this is where the multiplication of GETs would become noticeable if done at large scale.
Good programming practice is to do only as many GETs as the task at hand requires. There's no need to check every millisecond for an updated stream/chat/post; people simply don't read or watch that fast.
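Some back-of-the-envelope arithmetic makes the point: polling much faster than the content actually changes just multiplies empty GETs. This is my own illustrative helper, not anything from SAFE.

```python
def wasted_gets_per_hour(content_interval_s, poll_interval_s):
    """GETs per hour that return nothing new, when polling every
    poll_interval_s for content that only changes every content_interval_s."""
    total_gets = 3600 / poll_interval_s
    useful_gets = 3600 / content_interval_s
    return max(0.0, total_gets - useful_gets)

# Polling at the content rate wastes nothing; polling every millisecond for
# a chat that updates every 5 seconds wastes roughly 3.6 million GETs/hour.
```

So matching the polling rate to the expected content rate, as described for the streaming case above, is what keeps the overhead bounded.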
Naive caching, though perfect for immutable (i.e. content-addressed) data, can't help with polling: either you get (potentially) stale data, or you have to bypass the cache.
I think when @dirvine says deterministic caching, he means that changes to an item would ripple through to all nodes holding a cached copy of it (selecting the cache nodes deterministically is one way to make that feasible), so we would have an arbitrarily large set of nodes making up an eventually consistent cache for the item in question. If one subscribed to push notifications, they would be handled by the closest node that is part of the cache for that particular item.
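One way to pick cache nodes deterministically (one possibility only; I don't know if this is what @dirvine has in mind) is rendezvous hashing: every peer ranks candidate nodes by a hash of (node id + item id) and takes the first k, so all peers agree on the cache set with no coordination. A minimal sketch:

```python
import hashlib

def cache_nodes_for(item_id, node_ids, k=3):
    """Deterministically pick k cache nodes for an item by ranking every
    node on hash(node_id + item_id). Any peer doing the same computation
    gets the same answer (rendezvous / highest-random-weight hashing)."""
    ranked = sorted(
        node_ids,
        key=lambda n: hashlib.sha256((n + item_id).encode()).digest(),
    )
    return ranked[:k]


nodes = [f"node-{i}" for i in range(10)]
caches = cache_nodes_for("room-42", nodes)
assert caches == cache_nodes_for("room-42", nodes)  # same answer every time
```

A subscriber could then send its push-notification registration to whichever of those k nodes is closest to it.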