Alternatives to WebSocket protocol / listening to broadcasts

The other day while listening to a Q & A video with Mr. Irvine, he mentioned something to the effect that the network would become faster as it handles more requests, if I’ve got it right.

Simultaneously, I’ve been thinking about how to allow a SAFE client app to listen to real-time broadcast data, similar to how it would in a central server-client paradigm.

Especially once MutableData is in effect: when users post messages in a large chat room, those messages could be made entries under the room’s MutableData handle.

If GET requests are not as expensive on SAFE as they normally are on central servers, perhaps clients could simply make perpetual GET requests to listen for changes to the room’s MutableData.
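As a rough illustration of that idea, here is a minimal polling loop. `fetchEntries` stands in for a hypothetical SAFE client call that returns the room's MutableData entries; it is mocked with an in-memory `Map` here so the sketch is runnable anywhere, and the interval/round parameters are arbitrary.

```javascript
const store = new Map(); // mock of the room's MutableData: key -> message

async function fetchEntries() {
  // In a real app this would be a GET against the network.
  return new Map(store);
}

// Repeatedly fetch the entries and report any keys not seen before.
async function pollForChanges(onNewEntry, { intervalMs = 1000, rounds = 3 } = {}) {
  const seen = new Set();
  for (let i = 0; i < rounds; i++) {
    const entries = await fetchEntries();
    for (const [key, value] of entries) {
      if (!seen.has(key)) {
        seen.add(key);
        onNewEntry(key, value);
      }
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return seen.size;
}
```

The open question, of course, is what `intervalMs` should be — which is exactly what the rest of the thread discusses.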

I see the limitations of MutableData here:

- A maximum of 100 entries per MutableData;
- No more than 5 simultaneous mutation requests for a given (MutableData data identifier + type tag) pair.

Could it be as simple as that?
I definitely want to play around with this setup.


At the moment there is no push from the network you could use, so polling is the only option for now.

This was mentioned in this other discussion, but it is just an initial limitation which would be removed later on.


Well, for live broadcasts using MutableData (MD) you would not need to “constantly” poll, as that would be wasteful. The principle is that the broadcast is written in blocks, rather like the packets sent for any stream. The player then gets the next “block” from the MD when it’s ready.

Basically the player will be a few blocks behind to give some buffering. Typically this would be some milliseconds of time; I just don’t know what amount of time is currently used.

The player starts playing once it has enough blocks, and when each (set of) block(s) is played it prefetches another (set of) block(s) to keep the buffer full. The next (set of) block(s) should be there, since the broadcast is real time and blocks are continually being added. If the block(s) are not there, it simply checks again after a preset time so as not to be “constantly” polling.

If the supplier of the stream is good, the blocks will always be there and no extra “polling” GETs will be done. If not perfect, there will be a certain percentage of “polling” GETs. But these polling GETs only occur when there are delays adding the blocks to the MDs, and since the polling GET is done at approximately the rate the blocks should have been added, the extra GETs will not be excessive.

The key is that a number of blocks are buffered before playing and the buffer is maintained. Modern phones work like this: if you call yourself from one phone to another, the delay is quite large, yet you don’t notice it when talking to another party.

EDIT: It may prove to be a reasonably large delay on the SAFE Network because of the routing.
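The buffered-player behaviour described above can be sketched as follows. Block storage is mocked with an array standing in for one MD entry per block; `fetchBlock`, the buffer size, and the retry timing are all assumptions for illustration, not SAFE API.

```javascript
const blocks = []; // producer appends encoded stream blocks here

async function fetchBlock(index) {
  // null means "not yet published" — the only case that triggers a retry.
  return index < blocks.length ? blocks[index] : null;
}

async function playStream(play, { bufferSize = 3, retryMs = 50, maxBlocks = 10 } = {}) {
  const buffer = [];
  let next = 0;
  while (next < maxBlocks) {
    const block = await fetchBlock(next);
    if (block === null) {
      // Only poll again when a block was late, at roughly the block rate.
      await new Promise((resolve) => setTimeout(resolve, retryMs));
      continue;
    }
    buffer.push(block);
    next++;
    // Start playing once the buffer is full, then keep it topped up.
    if (buffer.length >= bufferSize) play(buffer.shift());
  }
  while (buffer.length > 0) play(buffer.shift()); // drain at end of stream
}
```

Note that when the producer keeps up, every GET fetches a block that is already there — no wasted requests, which is the point being made above.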


This sounds contradictory to me; it seems you think “constantly” implies high frequency :slight_smile:

@hunterlester, you could look at HLS format if you want to implement something like what @neo is explaining here, and you can find plenty of HLS movies out there to test with.


Caution: I have no idea what I am talking about. But I thought I read somewhere here a while back something about using Tokio and Futures in Core, and I just came upon this yesterday: there’s some material about streaming and similar topics in there.

Might that be related?


Really just trying to make a distinction between performing GETs when the data is expected to be there, and what I thought the OP might be suggesting, which was performing GETs much more often.


Yes, I was suggesting high-frequency GETs. I’m experimenting with a little chat app right now in order to see how this all works out.

In the case of a chat room where the rate of updated data is less predictable than a streaming movie, how does the client know when to make requests?

I suppose the program could calculate a running average of chat room activity and change request intervals based on that average.
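That running-average idea could look something like this: keep an exponential moving average of the gap between observed messages and poll at a fraction of that average, clamped to sane bounds. All of the constants here (smoothing factor, bounds, "poll twice per expected message") are guesses for illustration.

```javascript
function makeAdaptiveInterval({ alpha = 0.3, minMs = 1000, maxMs = 30000 } = {}) {
  let avgGapMs = 5000; // start at the fixed 5 s interval from the experiment
  return {
    // Call with the time gap between two consecutive observed messages.
    observeGap(gapMs) {
      avgGapMs = alpha * gapMs + (1 - alpha) * avgGapMs;
    },
    // Poll roughly twice per expected message, within bounds.
    nextInterval() {
      return Math.min(maxMs, Math.max(minMs, avgGapMs / 2));
    },
  };
}
```

A busy room drags the interval down toward the lower bound; a quiet room lets it drift up toward the cap, so idle clients stop hammering the network.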

@aenemic Thank you for posting the link to Tokio. Good good learning for me. :smiley:


Latest updates on my chat experiment:

It’s functioning in a hilarious manner. :laughing:

For now I’ve set the request interval to 5 seconds. Chat messages clear out and re-render upon fetching new data. The reason is that when a new message is appended to the core AppendableData, it’s not simply pushed onto a vector. My client cannot simply fetch the ID handle at the index equal to the new data length, because the ID handle at that index is not the latest appended ImmutableData.

cc: @whiteoutmashups


What about a system that simply pushes new messages to the front / displays them when they are entered?

I have not started using AD or SD yet, so it may not make sense, but just pushing the message to the “top of the queue” to be displayed right after it’s typed in?

That’s how my current SecretChat app works. Makes a GET request right after any POST.

The problem is making a history box, but I think it might be easy. I think I’ll just change it to an AD that keeps track of all previous messages and displays each one forever, in order.


Found a solution:

Checking fetched AD indexes against the current array of messages and only pushing messages that don’t exist in client state. No longer re-renders all messages.
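For what it's worth, that merge step can be sketched like this. The shape of a fetched entry (`{ index, text }`) is an assumption for illustration, not the actual AD entry format.

```javascript
// Merge freshly fetched entries into client state by index,
// appending only those not already rendered.
function mergeNewMessages(state, fetched) {
  const known = new Set(state.map((m) => m.index));
  const added = [];
  for (const msg of fetched) {
    if (!known.has(msg.index)) {
      state.push(msg); // append instead of clearing and re-rendering everything
      added.push(msg);
    }
  }
  return added; // only these need to be rendered
}
```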

Now I need to fiddle around with GET request interval.

@whiteoutmashups If you have time, I’d like to see if this thing works and I can see newly posted messages from other users in my client and vice-versa.


@dirvine hinted at implementing push notifications somewhere along the road:


I was under the impression that high-scale requests wouldn’t be a burden to the network in the same way they would be for a central server. That impression came from a Q&A video I watched, where an audience member asked about denial-of-service-type attacks.

Is this the case? Given that impression, why would polling be a burden on the network?
Where can I learn more about how that works?

Thank you for linking to that thread! :relaxed:


While that is true, it is still an increased load if you do, say, 100 GETs for every required GET. Caching is one of the reasons for the network’s resistance to DDoS, since every node is potentially a server, but multiplying the GETs just to get instant updates seems unnecessary, and it would be a real test for something like the Olympics, where 100+ million people are watching the opening ceremony.

Static content would simply end up with the closest nodes resending the cached content they hold. But live streaming requires looking for updated content, where caching is not quite as effective, and this is where the multiplication of GETs, if done on a large scale, would be noticeable.

Good programming practice is to do only as many GETs as are required to perform the task at hand. There is no need to check every millisecond for an updated stream/chat/post; people simply do not read or watch that fast.


Yes this is what I was trying to say.

Also, since the new messages are coming from your own app in the first place, you can leverage that and only do updates when new messages are created :slight_smile:

Naive caching, though perfect for immutable (i.e. content-addressed) data, can’t help with polling: either you get (potentially) stale data, or you have to bypass the cache.

I think when @dirvine says “deterministic caching” he means that changes to an item would ripple through to all nodes that hold a cached copy of it (selecting the cache nodes deterministically is one way to make that feasible), so we would have an arbitrarily large set of nodes making up an eventually consistent cache for the item in question. If one subscribed to push notifications, this would be handled by the closest node that is part of the cache for this particular item.