No, disagreed. That is simple byte transfer, encryption, routing, etc. Video streaming is what is done with the bytes afterwards.
HTTP streaming is the one I was just mentioning because it supports the most things. It can be anything: pick your favorite streaming format and code can be written to feed a SAFE byte stream into it. The bytes come a chunk at a time of course, but will appear to stream.
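To make the "chunk at a time, but appears to stream" point concrete, here's a minimal sketch (my own illustration, not any SAFE API): a generator that hands out fixed-size chunks from any byte source. A player consuming the iterator perceives a continuous stream even though delivery is chunked.

```python
import io

def stream_chunks(source, chunk_size=64 * 1024):
    """Yield fixed-size chunks from any file-like byte source.

    The transport only ever moves a chunk at a time, but a consumer
    iterating over this generator sees a continuous byte stream.
    """
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Demo: a 200,000-byte "file" arrives as 64 KiB chunks but reassembles exactly.
data = b"x" * 200_000
chunks = list(stream_chunks(io.BytesIO(data)))
```

Swap `io.BytesIO` for a socket, a file, or whatever SAFE hands you and the consumer side doesn't change; that's the sense in which "pick your favorite format" works.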
So, by the look of that image, that's a web-client thing, i.e. the upper layers outside of SAFE's "simple byte transfer"?
Great, now let's say we have the SafeOS and/or the SAFE browser… what are we looking at? Are we still using WebRTC, or do we also have a video camera streaming bytes straight out of an RJ45 jack? Does the camera run Linux and have a 'Vault' in it?
Don't we just ditch HTTP, FTP, SMTP…? I'm struggling to understand which protocols we're bringing with us into the SAFE world, which we're leaving behind, and in particular what the SAFE stack really enables.
Maybe it does come down to bytes in / bytes out: use your imagination, anything is possible. And if that's the case, the machines on the end points become greatly simplified, it would seem.
Ok, this changes things a bit and is quite difficult. SAFE is more about storage than data broadcast, but I believe it can be done without disturbing the network. Anyone else reading this should note that streaming existing video/audio is much simpler. For this part I'll just dump a stream of consciousness about "live" streaming (sorry, I am a bit verbose)…
I am very familiar with it, and it is definitely NOT the thing for what you want. Firstly, WebRTC is point-to-point: want to broadcast to 50 people? I hope your upload speed is up to par. It should also be noted that WebRTC has data channels, which are akin to what I'm talking about here (sure, it has media sources too, but the underlying protocol is just as I mentioned: just bytes). Also, while WebRTC is encrypted, it is not anonymous (especially if you use TURN/STUN for NAT busting, where you have to trust someone).
Please don't look at that image; that is not broadcasting. Sure, you could leverage WebTorrent or whatever, but again you lose anonymity, and under the hood you are just obtaining ordered chunks of bytes, BitTorrent style. Think of putting data on SAFE the same as putting data in a torrent stream (even though torrents don't really "stream" live per se).
There are not a lot of distributed live streaming technologies out there, but P2PTV is one. It basically works like BitTorrent, and SAFE works in a similar way: the more people requesting a chunk, the more places it's copied, IIRC. I assume SAFE is just like BitTorrent in that to find a set of data you have to hit the DHT for peers. Yes, you may have to wait a bit for it to buffer enough to start.
For any distributed live stream to work, you need two major things IMO: 1. you must buffer a bit; 2. the download rate (bytes/sec) must be no slower than the playback bitrate for reasonable playback. Some people receive data faster than others, and this is where the "end points" actually get greatly complicated. It's the client that must decide what bitrate/speed/etc. it can accept, not the broadcaster. Luckily some protocols have this built in (known as "adaptive bitrate"), so we definitely wouldn't be starting this work over from scratch.
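The buffering requirement in point 1 follows directly from point 2, and the arithmetic is simple enough to sketch (my own back-of-envelope function, not anything from SAFE): if bytes arrive at rate r and play out at bitrate b, starting playback after a delay s keeps r·(s + t) ≥ b·t for all t, which is tightest at the end of the clip.

```python
def startup_buffer_seconds(duration_s, playback_bps, download_bps):
    """Minimum startup delay so playback never outruns the download.

    Derived from r*(s + t) >= b*t for all t in [0, duration_s]; the
    worst case is t = duration_s, giving s = D * (b - r) / r.
    A live stream has unbounded duration, so it effectively requires
    download_bps >= playback_bps or an adaptive-bitrate step down.
    """
    if download_bps >= playback_bps:
        return 0.0
    return duration_s * (playback_bps - download_bps) / download_bps

# Demo: a 100 s clip at 1 Mbit/s over an 800 kbit/s link needs a 25 s head start.
delay = startup_buffer_seconds(100, 1_000_000, 800_000)
```

This is exactly why the client, not the broadcaster, has to pick the bitrate: only the client knows its own r.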
If I built a live streaming app, here's how I would do it. The "broadcaster" app would ask for stream and output options (e.g. WebM/VP9 + 300k/sec + 720x486, and MP4 + 1500k/sec + 1280x720… I dunno, I made these up in my head). Then I would write some structured data somewhere, basically constantly updating the latest known chunks out there for the different formats (this is the way directory metadata is written in SAFE). Then I'd take your stream on your client (depending on several factors of how to ingest it), probably feed it through ffmpeg to get to your output, chunk it, and store a new self-encrypted chunk every ten seconds or so (all the while updating the metadata).
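The broadcaster loop above can be sketched in a few lines. Everything here is hypothetical: a plain dict stands in for the network's hash-addressed chunk store, `put_chunk` stands in for whatever the real PUT call looks like, and the manifest plays the role of the constantly rewritten directory metadata.

```python
import hashlib

# Stand-in for the SAFE network: immutable chunks keyed by content hash.
store = {}

def put_chunk(data):
    """Store a chunk under its SHA-256 and return the hash (hypothetical PUT)."""
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

def broadcast(segments, manifest=None):
    """Append each ~10 s media segment as a new immutable chunk, keeping
    the manifest (the 'directory metadata') pointing at all hashes so far.

    In a real app the manifest would be rewritten on the network after
    every segment so clients can discover the newest chunks.
    """
    manifest = manifest if manifest is not None else {"seq": 0, "chunks": []}
    for seg in segments:
        manifest["chunks"].append(put_chunk(seg))
        manifest["seq"] += 1
    return manifest

# Demo: three 10-second segments go out as three hash-addressed chunks.
manifest = broadcast([b"segment-0", b"segment-1", b"segment-2"])
```

One format per manifest keeps it simple; the multi-bitrate case is just several manifests, one per output option, like HLS/DASH variant playlists.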
The client obtains this metadata regularly to get the hashes for the chunks as they come in and shows them in your player of choice. I mentioned HLS/DASH before because they have solved the problems of switching bitrates, choosing streams based on client capabilities, and properly buffering.
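The client side is the mirror image: poll the metadata, fetch any hashes you haven't seen, verify, and hand the bytes to the player. Again, everything here is a hypothetical stand-in (dict for the network, `poll` for the metadata re-read), just to show the shape of the loop.

```python
import hashlib

# Stand-in for the network: hash-addressed chunks plus a mutable manifest.
store = {}

def put(data):
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

manifest = {"chunks": [put(b"seg-0"), put(b"seg-1"), put(b"seg-2")]}

def poll(manifest, last_seen):
    """Fetch only the chunks the client hasn't played yet, verifying each
    hash on arrival (the GET-by-hash that the metadata makes possible)."""
    fresh = []
    for h in manifest["chunks"][last_seen:]:
        data = store[h]
        assert hashlib.sha256(data).hexdigest() == h  # integrity check
        fresh.append(data)
    return fresh, len(manifest["chunks"])

new, last_seen = poll(manifest, 0)            # first poll: everything so far
manifest["chunks"].append(put(b"seg-3"))      # broadcaster adds a segment
later, last_seen = poll(manifest, last_seen)  # next poll: only the new chunk
```

Because chunks are content-addressed, the client can verify every segment locally; the only thing it has to trust fresh each time is the manifest.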
Now, there will have to be tests on file discovery speed and file distribution speed, especially with encryption. I'm afraid we may end up with very compressed video this way. Another problem is the expensive cost of such a thing from a safecoin perspective. Really we just have to test to see how long it takes and how distributed the data becomes, how long it takes to rechunk and put it back together for many clients, how it scales, and how much it costs to constantly be writing things to the network. Even though the live streaming data may appear ephemeral to you and me, it's there basically forever (don't fool yourself, "delete" just changes directory metadata). But that provides built-in recording.
Finally, since the data is shared, it can't easily be encrypted for each viewer, but I trust the chunks are themselves chunked by SAFE and encrypted at rest even though they are essentially public data. I hope that provides each farmer/user some plausible deniability even for "public" data. Similarly, I trust it would not be clear to a sniffer which data hashes you are requesting (which would give away what you're watching).
Sorry, I was just rambling here, but this is how I’d do it…
Wow, thanks for the input, some great stuff in there.
Yes… live anything really: games, MIDI, music recording… that's what made me wonder whether the capability is in the core (ready to be leveraged via the API) or requires more R&D, like a native search function.
I guess the big change (as with P2PTV) is not requiring POPs in data centers to make it happen. When I researched video CDNs for streaming live broadcast in Australia, I quickly found that some of the cheaper options would require a round trip to Asia or the USA for a user in Australia to view.
I would have thought latency of the chunks would be important for streaming, but in most discussions here it's said chunks come not from physical closeness (latency) but from XOR space… that one seems to defy physics when put in the context of streaming live…
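For anyone puzzled by "XOR space": placement is decided by the bitwise XOR distance between a chunk's name (its hash) and node IDs, which has nothing to do with geography. A tiny illustration, assuming only the general Kademlia-style closest-nodes idea (the node names here are made up):

```python
import hashlib

def xor_distance(a, b):
    """XOR distance between two 256-bit IDs given as hex strings."""
    return int(a, 16) ^ int(b, 16)

def closest_nodes(chunk_name, node_ids, k=2):
    """The k nodes 'closest' to a chunk in XOR space. Note that nothing
    here involves physical location: a node in Sydney and one in London
    can be XOR-neighbors."""
    return sorted(node_ids, key=lambda n: xor_distance(chunk_name, n))[:k]

chunk = hashlib.sha256(b"video-chunk-42").hexdigest()
nodes = [hashlib.sha256(name).hexdigest()
         for name in (b"node-sydney", b"node-london", b"node-tokyo", b"node-nyc")]
holders = closest_nodes(chunk, nodes)
```

The physics concern is real, which is why caching on popular chunks (copies migrating toward requesters) matters so much for streaming.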
I believe there was a comment somewhere here that mentioned the following:
Crowdfund someone to live-upload a World Cup match. Then anyone who wanted to watch could GET it from the network. Sure there'd be a delay, but that'd be half a minute tops, less with caching. Perfectly fine for someone at home or even small bars streaming HTPC-style.
Basically dumbing down @cretz’s incredible post (which I thoroughly enjoyed BTW!)
Just caught https://www.oddnetworks.com/ on HN. I am going to dig a bit, but they might provide a nice abstraction over all of the frontend quirks I was planning on implementing myself. I will report back with whether this is a worthy media frontend to safenet.
Note I also opened an RFC for RFC - Direct Data API in Launcher to allow more direct data access to help this… but I think I need to go back and ask them to stop encrypting data on local-to-local communication because it may slow things down. We'll see… I may just have to write it in Rust and access things directly.
The Oddworks Platform is written for the Node.js runtime, and uses the well known Express.js framework for HTTP communication.
Oddworks is designed to be database agnostic so long as the underlying database can support JSON document storage, including some RDBMSs like PostgreSQL. Currently the only supported and tested database is MongoDB.
Although communication between the devices and the REST API is typically done in a synchronous way, the inner guts of the system is designed to communicate via asynchronous message passing. This makes it easier to extend the platform with plugins and other loosely coupled modules without worrying about upstream changes like you would in tightly coupled platforms.
The Oddworks Platform consists of two main concepts:
The Oddworks Content Server which maintains a database of your content and provides it to your apps via a strictly specified JSON API.
The Oddworks Device SDKs which are designed to consume the content from the Oddworks Content Server as well as report usage data back to it.