CryptoBeadles interview of Dug Campbell


SAFE Network Dev Update - January 17, 2019

I love when my day begins with Dug! :love_letter::boxing_glove::love_letter: :wink:


Make sure to give the video a thumbs up, comment, share, etc. Safe Network fans can help A LOT to bring more attention to the project if we participate in the marketing.


Great video. I hope the Marketing Department and Designers were listening when CryptoBeadles said “people just want to push a button and get going…”. Anything more than 1 or 2 steps and people like me will lose interest


Good video, though I would like more use cases to be covered. I remember hearing from David during DevCon about the computer or mobile being just a device with no personal data on it, so if it gets stolen, broken etc. you just go buy a new one, log in to SAFE, and all the personal data is there - this blew my mind. It would be good to share more use cases during the interviews, ones that show the potential this product will give to people.


Haha, thanks for the support @dimitar! :+1: All good points re UX & use cases - hopefully we’ll get the chance to go into a bit more detail in different interviews in the future - but any other suggestions about content that you’d want to see in this sort of video if you were coming across the project for the first time on YouTube etc, just shout - feedback definitely helps!


Brilliant interview @dugcampbell All of us dev folk do feel your pain. We won’t be long man, we won’t be long, still more to be done, but if we get routing fixed properly then we have crossed the biggest hill we have. Roll on Fleming, that is my current fixed point to reach.


Yes, very nice - clear and persuasive. Good interviewer too (not come across CryptoBeadles before). Nice to have a host who actually listens to what you’re saying…


You got me worried now David. Did you mean “when we get routing fixed…:rofl::joy:” or “if we get routing fixed…”:sunglasses::sunglasses:


@Cryptoskeptic David might be referring to this. A good catch! I wonder if they caught it because they’ve expanded beyond soak tests? @pierrechevalier83? Either way I bet it’s one of the last real hang-ups, but reeeeally fortunate it happened now rather than later. :slightly_smiling_face:


Ah yes, cheers for the pick up, you are spot on, it is when :smiley: :smiley:


I think David was just referring to our work on finalising the overall routing design.

We caught this malice opportunity while looking at the implementation of a related task regarding spam malice.


Finally had the chance to watch this during lunch today, really great interview! :+1: My biggest takeaway is that I need to visit Scotland :smiley:


Well I thoroughly recommend it, but I must say I’ve never been to that Glass-Cow place he mentions :wink: . I’ll have to look it up.


I thought this was a good interview. I’m just wondering about Dug’s question about the speed of the network, which wasn’t quite explained. He had a great point: if people must sit and wait for a video to load, mass adoption will be difficult. Is it not the case that if a video becomes popular then more copies of its data chunks are created, therefore lending to greater download speeds?


In theory that is how it should be. Lately I saw a commit on GitHub where PARSEC scalability was tested and that was not the case… the speed was getting slower as more nodes were connected. Maybe someone from the Routing team can confirm/deny this?


I don’t think this is related; PARSEC deals specifically with vote ordering for consensus, whereas producing replica copies of data (Opportunistic Caching) is handled by Vaults. But I agree that someone more technical than I am should confirm.

Agree with your point @Michael_Hills that latency is a big issue, the network needs to be better than what we have today to push people through the inconvenience of switching.


AFAIK PARSEC is the only ABFT algorithm being tested in a permissionless, open-source setting, and methodically so. As more nodes connect it will be slower (obviously); the key is making the slowdown less than linear. So you will see many tests in this area, as well as some others on the side relying more heavily on node age and data chains. All together these algorithms can produce something quite remarkable. Sometimes we will push tests we know are slower to confirm our maths. That is all cool though, and good.


Sounds very interesting. I know that you will make it work, no doubt in my mind.


Yes, caching.

But way more important is that the first chunk might take just under a second to arrive; the rest, being buffered up, will arrive micro/milliseconds later. It can be parallel retrieval.

On a typical server (like YouTube) the blocks all come from one server, one after the other, and buffering has to be reasonably large in an attempt to keep the video smooth. If the server becomes loaded then the vid stops and starts.

But on SAFE buffering should be able to be smaller, since each chunk should arrive with only a smallish lag time, so a few seconds of buffering should be enough - though the user’s ISP could be loaded too, so it needs to be a bit more. Unlike the server model, SAFE allows parallel access of the chunks, which come via different routes, thus eliminating the problem of server delays.

In theory you could ask for 1000 chunks and they will all arrive at the ISP’s routers at around the same time; the delay is your link speed.

Even IF that was a “problem”, for a video the chunks are coming from many sections, so parallelism is still at play. So if the slowdown is 10% when adding 25% more nodes, video playback will not be affected - it will still have full buffers.
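The point above can be sketched in a few lines. This is a hypothetical illustration, not SAFE Network code: the chunk count, per-chunk latency, and `fetch_chunk` stand-in are all assumptions, chosen only to show why parallel retrieval bounds the wait time by the slowest single chunk rather than by the sum of all chunk latencies, as in the serial server model.

```python
# Hypothetical sketch (not SAFE code): serial vs parallel chunk retrieval.
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK_LATENCY = 0.05  # assumed network lag per chunk, in seconds
NUM_CHUNKS = 10       # assumed number of chunks in the video segment

def fetch_chunk(index):
    """Stand-in for retrieving one chunk from a section of the network."""
    time.sleep(CHUNK_LATENCY)  # simulate one network round trip
    return f"chunk-{index}"

# Serial retrieval (server model): per-chunk latencies add up.
start = time.monotonic()
serial = [fetch_chunk(i) for i in range(NUM_CHUNKS)]
serial_time = time.monotonic() - start

# Parallel retrieval (chunks arrive via different routes at once):
# total wait is roughly the latency of the slowest single chunk.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=NUM_CHUNKS) as pool:
    parallel = list(pool.map(fetch_chunk, range(NUM_CHUNKS)))
parallel_time = time.monotonic() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

With these toy numbers the serial path takes about ten times as long as the parallel one, which is the intuition behind the smaller buffers described above.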