The SAFE Network Primer: An Introduction to the SAFE Network


@JPL can you give me the InDesign files, please. I made a translation into Bulgarian and I want to publish it. :slight_smile:


The SAFE Network Primer is now available on the SAFE Network, as it should be, with all the above corrections incorporated. Thanks all for the input.


I had fun putting this together (and revisiting long lost and very rusty HTML/CSS ‘skills’ in the process) using @Shane’s fantastic SAFE-CMS. Seriously if you haven’t tried it yet give it a go. The first ‘official’ version will be launched on 11 March but it’s already pretty much there. It’s a super slick app and makes the whole process of creating a blog or safe:// website so much easier. :+1:


Very, very nice! And good looking markup too. :wink: Thanks!


This is the translation into Bulgarian of “The SAFE Network Primer: An Introduction to the SAFE Network”. Thanks to @jpl and @polpolrene! ръководство-BG.pdf


Really pleased you’ve done this. Thanks a lot @dimitar :slightly_smiling_face:


Oookay - I know this is kind of embarrassing… but since I didn’t find the time sooner to read through the primer I had the pleasure to do it this weekend on the safe network itself! Thanks a lot for this! :heart_eyes:


This is the translation into Spanish of “The SAFE Network Primer: An Introduction to the SAFE Network”. May I ask someone whose first language is Spanish to check it out?


I am confused by the following mentioned in the primer:
It says a chunk is encrypted using the hash of “another” part of the file, as in, not the hash of itself…
So how is opportunistic caching ever possible? Either the file has to be 100% identical, or the common “chunks” shared between files would have to be discovered by trial alignment against what can already be found on the network.

For example, say a bunch of text configuration files all contain about 1mb of common stuff:
(different stuff)
(1mb common stuff)
(different stuff)

For there to be any opportunistic caching, the algorithm to chunk things up will have to align that 1mb chunk differently for different sources and see that chunk is already on the network to take advantage of it.

So any opportunistic caching at all sounds a little far-fetched and computationally intensive.

(I think I might be confusing the term opportunistic caching with something else, whatever reduces data redundancy by reusing shared chunks between files?)


Self-encryption will compress and then encrypt this (after compression the data won’t be similar at all, even pre-encryption).

De-duplication (which I think is what you mean) is where the network sees the chunk is already stored and does not need to store it again. As chunks are named with the hash of their content, we can be sure (collisions aside) that the chunk is identical, and securely so.
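The naming scheme described above can be sketched in a few lines. This is a toy content-addressed store (the class and method names here are invented for illustration, not the network’s actual API): a chunk’s name is the SHA-256 hash of its content, so storing the same bytes twice is automatically a no-op.

```python
import hashlib

class ChunkStore:
    """Toy content-addressed store: a chunk's name is the hash of its content."""
    def __init__(self):
        self.chunks = {}

    def put(self, data: bytes) -> str:
        name = hashlib.sha256(data).hexdigest()
        # If the name already exists, the content is (collisions aside)
        # identical, so storing again changes nothing: de-duplication for free.
        self.chunks.setdefault(name, data)
        return name

store = ChunkStore()
a = store.put(b"common chunk")
b = store.put(b"common chunk")   # same content from a different uploader
assert a == b and len(store.chunks) == 1
```

Two uploaders who happen to produce an identical (encrypted) chunk get the same name, and the network only ever holds one copy.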

Opportunistic caching is where nodes keep a copy of any chunk they see in a first-in, first-out (FIFO) memory pool. So if a chunk is requested again, it may exist en route to the real chunk’s location and be returned early.


Just to add to @dirvine’s reply, and perhaps clear up what seems to be a misunderstanding:

Self-encryption will only produce the same chunk for the “1mb common stuff” if the preceding data is exactly the same, because self-encryption uses the previous parts of the file to encrypt the following parts.

So in order to produce the exact same chunk, the file being self-encrypted has to be identical, up to that point, to another file that was stored. Caching only ever handles the actual chunk (still encrypted when cached), so only the hash of that (encrypted) chunk is needed to identify a match.
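The dependency between chunks can be sketched like this. Heavy hedging applies: this is a toy model only, with a throwaway XOR keystream standing in for the real cipher (actual self-encryption uses AES and a more elaborate key/nonce derivation), and all function names here are invented. It only demonstrates the one property under discussion: an identical middle chunk gets a different encrypted name when the data before it differs.

```python
import hashlib

CHUNK = 4  # absurdly small chunk size, just for illustration

def split(data: bytes):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def toy_encrypt(plain: bytes, key_material: bytes) -> bytes:
    # Throwaway XOR "cipher" keyed off a neighbouring chunk's hash.
    # The real network uses AES; only the key dependency matters here.
    stream = hashlib.sha256(key_material).digest()
    return bytes(p ^ stream[i % len(stream)] for i, p in enumerate(plain))

def self_encrypt(data: bytes):
    """Return the names (hashes) of the encrypted chunks of `data`."""
    plain = split(data)
    hashes = [hashlib.sha256(c).digest() for c in plain]
    names = []
    for i, c in enumerate(plain):
        # Each chunk is keyed off ANOTHER chunk's plaintext hash (here the
        # previous one, wrapping around for the first), so an identical
        # chunk only encrypts identically if its neighbour is identical too.
        names.append(hashlib.sha256(toy_encrypt(c, hashes[i - 1])).hexdigest())
    return names

same = self_encrypt(b"AAAABBBBCCCC")
diff = self_encrypt(b"XXXXBBBBCCCC")
# The middle chunk b"BBBB" is common to both files, yet its encrypted
# name differs because the chunk before it differs:
assert same[1] != diff[1]
# Whereas the exact same file always yields the exact same names:
assert self_encrypt(b"AAAABBBBCCCC") == same
```

That is why de-duplication works at whole-file (or identical-prefix) granularity rather than on arbitrary shared regions: the shared plaintext alone is not enough to reproduce the stored chunk.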