Thanks for the great overview.
A few questions come to mind that haven’t been addressed yet:
Will data ever be deleted from the network? Can it even be deleted? Say someone tries out MaidSafe, uploads a few gigabytes of videos, and then decides he doesn’t want to use MaidSafe, never returning. Will this content linger on the network forever, not accessible to anyone because the original owner deleted his keys, and keep being replicated over and over again as nodes drop off the network due to outages, DSL reconnects, or whatever? If it is never reclaimed, that presents a scalability issue over time, doesn’t it?
Connected to that: if I know I won’t need some data anymore, can I, as the owner, delete it from the network so that it stops taking up space and, above all, stops being replicated? Especially for the use case of a key/value database whose contents might change rapidly, it could be important not to store every data point forever.
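Just to make that concrete, here’s a toy sketch of the kind of mechanism I’m imagining. This is pure speculation on my part; the ChunkStore class and the reference-counting scheme are made up for illustration and aren’t anything from the MaidSafe design:

```python
# Hypothetical sketch: a chunk store that reference-counts owners, so a
# chunk can be garbage-collected once the last owner deletes it.
# NOT MaidSafe's actual mechanism; this is just what I'm asking about.

class ChunkStore:
    def __init__(self):
        self.chunks = {}    # chunk_id -> data
        self.refcount = {}  # chunk_id -> number of owners referencing it

    def put(self, chunk_id: str, data: bytes) -> None:
        # Deduplicated storage: a second owner of identical data just
        # bumps the reference count instead of storing another copy.
        if chunk_id in self.chunks:
            self.refcount[chunk_id] += 1
        else:
            self.chunks[chunk_id] = data
            self.refcount[chunk_id] = 1

    def delete(self, chunk_id: str) -> None:
        # The question: does anything like this last-reference cleanup
        # exist, or is an abandoned chunk replicated forever?
        if chunk_id not in self.refcount:
            return
        self.refcount[chunk_id] -= 1
        if self.refcount[chunk_id] == 0:
            del self.chunks[chunk_id]
            del self.refcount[chunk_id]
```

If nothing like that exists, abandoned uploads would accumulate and be re-replicated on every churn event, which is the scalability worry above.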
Also connected: say my node is a vault and currently stores terabytes of chunks. Then there’s a network outage. The vault managers lose the connection to my node and forget it, replicating “my” chunks somewhere else. Then my node gets connected again. Did I understand correctly that it will now get a completely new ID and end up in a completely different group of nodes, with no connection to the previous vault managers? That is, it has to earn rank first and can then maybe become a vault for others again, and all the chunks I held before are now useless and can be deleted from my file system, since they have been replicated elsewhere and my copies will never be accessed anyway?
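Again, to check my understanding with a toy model: the XOR-distance close-group function below is my own assumption about how Kademlia-style addressing works, not MaidSafe’s actual routing code. With a fresh random ID, the rejoining node lands in an essentially unrelated group:

```python
# Toy XOR-routing model (my assumption, not real MaidSafe code):
# a rejoining node draws a fresh random ID, so the set of peers
# "closest" to it, and hence its managers, changes completely.
import secrets

def close_group(node_id: int, all_ids: list[int], size: int = 4) -> list[int]:
    # The "group" of an ID: the peers nearest to it by XOR distance.
    return sorted(all_ids, key=lambda other: other ^ node_id)[:size]

network = [secrets.randbits(256) for _ in range(1000)]

old_id = secrets.randbits(256)  # ID before the outage
new_id = secrets.randbits(256)  # fresh ID after reconnecting

overlap = set(close_group(old_id, network)) & set(close_group(new_id, network))
print(f"shared managers before/after rejoin: {len(overlap)}")  # almost always 0
```

If that’s right, the terabytes on disk really are dead weight after a rejoin.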
Another question: each chunk is broken into 32 pieces. What about very small chunks? Say I’m not saving a file in MaidSafe but using it as a key/value database, and the whole chunk is only 12 bytes in size?
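The arithmetic is what puzzles me here: 12 bytes spread over 32 pieces is less than half a byte per piece. A naive split (my own toy code, not the actual self-encryption algorithm) shows the problem:

```python
# Naive split of a value into 32 pieces, just to illustrate the
# small-value problem. Not the real algorithm.

def split_into_pieces(data: bytes, n_pieces: int = 32) -> list[bytes]:
    # Ceiling division so every byte lands in some piece.
    piece_len = max(1, -(-len(data) // n_pieces))
    return [data[i:i + piece_len] for i in range(0, len(data), piece_len)]

value = b"only12bytes!"          # a 12-byte key/value entry
pieces = split_into_pieces(value)
print(len(pieces))               # 12, not 32: too little data to split
print([len(p) for p in pieces])  # 1 byte each
```

So presumably tiny values are either padded up to some minimum size (wasteful for a rapidly changing key/value workload) or take a separate small-data path entirely?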
EDIT: And another one: messaging, streaming, etc. are mentioned, but how are those implemented on top of this concept of data chunks stored on the network?
Thanks in advance for any answers. This whole concept is very fascinating.