I have been looking for a safe peer-to-peer technology for at least five years, and I appreciate the development here. My questions are about a specific scenario in Civil Protection:
My background is in disaster management applications used by GO/NGO catastrophe relief organisations. One of the smallest cells in this scenario is a “command post” (CP). You can think of it as a small working group on a local network only; we are typically talking about 3–10 computers. Especially at the beginning of such an event there is no internet connection. (Think of events like the 2010 earthquake in Haiti or the 2011 earthquake/tsunami in Japan.)
That said, especially at the beginning of such a situation the CP has to function on its own, without an internet connection. It makes sense not to rely on a local client/server architecture, because that just adds more single points of failure.
Since data in the SAFE Network is shredded and distributed, the question is: how does this work if you only have 2 or 3 machines?
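To make the question concrete, here is a minimal, purely illustrative sketch of the situation I am asking about. This is not how the SAFE Network actually works (its self-encryption and routing are far more involved); the node names, chunk size, and replication factor are all made up. The point is only to show that when data is chunked and replicated across a pool of just 2–3 machines, every machine ends up holding most of the data:

```python
# Hypothetical sketch, NOT the SAFE Network's actual algorithm:
# split data into fixed-size chunks and place each chunk on a few
# nodes, chosen by chunk hash. With only 3 nodes and 2 replicas,
# each node necessarily stores a large share of all chunks.
import hashlib

def chunk(data: bytes, size: int = 4):
    """Split data into fixed-size pieces (toy stand-in for shredding)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def assign(chunks, nodes, replicas=2):
    """Map each chunk to `replicas` distinct nodes, picked by its hash."""
    placement = {node: [] for node in nodes}
    for c in chunks:
        h = int(hashlib.sha256(c).hexdigest(), 16)
        for j in range(min(replicas, len(nodes))):
            placement[nodes[(h + j) % len(nodes)]].append(c)
    return placement

# Made-up CP machines for illustration
nodes = ["cp-laptop-1", "cp-laptop-2", "cp-laptop-3"]
data = b"staff roster and situation report"
placement = assign(chunk(data), nodes)
for node, held in placement.items():
    print(node, "holds", len(held), "of", len(chunk(data)), "chunks")
```

With 2 replicas across 3 machines, each machine holds roughly two thirds of all chunks, which is exactly why I wonder how the network's redundancy model behaves at this tiny scale.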
For bandwidth or “security” reasons, a CP might not want to spread its data to any machines outside a defined set (identified by, say, a key or a MAC address). Is this something that could/will be supported?
Thinking about a local workgroup where files are not only shared with the group but also edited by multiple members (e.g. XLSX lists for staff and utility planning, or DOCX situation reports): how does this integrate into the SAFE Network? Is there any kind of versioning built in? Or is there a locking feature, so that only one user can change a file while it is open?