Greetings, I have been examining a potential use-case of the MaidSafe network, or a derivative fork, for securing background investigation data held by the government. I am not a programmer or coder and thus my understanding of the nuts and bolts is somewhat limited. Nonetheless, I would appreciate the community's feedback on this idea. Is this possible from a technical standpoint? What are some weaknesses in the proposal? I have provided a link to the paper, which is hosted by Google. I'm not sure if this is the best method for sharing the paper either, so if you have any recommendations on that as well, please let me know.
I think a better question would be: why does the government need that information in the first place, when the owners of that data can store it on SAFE, maintain control of it, and create smart contracts and hashes based on it? Why hand it over to the government at all?
Good point, Blindsite2k. Unfortunately the government tends to like to keep control of their own pre-employment investigations. While it would be entirely possible for people to store their own personal identifying information on the SAFE network and only allow the government (or any employer) access to it on an as-needed basis, the government would most likely balk at storing their own investigations on it. By their nature, these investigations and judgments contain very intimate personal details, and if they are retained on government servers in the current centralized fashion they will continue to be a risk for any and all government employees.
-To be clear, this proposal is dealing solely with personal information submitted willfully by prospective and current federal employees and contractors in order to obtain a security clearance.
I’m not a technical person either, but from everything I know, I see no reason why governments couldn’t use the SAFE Network. It will be secure for everyone to store whatever they wish, securely and privately. No reason any government would be excluded.
The question is whether a government would. Maybe it would use a dedicated fork, running a couple thousand nodes around the world with large-capacity storage. Then it would be a question of keeping secure the data maps to the data, and any pre-storage encryption keys, which would probably be applied prior to network encryption, chunking and storage.
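To make the "pre-storage encryption, chunking, and data map" idea above concrete, here is a loose sketch in Python. To be clear, this is not MaidSafe's actual self-encryption algorithm: the chunk size, the SHA-256 keystream cipher, and the data-map layout are all illustrative assumptions, using only the standard library.

```python
# Loose sketch: encrypt a file BEFORE handing it to any storage network,
# then chunk it and build a "data map" of chunk hashes. Illustrative only --
# not MaidSafe's real self-encryption scheme.
import hashlib

CHUNK_SIZE = 1024  # illustrative; real networks use megabyte-scale chunks

def keystream(key: bytes, length: int) -> bytes:
    """Derive an XOR keystream from a key via a hash-counter construction."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_and_chunk(data: bytes, pre_storage_key: bytes):
    """Pre-encrypt, then split into chunks and record their content hashes."""
    ciphertext = bytes(a ^ b for a, b in
                       zip(data, keystream(pre_storage_key, len(data))))
    chunks = [ciphertext[i:i + CHUNK_SIZE]
              for i in range(0, len(ciphertext), CHUNK_SIZE)]
    # The data map records which chunks (by content hash) rebuild the file.
    # Keeping it -- and the key -- secret is the problem discussed above.
    data_map = [hashlib.sha256(c).hexdigest() for c in chunks]
    return chunks, data_map

def decrypt(chunks, pre_storage_key: bytes) -> bytes:
    """Reassemble the chunks and reverse the XOR keystream."""
    ciphertext = b"".join(chunks)
    return bytes(a ^ b for a, b in
                 zip(ciphertext, keystream(pre_storage_key, len(ciphertext))))

record = b"background investigation record " * 100
chunks, data_map = encrypt_and_chunk(record, b"agency-held key")
assert decrypt(chunks, b"agency-held key") == record
```

The point of the sketch is only that an attacker who captures the chunks learns nothing without both the data map and the pre-storage key, which is why those become the assets the government would need to protect.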
I’m thinking the behemoth will move too slow on this.
It is no secret that Tor was created by the US Navy. It stands to reason that they made it public and open-source in order to build a large crowd (a couple of million users last time I checked) for their spies to hide in. A Tor whose only users were a few hundred spies would be useless to the spies.
Something I learned from Kristov Atlas, a commentator on cryptographic matters, is that anonymity is not an absolute thing but simply a measure of how big a crowd one can hide within; if that crowd is big enough, it becomes impractical for an opponent to find you.
So if SAFE works, then their security services, at least, will probably use it, and they will use the main branch rather than a fork.
True for the “spy” use case.
For storing records such as those mentioned in the key post of this thread, I'm inclined to think that gov't types would be more comfortable using the technology on a mass of their own machines spread around various locations. But hierarchies tend to build on the castle-and-moat model, so at the institutional, bureaucratic level, complete distribution and trusting trustless systems is a challenge.
Of course. The idea that a government (any) could put citizens data on systems that are outside their legislative control is quite absurd.
I agree that it is unlikely that the Gov would allow nodes outside their direct control, that is why I suggested a proprietary system that could be applied solely to their own devices. The government has millions of hard drives, no reason they can’t be put to use. Although I fear you are correct about the behemoth moving slowly.
Ah, okay, so a private SafeCloud.
That is a fine use case, but don't forget that SAFE creates multiple copies of data.
For a private archive type of cloud, the preferred approach these days is erasure coding: a file is split into N chunks, M parity chunks are added for protection, and the N+M chunks are saved across N+M servers.
So with (say) 12+6 you save a file across 18 servers, and you can lose any 6 without any impact and despite this the storage efficiency is 12/18 (quite high).
With P-way replicated SAFE you can lose at most P-1 copies of a chunk, and the storage efficiency is 1/P, so a 4-way replicated SAFE would be less reliable and far more cost-ineffective compared to these modern approaches specifically designed for archiving.
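The 12+6 arithmetic above can be illustrated with the simplest erasure code there is: a single XOR parity chunk (N+1, RAID-5 style). This is a sketch of the principle only, not of any specific product; real archival systems use Reed-Solomon codes that tolerate M simultaneous losses, but the idea of rebuilding a lost chunk from the survivors is the same.

```python
# Minimal erasure-coding sketch: one XOR parity chunk over N data chunks.
# Losing any ONE of the N+1 stored chunks is recoverable by XOR-ing the rest.
from functools import reduce

def xor_all(chunks):
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def add_parity(chunks):
    """Return the data chunks plus one XOR parity chunk."""
    return chunks + [xor_all(chunks)]

def recover(stored, lost_index):
    """Rebuild the chunk at lost_index from all surviving chunks."""
    survivors = [c for i, c in enumerate(stored) if i != lost_index]
    return xor_all(survivors)

data = [b"aaaa", b"bbbb", b"cccc"]   # N = 3 data chunks
stored = add_parity(data)            # N + 1 = 4 chunks on 4 servers
assert recover(stored, 1) == b"bbbb" # lose any one server, rebuild its chunk

# The efficiency comparison from the posts above:
erasure_12_6 = 12 / 18     # 12+6 coding: ~0.67, survives any 6 losses
replication_4way = 1 / 4   # 4-way replication: 0.25, survives 3 lost copies
assert erasure_12_6 > replication_4way
```

With 12+6 you pay 1.5x the raw data size and survive six server losses; with 4-way replication you pay 4x and a chunk survives only three lost copies, which is the cost gap being pointed out above.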
I posted a comment on this forum this week in which I said these modern solutions are very good, secure and require almost no management (they tend to use a minimal or at worst stripped down OS). It’d be extremely tough to compete against such solutions for private cloud opportunities.
Thanks for the info, Janitor. It sounds like a better mousetrap already exists! While I still find the concept of secure distributed storage intriguing, perhaps this isn’t a particularly good use-case after all. Although I do wonder why the government isn’t using (or wasn’t recently) solutions like the one you mentioned. Seems like it would be a no-brainer.
Yeah, take a look at this news for an example of such large systems based on erasure coding.