How Safe is the SAFE Network


#1

I have a problem getting my head around the SAFE Network, and I have a particular problem with how things are actually controlled.

Step by Step the road to Fleming 2

describes how membership of the SAFE Network will be controlled, and suggests the following:

  1. Don’t blindly add all new vaults
  2. Balanced relocation
  3. Node ageing

My confusion arises from the SAFE Network FAQ that is recommended for beginners

What is an autonomous network

This states that an

Autonomous Network is one which has no human gatekeepers. Anyone is able to join and - crucially - no one can be prevented from taking part. … it also constantly reassigns to different groups in order to provide total security and privacy at all times.

The two statements taken together imply that the intention is to have:

  • An automatic (non-human) decision whether to allow access to the SAFE Network.

  • An automatic (non-human) decision to create balanced relocation.

  • An automatic (non-human) decision to take node ageing into account.

My problem is getting my head around whether any of these aims are actually achievable in a real world setting.

For example, they imply setting up a system that has no human control whatsoever.

That seems fine in theory, but in practice is horrendously difficult to do without errors. However much you tweak the system in its beta phase, once you release it to production you, by definition, lose control. The Network has to be 100% right first time. Not 99%, but 100%!

If it is not 100% right first time, which I believe is impossible, the SAFE Network will need some form of adjustment in due course. I understand that most adjustments will be done automatically via an algorithm, but unless that algorithm can predict all future events with 100% accuracy, manual human adjustments will still be needed at some point in the future.

The problem I can’t get my head around is that it seems extremely unlikely that the SAFE Network will never need human adjustment over time, but if there is a means of adjustment there will also be a means of hacking the system.


#2

Hey, welcome to the forum. The idea is indeed to let SAFE be SAFE without some people controlling a group of servers. Think of it like BitTorrent but without the trackers. Say a file was uploaded to a website and the website (and tracker) went down; thanks to the DHT, thousands of people can still download the file. SAFE does that in a more advanced manner, using different algorithms and no centralised tracker in the first place. And because it uses hashes to check data integrity, every file is correct down to the last 0 or 1.
A file on SAFE gets chunked into 1 MB pieces, and each piece is stored on at least 8 vaults in the network. If one or more vaults go down, the network will ‘heal’ this by recruiting different vaults to join the group. That way each chunk of data is always stored on 8 systems, and if you request that file, it comes your way, correct to the last bit.
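To make that concrete, here is a minimal Rust sketch of the idea described above: split a file into 1 MB chunks, give each chunk a content hash, and work out how many replacement vaults are needed when replicas are lost. The names (`chunk_and_name`, `replicas_needed`) and the non-cryptographic hash are my own illustration, not the actual SAFE implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const CHUNK_SIZE: usize = 1024 * 1024; // 1 MB, as described above
const REPLICA_COUNT: usize = 8;        // each chunk held by at least 8 vaults

/// Split a file's bytes into 1 MB chunks and give each chunk a content hash.
/// A real network would use a cryptographic hash (e.g. SHA-256); DefaultHasher
/// is used here only to keep the sketch dependency-free.
fn chunk_and_name(data: &[u8]) -> Vec<(u64, Vec<u8>)> {
    data.chunks(CHUNK_SIZE)
        .map(|chunk| {
            let mut hasher = DefaultHasher::new();
            chunk.hash(&mut hasher);
            (hasher.finish(), chunk.to_vec())
        })
        .collect()
}

/// Self-healing check: if a chunk's replica count drops below the target,
/// report how many new vaults the network would need to recruit.
fn replicas_needed(current_holders: usize) -> usize {
    REPLICA_COUNT.saturating_sub(current_holders)
}

fn main() {
    let file = vec![0u8; 3 * CHUNK_SIZE + 123]; // a ~3 MB example file
    let chunks = chunk_and_name(&file);
    println!("file split into {} chunks", chunks.len());               // 4 chunks
    println!("vaults to recruit after 2 failures: {}", replicas_needed(6)); // 2
}
```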


#3

Must be my age! :grinning: (mid-70s) but there was quite a lot in your answer that I had difficulty following.

I do understand how BitTorrent works, and I use a BitTorrent client, but these don’t work without being periodically updated and changed. BitTorrent clients have changed regularly since the first client was released in 2001, and all those changes have required human input.

Using BitTorrent costs me nothing, and a failure of BitTorrent would not cost me anything. A failure of the SAFE Network may prove very costly if I trust my ‘history’ to it the way that people currently trust Facebook with their ‘history’.

My bemusement is that, unless I’ve missed something, there seems to be no way for a human to change the SAFE Network algorithm.


#4

I think you’re going one level too meta on this one.

We humans are designing the SAFE Network. We are designing rules for these points. For instance, we are picking the acceptable size of network sections, how nodes move around in the network, how data is managed, and so on. Many decisions are taken here, and we are trying to be as transparent as we can be about the motivations for each one. Roughly, we try to think very hard about what can go wrong with certain ranges for each setting and exclude settings that we can already see won’t work.
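Just to illustrate what “settings” means here, a hypothetical sketch in Rust; the names and values are made up for the example, and the real parameters and their ranges are exactly what the design work fixes before release.

```rust
/// Illustrative (not actual) design parameters of the kind discussed above.
struct NetworkRules {
    min_section_size: usize,  // smallest acceptable number of nodes per section
    relocation_interval: u64, // how often nodes are relocated between sections
    min_age_for_trust: u8,    // node age a vault must reach before full trust
}

impl Default for NetworkRules {
    fn default() -> Self {
        // These numbers are placeholders; choosing and justifying them is the
        // human design work described in this post. At runtime, no one sits
        // at a console turning these knobs.
        NetworkRules {
            min_section_size: 8,
            relocation_interval: 100,
            min_age_for_trust: 5,
        }
    }
}

fn main() {
    let rules = NetworkRules::default();
    println!("sections must hold at least {} nodes", rules.min_section_size);
}
```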

The point is: while the network is running, no human will be sitting at a server farm and adjusting settings to decide how to route messages, who to accept in the network or any such thing.

Now, there is the question of network updates which is quite crucial. If updates were simply pushed by Maidsafe and trusted, an adversary could attack Maidsafe to attempt to push a malicious update that takes over the network. The idea long term is to have some kind of automatic vetting of updates + consensus between humans (don’t have to all be members of Maidsafe), maybe with multisignature schemes or such. In the early days of the network, it is likely that we will need to trust Maidsafe as a source of authority for the first few upgrades until we develop such a mechanism.
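As a purely illustrative sketch of the “consensus between humans” idea, here is a toy threshold-approval check in Rust for an upgrade proposal. The types, names, and the 3-of-5 quorum are assumptions for the example; a real multisignature scheme would verify cryptographic signatures rather than just counting names.

```rust
use std::collections::HashSet;

/// A proposed network upgrade and the identities that have signed off on it.
/// Everything here is made up for illustration.
struct UpgradeProposal {
    version: String,
    approvals: HashSet<String>,
}

/// Accept the upgrade only if at least `quorum` recognised signers approved it.
fn is_accepted(proposal: &UpgradeProposal, signers: &HashSet<String>, quorum: usize) -> bool {
    let valid = proposal.approvals.intersection(signers).count();
    valid >= quorum
}

fn main() {
    let signers: HashSet<String> = ["alice", "bob", "carol", "dave", "erin"]
        .iter().map(|s| s.to_string()).collect();
    let proposal = UpgradeProposal {
        version: "0.2.0".into(),
        approvals: ["alice", "carol", "erin"].iter().map(|s| s.to_string()).collect(),
    };
    // Require 3 of 5 signers before any node applies the upgrade.
    println!("upgrade {} accepted: {}", proposal.version, is_accepted(&proposal, &signers, 3));
}
```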

So upgrades are the sticky part for “non-human decisions”, but hopefully this clarifies what we mean by “an autonomous network”.

if there is a means of adjustment there will also be a means of hacking the system.

Correct. That’s why we will need to design a governance model to minimize that risk. No system will be 100% foolproof on this front. We’ll just try our best to create the best feasible system, with the help of the entire open-source community.


#5

Thanks for your answers @polpolrene and @pierrechevalier83

It’s a lot clearer now. Think I’ve got the gist of it.

The SAFE Network will run day to day without any human input, and you are still working on exactly how this will be best accomplished.

When the SAFE Network needs to be adjusted or updated, this will be done by humans (presumably by someone at MaidSafe), and you are also currently working on the process by which these updates will be undertaken.

Thanks for clarifying things.