What exactly happens when a particular file or safecoin address gets DOSed?


#1

And are the methods for mitigating the attack the same for safecoin addresses and files?

From what I’ve read, the transaction managers for a safecoin address all have to agree on the balance of the address before sending it to a requester. What happens if I send thousands of requests a second for an address’s balance? Or an address QR code gets shown live on the Super Bowl? Or I send money between two of my addresses indefinitely (since there are no transaction fees)? Will MaidSafe add potentially thousands of new address managers, of which 3/4 will have to agree on the balance/transaction (which seems unscalable)? Or will the address simply be unreachable/unusable until the DOS stops?

Thanks in advance. Couldn’t find an answer to this question using the search.


#2

Transaction managers are groups (of 32) and they all filter traffic. This filter is to prevent replay attacks. If you tried to replay loads of data and were a vault, you would be de-ranked and disconnected. If a client, then you talk through your own close group, who would not be too happy (they also filter you) :slight_smile:
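Roughly, the idea is this (a toy sketch, not the actual vault code; the hash-based message ID and the expiry window are assumptions):

```python
import hashlib
import time

class MessageFilter:
    """Toy per-node message filter: remember recently seen message IDs
    and drop repeats, so a replayed request is never processed twice."""

    def __init__(self, expiry_seconds=60.0):
        self.seen = {}                    # message ID -> time first seen
        self.expiry = expiry_seconds

    def accept(self, payload: bytes) -> bool:
        now = time.time()
        # Evict expired entries so the cache stays bounded.
        self.seen = {m: t for m, t in self.seen.items()
                     if now - t < self.expiry}
        msg_id = hashlib.sha256(payload).hexdigest()
        if msg_id in self.seen:
            return False                  # repeat: filtered out
        self.seen[msg_id] = now
        return True

f = MessageFilter()
print(f.accept(b"GET balance of address 5555"))  # True: first time through
print(f.accept(b"GET balance of address 5555"))  # False: replay is dropped
```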


#3

I should add as well that this is the “poking holes in the sea” thing I chat about. If you did manage to DOS a machine in the group, the next closest takes its place, and so on.
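A toy sketch of that replacement rule (illustrative only; real IDs are large hashes, not 5-bit integers):

```python
# An address's close group is simply the k nodes nearest to it by XOR
# distance, so knocking the nearest node offline just promotes the next.

def close_group(nodes, address, k=4):
    return sorted(nodes, key=lambda n: n ^ address)[:k]

nodes = {0b00010, 0b00101, 0b01001, 0b01100, 0b10011, 0b11010}
target = 0b01000

group = close_group(nodes, target)
print([bin(n) for n in group])                        # nearest four

nodes.discard(group[0])                               # DOS the nearest node
print([bin(n) for n in close_group(nodes, target)])   # next closest steps in
```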


#4

@dirvine:

I should add as well that this is the “poking holes in the sea” thing I chat about.

Love it. I hadn’t heard that before. Another gold-dust phrase to nick for my slides and file of quotes :slight_smile:


#5

Ah, I see. So the nodes closest to you that you connect with are the ones that would attempt to block you, or would get replaced if they become unreachable. Meaning that the safecoin address would always be reachable. That’s pretty smart.

What are the rules for nodes in your group to block connections? I can’t think of an example right now, but I’m sure there will be apps that require a high volume of communication, like some kind of live game. We wouldn’t want nodes thinking that was a DOS attack.


#6

At the moment every message is firewalled (to prevent repeats). De-ranking will be done where nodes deliver incorrect data (we check hashes and signatures). The rules will grow here a lot after launch. The key is simplicity, though, and that’s hard: it’s easy to think of a rule, but it’s incredibly amazing how quick-fix rules blow up the system (or can). This is a real area where fundamental types have to be rigorously gone over to ensure no hasty rules are ever implemented (many folks quickly come up with a rule and the side effects manifest in really invisible and bad ways).
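One example of a rule that stays simple because it relies on provable wrongdoing: chunks are named by the hash of their content, so data that doesn’t hash to its name is provably bad. A toy sketch (the rank values and vault names are made up):

```python
import hashlib

rank = {"vault_a": 10, "vault_b": 10}

def check_delivery(vault, chunk_name, data):
    # A chunk's name is the hash of its content, so a mismatch proves
    # the delivering vault returned bad data.
    if hashlib.sha256(data).hexdigest() != chunk_name:
        rank[vault] -= 5          # de-rank the misbehaving vault
        return False
    return True

good = b"some chunk bytes"
name = hashlib.sha256(good).hexdigest()
print(check_delivery("vault_a", name, good))         # True
print(check_delivery("vault_b", name, b"tampered"))  # False
print(rank)                                          # vault_b lost rank
```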


#7

Thanks, David. Where can I read about what happens when a file or safecoin address is legitimately overwhelmed? I can’t remember where I read it, but additional copies are made to handle the requests. But in cases like safecoin addresses, which are maintained by transaction managers, how many managers can be added? And how many of them need to reach consensus on the balance of an address? How do they deal with high traffic?


#8

No worries. If what you are after is not in the system docs, then we will update them to make sure it is.


#9

I will look forward to it. Meanwhile, would you mind a 3-sentence explanation? :smile: Just in layman’s terms, what happens when a file or safecoin address becomes overwhelmed with retrievals/transactions?


#10

It does not get overwhelmed; the XOR space means that as nodes vanish there is another right there (poking a finger in the sea again). So to overwhelm a node you need to be connected to it (unlikely), and if you are, then you use all your bandwidth to try and DOS it whilst another 31 nodes try to speak to you. If you knock one off, another takes its place and you need to speak to that one too. You are then surrounded by a (somewhat) different bunch of nodes, so you are in trouble.

In terms of DDOS, you would really need to DDOS the entire network, and you are not connected to all of it, so you need to hop across nodes who will filter you, etc.

This is all XOR networking, etc. It’s like trying to bring down BitTorrent’s DHT, but with an awful lot more security in mind and with no node accepting connections from nodes that are not near it or that do not improve its routing table. If you check my blog I do try and explain some of this. More to follow when I am finished with routing_v2, which will show people very clearly what can be done with a secured DHT and why even that is not enough, but it’s eons better than what exists.
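For the curious, the “always another right there” property falls straight out of the XOR metric; a tiny sanity check with toy 8-bit IDs:

```python
a, b = 0b10101100, 0b01100110

assert a ^ a == 0        # your distance to yourself is zero
assert a ^ b == b ^ a    # distance is symmetric

# For any ID and any distance there is exactly ONE ID at that distance,
# so "the closest node to an address" is always well defined, and when
# it vanishes the next closest is unambiguous too.
d = 0b00001111
assert a ^ (a ^ d) == d  # the unique ID at distance d from a
print("XOR distance sanity checks passed")
```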


#11

Amazing, thanks for the explanation. That does indeed sound like a very good solution. I’ll definitely be doing more research on xor networking!


#12

@dirvine’s blog:

@erick’s lecture:

:wink:


#13

An additional question about XOR networking: according to the lecture posted by @ioptio, each chunk is stored on the node(s) most closely matching the hash of the chunk. If that is the case, it seems that all four copies of a chunk would be stored on machines that are close to each other, possibly sharing the same close group of data holder managers. Surely that can’t be right, so what piece of information am I missing here? How is the node on which a chunk should be stored actually determined?

Thanks in advance.


#14

Actually, I think I just figured it out. Correct me if I’m wrong here:

A chunk is NOT stored on the nodes whose IDs most closely match the chunk’s hash. Instead, it is passed to the data managers whose IDs are closest to the chunk’s. They then distribute the chunk more or less randomly across the network (taking vault ranks into consideration), and it’s their job to record the exact node IDs that store their chunks. So all requests for a particular chunk will always go through the same data manager group, as they are the only ones that know the chunk’s location.
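If that reading is right (the replies below debate it), the indirection might look like this toy sketch, with made-up vault names:

```python
# chunk name -> vault IDs currently holding copies; only the chunk's
# data manager group knows this mapping, so every request routes to them.
data_managers = {}

def store(chunk_name, holder_ids):
    data_managers[chunk_name] = set(holder_ids)

def locate(chunk_name):
    return data_managers.get(chunk_name, set())

store("5555", {"vault_3137", "vault_90a2", "vault_17fe", "vault_c4d1"})
print(locate("5555"))    # any GET for 5555 resolves through this group
```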


#15

This is quite correct. But when it comes to closeness, keep in mind that we’re talking about closeness based on bitwise XOR. So, you connect to some IP addresses, then you get your personal file with the data atlas inside. This is a personal file only you can decrypt. In that file there’s your routing table with, say, 64 node IDs. You connect to the four closest nodes (probably a bigger number with routing V2) based on XOR. So you might be in the US while your four closest nodes are in Germany, Japan, the UK and Belgium. This has nothing to do with geographical location; it’s closeness based on some math. All the nodes and vaults have an ID in this XOR space.

So, if you want to store a chunk, a lot of things happen. Your balance is checked, etc. But when you have enough money and you pass a chunk to your client managers (again, your four closest nodes), they will just forward it for you. A chunk with a name like 5555 should ideally be stored at node 5555, but with addresses of something like 256 bits, the chance is quite low that that exact node will exist. So each node receiving a chunk will pass it on to a node that’s closer to that address than it is itself. Finally, after a number of steps, a node will find that no other node it knows of is closer to 5555 than it is. At that moment it will store the chunk. Because XOR is indeed quite random (although it is very precisely calculated), the chunk can go around the world twice before it finds the right vault.

When you request the chunk again, you ask your four closest nodes for a chunk stored at address 5555. They will ask their closest nodes to 5555 for the chunk, and these closest nodes will do the same. Again, in a number of steps the right vault is contacted and the chunk will come your way. The chunk is protected by a number of managers and stored on different vaults, so if one vault goes down, the managers will make sure that the file is copied to another machine.
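That closest-node walk (the same idea covers both storing and fetching) can be sketched in a few lines of Python. This is a toy model: random 16-bit IDs and random neighbour lists stand in for real, structured routing tables, so take the hop counts with a grain of salt:

```python
import random

random.seed(1)
node_ids = random.sample(range(2**16), 50)            # toy 16-bit node IDs
neighbours = {n: random.sample(node_ids, 8) for n in node_ids}

def route(start, chunk_name):
    """Hand the chunk to the closest known neighbour until nobody
    closer (by XOR) is known; that node stores it."""
    current, hops = start, 0
    while True:
        best = min(neighbours[current], key=lambda n: n ^ chunk_name)
        if best ^ chunk_name >= current ^ chunk_name:
            return current, hops
        current, hops = best, hops + 1

dest, hops = route(node_ids[0], 0x5555)
print(f"chunk 0x5555 stored at node {dest:#06x} after {hops} hops")
```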


#16

Either you oversimplified the nodes, or I’m now misunderstanding something.

you pass a chunk to your data managers (again, your four closest nodes)

You mean your client managers?

A chunk with a name like 5555 should ideally be stored at node 5555

By “stored”, you actually mean “managed” by data managers, correct? So a chunk with a name like 5555 will be stored BY data manager 5555, who will send it to a random node (with sufficient rank), which could be anything, like node 3137. But he won’t send it to node 3137 directly; he’ll send it to 3137’s data holder managers, who will then pass it on to 3137. (However, 3137 is first chosen to hold the chunk, and only then are this node’s data holder managers located. Is that correct?)

They will ask their closest nodes to 5555 for the chunk, and these closest nodes will do the same.

Again, they will ask data managers closest to 5555, not data holders closest to 5555, correct?

Again, in a number of steps the right vault is contacted and the chunk will come your way.

What happens if a few years have gone by since you stored the chunks, and now there are a lot more machines that are even closer to 5555 than there were in the past? It could be that some of the old machines are still up, and compared to other nodes, their IDs aren’t even close to 5555 anymore (but they were in the past). How long will the data managers continue to ping each other to find the node holding your chunk?


#17

Any change to any group at any time kicks off account transfer info (i.e. a node goes offline). So you will see a lot of work from us trying to reduce the traffic costs there. The faster we can transfer, the faster churn we can handle, and therefore smaller devices can contribute effectively.
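A toy sketch of that transfer on churn (the account contents and group size here are made up):

```python
def close_group(nodes, address, k=4):
    return sorted(nodes, key=lambda n: n ^ address)[:k]

nodes = {3, 7, 12, 21, 26, 30}
address = 5
state = {"balance": 42}                      # the account being managed
replicas = {n: dict(state) for n in close_group(nodes, address)}

# A group member goes offline: the group around `address` changes, so
# the survivors immediately transfer the account state to whichever
# node has just become one of the four closest.
departed = close_group(nodes, address)[0]
nodes.discard(departed)
replicas.pop(departed, None)
for newcomer in set(close_group(nodes, address)) - set(replicas):
    replicas[newcomer] = dict(state)         # account transfer
print(sorted(replicas))                      # node 7 gone, node 30 steps in
```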


#18

Yes, you’re right. I meant client managers.

Client managers
Client manager Vaults receive the chunks of self-encrypted data from the user’s Vault.

Data managers
These Vaults manage the chunks of data from the Client manager Vaults. They also monitor the status of the SAFE Network.

Data holders
Data holder Vaults are used to hold the chunks of data.

Data holder managers
Data holder managers monitor the Data holder Vaults. They report to the Data manager if any of the chunks are corrupted or changed. They also report when a Data holder has gone offline.

Vault managers
The Vault manager keeps the software updated and the Vault running; restarting it on any error.

Transaction managers
The Transaction manager helps to manage safecoin transfers.

http://maidsafe.net/SystemDocs/how_it_works/vault.html

What I got from reading the system docs etc. is that if you run MaidSafe, a vault is created. Probably even more vaults are created, something like 16 of them, each with its own XOR address. As far as I know, if the address of one of the vaults is something like 5555, a chunk with the name 5555 will be stored in that vault (when the ranking is OK). The data holder managers will make sure your vault is online and storing the chunk. They will ask for proof of resource. Besides your vault, the chunk is stored on at least 3 other vaults as well. So I think a random place is not chosen by the data managers; the chunk is actually saved on the vault with the closest address. To be 100% sure about that part of the network, maybe @dirvine can reply.
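On that reading, replica repair would look roughly like this toy sketch (small integer IDs stand in for real hashes):

```python
def closest_vaults(vaults, chunk_name, k=4):
    return sorted(vaults, key=lambda v: v ^ chunk_name)[:k]

vaults = {0x1a, 0x2f, 0x41, 0x58, 0x63, 0x77}
chunk = 0x55

holders = closest_vaults(vaults, chunk)
print([hex(v) for v in holders])              # the four copies

# A data holder manager notices the nearest holder went offline and
# reports it, so a copy is made on the next closest vault:
vaults.discard(holders[0])
print([hex(v) for v in closest_vaults(vaults, chunk)])
```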


#19

Thanks for the clarification. I’m assuming the rest of my comment was correct?

So hypothetically (very unlikely, but just to make sure I understand): if all four data managers you used have stayed online for a period of years and the network has grown substantially, would it then be possible that your chunk can’t be retrieved again? If there are many newer data managers with IDs closer than the original group you used, then when requesting the original chunk, will the new data managers give up on finding the original ones, since they’re not even close anymore?


#20

I think you’re incorrect. Check out this video:

It explains that the chunk with the name 5555 will be managed by data managers with that ID, and that those managers will select (random?) vaults around the network to store it. It would be great for @dirvine to clarify. And @dirvine, I apologize for bombarding you with questions, but I have to :smile: