This follows on from the long and fragmented discussion about nodes joining the network in the Fleming Testnet Release topic (currently 32 posts contain the word ‘queue’, so I won’t link to them all).
This is a topic for brainstorming so please keep an open mind and try to respond to the strongest plausible interpretation of what someone says, not a weaker one that’s easier to criticize. Assume good faith.
Are new nodes put in a queue, or are they simply rejected and told to try again later? Which of these is more desirable? How does each technique affect security? Which mechanisms can we use (such as proof of resource)? How should we prioritize the queue? There is a lot to discuss.
It’s a really interesting and important topic, so I feel it’s worth having a dedicated topic for it.
My main question for managing node membership: is there a difference between an extremely helpful operator vs an attacker? As far as I can tell they would both look the same to the network when they try to join.
Some quotes from the 32 posts that inspired this topic:
No queue, it’s just whoever is lucky enough to request to join at that time.
We could create a queue, but it defeats the purpose of this anti-Sybil measure: stopping someone with bad intentions from taking over the network with hundreds or thousands of nodes all joining in succession.
right now you have to spam join requests to get a chance to join.
Would a queue with some kind of resource-intensive proof of ability be possible, letting the network pick the next-strongest node?
We will implement a way to auto-retry if the network is not accepting, or something along those lines.
Nodes will join a queue, stay connected and be selected at random (based on the hash of the lost or full node message, so ungamable)
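The selection rule quoted above could be sketched roughly like this in Python (the function names and message format are my own assumptions, not the actual implementation):

```python
import hashlib

def select_node(queue, event_msg):
    # Every elder computes the same hash-based score for each
    # candidate and picks the lowest. Candidates cannot grind
    # the result because event_msg (e.g. the signed "node lost"
    # or "node full" message) is outside their control.
    def score(node_id):
        return hashlib.sha256(event_msg + node_id).digest()
    return min(queue, key=score)

queue = [b"node-a", b"node-b", b"node-c"]
winner = select_node(queue, b"lost-node-msg")
```

Because the score depends only on the event message and the node ID, every section member derives the same winner independently, with no coordination round needed.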
This requires storing the queue, updating it when nodes go away, and handling the case where it gets too big. Might it be prone to spamming in order to get more nodes into the queue than someone else? I guess repeated join requests (with no queue) are also open to this kind of attack.
Even then, you get the issue of the queue itself being spammed with join-the-queue requests, just like the network so I’m not seeing how to solve this!
IMO, as a natural defense, clients should be expected to perform more work than whatever the network would perform. Ideally the work is useful to the network, however that’s not necessary to dissuade spam/flood attacks.
what’s to prevent an attacker from spinning up a bot farm to spam joins and either blowing out the queue’s storage or starving the network of capacity growth by being unable to find legitimate nodes to enlist?
if the random selection is the algorithm, what’s to stop a three-letter agency from continuously signing up a bargeload of nodes all over the world and then degrading/corrupting output if/when they are selected? Conversely, if only the highest-quality nodes can be selected, what would prevent Amazon/Google from taking over?
What if every queue selection for node hosting required burning e.g. 10 safe to the network as an anti-spam measure?
How does someone earn in the first instance though?
As for useful resource_proof ideas, would checksumming a random data/block qualify? I’m thinking of it like a ‘free’, externally performed and validated, continuous ZFS scrub. In the scenario of a node joining the queue: the node is given a URL and a hash, which is either the expected hash or a rand() fake; the node reads the URL, calculates the hash of the data it gets back, and reports whether the hashes match. If the answer is correct, the node is entered into the queue.
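The challenge/response above could look something like this minimal Python sketch (the split between verifier and prover, and all names, are my own illustration):

```python
import hashlib
import os

def make_challenge(chunk):
    # Verifier side: pick a chunk the network already stores and,
    # half the time, hand out a decoy hash so the prover cannot
    # simply reply "match" without doing the read and hash work.
    real_hash = hashlib.sha256(chunk).hexdigest()
    if os.urandom(1)[0] % 2:
        return chunk, real_hash, True                      # expect "match"
    decoy = hashlib.sha256(b"decoy:" + chunk).hexdigest()
    return chunk, decoy, False                             # expect "no match"

def answer_challenge(data, claimed_hash):
    # Prover side: fetch the data, hash it, report whether the
    # claimed hash matches what was actually computed.
    return hashlib.sha256(data).hexdigest() == claimed_hash

chunk = b"some chunk the network already stores"
data, claimed, expected = make_challenge(chunk)
passed = answer_challenge(data, claimed) == expected  # honest prover passes
```

The decoy-hash trick matters: without it, a lazy node could answer “match” every time and be right without performing any work.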
A challenge set by one person in order to join, assigned to a random other person also trying to join, which they must answer. By submitting two proofs, one of having set a question that was answered correctly, and one of answering a question correctly, you join the real queue.
What if you had a “proof of human” check for the first node under an “account” (or whatever you guys are calling the key combo) and then future ones are automatically added to the queue once the first node gets in and proves to be a good member?
Time is the great leveler, as its cost applies to everyone equally, so the length of time a node has been queuing, while providing full services to the network, would be considered a cost it has paid. This is effectively a proof-of-work exercise, but one that’s simultaneously useful to the network.
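A seniority queue like the one described could be a few lines (this is a hypothetical sketch, not the actual design; the class and method names are mine):

```python
import heapq

class JoinQueue:
    # Candidates are admitted in order of how long they have
    # been waiting (and serving), so queue time itself is the
    # cost paid to join.
    def __init__(self):
        self._heap = []

    def enqueue(self, node_id, waiting_since):
        # An older timestamp means more seniority.
        heapq.heappush(self._heap, (waiting_since, node_id))

    def admit(self):
        # Pop the longest-waiting candidate.
        return heapq.heappop(self._heap)[1]

q = JoinQueue()
q.enqueue("node-b", waiting_since=100)
q.enqueue("node-a", waiting_since=50)
first = q.admit()  # "node-a" has waited longest
```

Note this still needs the spam handling raised earlier: nothing here stops an attacker from enqueuing thousands of cheap entries and simply waiting.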
PoW is centralising because those with more money/resources can dominate uptake.
Letting everyone in means everyone gets a chance to earn, and I believe it would make it unprofitable to try to swamp the network because: 1) rewards will be spread thin, which might make the attack unprofitable quite quickly, and 2) the only incentive is profit from rewards, as you can’t easily take over a section this way. Actually, as I write out both points, I’m not sure they are true.
Those with more money/resources have always, and will always, dominate everything in the material world. They’ll operate the majority of the safes unless the selection algorithm prioritizes balanced clearnet address space distribution, and even that isn’t a guarantee.
If it’s trivial to starve the network of additional nodes, that’s worse than if it’s difficult/expensive. If spam/flooding isn’t managed, to say nothing of more sophisticated attacks, who runs the majority of safes will be irrelevant because the network will be unviable.