If the SAFE Network has no transaction fees, how will spam prevention work? Can someone spam the network with massive amounts of transactions? I also know the team plans for network bandwidth usage to be free of charge, so how would they prevent someone from using up all the network's upload bandwidth by repeatedly downloading the same data?
To answer in part.
Popular content will be cached. So constantly downloading the same data should result in that data being cached near the downloader.
So they might cause their own bandwidth to suffer, but it should have little effect on the network as a whole.
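A toy sketch of why hammering the same chunk gets absorbed: once the data is cached near the downloader, only the first request costs wider-network bandwidth. This is a generic LRU cache for illustration, not the actual SAFE caching implementation.

```python
from collections import OrderedDict

class RelayCache:
    """Hypothetical LRU cache at a relay node: popular chunks are
    served locally instead of travelling across the network."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, chunk_id, fetch_from_network):
        if chunk_id in self.store:
            self.store.move_to_end(chunk_id)  # mark as recently used
            self.hits += 1
            return self.store[chunk_id]
        self.misses += 1                      # only misses cost network bandwidth
        data = fetch_from_network(chunk_id)
        self.store[chunk_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
        return data

cache = RelayCache(capacity=100)
for _ in range(10_000):                       # attacker hammers one chunk
    cache.get("chunk-abc", lambda cid: b"data")
print(cache.misses)  # 1 -- only the first request reached the wider network
```

So the attack load stays near the attacker; the rest of the network barely notices.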
I believe some transaction cost may be necessary.
Edit - I believe there will be something in place to detect bad actors, but I don't personally know how that will be decided.
OK, I actually searched before posting this. Bandwidth was discussed a little, but not much on transaction fees. Regarding bandwidth though: a malicious attacker could keep downloading new data that isn't cached, so even if repeatedly downloading the same data doesn't work, that doesn't stop a bandwidth attack from happening. Is that right?
Sure, trying to spend the same coin back and forth could cause this disturbance, but if you place the rate just below that threshold, and then run the same loop on n coins… are we not seeing any extra load on the network then?
If we are, what is the number n needed, for a given network size, before this is actually a problem?
And that is the reason for test networks. I am sure this will be tested, since it has been discussed in at least 5 topics over the last 3 years. Total network size will have a large bearing on what happens.
I expect this kind of thing will be tracked by the network so that the rate limit can be brought into play only when a limitable activity is having a negative impact on network performance.
No idea how that might happen technically, but if, as envisaged, the network ends up able to adjust key parameters in real time to keep performance optimal, it will be awesome. Tweaking these mechanisms as the network grows can build resilience against new threats.
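One way to picture "rate limiting only when it hurts" is a token bucket whose refill rate is tightened when a measured load signal passes a threshold. This is purely a sketch with made-up parameter names and thresholds, not anything from the actual design:

```python
class AdaptiveRateLimiter:
    """Sketch of a token-bucket limiter whose rate is tightened only
    when a measured load signal indicates network strain. All names
    and numbers here are illustrative assumptions."""
    def __init__(self, base_rate, burst):
        self.base_rate = base_rate  # tokens added per second when healthy
        self.burst = burst          # maximum tokens held
        self.tokens = burst
        self.rate = base_rate

    def observe_load(self, utilisation):
        # Tighten the limit once utilisation passes an assumed 80% threshold.
        if utilisation > 0.8:
            self.rate = self.base_rate * max(0.1, 1.0 - utilisation)
        else:
            self.rate = self.base_rate  # healthy: full rate restored

    def tick(self, seconds):
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def allow(self):
        # One token per request; refuse when the bucket is empty.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = AdaptiveRateLimiter(base_rate=100, burst=100)
limiter.observe_load(0.95)   # heavy load detected
limiter.tick(1.0)
print(limiter.rate)          # throttled to a tenth of the base rate
```

Genuine users under normal load never notice it; only sustained strain makes it bite.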
I do this often, just crassly pondering the angles, because it's quite interesting. As you say, neo, the network size is the deal.
I think this can be back-of-the-enveloped (boted?) if we know a little more about the risk of disturbance from high-frequency operations on the coin.
I would not be surprised if it showed that, with a fairly sized network, you would need a sh*tload of coins to make any dent. But I'd like to do the math to show everyone (and myself) this.
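A first stab at that envelope. Every number below is an assumption for illustration (none are measured SAFE Network figures): if a coin must wait a confirmation latency T before it can be re-spent, then n coins yield at most n/T transactions per second, and saturating the network takes roughly capacity x T coins.

```python
# Back-of-the-envelope ("boted") estimate. All numbers are
# illustrative assumptions, not measured SAFE Network figures.

confirmation_latency_s = 5    # assumed wait before a coin can be re-spent
section_capacity_tps = 50     # assumed transactions/sec one section handles
sections = 1000               # assumed network size

# One coin can be cycled at most once per confirmation period,
# so n coins yield at most n / latency transactions per second.
def attack_tps(n_coins):
    return n_coins / confirmation_latency_s

# Coins needed to saturate the whole network's transaction capacity:
network_capacity_tps = section_capacity_tps * sections
coins_to_saturate = network_capacity_tps * confirmation_latency_s
print(coins_to_saturate)                          # 250000 under these assumptions
print(attack_tps(30_000_000) > network_capacity_tps)  # True: 30M coins would dent it
```

The interesting part is how the answer scales: it grows linearly with both section count and per-section throughput, so a small network really is the vulnerable case.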
Wouldn't this function limit the network's operating speed for genuine customers too? And if it's applied per client, couldn't a "malicious" customer just pretend to be many "clients" viewing different content?
Also, I'm asking a question, and I don't see how a question can be wrong (or right). Either way, I think it's essential we try to think of as many ways to attack the network as possible before it launches, because an exploit found once the network is live could have very bad consequences.
At least we know an upper limit on how many coins can be used in an attack: 2^32.
But of course the real number will be a lot less, and estimating it at this stage is in the realm of back-of-the-envelope maths. For example, if done early on (and one could argue this is the most vulnerable time, while the network is smallest), the maximum number of coins available will be on the order of 30 million, since that is/was the largest single holding of MAID. We could work from there.
But at this stage we have no good metrics on consensus speeds, and even less knowledge of coin transaction speed.
We do know that if you try to spend the same coin too fast, different sections will be trying to operate on the one coin, and at some point consensus will not be reached and the coin will be left unmoved or lost, depending on exactly what David was thinking of. So any attacker would have to wait for confirmation of the coin transfer before trying to operate on that particular coin again.
That attacker would have to be extremely lucky to have "sequential" (XOR-wise) coin addresses, so we can assume a fair amount of randomness in which sections are called upon to handle the 30 million coins, and in how many coins each section handles. I'd expect some sections to handle many times what others do. If the attacker were so "lucky" that their coins were all handled by one or two sections, they should give up the idea and buy a lottery ticket. And even then the rate limiter would kick in, which is not dependent on the number of people attacking: the section is inherently limited in the number of transactions it can do anyhow, and the coins just queue up, with requests rejected before even being tried if there are too many.
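The "randomness of addresses" claim is easy to sanity-check as a balls-in-bins experiment. Modelling XOR addresses as uniform random assignment (and using made-up section and coin counts), no section ends up with more than a modest multiple of the mean:

```python
import random

# Toy check of how randomly-addressed coins spread across sections.
# XOR addresses are modelled as uniform random assignment; the
# section and coin counts are illustrative, not real parameters.
random.seed(42)
sections = 100
coins = 30_000
counts = [0] * sections
for _ in range(coins):
    counts[random.randrange(sections)] += 1

mean = coins / sections        # 300 coins per section on average
print(min(counts), max(counts))  # both land close to the mean of 300
```

So an attacker's load spreads across many sections rather than concentrating on one, which is exactly what lets per-section rate limiting work.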
I do have some ideas to help with this but would love to get some time to talk with David about them. Some of the ideas others have had would go a long way to help, but might go against the grain of the design goals that David had or just not work with the current design.
And last but not least, any such attacker has to be "wealthy" to have any chance of a coin transaction attack. This is also why coin division cannot be done in certain ways, because of the spam attack potential.
It takes more work for the client to create a transaction than for the network to process it.
i.e. the bottleneck is the client.
Have a look at this post on the speed of modifying structured data, which found the bottleneck to be on the client. Maybe larger networks will be different, and structured data has since changed to MutableData, so take the results with a grain of salt, but it's the only test I know of that comes close to trying to judge / measure safecoin performance / attacks.
The update rate for structured data is between 100 - 300 updates per second per user (on my private safe network).
The current bottleneck to increasing this is the launcher, which throws errors when the load becomes too high (around 1000 updates in a row).
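Taking the measured client-side rate quoted above at face value, we can do one more envelope line: how many parallel clients would an attacker need to saturate the network? The per-client rate comes from that test; the aggregate network capacity is purely an assumed figure for illustration.

```python
import math

# Per-client rate from the structured-data test quoted above
# (100-300 updates/sec per user); take the optimistic end.
client_rate_tps = 300

# Aggregate network capacity is an assumption, not a measurement.
network_capacity_tps = 50_000

clients_needed = math.ceil(network_capacity_tps / client_rate_tps)
print(clients_needed)  # 167 parallel clients under these assumptions
```

With the client as the bottleneck, an attacker needs to run (and pay for) many parallel clients, which is the point being made here.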
Rate limiting from vaults will definitely be a big part of it, as others have pointed out.
But ultimately this just needs more testing, simple as that. I agree with you that it’s a concern, but not an insurmountable one.