If I got it right, the Vaults will be responsible not only for managing the data on the Network, but also the transactions. But these transactions are free for the users, so what prevents a user from sending and receiving multiple spam transactions just to flood the Network?
With AT2 (asynchronous trustworthy transfers) a lot of the work for a transaction falls on the client, which helps reduce the load on the network, but I would guess there would still be some kind of measures. I'm not sure about that, so I'd be just as interested to know, but the Maidsafe team never cease to amaze with their grit and determination.
IOTA and NANO have an approach of adding some kind of small PoW on the transaction for the client.
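For context, the hashcash-style idea works roughly like this: the client brute-forces a nonce that makes the transaction hash meet a difficulty target, which is cheap for the network to verify but costly to produce. A minimal sketch in Python (function names and the difficulty parameter are illustrative, not taken from any actual IOTA or NANO implementation):

```python
import hashlib

def find_nonce(payload: bytes, difficulty_bits: int = 16) -> int:
    """Brute-force a nonce so sha256(payload || nonce) has at least
    `difficulty_bits` leading zero bits: costly to produce."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(payload: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Cheap for the network: a single hash check."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: the spammer pays a little CPU per transaction, while each node only does one hash to check it.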
Does this AT2 you mentioned work with a similar idea?
I’m not extremely well informed on AT2. The Safe Network primer would explain it better than I can, but basically the client wanting to perform a transaction has to gather signatures and so on and then send that to an elder group to check and sign, as opposed to the client making a transaction request and the elders on the network having to do all of that work.
I don’t know that I would consider this PoW in any capacity though.
Caching moves read operations closer to the requester. If multiple requesters are spamming content requests, the requested content will be cached and have far less impact on the network.
If they are writing instead, a serious spamming attack becomes an expensive exercise, since each write will cost a tiny amount of SAFEcoin.
EDIT: An interesting side note is that write-spam attacks actually help the network's economics long term, because the spammed data will usually never be accessed again after the attack. Caching handles read spamming, so we see a net increase in the network's coinage. The negative effect is wasted storage, and if the same data is spammed then deduplication helps.
AT2 forces the client to send money in sequence. So right now* you cannot send more than one transaction at once. Transactions will be discarded until their sequence number is correct (i.e. if you sent transaction 39, transaction 40 will not go through until 39 is complete and registered on the network).
Client must do the work of collecting and combining signatures from elders to validate the transaction to be able to write it to the network.
This is valid for all transactions, so any write operation will need this step to pay for the data too.
* There are ideas about being able to batch transactions to be looked into later, though nothing is solidified as yet.
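The sequencing rule described above can be sketched as a tiny validator. This is a toy model, not the actual AT2 code; the class and method names are made up:

```python
class TransferValidator:
    """Toy model: replicas accept a transfer only if its sequence number
    is exactly one past the sender's last registered transfer."""

    def __init__(self):
        self.last_seq = {}  # sender -> last accepted sequence number

    def submit(self, sender: str, seq: int) -> bool:
        expected = self.last_seq.get(sender, 0) + 1
        if seq != expected:
            return False  # out-of-order transfers are discarded
        self.last_seq[sender] = seq
        return True
```

So a spammer cannot fire off a flood of concurrent transfers from one account: each one must wait for the previous one to be registered.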
Now you bring up a bottleneck for when I want to upload my 4 GB video file. The time to write each chunk has not increased, but there is no parallel uploading of, say, 10 chunks at once.
So if comms is 2 seconds per chunk and payment is 1 second, that is 3 seconds per chunk, all sequential: approximately 12 thousand seconds.
Now if I can send, say, 10 chunks at once, then payment for the 10 takes 1 second (parallel), and most comms are also faster when multiple paths are used (say 2 seconds for one chunk and 10 seconds for 10 chunks). So about 4 thousand seconds with reasonable overlapping of payments and chunk uploads.
There is a massive difference between 12 thousand seconds totally sequential and say 4 thousand when maxing out a 40 Mbit/sec uplink.
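Plugging the numbers above into a quick back-of-envelope calculation (assuming 1 MB chunks, so a 4 GB file is 4096 chunks; the per-chunk timings are the ones quoted in the post):

```python
CHUNKS = 4 * 1024           # 4 GB file split into 1 MB chunks (assumed size)
COMMS_S, PAY_S = 2.0, 1.0   # per-chunk comms and payment times from the post

# Fully sequential: each chunk waits for comms + payment before the next
sequential = CHUNKS * (COMMS_S + PAY_S)   # 12,288 s -- "approx 12 thousand"

# Batches of 10: comms for 10 chunks takes ~10 s, and the 1 s batch payment
# overlaps with comms, so each batch costs roughly max(10 s, 1 s)
BATCH = 10
batched = (CHUNKS / BATCH) * 10.0         # 4,096 s -- "say 4 thousand"
```

Both "12 thousand" and "4 thousand" from the post fall straight out of these assumptions.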
It’s a good point, but one that’s addressed as soon as we have batched payments. There's no reason I'm aware of that we can't have that; it's just not implemented yet. (I think @oetyng may even have had some code around this idea, related to farming.)
I think there are other options for negating this too, though I'm not sure of the current status.
I for one, though, am not worried about “slow writes”. I am sure we can speed them up, but if we get a single write at sub-second latency then we are in a good place.
Upload may take a long time for a multi-GB file, but that might actually be good for the network. However, a small write, like a comment on a post or a new web page/blog post/e-commerce transaction etc., should complete well within a reasonable time.
tl;dr I hear the issue of large files being slow to upload, but I am not that concerned at the moment, as fast small data changes will likely be the vast majority of use for people. An interesting angle to get into soon, though.
Less impact doesn’t mean no impact at all. In addition, I can manage multiple accounts and use VPNs, proxies… Currently there are several low-cost options for an attacker. If it’s free for me but it’s not free for the network, you have a problem.
I can create a robot that always sends a new transaction as soon as the previous one has completed. I can create numerous accounts and do this with all of them. What would stop me?
What is the cost of this? Have any stress and attack simulations been done? I would like to see the cost to the client and the cost to the network.
I’m not an expert, I’m just trying to understand.
Every one costs safecoin.
Both clients and Elders will do work, but the client does more. Signing has a cost; validating a signature is more expensive; and the client has to validate several signatures in order to aggregate them (more expensive still). So the exact cost is not easy to pin down, but the client certainly “pays” more.
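As a rough op-count model of that asymmetry (the relative costs and the elder-group size are illustrative assumptions, not measurements):

```python
SIGN, VERIFY = 1, 2   # illustrative relative costs: verifying > signing
ELDERS = 7            # assumed elder-group size

# Client: signs its transfer, then validates every elder's signature
# in order to aggregate them
client_work = SIGN + ELDERS * VERIFY   # 15 units

# Each elder: verifies the client's request once and signs its share
elder_work = VERIFY + SIGN             # 3 units
```

Whatever the real constants turn out to be, the shape is the same: the client's work grows with the number of signatures it must aggregate, while each elder's work stays flat.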
We will have more testnets where much of this is tested. I am not a great fan of simulators; it's best we do real-world tests and measurements.
But I thought that safecoin transactions on the network were free.
Now things are starting to make sense to me. I would like to see some material explaining why the costs for the client to send transactions are higher than the costs for the Vaults to validate them.
They are. If we talk only of safecoin transactions (sorry, I missed that part), then they are “free” in money terms, but not in “effort” or work required. The network will do less than the client.
With the data structs now more CRDT compliant, we can go further and possibly have clients merge data etc. to help the network. It's like adding hashcash/PoW, but in a manner that's helpful rather than wasteful of energy. We haven't deep-dived into it yet, though.
David, please can you spell this out a bit for me? It sounds very interesting. I think I understand how the client is required to sign off a Safecoin transfer. So I assume you're thinking, well, the client can also do the CRDT merging or something like that, and then send something to the network for validation and signing. If so, can you give any examples of the work being done and the thing being sent? It's intriguing!
Yes, so here is the uncut/unclear version. The set of replicas in a normal CRDT is where the action is. They do the updating/merging etc. The merge trigger (to get them all roughly in sync) can be on a read, a write, or some other event. The merge is guaranteed to converge (a proper semi-lattice), but it needs a bunch of replicas sending info to each other, either deltas or the whole state. Delta-based CRDTs are quite new and have some landmines in there.
So we take the client (Actor) and make it also a replica. As the client can work offline, it can be initiating operations etc. as usual, but it holds the state, or a state, of the data. So we say “hey client, you're a replica”. We trust no single node on our network, but we do trust clients (they cryptographically sign updates etc., so really we trust the crypto). So this is a replica we can trust, so let's make that replica do all the gathering of all states from all replicas and merge them. Why? Well, the client will want its data stored safely and securely, and as up to date as possible.
So potentially, on write, the client has to merge all states and send the completed (and verifiable) state back to each Elder (replica).
A bit unclear, but I hope you can see my direction here? It’s to get clients to do valuable work on behalf of the network and the data.
[Edit, even now the client is doing most of this via AT2, but it can do more]
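To make the replica-merge idea concrete, here is a minimal state-based CRDT, a grow-only counter, where the client gathers each replica's state and computes the join itself. This is an illustrative sketch, not the network's actual data types:

```python
def merge(a: dict, b: dict) -> dict:
    """Join of two G-Counter states: per-replica max.
    This is the semi-lattice join, so merging always converges."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

# The client gathers each elder's (possibly stale) state, merges them all
# locally, then sends the verifiable merged result back to every replica.
replica_states = [
    {"e1": 3, "e2": 1},
    {"e1": 2, "e3": 5},
    {"e2": 4},
]
merged = {}
for state in replica_states:
    merged = merge(merged, state)
# merged is now the most up-to-date state across all replicas
```

Because the join is commutative, associative, and idempotent, the order in which the client receives the states doesn't matter; the replicas only need to check the client's signed result, not redo the merge.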
Thanks David, that’s really helpful and I think a very important benefit for the network. I hadn’t realised that CRDTs had these extra benefits. It’s very fortuitous that SAFE’s efficient style of consensus coincides with this. I think SAFE will surprise a lot of people who have been trying to find solutions to these issues when it bursts onto the scene.
I’m glad to hear that.