Why would updates be free? They're still writes to the network. Isn't it already generous enough that data written to the network is paid for once and stored/read indefinitely at no cost?
Note my quotes around "free": it's not actually free, it's just paid up front. As for the reasons behind it, one is to reduce the traffic and work involved in making updates to client-owned data. Once a piece of data is created, its storage is already paid for; Elders only need to check the owner's signatures to apply a mutation, with no voting or consensus of any kind, so it's very fast. But since you want to prevent spam attacks, you need a limit: you can spam your own piece of content, but every mutation consumes your "credit" in that piece of content until it's all used up, at which point you'll have to top up (the options for this are still under discussion, with several on the table).
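The prepaid-credit model described above could be sketched roughly as follows. This is purely illustrative: the names (`MutableData`, `mutation_credits`) and the behaviour are assumptions for the sake of the sketch, not the actual SAFE API.

```python
# Hypothetical sketch of the prepaid-credit model described above.
# Names (MutableData, mutation_credits) are illustrative, not the SAFE API.

class MutableData:
    def __init__(self, owner_key, data, mutation_credits):
        self.owner_key = owner_key
        self.data = data
        self.credits = mutation_credits  # prepaid at creation time

    def mutate(self, new_data, signer_key):
        # Elders only check the owner's signature: no consensus round needed.
        if signer_key != self.owner_key:
            raise PermissionError("only the owner may mutate")
        if self.credits <= 0:
            raise RuntimeError("mutation credits exhausted: top-up required")
        self.data = new_data
        self.credits -= 1

md = MutableData(owner_key="alice-pk", data=b"v1", mutation_credits=2)
md.mutate(b"v2", "alice-pk")
md.mutate(b"v3", "alice-pk")
# a third mutation would raise RuntimeError until the owner tops up
```

The key property is that each mutation only costs a signature check, while the credit counter bounds how much free work a single paid-for item can demand.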
Are there bounds to the mutation?
Otherwise, what’s to prevent an attack vector where someone creates (and pays once to store) a mutable 1KB upload, then in the subsequent free mutations, mutates it with randomly generated 100TB in a loop? That’s to say nothing of attacks which differ in tactic from growth, such as a brute forcing of looped updates within the initial upload size. (1KB)
It's not a risk per se, but on some occasions you need >2/3 (AT2, BRB, etc.). I will try to explain. Many projects miss this, but there are several consensus/agreement parts.
Confusion to kill
Folk talk about consensus when they mean order. I use consensus to mean agreement, so we use the word agreement.
Then folk talk of quorum to mean the number of voters. We use the word majority to mean the % of voters who voted in some way, whereas quorum is the minimum number of voters, however they vote (subtle, but I fear the industry is wrong to use quorum the way it does).
So when we want no fork (e.g. a currency wallet) we need a total order on the transactions to allow atomic credit/debit with no chance of a fork (double spend). For these cases we set the majority to >2/3 and call that a supermajority.
The reason is that the majority of the folk who did vote are honest nodes (as dishonest nodes are <1/3). So dishonest nodes cannot get some nodes to vote "send £100 to Bob", agree with them (creating a majority), and then say "send the same £100 to Alice": they cannot get a supermajority for the second transaction, as at least >1/3 of the good nodes already sent it to Bob.
If we use a strict majority, you can do a double spend quite easily, as you will see from the above example. With a strict majority you can only say there is at least 1 honest node among the voters, but you cannot say any more, so you can double spend this way.
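The arithmetic behind this can be checked with concrete numbers. A minimal sketch, assuming 9 voters with fewer than a third dishonest; the figures are illustrative, not network parameters:

```python
# Why two conflicting supermajorities cannot coexist when dishonest nodes < 1/3.
n = 9                             # total voters
dishonest = 2                     # strictly less than n/3
supermajority = 2 * n // 3 + 1    # strictly more than 2/3 -> 7 of 9

# Two conflicting supermajorities would need 2 * 7 = 14 votes, but only
# n + dishonest = 11 votes exist even if every dishonest node votes for
# BOTH transactions (equivocates). So no fork is possible.
assert 2 * supermajority > n + dishonest

# With a strict majority (>1/2 -> 5 of 9) the same check fails:
# 2 * 5 = 10 <= 11, so two conflicting "majorities" can both form.
strict = n // 2 + 1
assert 2 * strict <= n + dishonest
```

The two assertions are exactly the difference between the wallet case (fork impossible) and the strict-majority case (double spend possible).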
For data that can fork and then resolve (or not), a strict majority is enough. To make life easy we use >2/3 across the board right now.
Yes, that’s the limit we are talking about above, there will be a limit to prevent from these spam attacks.
Yes, the total data you can store in the mutable data types is capped. As for the ops themselves, we are looking to make those ops/entries limited in size.
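A rough sketch of what such caps might look like. The limits here are purely illustrative assumptions (the actual sizes are undecided in the discussion above):

```python
# Hypothetical cap check: each op/entry and the total stored data are bounded.
MAX_ENTRY_BYTES = 1024            # illustrative per-op limit
MAX_TOTAL_BYTES = 1024 * 1024     # illustrative cap on the whole data item

def accept_op(current_total, entry):
    if len(entry) > MAX_ENTRY_BYTES:
        return False              # single op too large
    if current_total + len(entry) > MAX_TOTAL_BYTES:
        return False              # would exceed the data item's cap
    return True

assert accept_op(0, b"x" * 100)                       # small op accepted
assert not accept_op(0, b"x" * 2048)                  # oversized op rejected
assert not accept_op(MAX_TOTAL_BYTES - 10, b"x" * 100)  # total cap enforced
```

Together the two checks close the attack described earlier: a 1KB item cannot be grown into 100TB through free mutations, because each op and the running total are both bounded.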
There is a glitch in this one, though, that I have yet to bring up in house: we may not be able to make use of this optimisation. The cap, yes, but perhaps not the optimisation of pay-then-free (limited) updates.
OK, I'll hold off strategizing exploits until the design has had more time.
For now, may I ask why you wouldn’t simply charge for any write to the network, irrespective of it being a create/update/destroy?
That’s one of the reasons, or the main one. This is now very much aligned with the CRDT nature of our mutable data types.
That feels like a leak of an implementation optimization into the economic model.
IMHO it still feels more intuitive to just pay for whatever I write, and if internally that's implemented as a system of op credits plus opaque credit refreshing, so be it. That has another benefit: if you'd like to modify how the optimizations work later, you don't disrupt users' established expectations of how much of what they can do, and when. Just my 0.00000002 SNTs.
Yeah… who knows what other ideas we all come up with in the future. What I describe doesn't mean it cannot be improved in the short/medium/long term; it's our first step. Also, we need to consider that this is a distributed and decentralised network, and there will always be trade-offs, especially when you compare it with centralised systems, and the outcome of each of these trade-offs will almost always disrupt some subset of users' expectations. That's how I'm personally seeing things at the moment, at least.
Fabulous AMA video, @JimCollinson !! Really, really well done and a valuable resource to share.
Yeah, a few ways to skin this one. It's superb that we can do so now, and quite easily, without rewriting the world. I expect us to bounce around a bit here from week to week until we settle on the simplest, most efficient way. Only yesterday evening I realised what I think is a flaw, but we will see; nothing major in any case.
Pay and limited re-writes.
Having unlimited re-writes seems like an issue to me. It's unlimited until it's not, and then it breaks everything.
Sorry, the chicken little in me felt I should yell the sky might be falling.
Love your optimism @Sotros25
I mean I get that you want to add it, but I’m getting the feeling that this won’t be the last addition.
At some point people commit to a solution even though it might not be the best, but it is available
I hope MaidSafe recognizes that the market rewards delivery, not completeness.
I like this line of thought. If it costs more to charge, then why not make it free? I'm still pondering the result of this, but it makes practical sense.
To add my immediate thoughts:
Once all mutations have been used, could the item be linked to a new data item? For sequential types, having an infinite number of mutations allows use cases which treat the item as a list or message queue. If there is a cap, that would have to be worked around.
Is it just the owner who can mutate for free? If it is a public data item, it would be frustrating if someone else filled it shortly after you created it. This links in with the above too. Or does it only apply to private data, in which case spamming would seem rather counterproductive?
It could encourage people to contribute data (not just spam) if public, which could be a positive. Would that outweigh the spam? Maybe not.
It sounds like we need some more flesh on the bones though, which is no doubt creating internal debate.
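The cap workaround raised above (linking to a new data item once the mutations are exhausted) could be sketched roughly like this. `CappedSequence` and its cap are hypothetical, purely to illustrate the chaining idea, not a SAFE Network type:

```python
# Hypothetical workaround: when a data item's mutation cap is reached,
# chain a fresh item (paid for again) and link it from the old one.

class CappedSequence:
    CAP = 3   # illustrative mutation cap per item

    def __init__(self):
        self.entries = []
        self.next = None          # link to the successor item once full

    def append(self, entry):
        tail = self
        while tail.next is not None:   # walk to the newest item in the chain
            tail = tail.next
        if len(tail.entries) >= CappedSequence.CAP:
            tail.next = CappedSequence()   # create (and pay for) a new item
            tail = tail.next
        tail.entries.append(entry)

seq = CappedSequence()
for i in range(5):
    seq.append(i)
```

The list/message-queue use case survives the cap: readers follow the `next` links, and writers pay again each time a new link in the chain is created.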
They outlined that the network wasn't stable without this. They were trying to make it stable with other fixes, but realised it was better to buy a new bucket than to keep plugging holes in the leaky one.
Sometimes, bugs require a design change to effectively resolve. If it provides other improvements, it is win-win.
We all want a test net. It also needs to work. A broken test net isn’t useful.
Like I said, I get the addition. My point is more that it's not the first addition; the original spec of Fleming was also much smaller. And my fear is that it won't be the last.
Again, I get the additions, but I hope people are also mindful of time.
Doesn't a token transfer require the owner to initiate it? That is, the Elders only validate/countersign it.
So, to attempt a double spend, am I correct in asserting that the user would need to collude with the Elders to authorise an invalid payment (e.g. when there is insufficient balance)?
I’m just trying to pin down the surface area of this attack.
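A minimal sketch of the flow being asked about, assuming transfers are owner-signed and Elders merely validate and countersign with a >2/3 supermajority. All names and parameters here are hypothetical, chosen just to pin down the checks involved:

```python
# Hypothetical transfer validation: owner-initiated, Elder-countersigned.

def validate_transfer(transfer, owner_signed, elder_sigs, balance, elder_count):
    if not owner_signed:                 # only the owner can initiate
        return False
    if transfer["amount"] > balance:     # honest Elders reject overspends
        return False
    # accepted only with countersignatures from a supermajority of Elders
    return len(elder_sigs) > 2 * elder_count / 3

t = {"amount": 50}
# valid: owner-signed, sufficient balance, 5 of 7 Elders countersign
assert validate_transfer(t, True, ["e"] * 5, balance=100, elder_count=7)
# rejected: only 4 of 7 countersignatures (not a supermajority)
assert not validate_transfer(t, True, ["e"] * 4, balance=100, elder_count=7)
# rejected: insufficient balance, regardless of signatures
assert not validate_transfer(t, True, ["e"] * 5, balance=10, elder_count=7)
```

Under these assumptions the attack surface matches the question: forging a transfer requires either the owner's key or collusion with more than a third of the Elders, since fewer than that cannot block or forge a supermajority.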
If they were adding it just because it was a nice feature, I'd agree. However, it sounds like it was to fix a bug that would have been harder to resolve without it.