Scaling writes to an MD

If an app stores some state in an MD, what happens if that state is updated very frequently? Will there be a limit to how often an MD can be mutated? Can the number of vaults responsible for a particular MD be scaled up?


I’m not sure increasing the number of vaults responsible for an MD would increase throughput.

I do not know exactly what the request looks like, but let’s say there is 1 write and the request returns OK, and then replication is carried out asynchronously. Increasing the number of replicas will have no effect on the first write.

Let’s say the request returns after n replicas have been written to (normally a quorum). This can make writes slower as the group grows, unless the replicas all handle the replication simultaneously. I must look closer at how this works, but naïvely, I would let all replications be carried out in parallel, with some retry policy on transient network errors. In that case, n writes to replicas cost more or less the same as 1 write.
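A minimal sketch of that idea, assuming a hypothetical `Replica::write` call (the real vault API may well differ): fan the write out to every replica concurrently and return as soon as a quorum acknowledges.

```rust
use futures::stream::{FuturesUnordered, StreamExt};

struct Replica {
    id: usize,
}

impl Replica {
    // Placeholder for a network call; a real client would also retry
    // transient errors here before giving up on this replica.
    async fn write(&self, _payload: &[u8]) -> Result<(), String> {
        Ok(())
    }
}

// Because the writes run concurrently, latency tracks the slowest
// replica in the quorum rather than the sum of all n writes.
async fn quorum_write(
    replicas: &[Replica],
    payload: &[u8],
    quorum: usize,
) -> Result<(), String> {
    let mut pending: FuturesUnordered<_> =
        replicas.iter().map(|r| r.write(payload)).collect();

    let mut acks = 0;
    while let Some(result) = pending.next().await {
        if result.is_ok() {
            acks += 1;
            if acks >= quorum {
                return Ok(());
            }
        }
    }
    Err(format!("quorum not reached: {}/{} acks", acks, quorum))
}

#[tokio::main]
async fn main() {
    let replicas: Vec<Replica> = (0..8).map(|id| Replica { id }).collect();
    println!("{:?}", quorum_write(&replicas, b"new value", 5).await);
}
```

With this shape, adding replicas increases fan-out and bandwidth cost but barely changes latency, matching the "n writes ≈ 1 write" intuition above.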


There is some info on this in the MD RFC with respect to simultaneous operations, but nothing on how often operations can happen, so I assume there is no limit on that.

  • Not more than 5 simultaneous mutation requests are allowed for a (MutableData data identifier + type tag) pair;
  • Only 1 mutation request is allowed for a single key.

I would think it would lead to a bottleneck though. If the responsible vaults couldn’t handle any more writes, the network could in theory scale out and write to some other vaults instead, but then the data would get out of sync. At some point all the vaults would have to be synced, and if they’re already maxed out you’d just get an ever-increasing queue of updates.

Hm, I guess maybe you cannot initiate a new write until you get a reply back that the last one is done?

Then if you keep spamming you might be rate limited?


Looking at the API, you have to specify the version when you update; the network checks that against the version it holds, and the update fails if they don’t match.


Optimistic concurrency: it robustly makes sure a write is only performed against the version you expected. For that to hold, the concurrency check plus the write need to execute as one single-threaded step.

With high contention, I guess this would not be the fastest way, especially if we use the pattern of re-reading on version mismatch, applying the change to the freshly read value, and submitting again.
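That read-modify-retry pattern might look something like the sketch below. The `MdClient` trait and its methods are assumptions for illustration, not the real client API:

```rust
struct Md {
    version: u64,
    value: Vec<u8>,
}

enum WriteError {
    VersionMismatch,
    Other(String),
}

// Hypothetical client interface: the write succeeds only if
// `expected_version` matches the version the network holds.
trait MdClient {
    fn read(&self) -> Md;
    fn write(&self, expected_version: u64, value: Vec<u8>) -> Result<(), WriteError>;
}

fn update_with_retry<C, F>(client: &C, apply: F, max_attempts: u32) -> Result<(), WriteError>
where
    C: MdClient,
    F: Fn(&[u8]) -> Vec<u8>,
{
    for _ in 0..max_attempts {
        let current = client.read();
        let new_value = apply(&current.value);
        match client.write(current.version, new_value) {
            Ok(()) => return Ok(()),
            // Someone else won the race: re-read and try again.
            Err(WriteError::VersionMismatch) => continue,
            Err(e) => return Err(e),
        }
    }
    Err(WriteError::Other("too many version conflicts".into()))
}
```

Under heavy contention most attempts lose the race, so each successful write can cost several round trips; that is the slowness referred to above.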

Rate limiting, I guess, would be needed if it were possible to enqueue so many calls that other writers’ access was impacted.

Scaling up the writes, though, which is what the question was about, would be negative for consistency. But additional checks could be made on which part of the MD was changed: as long as the same parts are not affected, a version mismatch wouldn’t necessarily have to be rejected (see the sketch below). But that depends on the logic using this information and its requirements.
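A hedged sketch of that idea, assuming hypothetical per-entry version bookkeeping (nothing like this is specified in the RFC): a stale-versioned write is accepted as long as none of the keys it touches have changed since the version the writer read.

```rust
use std::collections::HashMap;

struct VersionedMd {
    version: u64,
    // key -> (value, version at which this key last changed)
    entries: HashMap<String, (Vec<u8>, u64)>,
}

impl VersionedMd {
    // Apply `changes` that were computed against `base_version`.
    // Reject only if one of the touched keys changed after that version,
    // i.e. only genuine conflicts fail, not every version mismatch.
    fn try_merge(
        &mut self,
        base_version: u64,
        changes: HashMap<String, Vec<u8>>,
    ) -> Result<(), String> {
        for key in changes.keys() {
            if let Some((_, changed_at)) = self.entries.get(key) {
                if *changed_at > base_version {
                    return Err(format!("conflict on key {}", key));
                }
            }
        }
        self.version += 1;
        for (key, value) in changes {
            self.entries.insert(key, (value, self.version));
        }
        Ok(())
    }
}
```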


Hm, thinking a bit more about it…
I guess, if there are huge bursts of information being written at times, spreading these over various MDs and later reconciling the results could be a way around contention. The origin of the request would need to add a timestamp with high enough granularity, which would be the differentiator between the various writes to the supposedly (i.e. virtually) same MD.

Constant high load above the highest rate might be possible to solve the same way. You’d just need the bandwidth and processing power to manage all the produced data at that rate when reconciling it. Edit: Then, at a rate that the MD writing would allow, the actual MD could be updated with this value.

Not sure if there is a use case for it, or if it would ever be the preferred solution, though. This probably has some logical error somewhere (for constant load above the network’s throughput capability) which makes it totally unfeasible. I’m also not sure how the reconciliation could be done so that the information can actually be used later.
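As a rough illustration of the burst idea, here is a sketch assuming a hypothetical naming scheme where each write lands at an address derived from a base name plus a microsecond timestamp, with an in-memory map standing in for the many real MDs:

```rust
use std::collections::BTreeMap;
use std::time::{SystemTime, UNIX_EPOCH};

// Stand-in for the many real MDs that share one "virtual" base name:
// timestamp (µs) -> payload, kept in time order for reconciliation.
struct VirtualMd {
    writes: BTreeMap<u128, Vec<u8>>,
}

impl VirtualMd {
    fn new() -> Self {
        Self { writes: BTreeMap::new() }
    }

    // Each write is differentiated by a fine-grained timestamp, so
    // concurrent writers (almost) never contend on a single MD version.
    fn write(&mut self, payload: Vec<u8>) {
        let ts = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock before UNIX epoch")
            .as_micros();
        self.writes.insert(ts, payload);
    }

    // Reconcile later by folding the timestamp-ordered writes into one
    // value, which could then be written to the actual MD at a rate it
    // can sustain.
    fn reconcile<F>(&self, combine: F) -> Vec<u8>
    where
        F: FnMut(Vec<u8>, &Vec<u8>) -> Vec<u8>,
    {
        self.writes.values().fold(Vec::new(), combine)
    }
}
```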


Yeah, for some stuff, like a counter, you could shard the MD: turn it into lots of MDs, choose one to write to at random, and then when you want the total count you fetch all of them, which should be pretty fast since you could do it in parallel.
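A minimal sketch of that sharded counter, with an in-memory vector standing in for the separate MDs (hypothetical, not any real API):

```rust
use rand::Rng;

struct ShardedCounter {
    // Each slot stands in for one MD holding a partial count.
    shards: Vec<u64>,
}

impl ShardedCounter {
    fn new(num_shards: usize) -> Self {
        Self { shards: vec![0; num_shards] }
    }

    // Writers pick a shard at random, so contention on any single MD
    // drops by roughly a factor of `shards.len()`.
    fn increment(&mut self) {
        let i = rand::thread_rng().gen_range(0..self.shards.len());
        self.shards[i] += 1;
    }

    // Readers fetch every shard (in parallel against a real network)
    // and sum them to reconcile the total.
    fn total(&self) -> u64 {
        self.shards.iter().sum()
    }
}
```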


For this kind of thing you should also consider lock-free data structures, and CRDTs (Conflict-free replicated data type - Wikipedia) where possible. A vote for a single version update at a time, where competing attempts mean many fail but one succeeds, is another mechanism (similar to monotonic updates, which are simple CRDTs). There is a lot to consider in multiple-source updates. We have done a load so far with a ton of testing, but I feel there is a way to go to fully make use of a tree-like, CRDT-aligned structure (i.e. not fully a CRDT, but one that makes use of the properties), similar to what we do in data chains part 1, with branching and selection of the correct single truth through a DAG (a graph-like structure where a single agreed node->edge->node… route is successful).
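To make the CRDT idea concrete, here is a minimal sketch of one of the simplest CRDTs, a grow-only counter (standard CRDT material, not anything specific to the network’s design):

```rust
use std::collections::HashMap;

#[derive(Clone, Default)]
struct GCounter {
    // writer id -> that writer's own count; each writer increments
    // only its own slot, so concurrent updates never conflict.
    counts: HashMap<String, u64>,
}

impl GCounter {
    fn increment(&mut self, writer: &str) {
        *self.counts.entry(writer.to_string()).or_insert(0) += 1;
    }

    // Merge is commutative, associative, and idempotent (element-wise
    // max), so replicas converge regardless of the order in which they
    // exchange state.
    fn merge(&mut self, other: &GCounter) {
        for (writer, &count) in &other.counts {
            let entry = self.counts.entry(writer.clone()).or_insert(0);
            *entry = (*entry).max(count);
        }
    }

    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
}
```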

I am aware we have not formalised this fully yet or even created an RFC from DC part 1, but we will.
