Profiling dbc performance

I completely agree. There’s a lot of nuance to dbc performance, and getting bogged down in ‘the numbers’ might be a little misleading at this early stage because there are a lot of unknowns.

What we do know, though, is the rough process:

  • a new transaction will be signed by the client using bls keys, which takes roughly 0.3ms to 3ms depending on the size (source; blst is used for sign/verify).

  • the new transaction signature will be verified by each mint node (individually and in parallel), which takes roughly 0.8ms to 3.5ms (same source as above). There are 7 mint nodes in each section (the elders). We need to account for network latency here too, which will be much more significant than the verification operation. There’s also a lookup to check the input isn’t already spent, but I expect that to add negligible time compared to the crypto verification.

  • the verified client transaction will be signed by each mint node (individually and in parallel). This will take another 0.3ms to 3ms, plus the network latency to return the mint signature to the client.

  • the client aggregates the mint signatures. I don’t have exact performance stats for aggregating 5 sigs (5 of 7 mint sigs are required) but it will probably be around the 5ms to 500ms mark (source, but those are javascript stats rather than the rust stats above). The whole sign / verify / aggregate flow is sketched in code after this list.

  • the client sends the new dbc, produced by aggregating the mint signatures, directly to the recipient.
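To make the steps above a bit more concrete, here’s a minimal sketch of the sign / verify / 5-of-7 aggregate flow. It assumes a blsttc / threshold_crypto style API as I recall it; the crate and rand versions, the placeholder transaction bytes, and the single-process setup are all my assumptions for illustration. In a real mint no single party would ever hold the full key set, and each elder would only use its own share.

```rust
// Assumed Cargo dependencies: blsttc and rand (versions are an assumption).
use blsttc::{SecretKey, SecretKeySet};

fn main() {
    let mut rng = rand::thread_rng();
    let tx_bytes = b"reissue: input dbcs -> output dbcs"; // placeholder transaction bytes

    // 1. The client signs the transaction with its own bls key (~0.3ms to 3ms).
    let client_sk = SecretKey::random();
    let client_sig = client_sk.sign(tx_bytes);

    // 2. Each of the 7 mint nodes (elders) verifies the client signature,
    //    individually and in parallel (~0.8ms to 3.5ms each, plus network latency).
    assert!(client_sk.public_key().verify(&client_sig, tx_bytes));

    // 3. The mint key is a 5-of-7 threshold key: `threshold` is 4 here because
    //    threshold + 1 shares are needed to combine. (Only for illustration --
    //    in the real network each elder holds just its own key share.)
    let mint_sk_set = SecretKeySet::random(4, &mut rng);
    let mint_pk_set = mint_sk_set.public_keys();

    //    Each elder signs the verified transaction with its share (~0.3ms to 3ms each).
    let sig_shares: Vec<_> = (0..7usize)
        .map(|i| (i, mint_sk_set.secret_key_share(i).sign(tx_bytes)))
        .collect();

    // 4. The client aggregates any 5 of the 7 shares into one mint signature.
    let mint_sig = mint_pk_set
        .combine_signatures(sig_shares.iter().take(5).map(|(i, s)| (*i, s)))
        .expect("need at least threshold + 1 valid shares");

    // 5. The resulting dbc carries this signature, which anyone (eg the recipient)
    //    can check against the mint's public key.
    assert!(mint_pk_set.public_key().verify(&mint_sig, tx_bytes));
}
```

The nice property of the threshold scheme is that the client only needs responses from any 5 of the 7 elders, so a slow or unresponsive elder doesn’t block the reissue.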

So the sequential operations are: one signature op, seven verify ops (in parallel), seven signature ops (in parallel), and one aggregate op, taking maybe half a second in total; network latency would probably add around another half a second. Adding this up, I’d guess any one transaction takes about one second from start to finish.
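Purely as a back-of-envelope illustration of that estimate (every figure below is just the rough worst-case number quoted above, not a measurement):

```rust
// Back-of-envelope critical path for a single transaction.
fn main() {
    let client_sign_ms = 3.0;  // client signs the transaction
    let mint_verify_ms = 3.5;  // 7 mints verify in parallel, so counted once
    let mint_sign_ms = 3.0;    // 7 mints sign in parallel, so counted once
    let aggregate_ms = 500.0;  // upper end of the (javascript) aggregation figure
    let network_ms = 500.0;    // client <-> mint round trips, very rough guess

    let crypto_ms = client_sign_ms + mint_verify_ms + mint_sign_ms + aggregate_ms;
    println!(
        "crypto ~{crypto_ms} ms, network ~{network_ms} ms, total ~{} ms",
        crypto_ms + network_ms
    );
}
```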

On the side of ‘in reality it will be faster than one second’:

  • network latency is variable so it may end up being faster than this estimate.

  • client and node cpu performance is variable so it may end up being faster than this estimate.

  • optimization for bls signatures may make this much faster (eg fpga, asic, or custom future cpu instructions; current fpga results are 3x faster).

  • there’s a lot of parallelism possible in this flow. The client can do multiple transactions in parallel, the mint nodes can do multiple verifications and signings in parallel, and there are many mints (one mint per section) all running in parallel. So we need to understand how the parallelism feeds into the performance metric. As others have said, having lots of sections will enable a larger parallel throughput (ie better total tx per second) than having fewer sections; there’s a rough scaling sketch after this list.

  • there may be other cryptographic optimizations that improve throughput (eg rollups like eth2 is doing)
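As a rough illustration of the section-parallelism point above (the per-section figure is entirely hypothetical, not a benchmark):

```rust
// Illustrative only: per-transaction latency stays roughly constant, but total
// throughput scales with the number of sections, since each section runs its
// own mint in parallel.
fn main() {
    let tx_per_sec_per_section = 100.0; // made-up figure, not a benchmark
    for sections in [1u64, 10, 100, 1_000] {
        println!(
            "{sections:>5} sections -> ~{} tx/s network-wide",
            tx_per_sec_per_section * sections as f64
        );
    }
}
```

The point is that latency per transaction and total network throughput are different metrics: the former stays around the one-second estimate, while the latter grows with the number of sections.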

On the side of ‘in reality it will be slower than one second’:

  • network latency is variable so it may end up being slower than this estimate.

  • client and node cpu performance is variable so it may end up being slower than this estimate.

  • mint nodes (elders) have many duties other than verifying and signing dbcs, so they may add some extra latency if that other work takes priority.

  • there may be additional crypto operations such as range proofs or ring signatures depending on the level of privacy being implemented in dbcs that could slow this down.

It’s great that people are interested in performance, but there’s a lot of subtle nuance to the topic and I feel quoting numbers and scaling factors is misleading because we don’t really know very much for sure right now.

The best way to look at it is to really understand the basic operations being performed for a transaction and then scale that baseline performance up depending on your individual expectations of parallelism / latency / future improvements for SN.

If you want to really get into the weeds on this I recommend playing with sn_dbc_mint, which is a manually operated dbc mint. It’s really cool! GitHub - dan-da/sn_dbc_mint
