Profiling dbc performance

Following on from this benchmark result:

I wondered if 1 input to 100 outputs is a realistic scenario, so I had a look at Bitcoin to get some idea of the range of inputs:outputs (not a perfect comparison, but it’s a starting point).

I looked at Bitcoin block 686201:

Total txs: 2128
Size: 1288.673 KB
Virtual Size: 999 vKB
Weight Units: 3992.666 KWU

1614 txs (76%) were some combo of 1 or 2 inputs and 1 or 2 outputs. But there are a few transactions that have a very large mismatch between inputs and outputs, like in the dbc benchmark, so the benchmark is showing a situation that would happen fairly regularly but is probably not representative of most transactions.

The first row of the stats below shows there were 983 transactions in the block with 1 input and 2 outputs.

As an example of txs with a large difference, there were 2 transactions with 1 input and 501 outputs.

Might be interesting to try and recreate this set of transactions with sn dbcs and see how long it takes to process.

number_of_txs,inputs,outputs
983,1,2
360,1,1
176,2,2
95,2,1
60,1,3
58,3,2
36,3,1
27,4,1
24,5,1
18,1,4
18,4,2
13,2,3
12,1,6
12,5,2
12,6,1
9,1,7
9,1,8
8,1,5
8,6,2
7,7,1
7,8,2
6,1,11
5,1,9
5,1,28
5,8,1
4,2,4
4,3,3
4,10,1
3,1,10
3,1,46
3,5,6
3,9,1
3,11,1
3,19,2
3,50,2
2,1,13
2,1,15
2,1,16
2,1,20
2,1,23
2,1,31
2,1,32
2,1,34
2,1,48
2,1,401
2,1,501
2,3,4
2,3,12
2,4,6
2,6,3
2,7,2
2,9,2
2,10,2
2,12,1
2,13,2
2,24,2
2,25,2
2,34,2
1,1,12
1,1,14
1,1,19
1,1,24
1,1,26
1,1,27
1,1,29
1,1,33
1,1,36
1,1,39
1,1,42
1,1,44
1,1,45
1,1,52
1,1,54
1,1,62
1,1,85
1,1,91
1,1,93
1,1,99
1,1,101
1,1,104
1,1,179
1,2,6
1,2,7
1,2,8
1,2,13
1,3,9
1,3,30
1,3,39
1,3,44
1,3,49
1,4,3
1,4,7
1,4,77
1,5,31
1,5,47
1,6,5
1,6,8
1,8,3
1,8,5
1,9,10
1,10,9
1,11,2
1,11,4
1,12,2
1,13,1
1,14,1
1,16,1
1,16,60
1,17,2
1,20,1
1,21,2
1,22,1
1,23,19
1,23,74
1,24,1
1,26,1
1,26,2
1,27,1
1,30,1
1,31,1
1,51,2
1,52,1
1,52,2
1,58,2
1,61,1
1,86,2
1,89,2
1,90,1
1,95,2
1,118,2
1,142,1
1,200,34
1,240,1
1,348,1
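
In case anyone wants to reproduce the counts above, this is roughly how the distribution can be pulled from a local bitcoind. A minimal sketch only, assuming a recent bitcoincore-rpc crate; the RPC URL and credentials are placeholders:

```rust
use std::collections::BTreeMap;

use bitcoincore_rpc::{Auth, Client, RpcApi};

fn main() -> bitcoincore_rpc::Result<()> {
    // placeholder credentials for a local bitcoind with RPC enabled
    let rpc = Client::new(
        "http://127.0.0.1:8332",
        Auth::UserPass("user".to_string(), "password".to_string()),
    )?;

    let hash = rpc.get_block_hash(686_201)?;
    let block = rpc.get_block(&hash)?;

    // count transactions by their (inputs, outputs) shape
    // note: the coinbase transaction is included and shows up as 1 input
    let mut counts: BTreeMap<(usize, usize), usize> = BTreeMap::new();
    for tx in &block.txdata {
        *counts.entry((tx.input.len(), tx.output.len())).or_insert(0) += 1;
    }

    // print in the same CSV layout as above, most common shapes first
    println!("number_of_txs,inputs,outputs");
    let mut rows: Vec<_> = counts.into_iter().collect();
    rows.sort_by(|a, b| b.1.cmp(&a.1));
    for ((inputs, outputs), n) in rows {
        println!("{},{},{}", n, inputs, outputs);
    }
    Ok(())
}
```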
8 Likes

I think there may even be 2 routes of thought here. Microtransactions such as pay-for-chunk are likely 1 → many. However, money, as in folk paying for goods or paying each other, is probably much more likely 1 → 2 (1 output to them, 1 for your change).

7 Likes

Since a Bitcoin block is only mined every 10 minutes, there are a lot of transactions from exchanges that accumulate all their withdrawals into one transaction. With no blocks, only some exchanges that batch approved withdrawals might still have bigger groups of withdrawals sent at once.
The other group of big transactions is the one sending rewards to miners/farmers from pools, but there will be nothing like that on Safe Network.

3 Likes

Or 2 → x, 3 → x, 4 → x, etc., because you may be combining several smaller inputs to make a larger one.

4 Likes

At 19,000 tx/s, I don’t think the Safe Network can handle a general-use digital currency. If half of the world population (4 billion) made 1 transaction, then that would take over 2 days to process (4B transactions ÷ 19,000 tx/s ≈ 210,000 seconds ≈ 2.4 days). I hope my math is off?
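
The arithmetic does check out, give or take rounding. A minimal sketch of the calculation, keeping in mind the 19,000 tx/s input is the early single-section test figure quoted above rather than a measured network capacity:

```rust
fn main() {
    // assumptions taken from the post above
    let transactions: f64 = 4.0e9; // half the world's population, 1 tx each
    let rate_tps: f64 = 19_000.0;  // early single-section benchmark figure

    let seconds = transactions / rate_tps;     // ~210,526 s
    let days = seconds / (60.0 * 60.0 * 24.0); // ~2.4 days
    println!("{:.0} seconds ≈ {:.1} days", seconds, days);
}
```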

If half the world population did that with Visa, it couldn’t handle it either.

I suggest you first dig up figures on various conventional systems, and not assume that half the population of the world making a transaction at the same time is realistic.

I don’t know what a reasonable expectation is myself, but I’d start by looking at what’s done at present. Then think about how long it would take before you might expect Safe to take over a proportion of that capacity (by which time its capabilities would of course have increased).

Also note that 19k/s is not the capacity of Safe Network but an early estimate from a test network.

4 Likes

There is a LOT of testing to be done before we can get decent estimates of performance.
As @happybeing says, that 19k/s figure is from a very small scale early test.
Hopefully within a few days/weeks we will be ready to start testing DBC throughput, but even then our projections will be at best crude guesstimates, as we will be working with fairly small networks and extrapolating from that.
Please help refine our guesstimates by participating fully in the testnets if you can.

I believe VISA generally runs ~25k tx/s at most times, can handle sustained 40-50k/s, and has peaked at ~80k/s - theoretically it could handle just under 100k/s.
I cannot remember the exact source for these figures, and that was at least 2 years ago, so much may have changed.

1 Like

Also, this is a single section. So if we have, say, 1000 sections (reasonable) then we are at 19,000,000 TPS. There are also many caveats: 1 input to a million outputs will be very fast, and that 19,000 will be more like several million, while a million inputs to 1 output could be much slower. So a lot of the TPS measures will be based on how we use this (by we, I mean humans).

What we can see though is that 1 in → a million out is blindingly fast, so there we have a great step for micro or nano transactions.

In any case it’s not simple, but we are certainly not at 19,000 TPS - more like several million TPS in a reasonably sized network.

13 Likes

Worth a read: No, Visa Doesn’t Handle 24,000 TPS and Neither Does Your Pet Blockchain – Blockchain Bitcoin News

4 Likes

See, that’s what I get for believing what I am told in the bar at FinTech Scotland events - especially from guys in creaky leather jackets.

4 Likes

I’m encouraged to hear that 19k tx/s is based only on a test network. It’s a baseline. I want to help the Safe Network hold itself to a higher standard for throughput capacity awareness (compared to other DLTs). For example, do people really know what it takes for other payment networks to clear 1 transaction for just half the world?

  • Bitcoin @ 7 tx/s => 18 years
  • Ethereum @ 30 tx/s => 4 years
  • Stellar @ 1000 tx/s => 46 days

The performance of conventional systems is irrelevant to concerns about Safe Network’s PRACTICAL ability to host a payment network; either it can or it can’t.

I also think it’s more than reasonable to assume 4 billion people would make 1 financial transaction within 24 hours (which would take over 2 days to clear at 19k tx/s). The layman is going to compare a DBC payment network to CashApp, Venmo, PayPal, etc. As DBCs are developed, I think it’s best to keep their current throughput at the forefront of public awareness.

1 Like

Can someone point me to a guide to using DBCs on the testnet, so I can help in performance metrics efforts?

4 Likes

DBCs are not in a testnet yet (we are in between testnets), but there is the dbc crate we are working on, with a CLI etc. for playing around. They will be in a testnet soon, and then we can get some better measures.

Thanks for poking around, and please do - not with the next testnet but the one after that (should be testnet 7), as that will feature DBCs.

7 Likes

And I would expect that if 4B people are using the Safe Network to that extent then there will be at least a few hundred million nodes and from that over 10 million sections.

As David said, tests are showing on the order of 10K transactions per second per section.

We see that typically a 1→1 transaction has 2 sections involved, which gives us on the order of 10K * 10M / 2 = 5*10^10 TPS (50 billion).

If we assume that we only see 1/1000 of that due to massive bad & unexpected issues (eg lag within a section), it is still 50 million TPS, and 4B transactions take about 80 seconds.
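
A quick sketch of that back-of-envelope estimate; the per-section throughput, section count, sections-per-transaction, and the 1/1000 pessimism factor are all the assumptions stated above, not measurements:

```rust
fn main() {
    // assumptions from the post above
    let per_section_tps: f64 = 10_000.0; // ~10K tx/s per section (early test figure)
    let sections: f64 = 10_000_000.0;    // ~10M sections if a few hundred million nodes join
    let sections_per_tx: f64 = 2.0;      // a typical 1->1 transaction touches 2 sections
    let pessimism: f64 = 1_000.0;        // assume only 1/1000 of the ideal is achieved

    let ideal_tps = per_section_tps * sections / sections_per_tx; // 5e10 (50 billion)
    let pessimistic_tps = ideal_tps / pessimism;                  // 5e7 (50 million)
    let seconds_for_4b_tx = 4.0e9 / pessimistic_tps;              // ~80 seconds

    println!(
        "ideal: {:.0} TPS, pessimistic: {:.0} TPS, 4B tx in {:.0} s",
        ideal_tps, pessimistic_tps, seconds_for_4b_tx
    );
}
```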

8 Likes

I completely agree. There’s a lot of nuance to dbc performance. Getting bogged down in ‘the numbers’ might be a little misleading at this early stage because there are a lot of unknowns.

What we do know though is this is the rough process:

  • a new transaction will be signed by the client using BLS keys, which takes roughly between 0.3ms and 3ms depending on the size (source; blst is used for sig/verify).

  • the new transaction signature will be verified by each mint node (individually and in parallel), which takes roughly between 0.8ms and 3.5ms (same source as above). There are 7 mint nodes in each section (the elders). We need to account for network latency here too, which will be much more significant than the verification operation. There’s a lookup to check this isn’t already spent, but I expect that to add negligible time to the operation compared to the crypto verification.

  • the verified client transaction will be signed by each mint node (individually and in parallel). This will take another 0.3ms - 3ms, plus the network latency to return the mint signature to the client.

  • the client aggregates the mint signatures. I don’t have exact performance stats for aggregating 5 sigs (5 of 7 mint sigs are required), but it will probably be around the 5ms to 500ms mark (source, but these are JavaScript stats compared to the Rust stats above).

  • the client sends the new dbc produced by aggregating the mint signatures directly to the recipient.

So the sequential operations are: one signature op, seven verify ops (parallel), seven signature ops (parallel), one aggregate op, taking maybe half a second total; network latency would probably be around half a second. Adding this up I’d guess any one transaction takes about one second from start to end.
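
To put rough numbers on the crypto-only part of that flow, here is a minimal sketch of a 5-of-7 threshold sign/combine/verify round, timing each step. It uses the threshold_crypto crate as a stand-in; the real sn_dbc mint uses blst/blsttc and a proper reissue request rather than a plain byte string, so treat the measured times as indicative only:

```rust
// assumed Cargo deps: threshold_crypto = "0.4", rand = "0.7"
use std::time::Instant;

use threshold_crypto::SecretKeySet;

fn main() {
    let mut rng = rand::thread_rng();
    // 5-of-7 threshold: any threshold + 1 = 5 shares can produce the section signature
    let sk_set = SecretKeySet::random(4, &mut rng);
    let pk_set = sk_set.public_keys();
    let msg: &[u8] = b"reissue request"; // stand-in for a real dbc transaction

    // each of the 7 elders signs with its own key share (in reality, in parallel)
    let t = Instant::now();
    let shares: Vec<_> = (0..7usize)
        .map(|i| (i, sk_set.secret_key_share(i).sign(msg)))
        .collect();
    println!("7 share signatures: {:?}", t.elapsed());

    // the client combines any 5 of the shares into one section signature
    let t = Instant::now();
    let sig = pk_set
        .combine_signatures(shares.iter().take(5).map(|(i, s)| (*i, s)))
        .expect("combine should succeed with 5 valid shares");
    println!("combine 5 shares: {:?}", t.elapsed());

    // anyone can verify the result against the section's public key
    let t = Instant::now();
    assert!(pk_set.public_key().verify(&sig, msg));
    println!("verify combined sig: {:?}", t.elapsed());
}
```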

On the side of ‘in reality it will be faster than one second’:

  • network latency is variable so it may end up being faster than this estimate.

  • client and node cpu performance is variable so it may end up being faster than this estimate.

  • optimization for BLS signatures may make this much faster (eg FPGA or ASIC or custom future CPU instructions; current FPGA results are 3x faster)

  • there’s a lot of parallelism possible in this flow. The client can do multiple transactions in parallel, the mint nodes can do multiple verifications and signings in parallel, and there are many mints (one mint per section) all running in parallel. So we need to understand how the parallelism feeds into the performance metric. As others have said, having lots of sections will enable a larger parallel throughput (ie better total tx per second) than having fewer sections.

  • there may be other cryptographic optimizations that improve throughput (eg rollups like eth2 is doing)

On the side of ‘in reality it will be slower than one second’:

  • network latency is variable so it may end up being slower than this estimate.

  • client and node cpu performance is variable so it may end up being slower than this estimate.

  • mint nodes (elders) have many duties other than verifying and signing dbcs, so they may have some extra latency from that other work if it takes priority.

  • there may be additional crypto operations such as range proofs or ring signatures depending on the level of privacy being implemented in dbcs that could slow this down.

It’s great that people are interested in performance, but there’s a lot of subtle nuance to the topic, and I feel quoting numbers and scaling factors is misleading because we don’t really know very much for sure right now.

The best way to look at it is to really understand the basic operations being performed for a transaction, and then to scale that baseline performance up depending on your individual expectations of parallelism / latency / future improvements for SN.

If you want to really get into the weeds on this, I recommend playing with sn_dbc_mint, which is a manually operated dbc mint - it’s really cool! GitHub - dan-da/sn_dbc_mint

13 Likes

We could probably work out a big O minimum time for each stage, then lay out the parallelism and try to show the theoretically fastest time? It would have to be in stages, ignoring clients merging/sending etc. - just the mint work.

Then look at what the big O should be in reality, using some measurements of network performance and latency, to say the number is between X and Y.

I suppose X could range from 5 to 20,000 though, and then some folk may split into the 20,000 camp and others into the 5 camp?

Perhaps the ultimate answer, when the network is up, is whether it copes with all transactions without any perceivable delay for humans? Our neat thing IMO is that the network scales TPS with network size, which should hopefully match expectations, but we will see.

I am sure if you ask a Visa user what speed it is, they would say: dunno, but it’s fast enough!

6 Likes

Laying out the details is the right thing to do, though that won’t be good for sensationalist headlines. Getting an average, then repeating and publicizing it so it’s a number people run with, would maybe satisfy both a level of honesty and journalistic sensationalism?

4 Likes