BLS signature aggregation performance

BLS has a nice feature where it’s very cheap to aggregate signatures before verifying them. This makes verifying many signatures faster than doing each verification individually.
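As a quick sketch of what that looks like in API terms (using the same noble-bls12-381 calls as the script at the bottom of this post; bls here is just a placeholder for however the library is imported, and sigs/msgs/pubs are the signatures, messages and public keys):

// Instead of calling verify() once per signature,
// collapse N signatures into one point, then verify once:
let aggSig = bls.aggregateSignatures(sigs);
let allValid = await bls.verifyBatch(aggSig, msgs, pubs);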

How much faster? Depending on the situation, up to 10x faster, but more likely about 2x faster.

Running the script at the bottom of this post gives these results for aggregating:

100 individual sig verify took 6445 milliseconds
100 msgs with aggregated sig verify took 3606 milliseconds
1 msg with 100 aggregated sig verify took 586 milliseconds
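That works out to 6445 / 3606 ≈ 1.8x for the many-messages case and 6445 / 586 ≈ 11x for the single-message case, which is where the "about 2x, up to 10x" figures above come from.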

The first result is just the naive approach: 100 individual signature verifications, no aggregation.

The second result is for when we have lots of different messages: we can aggregate the signatures into one, but we still have N messages and N public keys. This is the most likely situation on Safe Network, since nodes are relatively constant but messages are highly variable.

The third result is for when we have an identical message with many signers: we can aggregate both the signatures and the public keys. For example, with N participants in a multisig transaction, we can aggregate all N signatures and N public keys into a single signature and public key before verifying. This is the fastest case, but it is very uncommon for Safe Network nodes to encounter this type of situation.
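For intuition on why the gains differ so much: BLS verification is dominated by pairing computations. Verifying N signatures naively costs roughly 2N pairings; aggregating over N distinct messages still needs about N+1 pairings (one per message plus one for the aggregate signature), which is why that case only roughly halves the time; aggregating over a single shared message needs just 2 pairings regardless of the number of signers, hence the ~10x result.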

So there may realistically be up to about a 2x performance benefit when nodes can aggregate many signatures for many messages, but I'm not exactly sure how this would work in practice. I was mainly interested in how much improvement is possible, not how to utilize it. If it were 1000x faster we'd probably push very hard to find a way to use it, but at 2x maybe it's not so critical to find ways of using it in the short term.

Note this was done using the JS library noble-bls12-381 (noblebls), not the Rust libraries threshold_crypto or blst, but it gives us an accurate measure of the relative performance, which is what I'm interested in.

threshold_crypto only seems to support aggregating for a single message, which is less common on Safe; see combine_signatures, which combines signature shares over one message.

blst has some examples, test_aggregate and test_multiple_agg_sigs, but noblebls was simpler so I used that!

The script to run these tests yourself:

// Open https://iancoleman.io/eip2333/
// Paste this code into the developer console
// Output is three console.log lines

prvs = [];
pubs = [];
msgs = [];
sigs = [];

let totalNodes = 5;
let totalMsgs = 100;

// create keys
for (let i=0; i<totalNodes; i++) {
  let prv = libs.noblebls.utils.randomPrivateKey();
  prvs.push(prv);
  let pub = libs.noblebls.getPublicKey(prv);
  pubs.push(pub);
}

// create messages
// and signatures
for (let i=0; i<totalMsgs; i++) {
  let msg = new Uint8Array(32);
  window.crypto.getRandomValues(msg);
  msgs.push(msg);
  let keyIndex = i % totalNodes;
  let sig = await libs.noblebls.sign(msg, prvs[keyIndex]);
  sigs.push(sig);
}

// Verify each sig individually
start = window.performance.now();
for (let i=0; i<totalMsgs; i++) {
  let keyIndex = i % totalNodes;
  let valid = await libs.noblebls.verify(sigs[i], msgs[i], pubs[keyIndex]);
  if (!valid) console.error("individual verify failed for message " + i);
}
end = window.performance.now();
individualSigsMillis = end - start;
console.log(totalMsgs + " individual sig verify took " + individualSigsMillis + " milliseconds");

// Gather public keys for aggregated verification
apubs = [];
for (let i=0; i<totalMsgs; i++) {
  let keyIndex = i % totalNodes;
  apubs.push(pubs[keyIndex]);
}

// Verify aggregated signature
// for multiple different messages
// including the time to aggregate the signatures
start = window.performance.now();
asig = libs.noblebls.aggregateSignatures(sigs);
batchValid = await libs.noblebls.verifyBatch(asig, msgs, apubs);
if (!batchValid) console.error("aggregated batch verify failed");
end = window.performance.now();
aggSigsMillis = end - start;
console.log(totalMsgs + " msgs with aggregated sig verify took " + aggSigsMillis + " milliseconds");

// Verify aggregated signature
// for the same message
// using many nodes
totalNodes = 100;
pubs = [];
sigs = [];
let msg = new Uint8Array(32);
window.crypto.getRandomValues(msg);
for (let i=0; i<totalNodes; i++) {
  let prv = libs.noblebls.utils.randomPrivateKey();
  let pub = libs.noblebls.getPublicKey(prv);
  pubs.push(pub);
  let sig = await libs.noblebls.sign(msg, prv);
  sigs.push(sig);
}
start = window.performance.now();
agPub = libs.noblebls.aggregatePublicKeys(pubs);
agSig = libs.noblebls.aggregateSignatures(sigs);
agValid = await libs.noblebls.verify(agSig, msg, agPub);
if (!agValid) console.error("aggregated single-message verify failed");
end = window.performance.now();
aggSigsMillis = end - start;
console.log("1 msg with " + totalNodes + " aggregated sig verify took " + aggSigsMillis + " milliseconds");

This might be an ignorant question @mav: Do you know of any work around BLS towards fast, efficient threshold signature schemes? Similar to the work Fusion commissioned on ECDSA.

Perhaps it is not possible, or not efficient enough, but if it were possible and section elders no longer held the entire private key, then it would reduce a few threats such as key selling.


To sort of answer my own question here, it looks like the Helium p2p wireless network has been working on this as part of their Proof of Coverage consensus algorithm, with some extra work extending POA Network's threshold_crypto DKG utilities.

Not sure if useful for solving SNs key selling problem though…


Sorry this fell off my radar! Short answer is no, I don’t know beyond what is already in threshold_crypto and blst et al, but I will read up on the links you’ve posted.


No worries, it was a bit of a silly question!

This was the most interesting link:

We were unable to expand the Testnet Consensus Group past 40 nodes as elections would begin to fail around that group size. We found that the existing DKG with curve SS512 is the primary culprit here. This becomes a greater risk to network security as the network grows to 100s of thousands of nodes and more users expect network stability and security.
On the Testnet, with these changes, we were able to exceed a group size of 60 and are able to run stably with a group size in the mid 50s. Since these protocols do not scale linearly (it’s more like cubic scaling), this is a significant improvement.
We don’t believe that the alternative of doing nothing makes sense either. We also would like to avoid creating a new library ourselves and haven’t been successful in finding an acceptable alternative that meets all of our criteria.


Superb find, this looks very helpful
