Update 23 September, 2021

This is the nice feature here. Once the SpentBook is written, the client can get the transaction approved at will, as many times as needed, so it can crash etc. and just retry. As the mints hold no state on this, they will sign whatever is in the SpentBook. So there is no problem if they keep approving the same transaction many times.


I don’t think this can happen as spentbook writes are done by the client directly, so elders are not tied up with this … the Internet is pretty fast for tiny data push-pulls.

Who is this guy? I googled, but a common name.


My guess would be this guy https://twitter.com/jason_stapleton?lang=en


A client can only write directly to its own HDD.
If it uses nodes, it also uses the nodes' resources:
CPU + network + disk (I/O speed I mean, both IOPS and MB/s), I guess.

Most of the work is done by the client:

- Generate the transaction together with the spend_key.
- Send it to be written to the Spendbook.
- Send the transaction to the Mint (Elders).
- Wait for at least five Mint nodes (Elders) to verify and sign the transaction and return it to the client.
- Aggregate the signature shares and form the final output DBC.

A new transaction cannot be generated until these steps are completed, so it is unlikely that a single client could saturate a section.
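The steps above can be sketched as a toy model in Python. Everything here is a hypothetical stand-in: `spend_key`, `elder_sign` and `reissue` are invented names, and a keyed SHA3 hash stands in for real BLS signature shares; it only illustrates the shape of the flow, not the actual API.

```python
import hashlib

SUPERMAJORITY = 5  # at least five Elder signatures required, per the steps above

def spend_key(tx_bytes: bytes, owner_pk: bytes) -> bytes:
    # Hypothetical stand-in: derive the spend key (the SpentBook address)
    # from the owner's public key and the transaction.
    return hashlib.sha3_256(owner_pk + tx_bytes).digest()

def elder_sign(elder_id: int, tx_bytes: bytes) -> bytes:
    # Toy "signature share": a keyed hash standing in for a BLS share.
    return hashlib.sha3_256(bytes([elder_id]) + tx_bytes).digest()

def reissue(tx_bytes: bytes, owner_pk: bytes, elder_ids: list) -> dict:
    key = spend_key(tx_bytes, owner_pk)          # 1. generate tx + spend_key
    spentbook = {key: tx_bytes}                  # 2. write the SpentBook entry
    shares = [elder_sign(e, tx_bytes) for e in elder_ids]  # 3-4. collect Elder sigs
    assert len(shares) >= SUPERMAJORITY
    # 5. aggregate the shares and form the final output DBC
    #    (toy aggregation: hash the sorted shares together)
    dbc_sig = hashlib.sha3_256(b"".join(sorted(shares))).digest()
    return {"spend_key": key, "dbc_sig": dbc_sig, "spentbook": spentbook}
```

Note that the whole flow is deterministic given the SpentBook contents, which matches the earlier point: a crashed client can simply re-run it and get the same result.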

If you have sufficient resources, you could have thousands of clients making transactions, but both the work and the Spendbook writes will be spread across the different sections of the network.

It’s obviously an attack to consider but it doesn’t seem as simple as you make it out to be.


nodes, yes, elders no. They are still working on all of this, so we can’t expect hard answers here yet IMO, but they have said they are working on the ability for clients to write directly (not via elders, but via the network) to spendbook, so I presume that means without consent of elders. Hence I don’t think there will be any meaningful bottleneck here … I suppose we will have to wait for testnets and test-attacks to see though.


When the client count is not limited, thinking about per-client limits does not make much sense.
An attacking client need not be the full-size binary loaded in memory.
It is enough to implement a single function in it, like "send it to be written to the Spendbook".
It does not need to complete the full procedure the original client would.
The most resource-consuming step can be chosen and repeated.

Benchmarks would be good too.
It is worth knowing how many Spendbook writes a section can handle, and how many a single PC can generate.

Needs to be checked by the way.

That the network allows an almost infinite number of clients is irrelevant. Running clients to generate transactions, as a network attack, has a cost (computational, energetic and monetary), and that is what has to be calculated.



Clients can’t be banned.
Clients can’t have limit on operation count per second.
So I still think it is important.

I agree.
Without PoW, I think the attacker's costs will be low.

And by comparison, I predict the nodes' cost to process each request will be higher.
That needs to be calculated too.

If load is forced to be spread across sections, it is good.
But I doubt it.

It should be forced to be distributed.

It is: the SpentBook is spread across all sections. Right now, with multiple input DBCs, nodes are checking across sections, and that is cool. One area @danda is also poking around in is the cost of a section checking remote section sigs. At the moment that is node work, but it need not be. We can have the client retrieve the ProofChains required to prove the transactions, so the current mint node is given all the info it needs, and in a crypto-correct way. I see this possibility:

client → mint - sign this tx please, and here is the remote section ProofChain
mint → client - OK, or: the ProofChain is not enough, I have X as the remote key, so please try again.

Then the client has to get the ProofChain and not the mint node.

In this way the client writes the SpentBook and also gathers the approvals required for the mint to do the re-issue. All the mint needs to do is check the SpentBook (retrieving values from across the network, but that is a simple Get), check the sigs of the tx, check the sigs of the DBCs themselves, and sign. This confines the whole process to the mint node checking sigs locally plus checking the SpentBook, while the client retrieves all the info required.

This means clients will have to get the valid sigs and keys and have the correct tx as per the SpentBook, and nodes just check the SpentBook, check sigs and sign.
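That retry loop can be sketched as a toy model. All names here (`mint_check`, `client_reissue`, `fetch_proof_chain`) are hypothetical, and plain strings stand in for real signatures, section keys and ProofChains; the point is only the shape of the protocol: the client, not the mint, does the fetching and retrying.

```python
def mint_check(tx: str, proof_chain: set, remote_key: str):
    # Toy mint: it signs only if the client-supplied ProofChain reaches
    # the remote section key it knows about; otherwise it tells the
    # client which key it has, so the client can fetch the missing chain.
    if remote_key in proof_chain:
        return ("ok", f"sig({tx})")
    return ("retry", remote_key)

def client_reissue(tx: str, fetch_proof_chain, remote_key: str, max_tries: int = 3):
    # Client-side loop from the message flow above: keep asking the mint,
    # extending our ProofChain with whatever key it reports as missing.
    chain = set()
    for _ in range(max_tries):
        status, payload = mint_check(tx, chain, remote_key)
        if status == "ok":
            return payload
        chain |= fetch_proof_chain(payload)  # fetch the chain up to key X
    raise RuntimeError("mint still unsatisfied after retries")
```

Because the mint holds no state between attempts, the client crashing mid-loop and starting over is harmless, as noted earlier in the thread.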

Hope that helps.


I see “written by Client”.
What number determines which section client will write spentbook entry to?

The address of the SpentBook entry is the SpendKey, and that is random across the address range. That means it will distribute evenly across all sections.
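As a quick illustration of that even spread (a hashed toy, not the real XorName addressing), we can bucket hashed keys by their leading bits, the way sections split the address space by prefix, and see the counts come out roughly uniform:

```python
import hashlib
from collections import Counter

def section_prefix(spend_key: bytes, bits: int = 2) -> int:
    # A section is identified by the leading bits of the address;
    # here we take the top `bits` bits of the first byte.
    return spend_key[0] >> (8 - bits)

# Hash 4000 distinct inputs and bucket them into 4 "sections" (2 prefix bits).
counts = Counter(
    section_prefix(hashlib.sha3_256(i.to_bytes(4, "big")).digest())
    for i in range(4000)
)
# Each of the 4 buckets should hold roughly 1000 keys.
```

This is just the uniformity of the hash output doing the load-balancing: no one calculates the random value, it falls out of the addressing.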


Who is calculating that random value?

It’s deterministic


Is dbc.owner.derive_child dependent on private key, which is selected by user?
Is dbc dependent on private key, which is selected by user?

You can derive the pub and secret keys from a base key. This is how it works.

The DBC content is selected by a user pretty much the same way a bitcoin transaction is. So you pay X → Y, and that is what your DBC will contain.

I think you are trying to see if there is a way to force a SpentBook entry into a particular section. I doubt you can do it with ease, as you need the output DBC to hash to something that you then spend. That's hash-collision territory, so crypto hard. But we don't have a section per address, and a section will cover billions of addresses, so with effort you could make DBCs fit a section.

It would be a lot of work, and more so as the network grows, for almost zero return. You may get some transactions into a section (consider a 2-section network, as it's easy), but I am not sure what that would achieve. All you would do is concentrate SpentBook entries in a particular section, and the network would not care, as a SpentBook entry is a cheap lookup and a very cheap write in terms of work.
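A toy sketch of that grinding, with a plain nonce standing in for real DBC contents (all names hypothetical): the attacker hashes candidates until one lands in the target section's prefix. Expected attempts grow as roughly 2^bits of matched prefix, which is why a 2-section network (1 bit) is trivial to hit and the cost climbs every time the network splits.

```python
import hashlib

def grind_for_section(target_prefix: int, bits: int, start: int = 0):
    # Brute-force nonces until the resulting "spend key" lands in the
    # target section, i.e. its address starts with the target prefix bits.
    attempts = 0
    n = start
    while True:
        key = hashlib.sha3_256(n.to_bytes(8, "big")).digest()
        attempts += 1
        if key[0] >> (8 - bits) == target_prefix:
            return n, attempts
        n += 1
```

As the post says, even a successful grind only concentrates cheap SpentBook writes in one section, so the effort buys the attacker very little.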



So generating N (section count) SHA3-256 hashes (or something like that) takes more resources than M nodes receiving, validating and writing the SpentBook entry?


The point is the client is doing all of that with the exception of reading the SpentBook for any request.

So the mint’s role here will be that of a validator as opposed to controller.

If folk wanted to overwhelm the network then doing many many Get requests would be easiest (although we still have message priority to help there) or doing many Put requests (which grows and strengthens the network).

So I suppose it’s all a balance. However, a thing you will like is that the new message flows, which @chris.connelly and @chriso worked on after @bochaco kicked that off, allow us to better understand the “cost of each flow” in terms of data and hopefully CPU (sig validation is CPU-intensive, although @mav is really making huge inroads there).

That will allow us to tabulate each cost and better set message priority. The issue here, always, is amplification attacks: where clients can ask the network to do more work than the client does, in a way that amplifies the work across more nodes (like section → section agreements, which we have killed, thank God). That is the key place to check, but clients will always browse freely, and there is always gonna be some number of them that slows the network down or even pauses it.