Update 23 September, 2021

Thanks for the wonderful update. I truly appreciate all of the hard work the entire team has been doing. :racehorse:


Awesome update. Although he’s a long-time, super-solid member of the community, it seems like @mav is in the running for a bottle of scotch in the mail for this particular effort.


We will need to make the cost of an entry in the SpentBook (around a hundred bytes, I think) lower than the computational cost a client spends to create a transaction.

If that is not the case, adding a small PoW might suffice.

Another solution, of course, would be to charge for this immutable data.
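A small PoW could work like hashcash: the client must find a nonce such that the hash of the entry plus nonce meets a difficulty target, which is costly to produce but cheap to verify. A minimal sketch in Python — the function names and the leading-zero-bits difficulty scheme are illustrative assumptions, not from any Safe Network code:

```python
import hashlib
from itertools import count

def pow_stamp(entry: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256(entry || nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(entry + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_stamp(entry: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash, regardless of how hard the stamp was to mine."""
    digest = hashlib.sha256(entry + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Example: ~2^16 hashes on average to produce, one hash to check.
nonce = pow_stamp(b"spentbook-entry", 16)
assert verify_stamp(b"spentbook-entry", nonce, 16)
```

The asymmetry is the point: raising `difficulty_bits` makes spamming SpentBook entries more expensive for a client without adding any meaningful verification load on the network.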


I don’t know how far off this project is, but it can’t come fast enough (as I say in response to all the censorship on Facebook and elsewhere).


I wonder if a time-lock stake might work here. The network retains a locked amount for x events before returning it. It keeps fees at zero, but prevents continual attacks from low-funded bot accounts.
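As a sketch of what such a time-lock stake could look like — a toy model only, where the event-based clock, the one-write-in-flight rule, and all names are assumptions rather than an actual Safe Network mechanism:

```python
from dataclasses import dataclass, field

@dataclass
class TimeLockStake:
    """Toy model: the network holds a client's deposit for `lock_events`
    network events before refunding it, rate-limiting spendbook writes."""
    lock_events: int
    event_count: int = 0
    locks: dict = field(default_factory=dict)  # client -> (amount, unlock_at)

    def stake(self, client: str, amount: int) -> None:
        """Lock a deposit; a client with a write in flight cannot start another."""
        if client in self.locks:
            raise RuntimeError("client already has a write in flight")
        self.locks[client] = (amount, self.event_count + self.lock_events)

    def tick(self) -> list:
        """Advance one network event; return (client, amount) refunds that unlock."""
        self.event_count += 1
        released = [(c, amt) for c, (amt, at) in self.locks.items() if at <= self.event_count]
        for c, _ in released:
            del self.locks[c]
        return released
```

The design choice here is that fees stay at zero (the deposit comes back), but a bot account can only issue one write per lock window per unit of stake it can afford to park.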


The address space is 2^256, and it would be like creating 2^256 Bitcoin addresses to overflow it or hit someone else’s secret key. So we are OK there.
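To put a number on that: the birthday bound says the chance of any collision among n randomly generated 256-bit keys is roughly n^2 / 2^257. A quick check (the helper name is just for illustration) shows that even a trillion keys gives an effectively-zero probability:

```python
SPACE_BITS = 256

def collision_probability(n_keys: int) -> float:
    """Birthday-bound approximation: P(any collision) ~= n^2 / 2^(bits+1)."""
    return n_keys * n_keys / 2 ** (SPACE_BITS + 1)

# Even a trillion randomly generated keys:
p = collision_probability(10**12)
assert p < 1e-50  # effectively zero
```

For the probability to reach even 50% you would need on the order of 2^128 keys, which is far beyond any realistic attacker.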


Is @Sotros25 still handling media relations? Jason Stapleton wants to interview somebody from the team.


I think the spendbook of DBCs is very important, because there is only one state in SN and this is directly linked to double spending: new mints must be able to determine that the spendbook is correct.

Structurally, an SB (spendbook) can consist of 1) some clients, 2) a special area of the CRDT, 3) mints, or 4) a combination thereof.

I guess from this that the SB is stored using the immutable function of the CRDT, because clients and mints cannot store SBs immutably.

In my opinion, this is a revolutionary change in asset-transfer services, because it seems to sidestep the consensus problem posed by state-machine replication. Thanks to @dirvine & the @maidsafe team…


Amazing!! :+1: … Thanks to @mav


amazing !!
I was very surprised that so many amazing ideas were hidden.

But when Sam sends 10 DBC to Janet, is it sent to Janet’s ‘owner_key’ or ‘spend_key’?


Quick question, if for some reason the client crashes (user’s device crashes, runs out of power, etc) after the spendbook is finalised and before the final output DBCs are formed, has the user lost their funds?


Looks like a hack.
If the PoW is high, users with cheap devices will suffer.
If the PoW is low, crashing the network will take a month or a year instead of a day, which is better, but not perfect.

An additional transaction to make a transaction possible, which will itself require another transaction? :slight_smile:
It would be possible, but tricky.
Also, who will be the recipient of the fees?

Bot #1 waits for events from bots #2, #3 and #4?

Your answer makes me think the problem is real.

Not sure what you are saying here … it’s practically impossible to control a section, so it’s not possible for bots to cooperate, if that’s what you are suggesting.

You have it backwards: his answer is that it’s not real. Get your most powerful calculator and attempt to compute 2^256 … we are talking atoms in the universe here.

The only issue IMO is that spendbook takes up space on the network so the ‘attack’ is just to increase overhead cost of the network by doing meaningless writes to spendbook. Hence some cost/burden needs to be added to the client to inhibit unlimited writing.


We are talking not about adding nodes, but about adding clients.
There can be an infinite number of them, and no ageing protection exists for clients.

This is what I am talking about. Not about atom count in universe.

Which means it will still be unlimited, just slower (if you mean PoW).


Which is why adding a cost to the client slows things down.

I was proposing a staking mechanism. I don’t know whether it can be implemented efficiently, though: the point of the client doing spendbook operations is to reduce load on nodes, but adding a staking mechanism may require a node again; same with PoW, I imagine.

However, slowing the process down is probably sufficient, as storage space on the network is vast relative to the spendbook’s size. As long as the growth rate of the network sufficiently outpaces the growth rate of the spendbook, we are okay.


One more thought:
PoW tuning should consider not only the existence of individual users, but also the existence of services (like exchanges), which could theoretically be implemented with network mechanisms.
Delays without PoW will require such tuning too.


Maybe batched writes to the spendbook could increase speed for an exchange or similar: a larger stake overall for a batch, but the same fixed delay, so batching means more throughput per unit time.

edit: the idea here is to prevent the same DBC being split and re-merged repeatedly at high speed; in a batch you can include multiple different DBCs (for a higher stake), but you could not include the same DBC twice in a batch operation, so that’s still okay.
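That “no same DBC twice in one batch” rule from the edit above is cheap to enforce. A one-function sketch, assuming each DBC in a batch is identified by its spend key (an assumption for illustration):

```python
def validate_batch(spend_keys: list) -> bool:
    """Accept a batch only if no DBC spend key appears twice, so the same
    DBC cannot be split and re-merged within a single batch operation."""
    return len(spend_keys) == len(set(spend_keys))

# A batch of distinct DBCs is fine; a repeated DBC is rejected.
assert validate_batch(["dbc-a", "dbc-b", "dbc-c"])
assert not validate_batch(["dbc-a", "dbc-a"])
```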

edit2: … I don’t know really, too hard a problem for me; steam is starting to come out of my ears thinking about the fungibility of DBCs and all of this. Wondering whether the spendbook can just be limited in size (I guess it can’t if it’s immutable), and how likely that is to break everything.

edit3: IMO, after more consideration, all of my thoughts here seem worthless. I suspect that if there is a real concern about creating too much data in the spendbook after all else … then a transaction fee (err, spendbook-write fee) is the ultimate solution … I hate it, but maybe there’s nothing for it.

Split-merge-split is the first operation I thought of.
The next one is split-split-split:
0.0100000 → 0.0099999 + 0.0000001
0.0099999 → 0.0099998 + 0.0000001
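Working in integer units of 10^-7 to avoid float error, that split-split-split chain is just a loop; each step consumes one input and writes one more spentbook entry, so a single small coin can generate on the order of 100,000 entries. A toy illustration, not network code:

```python
def dust_split_chain(start_units: int, dust_units: int = 1) -> int:
    """Count spentbook entries from repeatedly splitting off one dust output.
    0.0100000 is 100_000 units of 10^-7; each split writes one entry."""
    entries = 0
    remaining = start_units
    while remaining > dust_units:
        remaining -= dust_units  # split off one dust output
        entries += 1             # one more spentbook write
    return entries

# Starting from 0.0100000, a bot can generate ~100,000 entries from one coin.
assert dust_split_chain(100_000) == 99_999
```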


Too much data in the spentbook is one of the problems.
Another one is the network’s limited ability to handle transactions.
On one hand, a slow network will limit the spentbook’s growth speed.
On the other hand, if 99% of network resources go to serving split-split-split-merge-split-merge-split bots, then real users will be left with almost nothing.


Hat is in the ring. We’ll see what happens.