Next step of safecoin algorithm design

Oh is this new? So maybe no Mutable Data type anymore?

6 Likes

Any node must be capable of communicating with the rest of the nodes in its routing table at any moment.

In particular, secure message passing implies that any message can be redirected to any section of our routing table. Since data is uniformly distributed within the XOR space, all the connections in our routing table have the same chance of being used.

This is why I don’t think the network could work efficiently without keeping all the connections in the routing table active.
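In code terms, relay selection is just a minimum over XOR distances, so every entry is a potential next hop. A minimal sketch, with u32 names standing in for the real 256-bit ones (names and function here are illustrative, not the actual routing crate API):

```rust
// Illustrative only: real SAFE names are 256-bit, not u32.
fn xor_distance(a: u32, b: u32) -> u32 {
    a ^ b
}

/// Pick the routing-table entry closest (in XOR) to the destination.
/// Any entry can be the winner, which is why all connections must stay live.
fn next_hop(routing_table: &[u32], dest: u32) -> Option<u32> {
    routing_table
        .iter()
        .copied()
        .min_by_key(|&peer| xor_distance(peer, dest))
}

fn main() {
    let table = [0b0001u32, 0b0100, 0b1010, 0b1111];
    // 0b1011 is closest to 0b1010 (distance 0b0001).
    assert_eq!(next_hop(&table, 0b1011), Some(0b1010));
}
```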

6 Likes

I haven’t heard this, push from?

Sounds like a can of worms to me: if a PUT balance had a max per account, the incentive to attack and modify it would be limited. But if the Safecoin balance were a ledger value, I don’t think you can limit that per account, and so the incentive becomes enormous.

5 Likes

It is :smiley: However, if you look at it like this: a section cannot be overtaken (or the design should ensure this is the case), so then it is no problem. Then we look at how a section can be under threat and how the neighbors can mitigate this, and it becomes more interesting. Also, if the client manager group for that particular wallet were also able to prove the account is valid (i.e. all the last transactions (from) are maintained), then it adds more checking, as per the data type.

This part is very interesting, as we have always said coins should be secured every bit as much as data; if we have secured the data, then we will have secured the coins. So for data we do an extra step: we store the identifier in a data chain. If we did similar with wallet balances, so they are secured and private, then we may be able to say we have secured the client manager account info and can make safecoin a balance item in those client managers, who must all agree.

It is quite subtle, but the deeper you look, the more likely it is to be solid. When you think about it, a PUT balance is held by client managers, and if you could control that, you could put a massive PUT balance in client accounts illegally.

5 Likes

This part.
If it can be done with put balance, it can be done with safecoin.
In the spirit of minimising complexity, it seems to me that the inherent problems of that approach would need a less complicated solution than solving divisibility with data-item coins, for example.

It was a rather elegant way to control issuance though. Any ideas as to how that could be kept, or would it be scrapped altogether?

7 Likes

I was actually thinking very similar things when waking up this morning, pondering interesting things in bed :slight_smile:

With probabilistic issuance of a whole coin, based on work, it’s going to take a long while for a new (and small) vault to see results. (Well, maybe that part is up for change now?)
Optimal for uptake and gaining popularity would be for newcomers to be able to join, do work, and see results fast.

Maybe the section could hold a pool from which it pays newcomers, who later return this as they start to earn more. That way we don’t need to handle this special initial case in the safecoin algo, but rather as part of the section-joining algo.
I didn’t get past the usual objections about susceptibility to gaming and so on, though, before getting up and filling my head with other things.
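Purely to make the shape of the idea concrete, a sketch of the bookkeeping such a pool might do (everything here is hypothetical: the names, the rules, the amounts):

```rust
/// Hypothetical bookkeeping for a section-held pool that advances a
/// starter amount to newcomers and recoups it from their later earnings.
struct SectionPool {
    balance: u64,               // pool funds, in some safecoin unit
    advances: Vec<(u64, u64)>,  // (node id, amount still owed)
}

impl SectionPool {
    /// Advance a starter amount to a joining node, if funds allow.
    fn advance(&mut self, node_id: u64, amount: u64) -> bool {
        if self.balance >= amount {
            self.balance -= amount;
            self.advances.push((node_id, amount));
            true
        } else {
            false
        }
    }

    /// Withhold part of a node's farming reward until its advance is repaid;
    /// returns what the node actually keeps.
    fn settle_reward(&mut self, node_id: u64, reward: u64) -> u64 {
        if let Some(entry) = self.advances.iter_mut().find(|(id, _)| *id == node_id) {
            let repay = reward.min(entry.1);
            entry.1 -= repay;
            self.balance += repay;
            return reward - repay;
        }
        reward
    }
}

fn main() {
    let mut pool = SectionPool { balance: 100, advances: Vec::new() };
    assert!(pool.advance(42, 10));
    assert_eq!(pool.settle_reward(42, 6), 0); // all withheld toward the advance
    assert_eq!(pool.settle_reward(42, 6), 2); // last 4 repaid, node keeps 2
}
```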

As I have mentioned before, it might well be that the very initial stage (of a network, or a vault…) needs different tooling than the later stages. I think of it as leaving the atmosphere: that first part is very different from the very, very long travel that then goes on in space. Different conditions need different solutions.

5 Likes

That would make me sad, because “coin as entity” is one of the primary differentiators of Safecoin: the property that makes it more like cash and less like a bank account. Whether this is an important property is a matter for another discussion, but my personal opinion is that yes, it is.

I surveyed the forum for proposals that used entity based coins with denominations:

All in all, 2017 was a good year for coin denominations. Maybe it’s time to pick up the pace again.

To maintain balance (such a clever pun!) here are two proposals that involve account balances:

5 Likes

How would it be any different from the situation where all vaults are of a fixed, known size? Currently, once a vault is full and a new chunk arrives, does it just fail? Who gets the new chunks to maintain 8 copies if that happens? It seems like the simplest way to manage things is to just redirect the chunk to the nearest neighbor in XOR. Sounds rather similar to the indirection defense we discussed a few months ago. So, intuitively, it would seem that a redirect from the vault that is full to the vault that is next closest to the address would take care of things.

I don’t know the real answer to this, so I’m just thinking out loud; it would be good to get some input/clarification from the experts.

1 Like

Or, much simpler, the XOR distance could be divided by the size of the vault, so that each vault gets the right number of chunks for its capacity. If the section knew the size of each vault, that would simplify this and other things.
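Something like this toy weighting (the names and types are made up; real placement logic would live in routing/vaults):

```rust
/// Toy capacity-weighted chunk placement: dividing the XOR distance by the
/// vault's size makes bigger vaults "closer" to proportionally more chunks.
struct Vault {
    name: u64,
    size_gb: u64,
}

fn weighted_owner(vaults: &[Vault], chunk_name: u64) -> Option<&Vault> {
    vaults
        .iter()
        .min_by_key(|v| (v.name ^ chunk_name) / v.size_gb.max(1))
}

fn main() {
    let vaults = [
        Vault { name: 0x10, size_gb: 10 },
        // 10x bigger, so it "owns" roughly 10x the address span.
        Vault { name: 0x80, size_gb: 100 },
    ];
    let owner = weighted_owner(&vaults, 0x42).unwrap();
    println!("chunk 0x42 -> vault {:#x}", owner.name);
}
```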

2 Likes

I assume a pre-RFC will follow in due time about this ‘safecoin as data items, but as integers in the client managers’ proposal, if it remains a valid idea of course.
One thing I’m curious about is how it will be checked whether the maximum number of Safecoins has been reached.

2 Likes

And, to emphasize the implied scope, this includes not just communications with nodes in the current section but also with nodes in the neighboring sections, so a lot of nodes.

3 Likes

That can still be achieved by an array covering the 32-bit coin address space. This array could split as sections split, losing a leading bit at a time, so that the whole array is the section prefix + what is left of the array. A section would then be in charge of that part of the address space and can be queried whether there are any spaces (0s) left in the array. This is way oversimplified, but you get the idea. It’s just an array the sections need to be aware of; they will be given the array as current, and they will populate or delete items (safecoins) from that array.
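A rough sketch of what such an array might look like (the CoinMap name and layout are mine, not anything from the codebase; the elders’ agreement on updates is elided):

```rust
/// Hypothetical per-section map of the safecoin address space.
/// The section with prefix `P` owns every coin address starting with `P`;
/// a set flag means "farmed", a clear flag means "available".
struct CoinMap {
    prefix: Vec<bool>, // section prefix bits
    farmed: Vec<bool>, // one flag per coin address under this prefix
}

impl CoinMap {
    /// On a section split, each half keeps the addresses under its new prefix.
    fn split(self) -> (CoinMap, CoinMap) {
        let CoinMap { prefix, farmed } = self;
        let half = farmed.len() / 2;
        let (lo, hi) = farmed.split_at(half);
        let mut p0 = prefix.clone();
        p0.push(false);
        let mut p1 = prefix;
        p1.push(true);
        (
            CoinMap { prefix: p0, farmed: lo.to_vec() },
            CoinMap { prefix: p1, farmed: hi.to_vec() },
        )
    }

    /// Are there any unfarmed coins (0s) left under this prefix?
    fn has_space(&self) -> bool {
        self.farmed.iter().any(|&f| !f)
    }
}

fn main() {
    let map = CoinMap { prefix: vec![], farmed: vec![false; 8] };
    let (a, b) = map.split();
    assert!(a.has_space() && b.has_space());
    assert_eq!(a.prefix, vec![false]);
    assert_eq!(b.prefix, vec![true]);
}
```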

There are a few things that can be done here to limit a section’s ability to create coins etc., but the neighbors will likely also be able to be aware of each neighbor’s array and which coins are farmed.

Even with safecoins as just integers or similar, they can still have an address if we wish. That allows further checking, but may not be required.

8 Likes

We’ll see, the devil is in the detail.
I suspect a ‘client managers’ section/group can divide Safecoins the way it wants like this, if I understand correctly.
And reorder Safecoins with a section split if necessary.
You’d better not have a client with a lot of Safecoins, though: if a section becomes so small (after a lot of splits) that not all the Safecoins of a client with a big wallet fit in the new, smaller address space, do you have to ‘split’ the client as well?
Or have a maximum number of Safecoins that one client can have.
Edit: or I misunderstand, and the Safecoin integers are in the client managers, but the arrays are not.

3 Likes

This is OK; the safecoins will be from all over the network, but the array would hold which are available and which are not, if that makes sense?

Both would be: one represents the client balance, and the array holds only the safecoins used/available in that address range.

/brainstorm : remember : Just in case :wink:

5 Likes

The issue that @happybeing’s post also brings up is an APP that *spends all* of the user’s account balance. Once the APP has permission to PUT data, then effectively the limit on the number of PUTs is the person’s account balance. With safecoin, when the current PUT balance is used up, permission has to be gained to spend another coin to top the PUT balance up again.

I know I heard about this idea a few months ago, but it just occurred to me when you said the balance is held in the account data (client manager): what happens to the multiple wallet IDs that an account could have if it were MDs for safecoins? Would you store this balance with the ID key pair in the account data, thus allowing multiple balance IDs (wallet IDs)?

I do agree with you here. It’s certainly a differentiator, and nice to have a cash-like quality.

Transaction load was always the problem. And the ideas suggested for storing multiple coins (actual splitting of a coin) in the one MD will mean huge transaction loads after a few years, since it is nearly impossible to unsplit coins once so many people have “shares” in the one coin. These are basic problems that cannot be solved just by doing it a different way, if you want micropayments (in fiat terms).

Here is another idea.

  • The safecoins are still MDs, always owned by the section that looks after them.
  • Each coin has 10,000 fields; thus the coin can be split into up to 10,000 parts of varying amounts. All parts add up to one safecoin.
  • The fields hold the fraction (in decimal format) of the coin that a person has.
  • The wallet data structure now holds the coin address and field number for each portion the user has.
  • When a payment is made, it can be any amount of a coin, and the appropriate PUT balance is given.
  • When a payment is made, the section will take the amount and add it to the free/unallocated field.
  • The scarcity factor can use the address generated and allocate the remaining (unallocated) amount in the coin as the reward on success. An attempt is more likely to be a success too, just often worth less than a full safecoin.
  • Sending 2.45632453434 safecoin is now easy, since fields are used, and if needed a field can be split into two fields.
  • There are still only 2^32 coins (MDs).
  • The wallet can be designed so that small values are recombined where possible. Payments to the network will recombine within the coin whenever a sub-coin value is spent. EDIT: it may even be desirable to have an API that allows the user to “spend” from one coin MD to another, so that the receiving coin holds the user’s two small parts of a coin combined.

Effectively there are 2^32 × 10,000 discrete amounts available, and all discrete values are 1 coin or less. This allows micro, nano, even “nano nano” transactions. It also solves the chicken-and-egg issue, since an Android or iPhone APP can be created to accept fiat for very small coin amounts.
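To make the structure concrete, here is a sketch of what such a coin MD could look like (names and types are hypothetical; the section’s consensus and ownership checks are elided):

```rust
use std::collections::HashMap;

const PARTS_PER_COIN: u32 = 10_000;

/// Hypothetical divisible safecoin MD: up to 10,000 "fields", each holding
/// a fraction of one coin, expressed in ten-thousandths to avoid floats.
struct DivisibleCoin {
    /// field number -> (owner id, parts held); allocated + unallocated
    /// parts always sum to PARTS_PER_COIN.
    fields: HashMap<u16, (u64, u32)>,
    /// Parts owned by no one (available as farming rewards).
    unallocated: u32,
}

impl DivisibleCoin {
    /// Spend `parts` from `field`: the spent amount returns to the
    /// unallocated pool (e.g. as a PUT payment to the network).
    fn spend(&mut self, field: u16, owner: u64, parts: u32) -> Result<(), &'static str> {
        let entry = self.fields.get_mut(&field).ok_or("no such field")?;
        if entry.0 != owner || entry.1 < parts {
            return Err("not the owner, or insufficient parts");
        }
        entry.1 -= parts;
        let emptied = entry.1 == 0;
        self.unallocated += parts;
        if emptied {
            self.fields.remove(&field); // recombine: empty fields disappear
        }
        Ok(())
    }
}

fn main() {
    let mut coin = DivisibleCoin {
        fields: HashMap::from([(0u16, (42u64, PARTS_PER_COIN))]),
        unallocated: 0,
    };
    // Owner 42 pays 2,456 parts (0.2456 safecoin) back to the network.
    coin.spend(0, 42, 2_456).unwrap();
    assert_eq!(coin.unallocated, 2_456);
}
```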

EDIT: also, gifting enough for a person to create an account is not such an issue, since it’s likely to be thousandths of a full coin or much less.

EDIT2: It would be possible, using this, to have reward amounts of say 1/10,000 (or whatever fraction) of a safecoin and to give out rewards 10,000 times more often.

And this requires very little change to how safecoins were envisioned to operate.

8 Likes

Exploring ‘variation’ with respect to setting targets…

The network is designed to spread load out evenly (load is mainly storage and bandwidth). This is a natural consequence of using a hash to locate vaults and chunks on the network.

In an ideal world the distribution of data is perfectly equal. Every section stores and delivers exactly the same number of chunks as every other section.

However this isn’t the case in reality. It’s important to consider because the idea of ‘stress’ needs to relate back to the reality of ‘what is normal’ and ‘what is beyond normal’.

As an example: storing 64K chunks across 64 sections, ‘normal’ would ideally be 1K chunks in each section.

But in reality the distribution is quite uneven (tested by scraping news.ycombinator.com and hashing 64K posts to get chunk names).

With 64 sections, the smallest section stored 912 chunks and the largest stored 1080 chunks. The standard deviation for storage was 29.1. 90% of sections stored between 950 and 1044 chunks; 50% stored between 984 and 1019.

So the variation (in this case) is 1000 chunks ± 8.8%. I didn’t expect there to be that much variation.
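For anyone who wants to reproduce this, a quick simulation along the same lines (using Rust’s generic hasher on synthetic names instead of SHA on scraped posts, so the exact numbers will differ slightly):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn main() {
    const SECTIONS: usize = 64;
    const CHUNKS: usize = 64_000;

    // Hash 64K synthetic "chunk names" into 64 buckets (sections).
    let mut counts = [0usize; SECTIONS];
    for i in 0..CHUNKS {
        let mut h = DefaultHasher::new();
        i.hash(&mut h);
        counts[(h.finish() % SECTIONS as u64) as usize] += 1;
    }

    // Mean, min/max, and standard deviation across sections.
    let mean = CHUNKS as f64 / SECTIONS as f64;
    let var = counts
        .iter()
        .map(|&c| (c as f64 - mean).powi(2))
        .sum::<f64>()
        / SECTIONS as f64;

    println!(
        "min {} max {}",
        counts.iter().min().unwrap(),
        counts.iter().max().unwrap()
    );
    println!(
        "std dev {:.1} ({:.2}% of mean)",
        var.sqrt(),
        100.0 * var.sqrt() / mean
    );
}
```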

The distribution changes depending on the size of the network and the total number of chunks, but is always naturally bounded by the equality of hashing.

At what point is the network considered stressed? Or is variation simply not important when considering stress?

Variation in chunk count is just one aspect of overall variation on the network. We should also ask: what is the degree of variation to be expected and accepted for:

  • supply of bandwidth (depends on ISPs?)
  • supply of storage (depends on laptop specs vs datacenter specs?)
  • demand for upload (depends on default smartphone camera resolution?)
  • demand for download (depends on … what factors … meme trends?!)
  • inter-vault latency (depends on geographical distribution?)
  • inter-section latency (depends on consensus speed?)
  • vaults per section (depends on xor names of vaults?)

Ideally the network algorithms manage all these fluctuations by an inherently clever design. But understanding the boundaries between normal vs stressful variations might be important. This post is a very basic start at trying to understand the magnitudes of normality.

To clarify my point about ‘variation’ within the context of the OP: if an attacker can cause a stressful fluctuation, it should not preclude the future participation of normal users (i.e. there should be a ‘return to normal’); otherwise it could cause irreversible exclusion / centralization. What is the ‘normal’ we are returning to? Is it a target value? Or is it simply ‘the balance’? How long does it take? Why? Seriously tough questions to answer…

There’s no way to avoid the network needing to operate within broad ranges of storage capacities, moderately slow to very fast bandwidths, sub-millisecond to hundreds-of-milliseconds latencies, etc., which is somewhat at odds with the ‘equal’ nature of the XOR-space design and the consensus design. What is the lowest acceptable common denominator, and what is the impact? If we leave it entirely up to ‘the balance’, I fear vaults will become exclusive very rapidly.

Can we take parameter design out of our hands and automate it? I think so. But first we probably need some manual guidance to steer the design of the automation. I’m sitting here feeling ‘this is damn tricky stuff’!


I can certainly see app developers saying “screw it, we’ll just pay the safecoin for our customers’ uploads ourselves and put it in the ‘costs’ column” so the customers don’t have to engage with safecoin to get started. It’d be good to try to avoid that dynamic if possible. But it’s going to be a pretty tempting path for app developers, I think. Mechanisms to avoid it (like what you suggest) are really interesting.

Yes, the chunk just fails to store (to my knowledge). There’s no redirection of chunks. The NotEnoughSpace error in the vault’s ChunkStore is a starting point down the rabbit hole; I can’t find the handler in routing, but anyway, that’s my understanding…

I imagine if vaults are full at a fixed size then the section adjusts itself to allow more vaults to join so the chunks are more thinly distributed.

12 Likes

I guess the first attempt is to do a merge, since the most likely reason for no spare space is not enough nodes, which probably means merging with another section.

Maybe we need a “help” message that a section can send its neighbours, asking for a node to be relocated to that section. Then it might gain a few nodes.

Obviously these things should be done well in advance of critical shortage of space.

Of course, the currently proposed mechanism is to raise the price of PUTs for storing in a section that is running low on spare space, which is supposed to slow down people wanting to store files at that time.
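None of this is settled, but as a strawman the price curve could be as simple as an inverse of the spare-space ratio (constants and curve shape are made up):

```rust
/// Strawman only: price rises as spare space shrinks, throttling PUTs in a
/// section that is running low.
fn put_cost(used_bytes: u64, total_bytes: u64, base_cost: u64) -> u64 {
    let spare = total_bytes.saturating_sub(used_bytes) as f64 / total_bytes as f64;
    // Inverse curve: cost doubles every time spare space halves,
    // capped when less than 1% remains.
    (base_cost as f64 / spare.max(0.01)) as u64
}

fn main() {
    for used in [50u64, 90, 99] {
        // Roughly 2x, 10x and 100x the base cost of 10.
        println!("{}% full -> PUT cost {}", used, put_cost(used, 100, 10));
    }
}
```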

4 Likes

It’s a multinomial distribution, so the variance for each “bucket” is np(1−p); in this case 64000 × (1/64) × (1 − 1/64) = 984.375. The standard deviation (the square root of the variance) is about 31.375, which is very close to the 29.1 you measured. It’s also about 3.14% of the average number of chunks in a section.

In the more general case (and unless I made a mistake), the ratio of the standard deviation to the average number of chunks in a section is sqrt(1 − 1/number_of_sections)/sqrt(average_number_of_chunks_in_a_section), and this converges to 1/sqrt(average_number_of_chunks_in_a_section) as the number of sections grows large, which is what we expect for the Safe Network.
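Spelling that out, with $s$ = number of sections, $p = 1/s$, and $m = np$ = the average number of chunks per section:

$$\frac{\sigma}{\mu} = \frac{\sqrt{np(1-p)}}{np} = \sqrt{\frac{1-p}{np}} = \frac{\sqrt{1 - 1/s}}{\sqrt{m}} \longrightarrow \frac{1}{\sqrt{m}} \quad \text{as } s \to \infty$$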

If a chunk is 1MB, a vault stores 64 GB, and a section consists of 16 vaults (sorry if the numbers are off), then we have 1 Mi chunks in a section on average, so the standard deviation shrinks to about 0.1% of the section size. That’s not a bad number.

8 Likes

Thanks for sharing this post

1 Like

But wouldn’t that mean that, after the new vaults join the section, all the vaults would need to transfer chunks around to make sure the closest vaults store the right chunks?

My impression is that, as of right now, when a vault fills up and throws an out-of-space error, it will churn. Simple and effective. This approach in and of itself would appear to incentivise vault operators to start new vaults with as large a capacity as possible, to give them the best chance of becoming an elder. Not sure if this balances the needs/limitations of mobile users, though…

Again, I’m just guessing. Experts?

2 Likes