Next step of safecoin algorithm design

That sounds impossible. Latency is a single number. No matter how clever an algorithm is, it still can’t work with information that isn’t present.

EDIT: I checked out the suggestion you linked. It ties into the race idea I proposed, because the section would have to agree on who was first, and that depends on when the answer reached each of the different nodes, in different places on the planet. Unless the dynamics of the method push nodes far from the "center" out of the section, punishing them more and more as the remaining nodes cluster closer to the center… Juggling all these interdependent things makes my head hurt.

3 Likes

Seems like a way around this would be to couple the maximum vault size to nodal age. For example, infants might be limited to 8GB vaults, and this limit would double with each nodal age increase.
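
A minimal sketch of that rule, assuming an 8GB infant limit and doubling per age increase (both placeholder numbers, nothing agreed):

```rust
// Purely illustrative: an 8 GB infant limit that doubles with each age increase.
const INFANT_LIMIT_BYTES: u64 = 8 * 1024 * 1024 * 1024;

fn max_vault_size(age: u32) -> u64 {
    // Clamp the shift so the result cannot overflow a u64
    // (8 GiB is 2^33, so a shift of 30 already reaches 2^63 bytes).
    INFANT_LIMIT_BYTES << age.min(30)
}

fn main() {
    for age in 0..5 {
        println!("age {} -> {} GB", age, max_vault_size(age) >> 30);
    }
}
```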

I often thought that if vaults came in unit sizes, or even a small fixed vault size, a user could just spin up the number of vaults required to fill the desired amount of local disk space (i.e.
1 Chunk = 1,000,000 Bytes, 1 Vault = 100,000 Chunks, 1 Bank = 10,000 Vaults, 1 Monopoly = 1,000 Banks, etc.)

But as you've pointed out, future proofing would seem difficult, and vault sizes should be maximized, since bandwidth infrastructure will likely not grow as fast as storage capacity. The point that neo made about the number of connections a typical router can handle is a good example of just one of the issues related to running many small vaults.

Thinking about making vaults as large capacity as possible for future proofing, let's go to the other extreme. Is there any real reason that vault sizes need to be specified at startup? Can't vaults just be asked to store chunk after chunk until they fill up? Of course, if a vault actually filled up and was unable to store a chunk it was given, it would be penalized (or terminated?). Why not just assume that the network has a 2^256 vault limit and all vaults have a 2^256 chunk limit, with the currently allowed size based on network needs and nodal age? To infinity and beyond, right?

3 Likes

Two independent proposals mean this is how it should be done :slight_smile:

Sections could agree on the size assigned for new infant vaults so, together with doubling the size as vaults age, it’s future proof.

It would be easier on the section if vaults sizes are known:

  • simple to assign the right number of chunks to each vault,
  • known fill rate for the section and, in turn, the entire network,
  • easy to decide when to let a new infant vault join,
  • infant vaults, if they start out small, can’t make much trouble,
  • (probably some more).

It would decrease the complexity of a core component a lot.
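
As a rough sketch of the first two bullets above (struct and field names are invented for illustration), known assigned sizes make section capacity and fill rate a simple sum over the section's vaults:

```rust
// Hypothetical per-vault record a section might keep when sizes are assigned.
struct VaultInfo {
    assigned_chunks: u64, // capacity assigned by the section
    stored_chunks: u64,   // chunks currently held
}

// With assigned sizes, section capacity and fill rate are simple sums.
fn section_fill_rate(vaults: &[VaultInfo]) -> f64 {
    let capacity: u64 = vaults.iter().map(|v| v.assigned_chunks).sum();
    let stored: u64 = vaults.iter().map(|v| v.stored_chunks).sum();
    if capacity == 0 { 0.0 } else { stored as f64 / capacity as f64 }
}

fn main() {
    let vaults = [
        VaultInfo { assigned_chunks: 1_000, stored_chunks: 800 },
        VaultInfo { assigned_chunks: 2_000, stored_chunks: 900 },
    ];
    // e.g. the section could admit a new infant once this crosses some threshold
    println!("section fill rate: {:.2}", section_fill_rate(&vaults));
}
```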

2 Likes

To keep it simple: suppose you have 9 + 1 = 10 ping 'races', 9 of them starting in Europe and 1 from Australia, and 10 + 2 = 12 vaults competing, 10 of them in Europe and 2 in Australia.
In the 1 ping from Australia the 2 Australian vaults will be considerably faster; in the other 9 pings the European vaults will be.
If the algorithm just took the average, the Australian vaults would be severely punished. I would let that 1 ping where they were faster count for more. The fact that the gap between the first 2 and the other vaults in that particular ping test is much bigger than in the other tests should help with that.
You probably meant something similar/the same.
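
Just to make that concrete, here is a hypothetical scoring scheme where each race is weighted by how decisive it was, so one clear local win can outweigh several narrow remote losses (nothing like this is specified anywhere, it's only a sketch):

```rust
// One ping 'race' from a single vault's point of view: its own latency
// versus the best latency among the competing vaults.
struct Race {
    my_latency_ms: f64,
    best_other_ms: f64,
}

// Weight every race by the relative size of the gap, positive for a win,
// negative for a loss, so decisive results dominate the average.
fn weighted_score(races: &[Race]) -> f64 {
    races
        .iter()
        .map(|r| {
            let gap = r.best_other_ms - r.my_latency_ms; // > 0 means we won
            gap / (r.best_other_ms + r.my_latency_ms)
        })
        .sum::<f64>()
        / races.len() as f64
}

fn main() {
    // An Australian vault: one big win (the local ping), nine narrow losses (European pings).
    let mut races = vec![Race { my_latency_ms: 20.0, best_other_ms: 300.0 }];
    races.extend((0..9).map(|_| Race { my_latency_ms: 310.0, best_other_ms: 290.0 }));
    println!("score: {:.3}", weighted_score(&races)); // positive despite losing 9 of 10 races
}
```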

2 Likes

I meant the race in the context of presenting proof of resource but yes, very similar.

2 Likes

Sorry, I quickly skimmed the thread and didn’t notice your comment.

Yes, indeed. I understand this. Just wanted to pose the question about infinite limits to try and force the thought-stream outside the box. Consider the last time we discussed this…

Thinking about it a little more, you couldn't penalize a completely filled vault. Instead, I think one would need to use this "no more space" condition as a tool to map out the current physical limitations of the network.

Consider the following thought experiment:

  • User A - Smart phone based vault with 16GB storage capacity.
  • User B - Desktop computer with an 8TB redundant disk array.
  • Initial vault limit - 1GB
  • Age based limit increase rate - 10x

Both users join the network, are relocated, and are handed 1GB vaults. After user A attains a nodal age of 3, the network will assume they can handle up to 100GB of storage, which is beyond the physical capacity of the device.
After user B attains a nodal age of 5, the network will assume they can handle 10TB of data, which is beyond the capacity of their device.

The network will then fill each device as needed based on PUT demands. User A will soon return an "out of space" error to the vault managers. The vault managers would then note the size of that vault and stop sending puts to the device. It will take a bit longer for User B to reach the same condition; perhaps before they ever do, User B keeps adding more storage space to their machine so that the limit is never reached.

If an out-of-space condition occurs, the vault managers don't need to stop sending puts to that vault indefinitely. Perhaps once a vault's limit has been reached, the vault managers will not attempt to send it another chunk until the same number of chunks currently stored on the out-of-space device has been sent to other vaults in the section. Multiple out-of-space errors from the same vault could lead to compounded wait times (current_wait = 2 * previous_wait), which would cut down on doomed PUT attempts.
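
A rough sketch of that back-off rule as stated above (the struct and field names are made up for illustration):

```rust
// Hypothetical record a vault/data manager might keep for a vault that
// reported "out of space".
struct FullVault {
    noted_chunks: u64, // size of the vault when it first reported it was full
    wait_chunks: u64,  // chunks that must go to other vaults before we retry
}

impl FullVault {
    fn new(noted_chunks: u64) -> Self {
        // First wait: as many chunks as the vault currently stores.
        FullVault { noted_chunks, wait_chunks: noted_chunks }
    }

    // Retry only after `wait_chunks` chunks have gone elsewhere in the section.
    fn ready_to_retry(&self, chunks_sent_elsewhere: u64) -> bool {
        chunks_sent_elsewhere >= self.wait_chunks
    }

    // Another out-of-space error: current_wait = 2 * previous_wait.
    fn on_repeat_error(&mut self) {
        self.wait_chunks = self.wait_chunks.saturating_mul(2);
    }
}

fn main() {
    let mut v = FullVault::new(50_000);
    assert!(!v.ready_to_retry(10_000));
    v.on_repeat_error(); // failed again after the first wait
    println!("vault held {} chunks; now waiting for {} to go elsewhere", v.noted_chunks, v.wait_chunks);
}
```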

One would think that vaults could just tell the vault managers how much space they have available up front for dynamic sizing, but the network wouldn’t be able to trust this without verifying anyhow.

3 Likes

I’ve always thought that this problem is underestimated.

If the original Disjoint Sections RFC is maintained, the number of connections a vault must manage increases as the number of sections grows (a new section is connected every time the total number of sections doubles).

If we use very small vaults we increase the number of connections in two ways: by increasing the number of connections per section and by increasing the number of vaults per computer.
If managing several vaults was already problematic, increasing the number of connections per section makes it even more complicated to manage. It is very doubtful that most home routers could handle so many connections.

4 Likes

The size increase would be just an offer that vaults could take up or not (see my post in the previous weekly update thread). I agree that demanding more storage is not acceptable.

1 Like

This is not the case AFAIK. There is no reason that each node must maintain that connection. It only has to know how to connect.

Also, if it's one connection per doubling, then that is not many connections per section anyhow (the problem is with many, many nodes/vaults). But it's more of a tree structure, and the sections above the node's section do the connecting to their parent section. And there are the usual suspects of neighbours that the node *may* connect to.

2 Likes

And taking into account what you said about younger nodes relocating sooner.

I would think the simple solution is to use the ageing with some minor changes.

  • ageing continues from 0 to 255
    • zero is either a node in a queue waiting to join, or is the infant.
    • if queuing occurs, then one is the infant.
  • Each age level of infant, child, adult, elder occurs at set age points. It would be possible that when a section is small these set age points are lower than for a large section.
    • For example 1 == infant, 2 == child, 8 == adult, 32 == elder
    • A node can only become an elder if the node is at or above the elder age and there is a need for an elder.
    • For a section with near the maximum number of elders, the set age for elder could be 50; so set ages could be dynamic (within bounds) depending on the makeup of the section.
    • Just because a node is above the elder set age does not mean it becomes an elder, it will only become an elder if needed
  • Now the relocation part: a node is only flagged as needing relocation once it reaches a set age level.
    • So an infant is flagged as needing relocation when it reaches the set age for a child.
    • and a child is flagged for relocation when it reaches the set age for adult
    • and an adult is flagged for relocation when it reaches the set age for elder, even though it may not become an elder (and wouldn't anyhow, since it's set for relocation). Really, an elder only comes from adults who pass the elder age sometime after they have been relocated. No good making an adult an elder only to have it relocated almost immediately.
    • and you could even have set ages for when an elder is relocated, like say 64 & 128 & 196 & 255 when the elder set age is 32.

This way it is not tied to any time in hours, but to section events.

It simply uses the ageing mechanism to determine when to flag relocation

Any flagged node can be relocated. There may be reasons not to do a relocation immediately, e.g. not enough nodes in the section at adult level, so delay relocation till there are enough.
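
As a sketch of the above, using the example set points (1, 2, 8, 32, then 64/128/196/255 for elders), all of which could of course be dynamic within bounds:

```rust
// Age runs from 0 to 255; the named states start at example set points.
#[derive(Debug)]
enum AgeState { Queued, Infant, Child, Adult, Elder }

fn age_state(age: u8) -> AgeState {
    match age {
        0 => AgeState::Queued,
        1 => AgeState::Infant,
        2..=7 => AgeState::Child,
        8..=31 => AgeState::Adult,
        _ => AgeState::Elder, // only actually made an elder if the section needs one
    }
}

// A node is flagged for relocation on reaching the *next* state's set age,
// plus a few further set ages for elders.
fn flagged_for_relocation(age: u8) -> bool {
    matches!(age, 2 | 8 | 32 | 64 | 128 | 196 | 255)
}

fn main() {
    for &age in &[0u8, 1, 2, 8, 31, 32, 64] {
        println!("age {:>3}: {:?}, relocation flag: {}", age, age_state(age), flagged_for_relocation(age));
    }
}
```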

6 Likes

Great list, thanks!

I’m curious if someone were to manufacture a specific home router for handling the needs of many small vaults, where would the next bottleneck be? The ISP? The physical home-to-exchange infrastructure?

I think if farming is popular and depends on many small vaults, people would gladly buy a widget that improves their vaults because it earns them more money. I think you make valid points about the difficulties of small vaults, but the issues are also subject to the exponential growth of technology (maybe even more so than others, since many haven't yet been targeted for optimization).

Does IPV6 also make a difference to the feasibility of many small vaults?

And send those puts somewhere else instead? How does this new location / redirect get recorded so the chunk can be found in XOR space? Isn’t the point to be at the closest address? Seems very hard to coordinate.

How often do / should section events happen? Can they / should they be controlled (within some bounds)? What happens when section events happen extremely quickly or slowly? I think 'section events' are just a second degree of separation from 'time', which complicates the reasoning. I'd love to see a totally freeform parameter for this, but I think it's not practical and some boundaries will need to be set.

I don’t mean to introduce time into the algorithm, but I do think a sense of ‘timing’ is important. High growth periods will demand fast changes and if they’re too fast they might be achieved by unintentionally exclusionary forces.

Isn’t this what already happens - age only increments when a relocation happens (ditto relocation only happens when age increments, same thing). Am I missing something in your proposal that is different to the current ageing/relocation algorithm?


Another aspect to this topic is the chicken-and-egg problem. People need coins to participate, but they can't get coins without participating. Using vaults to earn safecoin is an important way to expand the network in several ways (not just storage).

I would hate to see app developers find they can't get traction because nobody has safecoin in the first place, so the developer decides to pay for their users' uploads. Then the app developer owns the data, not the user, and we're back to the old internet, but maybe even worse since the data is permanent.

There’s immense power in making vaults ‘the easiest on-ramp’ and clearly bitcoin and blockchain have failed in that sense. Mining is not the easiest on-ramp for those ecosystems.

6 Likes

One place is the operating system of the computer the masses of vaults would be running on. There is the limit of 65K port numbers. But remember that home routers are usually based on a cut-down version of Linux, and one expects that even a full version of Linux has limits on open connections before performance degrades.

Memory is another resource that would affect the number of nodes that can run. Imagine 1000s of nodes/vaults and paging occurring. Not going to get much performance out of that machine, are you.

The other is simply bandwidth to/from your ISP. Even on NBN with 40Mb/s uploads, and with hopping/caching occurring, you are not going to get much performance out of each node/vault if you have 1000s, are you.
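As a rough worked example (assuming the load spreads evenly and ignoring overheads): a 40Mb/s upload link shared by 1,000 vaults leaves about 40Kb/s, i.e. roughly 5KB/s, per vault, so serving a single 1MB chunk would tie a vault up for around three minutes.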

I expect that they occur as fast/slow as they do now in the code, or will with PARSEC. I expect that there is a logical minimum average time due to comms speed/latency between the elder nodes. Latency is not something faster connections will fix.

My consideration was that you'd want at least a decent time between relocations, so if one section has adults being relocated at 10 days (very fast event rate) and another at 25 or 40 days (slow event rate), then that would still satisfy it.

Thus my reason for dynamic (within bounds) of age to flag a relocation. This allows higher rate of growth when needed and slower when things are more stable or the section is of sufficient size.

Obviously the parameters for the dynamic operation would need to be determined.

Not the same. Now, if I understand what you told me, when the node is relocated it is aged, and that ageing goes infant → relocation → child → adult → perhaps elder.

Whereas what I suggested is that, instead of relocating when the node is selected for ageing, the node ages by one, and that does not necessarily mean going from one age state to another (infant → child → adult → elder). This way age can be used to trigger the relocation with much more accuracy.

Said another way, instead of each age increment causing a relocation and an age-state increment, there are 255 age increments; relocations only occur on a few of those, and age-state changes only on a few of those.

I feel that we keep ignoring the outright purchase of safecoin. We need simple methods for people to purchase one safecoin without the overheads of an exchange (decentralised or not) or the high fees charged because people are greedy. EDIT: and before the cries of "but when safecoin is $100 each", I'd say that division will be implemented and one could easily purchase smaller amounts than a whole coin.

But yes we need to see vaults earn at a reasonable rate so that a new person can set up a vault and earn a coin in a reasonably short time period.

I'd love to see a free phone app that uses in-app purchases to allow people to buy a coin super easily. An ID is supplied to the app, you pay for the coin, and it's sent to that ID on the SAFE network, allowing initial operations by the person.

7 Likes

Great to see this thread diving deep! This project has been blessed with wise supporters and contributors!

6 Likes

Absolutely! The minimum denomination size definitely plays into this too. It would be great if pennies can be earned, rather than just pounds.

Arguably, missing this in the Bitcoin world has empowered exchanges, banks and governments to manipulate access to coins.

1 Like

One big blocker we have found is an OS limit on the number of open files (sockets being files). There are some things that can be done, such as multiplexing, but it is not as easy as it sounds.

It's an interesting area as well. Just for info, when SAFE was first conceived I actually pondered for a while writing the vaults like a virus, the notion being it would spread across the net and just create the network. Of course there were a ton of negatives, but it was an interesting idea.

In some ways, yes, but even IPv6 will traverse NAT, and currently ISPs will probably want to make sure that stays the same, as it allows blocking etc. This part of the net has amazed me, where either government pressure, very bad OS security or human complacency has meant the "everyone is directly connected" situation has never been fixed.

Again, if these events were linked to data stored/accessed then that is probably the best way to measure the section's needs. Not simple, but I think on the right track. This is linked to this:

There is a strong notion and push to not have safecoin as data items, but as integers in the client managers. Then the divisibility is very simple.
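
A sketch of why integer balances make divisibility trivial (the unit size, names and API here are invented for illustration, not anything from the actual design):

```rust
// Hypothetical: balances held by client managers as plain integers of a
// smallest unit, here one billionth of a safecoin (purely illustrative).
const UNITS_PER_COIN: u64 = 1_000_000_000;

struct Account {
    balance_units: u64,
}

impl Account {
    // Paying any fraction of a coin is just integer arithmetic.
    fn pay(&mut self, units: u64) -> Result<(), &'static str> {
        if units > self.balance_units {
            return Err("insufficient balance");
        }
        self.balance_units -= units;
        Ok(())
    }
}

fn main() {
    let mut acct = Account { balance_units: UNITS_PER_COIN }; // exactly one safecoin
    acct.pay(UNITS_PER_COIN / 100).unwrap(); // spend 0.01 safecoin
    println!("remaining units: {}", acct.balance_units);
}
```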

What would be really nice is a way to have accounts that are very simple to set up: some kind of credit for storing stuff that is not junk, or perhaps some human-provable interactions, like surveys or something. None of these are simple, but it's always a thought.

In terms of vaults, it would be brilliant to allow them to just start automatically and earn quickly. I would think this will be possible, though, if such vaults were handling lots of client interaction, getting old data that may be missing from the section, confirming all members have the data they should, and so on. I see these actions as valuable and therefore they should be rewarded. So starting a vault that perhaps does not really join, but does a load of work for say 4 hours, could perhaps earn enough safecoin to create a single account.

These areas do need to happen; whether for V1 or not, I am not sure. But if we as a community can find those answers, then coding them is not a problem. If the design is done and the algorithm worked out, with any side effects (hopefully none) known, then the code part is just the mechanical thing we do, and the devs can do it really well and quickly.

10 Likes

According to the Disjoint Sections RFC:

The routing table needs to keep track of not only which peers a node is connected to, but also which sections they belong to. And the network’s structure is defined not by the current set of nodes alone, but in addition by the sections which currently exist. An address belongs to exactly one section if and only if exactly one of the address’ prefixes is the prefix of a current section. So to define a partition of the name space:

  • No two sections must be comparable: If S(p) and S(q) are different sections, then p and q cannot be a prefix of each other - they must differ in at least one bit that is defined in both of them.
  • Every address must have a prefix that belongs to a section.

and

The invariant that needs to be satisfied by the routing table is modified accordingly:

  1. A node must have its complete section S(p) in its routing table.
  2. It must have every member of every section S(q) in its routing table, for which p and q differ in exactly one bit.

So the routing table of each vault in a section will be, approximately, the average number of members per section multiplied by the number of bits of our section's prefix. Remember also that message routing involves relaying the message to the route-th closest entry in our routing table.
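As a purely illustrative figure: with an average section size of 100 and a 20-bit section prefix (on the order of a million sections), each vault would hold its own section plus roughly one section per differing bit, i.e. about 100 × (20 + 1) = 2,100 routing table entries.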

In fact, the Disjoint Sections RFC itself highlights, as a major drawback, the growth in the number of connections that a node must maintain.

4 Likes

BUT there is no need to maintain the connection all the time. It is unlikely a node needs to talk to all of them at once. A routing table does not equate to a list of currently active connections.

EDIT:

Maybe it does. Doesn’t sound like it needs to though.

2 Likes

Oh is this new? So maybe no Mutable Data type anymore?

6 Likes

Any node must be capable of communicating with the rest of the nodes in its routing table at any moment.

Especially since secure message passing implies that any message can be redirected to any section of our routing table. As data are uniformly distributed within XOR space, all the connections in our routing table have the same chance of being used.

This is why I don't think that the network could work efficiently without keeping all the connections of the routing table active.

6 Likes

I haven’t heard this, push from?

Sounds like a can of worms to me - if a PUT balance had a max per account, the incentive to attack and modify it would be limited. But if the Safecoin balance were a ledger value, I don’t think you can limit that per account and so the incentive becomes enormous.

5 Likes