Eventually consistent global counter ("SafeTime")

How many nodes is a client expected to be connected to at any given moment?

If only one, then we’d have a problem.

Except that single node would still have to pass on messages from other nodes, and each of those encapsulated messages, coming from a variety of sources all across the network, would still contain SafeTime (unlike the sender’s IP address, which gets redacted at the first hop, if I understand correctly).

1 Like

@JoeSmithJr have you published your model anywhere so others can try it out?

2 Likes

I will, but it needs to be cleaned up a bit :sweat_smile:

4 Likes

Here it is: SafeTime simulation - 58fa6735

EDIT new link with a minor fix: SafeTime simulation - 6acd2429

I work with neural networks so I used PyTorch because that’s the hammer I know best. I added a lot of comments to make it easier to read.

Feel free to play around with the constants at the top. If you raise MIN_PEERS, which sets how many connections a node needs before we consider it “locked on” to the network, the results will get much better still.
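For a flavour of the core loop, here’s a heavily stripped-down sketch; it’s not the actual code from the links above, and the names and exact update rule are illustrative only:

```python
# Illustrative sketch only, not the linked simulation's actual code.
import torch

NUM_NODES = 1000  # size of the simulated network
MIN_PEERS = 5     # connections before a node counts as "locked on"
STEPS = 500       # gossip rounds to simulate

counters = torch.zeros(NUM_NODES, dtype=torch.long)

for _ in range(STEPS):
    # Each node hears the counters of MIN_PEERS random peers this round.
    peers = torch.randint(0, NUM_NODES, (NUM_NODES, MIN_PEERS))
    heard = counters[peers].max(dim=1).values
    # A node adopts the highest counter it has seen, then ticks it.
    counters = torch.maximum(counters, heard) + 1

# The spread between the fastest and slowest node stays small.
print((counters.max() - counters.min()).item())
```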

5 Likes

I must say this has been a fascinating read.

As I see it, what @JoeSmithJr is attempting to leverage here is not a specific mechanism but a rather simple function that harnesses an emergent capability of the system’s networking. Just how dependable and accurate it would be is promising, though not yet proven. An attractive aspect is that (seemingly) all it really calls for is the inclusion of a very simple data point in all communications, with a standard, deterministic computation applied, passed on with every message.

I’m having trouble seeing the vulnerability that would be caused by this.

Whether it is a better or worse solution than @neo’s md idea (I think it is simply different) is beside the point.

The md aspect is applicable either way, as external oracles are always possible and will no doubt be desirable.

I think exploring this as an emergent property of a neural-net-type system is very exciting.

5 Likes

That’s an interesting insight. I was using a tool built for neural networks, yet I failed to recognize that the original idea could be formulated in that exact framework.

This opens up questions about whether a similar mechanism could be used to run actual neural network simulations distributed on the network. I’m a bit skeptical here, but it’s an interesting direction to explore in the future nevertheless.

2 Likes

Rather beyond me I’m afraid, but I see @anon86652309 has had a look in, so he should be able to give an educated viewpoint. (No pressure @anon86652309!)

2 Likes

Thanks @JPL! :laughing:

The main thing I’m liking about this idea is its simplicity. It’s basically using gossip to communicate and increment a global counter, but rather than a formalised gossip algorithm, it’s just piggybacked on the existing message protocols.
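In pseudo-Python, the rule as I read it is roughly the following; the field name safe_time and the tick-on-receive policy are my assumptions rather than anything settled:

```python
# A minimal sketch of the piggybacked counter, Lamport-clock style.
class Node:
    def __init__(self):
        self.safe_time = 0  # this node's view of the global counter

    def on_send(self, payload):
        # Every outgoing message carries the sender's current counter.
        return {"safe_time": self.safe_time, "payload": payload}

    def on_receive(self, message):
        # Adopt the highest counter seen so far, then advance it.
        self.safe_time = max(self.safe_time, message["safe_time"]) + 1
```

The receive rule is what keeps slow nodes catching up: any single fresh message pulls a node’s counter right up to date.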

It seems like there should be enough randomness in that existing protocol to be confident that no sections or peers will fall significantly behind others, but that’d need looking into a bit further I think. Formal gossip protocols rely on assumptions that all peers can be contacted and will be eventually. We have such a requirement within sections since that’s needed by Parsec. Since Parsec is being used to communicate details about nodes joining or leaving a single section, and we require all neighbouring sections to be informed of any such changes too, I think it’s safe to say that we do have the conditions required to make this work.

I’m a bit sceptical that this could be used for validating contract signing times. We’d probably want such a mechanism to work in every situation, and given that the resolution (or margin of error) would vary from section to section, and also within a section over a given period, I doubt it could be used reliably to accept or deny a contract’s signing “timestamp” unless the acceptable margin of error is very high.

Where I see this being somewhat more useful is in adding a little more to the overall network security. If a message is received from a node which shows an obviously invalid counter, then the network will be better placed to block/ignore/punish. Just now, if a malicious node decides to replay a message it received a while back, then it should get filtered out by any nodes which already received that message, rendering the attack useless. However, the old messages only live in the filter for a few minutes. Using the global counter would be another way for receivers to know that a message is very old and should be ignored.
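A check on receipt along these lines would do it (MAX_LAG here is a made-up tunable margin, not an existing constant):

```python
MAX_LAG = 10_000  # assumed tolerance; would need tuning in practice

def looks_stale(local_counter: int, msg_counter: int) -> bool:
    """True if the message's counter is so far behind ours that it
    is likely a replay of a very old message."""
    return local_counter - msg_counter > MAX_LAG
```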

It could also be seen as a sort of network pulse, where if the rate increased we’d know the network was busier. I’m not sure if that’s useful on its own though… we’d probably be more interested in whether it’s busier because of an increase in Get requests, or because of a drop in vaults for example, and there are better ways to measure these sorts of things.

I’m sure there are more use-cases for this in terms of security, and I’d think clients would also be interested in knowing this value too. (Regarding the debate earlier about a client being tricked by a single proxy - that is indeed the case right now, but before long we’ll need to ensure that such malice can’t happen, e.g. by having clients connect to multiple proxies). If it turned out the margin of error was actually small, then I could see this being especially useful to clients.

Just my own two cents of course! :smile:

15 Likes

Thanks for looking into it!

As a comment about contract signing, I think its usefulness depends on the lifetime of contracts compared to the lag between the slowest and fastest points in the network. My simulation shows that even the slowest node is no more than 2 seconds behind, as it has at least 5 peers and receives 2.5 messages a second on average. More importantly, gaining higher confidence is as easy as building up a few more connections.
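As a back-of-the-envelope check on those figures (the constant-rate assumption is mine; the simulation draws actual message timings):

```python
min_peers = 5
msgs_per_peer_per_sec = 0.5               # assumed per-connection rate
rate = min_peers * msgs_per_peer_per_sec  # 2.5 messages/sec in total

mean_gap = 1.0 / rate  # ~0.4 s between counter updates on average
print(f"mean update gap: {mean_gap:.2f}s")
# Even a node unlucky enough to miss several updates in a row stays
# within a couple of seconds of the freshest counter, matching the
# ~2 s worst case above.
```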

But I expect people will come up with other interesting use cases.

2 Likes

Not at all - thanks for posting the idea! :smile:

Using the counter as a measure of real time would likely be opening a can of worms, and I’m not sure if anyone would want to measure the lifetime of contracts in anything other than a steady clock? If I imagine the counter as the network’s pulse which like our own can rise and fall, then I wouldn’t ever see a use to try and measure a duration by counting heartbeats. If the pulse never really fluctuates much, or it doesn’t change quickly, maybe it could be used as a rough measure of duration? Or maybe I’m not on the same page when you talk of the lifetime of contracts.

We’re peeking into another wormy can here I think! :smile: The beauty of this proposal is the simplicity IMO. If we start requiring a minimum number of connections just to support the counter, what do we do about nodes unable to attain that number? Is the counter important enough to kick off a node which can’t acquire that required number? Or do we try and adjust the algorithm (inevitably making it more complex) to add weights to nodes’ reported counters based on their connectivity? I’m not saying something like this might not be needed, but I’d resist any such tweaks if at all possible.

1 Like

I agree that the simplicity (and maintaining it as such) is the key neat thing about this. Over time, comparing the counter values of different nodes across the network and the world against the existing clock would reveal the general correlation between the two, but that almost doesn’t matter.

Having a network-wide heartbeat, even if experienced a bit differently at different points in the network body, could become a whole timeframe of its own, regardless of external time.

I suspect, though, that there will be a pretty fair consistency between the two. I think that part of the “duration” aspect is “how accurate does it need to be?”

In Bitcoin it is the number of blocks; that correlates to time only roughly, but it’s used nonetheless.

1 Like

That’s quite reasonable. Though I can imagine a situation where the exact lifetime of a contract matters less than that it lasts a “long enough” time, which in turn could be measured using SafeTime. Think of a case where the contract would be expected to be regularly renewed before expiry, or left to expire once it’s no longer necessary. Nevertheless, I must agree this does limit the usefulness of SafeTime for contracts.

I mostly just meant that if a node suddenly needs a higher precision than what it perceives it currently has (for example, there’s too much variance among the values it receives from its peers), then it could seek out more connections.

Can I ask you about something I mentioned before? I assumed clients receive messages through more than one node. You wrote that currently it’s only one, but that’s expected to change. What I’m curious about is whether my assumption about the encapsulated messages holds.

Would it be possible that the proxied messages would contain the SafeTime values from the original senders or a few steps back from the final node, or is this inconsistent with how proxied messages work?

1 Like

Definitely possible, but I was thinking the more likely scenario would be that clients no longer connect to just a single proxy, but rather enough nodes to ensure that it hasn’t connected to a majority of malicious ones. That way, the counter from each proxy would be used by the client and the median should yield a reasonable value.
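On the client side that could look roughly like this (the function is hypothetical):

```python
import statistics

def client_safe_time(proxy_counters: list[int]) -> int:
    """Use the median so a minority of lying proxies can't skew it."""
    return statistics.median_low(proxy_counters)

# One malicious proxy reporting a wildly wrong counter gets outvoted:
print(client_safe_time([10_432, 10_431, 10_433, 42, 10_430]))  # 10431
```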

If we really want to persist each section’s counter as the client’s response iterates across the network towards it, that would be more complex, but probably still do-able.

The crux is that the counter is just another thing that a single proxy could lie to the client about. We already have a need (and some outline plans) to address the status quo, so I’d expect the client being given a false value of the counter by a single proxy would naturally also be fixed by whatever solution we ultimately choose.

2 Likes

I see. That was my original idea, so I won’t complain. Nevertheless, the idea that proxied messages could carry SafeTime from a much larger pool of sources sounded rather appealing.

2 Likes

What’s the definition of “neighbouring” sections?

  • Is it really just the logical next/previous xor section? That would be really inefficient (a request would need to traverse the whole xor space without being able to skip sections) and would render this proposal less useful, as @neo said (the “SafeTime” would be more local).
  • Are there “shortcuts” to other sections “into” the xor space? That would imply there are more than just 2 neighbours.

1 Like

And my issue was with the request in the OP for it to be used for contracts, as that has some definite requirements and opens a bag of worms for this idea. What it evolved into, though, might be an interesting idea to examine: a network heartbeat pulsing throughout the network, or something similar.

Probably, for this purpose, a neighbour is a section whose prefix is close to this section’s prefix.
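To illustrate what “close” could mean in xor terms (this is the standard Kademlia-style reading, not a claim about the actual routing code):

```python
def xor_distance(a: int, b: int) -> int:
    """Distance between two prefixes in xor space."""
    return a ^ b

# 0b1010 and 0b1011 differ only in the last bit, so they are close;
# 0b0010 differs in the top bit, so it is far away.
print(xor_distance(0b1010, 0b1011))  # 1
print(xor_distance(0b1010, 0b0010))  # 8
```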

2 Likes