Each section processes its own events at whatever rate its workload demands. Section #1234 might be completing 100 events per second while section #1235 completes 20 per second and section #1236 completes 5 per second at a particular point in time.
So there is no correlation between sections and their internal event counts.
While nodes in a section can talk to nodes in other sections (hopping chunks, etc.), there is no way to correlate event “counters”, nor is it desirable: once you have 100 sections there are 100*99/2 = 4,950 pairwise communications between sections to try and correlate event “counts”. As one pair correlates, the counts change between their neighbours. You get the idea; the network would be weighed down trying to maintain a global event “count”, because it would be out of sync before you get a few neighbours away.
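The growth in pairwise links is just n(n-1)/2, so it quickly becomes untenable. A quick sketch (the section counts are illustrative):

```python
def pairwise_links(n_sections: int) -> int:
    """Number of distinct section pairs that would need to
    correlate their event counters with each other."""
    return n_sections * (n_sections - 1) // 2

for n in (100, 1_000, 100_000):
    print(n, "sections ->", pairwise_links(n), "pairwise correlations")
# 100 sections already need 4,950 correlations;
# 100,000 sections need roughly 5 billion.
```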
The alternative would be to centralise the event “counters”, but centralising one function invites calls to centralise other functions. Even one centralised function means you no longer have a truly autonomous decentralised network; you would have an autonomous, partly centralised/decentralised network, with a significant amount of traffic spent attempting to maintain a central event “counter”.
The reason for mentioning NTP servers is not to bridge between SAFE and them, but rather to have dedicated time authorities on the network. This is how the current internet solved the time-sync issue, and it is a reasonable approach, since it is how all timepieces are synced in the non-internet world: there are time “authorities” that keep super-accurate time and sync with each other.
Syncing the time “authorities” is a complex task and requires there to be a minimal number of them, and the sync is not an internet-dependent operation (even though they may use the internet as a transport layer nowadays). This is similar to the reason a “centralised” event “counter” on SAFE would fail: there are too many sections to sync.
Thus my suggestion is for the time “authorities” to run authoritative time machines on the SAFE network, to provide for those people who cannot, for whatever reason, decide on a timestamp for themselves. The idea is that the piece of data needing securing is hashed, and that hash is sent by secure messaging to the time machine to be timestamped along with the ID of the authoritative time machine, then returned by secure messaging. Obviously there has to be a public declaration of these authoritative time machines so that any timestamp can be reliably authenticated.
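The flow above can be sketched roughly as follows. This is a minimal illustration, not SAFE code: the authority ID and key are made up, and an HMAC stands in for whatever real public-key signature scheme a time authority would actually publish.

```python
import hashlib
import hmac
import time

AUTHORITY_ID = "time-authority-001"   # hypothetical publicly declared ID
AUTHORITY_KEY = b"demo-secret-key"    # stand-in for the authority's signing key

def request_timestamp(data: bytes) -> dict:
    """Client side: hash the data; only the hash leaves the machine."""
    return {"digest": hashlib.sha256(data).hexdigest()}

def authority_stamp(request: dict) -> dict:
    """Authority side: bind digest + current time + authority ID together.
    A real authority would use a public-key signature, not an HMAC."""
    payload = f'{request["digest"]}|{int(time.time())}|{AUTHORITY_ID}'
    sig = hmac.new(AUTHORITY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_stamp(stamp: dict) -> bool:
    """Anyone holding the authority's published verification material
    can check the stamp was not tampered with."""
    expected = hmac.new(AUTHORITY_KEY, stamp["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["sig"])
```

Note that the data itself never goes to the authority, only its hash, so the authority learns nothing about the content being timestamped.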
This removes the need for a global event counter and provides for time to be included in the temporal stamping of data. Linux event counters are often just the time (seconds since the epoch).
How this is paid for is to be determined, but I can already think of a few ways, and it would allow the authorities to recoup some of the costs where that currently is not being done.
Well, I am saying that I cannot see how it can be done in an economical (network traffic/coding) way, which is why I offered an alternative that actually provides the required functionality and will be implemented anyhow if SAFE takes off. Like GPS, which helps everyone even though it is a defence-funded system, NTP servers are paid for by governments, yet everyone can use them. And if SAFE takes off, then by necessity governments will implement a SAFE network version of time authorities.
You seem to have grossly misread what I wrote, or I am really bad at explaining things.
Time is a very good counter. Linux uses it everywhere an “event counter” is needed. It has served the “event counter” requirements of the computing world since it became available in UNIX many decades ago.
Use it as just an event counter or as an actual timestamp; the choice is up to the developer. It is a multi-purpose solution.
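To make the dual use concrete: the same epoch value serves both as an ordering counter and as a human-readable timestamp.

```python
import time

# Seconds since the Unix epoch (1970-01-01 UTC)
t1 = int(time.time())
# ... some later event ...
t2 = int(time.time())

# Used as an event counter: ordering only
assert t2 >= t1  # the later event has the larger counter

# Used as an actual timestamp: convert to calendar time
print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(t1)))
```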
Here we go, the word “contracts”. Wow, “contracts” has multiple meanings.
I can buy/sell whatever on the SAFE network, including coins, and that may need an absolute time or it could only require knowing which came first.
So when you say contracts, you are in fact including timestamping. But you then later clarify it as only knowing which came first.
I hope the above summary analysis helped you understand why a global event “counter” is not practical in the SAFE network, especially when sections may number in the 1,000s or 100,000s and the “counter” must remain decentralised.
As soon as there is no “feedback”, you are describing a system that progresses from one point out to all other points in the system. That is, by definition, a centralised system, and the central point is where the progression starts. Basic control-systems theory.
You want a “counter” but call it time. I understand this, but it does not help. BTW, Linux time is also an event “counter”.
The problem with this equation is that an application running on a PC does not have any event counter; counters exist only per section, and the PC is not connected to “a section”. Which section is contacted depends on what data the PC is trying to transfer to or from the network.
Now let's assume that the application can access a section by using an XOR address it determines from a standard piece of data.
- That section might have an event time value of 1000000
- the neighbouring section might have 1234567890123456 as its event count
- another neighbouring section might have 999999999000 as its event count
- and so on
So you get a value derived from 3 or 100 peers, and it's a huge number.
But then, as the network merges and splits sections, the other application that you are trying to compare event counts with might (it is statistically possible) get mostly new sections freshly split off from others.
So the XOR address could yield a newly split-off section now looking after that piece of data, and the counter calculation could be using:
- first section has event counter of 1000
- the peers could have their event counters as
- 123456
- 99221
- 987654
- and many more
So the other application comes up with a really small event count, even though its calculation was done maybe days or months or years later.
Which was first?
How can you correlate these?
It's not a question of whether it is a great idea; the approach is fundamentally flawed in a completely decentralised network made up of a really huge number of sections.