Will it be possible to do what Factom does on Safenet?

Thanks for the reply.

I’ve added a few links to help.

nope - afaik SAFE is purely event-driven - no concept of time exists on the network :slight_smile:
[for node aging the number of relocations is counted … but for data I don’t think anything like this is in place]

1 Like

Yes, the concept is that no other document gives that same hash. If a different document produced the same hash, that would be called a collision, and that would be bad.

SAFE has an immutable data type that can be used to store a document. As others said, by storing something that could only be known at that time, you could effectively timestamp the document. An easy example would be including the current Bitcoin block number and block hash along with the document (in practice, probably a block that is six confirmations old).
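A minimal sketch of that idea in Python (hashlib only; the names and block values here are illustrative placeholders, not how Factom or SAFE actually implement this):

```python
import hashlib
import json

def anchor_document(document_bytes, btc_block_height, btc_block_hash):
    """Bundle a document hash with a recent Bitcoin block reference.

    The block hash was unpredictable before the block was mined, so this
    bundle could not have been produced earlier than that block."""
    bundle = {
        "doc_sha256": hashlib.sha256(document_bytes).hexdigest(),
        "btc_block_height": btc_block_height,   # e.g. a block ~6 confirmations old
        "btc_block_hash": btc_block_hash,
    }
    canonical = json.dumps(bundle, sort_keys=True).encode()
    return bundle, hashlib.sha256(canonical).hexdigest()

# Usage (block values are placeholders, not real chain data):
bundle, bundle_hash = anchor_document(
    b"my contract text",
    btc_block_height=500000,
    btc_block_hash="00000000000000000024fb37364cbf81fd49cc2d51c09c75c35433c3a1945d04",
)
print(bundle_hash)  # store the bundle (or just this hash) as immutable data on SAFE
```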

2 Likes

It doesn’t have a concept of time (as in date and time), but it does understand sequences. As far as I understand it, this is by design, so as not to rely on time. Much like with TCP/IP, you don’t want to rely on constructs like time.

Of course, time could be represented at the application layer in some way, much like the NTP protocol/service does.

3 Likes

Ok, so that sounds like what Factom is kind of doing, as they anchor into the Bitcoin blockchain. I suppose something like Factom could be created on top of Safenet, anchoring into other protocols like Bitcoin, as you say, to do some things that maybe aren’t possible on Safenet itself.

So in other words Safenet itself can’t do what Factom does, but a system/application like Factom could be built on top of Safenet that is equally good or maybe better (with speed and scalability in mind)?

hmhmmm with immutable data types the address on SAFE is the data hash, so apart from the time part SAFE does pretty much exactly what Factom does by default :thinking: - securing your data and making sure nobody can alter it

the time thing indeed would need to be built on top of safe … but

if you used the chaining methodology seneca uses for his coin - inserting references to data and transferring ownership of that data to the network - you could e.g. make “one block per second/minute” … nobody could insert references to immutable data without knowing the data itself (because hashing is a one-way street), and because you transfer ownership away from yourself, nobody can change the data afterwards … so there you’d have a kind of time-lock …
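A rough sketch of that chaining idea in Python (illustrative names only; in the scheme described above each block would be stored as SAFE immutable data and its ownership transferred to the network):

```python
import hashlib

def make_block(prev_block_hash, data_hashes):
    """One 'block per minute' style entry: references (hashes) to immutable
    data plus a link to the previous block. Hashing is one-way, so you can
    only include references to data you actually hold."""
    payload = prev_block_hash + "".join(sorted(data_hashes))
    return {
        "prev": prev_block_hash,
        "refs": sorted(data_hashes),
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }

# Build a short chain; once ownership of each block is transferred away,
# nobody (including the author) could rewrite it afterwards.
genesis = make_block("0" * 64, [hashlib.sha256(b"doc-1").hexdigest()])
second = make_block(genesis["hash"], [hashlib.sha256(b"doc-2").hexdigest()])
print(second["prev"] == genesis["hash"])  # True: links verified by hash alone
```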

of course people would only know that you really created those blocks at the times you claimed if they tracked you, and once the blocks exist nobody can see when they first appeared, so they’d need to trust you to some degree … (but that’s basically the same as with blockchains … you can’t really be sure the blocks that already existed the moment you joined the Bitcoin world were really created every 10 minutes … I don’t see where that proof could come from)

1 Like

The other solution is that SAFE can host a blockchain full of signatures; then you would know ‘when’ relative to other signatures. Blockchains are still more about ownership and the sequence of change than about time, but I expect it’ll become a simpler problem, both because of the need to link into the real world for ownership of objects and because a sum of many time services would provide reliable consensus. What’s needed, and is more complex, is for government to prompt the law to understand and acknowledge these new technologies… then all sorts of real-world contracts can move forward.

1 Like

Those who maintain atomic clocks etc could write time stamps to the network, along with a unique hash. Others could use this to associate data with a date/time.

If the clock owners keep their salt private and/or rotating, future hashes could not be predicted. Hence, others could only declare something occurred in the present or past, but not the future.

This could be extended to use a decentralised consensus of time and it need not be perfect to begin with. Anyone could post time if people are happy to trust them.
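A minimal sketch of the private/rotating-salt idea in Python (the HMAC and names here are illustrative stand-ins for whatever commitment scheme a real time service would use):

```python
import hashlib
import hmac
import os
import time

class ClockAttester:
    """Hypothetical clock owner that keeps its salt private (and could
    rotate it per epoch), so future attestations cannot be precomputed."""

    def __init__(self):
        self._salt = os.urandom(32)   # never published

    def attest(self, unix_time):
        """Publish (time, tag); the tag is unguessable without the salt."""
        tag = hmac.new(self._salt, str(unix_time).encode(), hashlib.sha256).hexdigest()
        return {"time": unix_time, "tag": tag}

def bind_document(doc_bytes, attestation):
    """Bind a document to an already-published attestation. Since the tag
    could not be known in advance, the binding provably refers to the
    present or past, never the future, relative to that attestation."""
    doc_hash = hashlib.sha256(doc_bytes).hexdigest()
    return hashlib.sha256((doc_hash + attestation["tag"]).encode()).hexdigest()

attester = ClockAttester()
att = attester.attest(int(time.time()))
print(att["time"], bind_document(b"my document", att))
```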

1 Like

Safenetwork can store a document but not timestamp it, and Factom can do the opposite. So one solution is a dual one: Factom (to timestamp the document hash) + Safenetwork (to store the document).

All these cases can only prove that a document was issued after a certain date, whereas generally we need proof that it was issued before a certain date.

A timestamp is not something that is created independently and can be reused at will for any document. It must be created with the document, and both must be validated together by the network (an app is not enough).

A long time ago I proposed an RFC to implement timestamping in the SAFE Network, and there is a long topic discussing it.

6 Likes

Getting time has always been a consensus situation, ever since they decided to standardise time beyond using the sun as the source of time. And today they use a number of very accurate time pieces (in each location) to get a standard time for that location. Then they get consensus between locations. Now you can get time from GPS thanks to this consensus mechanism and be confident it is based on some standard time.

The internet has time servers, which all “good” servers and PCs now use as their source of standard time. Again, these use consensus to validate their time, and I suppose there is some mechanism by which somebody checks each NTP server, even if only as a hobby.

I’d suggest that we will see some form of “ntp” servers for the SAFE network, since time is such an important metric that we humans need. Maybe as the network takes over the world the NTP servers will join the network and become SAFE time servers. Yes, it’s a very specific need for a server: the server has the rare resource of very accurate time and provides it as a service.

Now, one way to timestamp is to send the time server the hash of the document, and the time server “signs” it with the current time and the “ntp” server’s ID. This provides a “proof” that the document existed and was stamped at that time by an agreed time authority. If needed, this could be done with a number of “competing” time authorities.
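A rough sketch of such a stamping service in Python, using Ed25519 from the `cryptography` package as a stand-in signature scheme (all names here are hypothetical):

```python
import hashlib
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class TimeStampAuthority:
    """Hypothetical SAFE 'ntp'-style service: signs (doc hash, time, server id)."""

    def __init__(self, server_id):
        self.server_id = server_id
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()

    def stamp(self, doc_hash_hex):
        token = json.dumps(
            {"doc": doc_hash_hex, "time": int(time.time()), "server": self.server_id},
            sort_keys=True,
        ).encode()
        return token, self._key.sign(token)

# The client sends only the hash, never the document itself.
authority = TimeStampAuthority("safe-time-01")
doc_hash = hashlib.sha256(b"my contract").hexdigest()
token, signature = authority.stamp(doc_hash)

# Anyone holding the authority's public key can check the stamp later;
# verify() raises InvalidSignature if the token or signature was tampered with.
authority.public_key.verify(signature, token)
```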

Given the cost and complexity of these time servers and their public usage, it’s possible to have worldwide agreement that they are considered accurate. Non-essential documents may have only one timestamp, but anything that is to be legally relied on should have three stamps, just to prove that a glitch in one server was not used to fake the time.

Of course we could use @tfa’s RFC. Hey @tfa, did you consider where the node PCs get their time from? Usually from NTP servers, so in effect your RFC still links back to time servers, even if the PC’s time is set by a human whose source is a time server via a chain of secondary sources (e.g. radio, GPS, etc.).

3 Likes

True, you can’t really know the exact time a block was created down to the second, but you can surely nail it down to the day? Which in most cases for documents like birth certificates, house deeds, passports, etc. is all you really need. It really is a very useful audit trail for a company that creates many important documents.

1 Like

Thanks for all the great replies. It’s definitely helped me to understand both factom and Safenet a little better :slight_smile:

4 Likes

In my proposal there are no such dedicated time servers, at least not explicitly. I have observed that devices on the internet already have an approximately correct date. This is certainly thanks to NTP servers, but I take them for granted, like IP routers and everything else that makes the internet work.

When a user wants to timestamp a document, what is proposed to group consensus is simply a condition like: do you (= the vaults storing the document) agree that the current date is between T - 10 min and T + 10 min? If there isn’t a majority that agrees with this condition, because some vaults in the group diverge too far from the correct date, then the document won’t be timestamped. If by extraordinary bad luck this happens, the user just has to try again with a larger interval and/or another ID (so that the document is stored in another group).
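A minimal sketch of that voting condition in Python (illustrative only; the real group consensus in SAFE is of course more involved):

```python
import time

def vault_agrees(proposed_time, local_time, window_seconds=600):
    """One vault's vote: is the proposed timestamp T within +/- 10 minutes
    of my own clock?"""
    return abs(proposed_time - local_time) <= window_seconds

def group_timestamps(proposed_time, vault_clocks, window_seconds=600):
    """The document is timestamped only if a majority of the vaults in the
    group accept the condition."""
    votes = [vault_agrees(proposed_time, clock, window_seconds) for clock in vault_clocks]
    return sum(votes) > len(votes) // 2

# Eight vaults, one with a badly skewed clock: the majority still agrees.
now = int(time.time())
clocks = [now + drift for drift in (0, 3, -5, 40, -90, 12, 7, 86400)]
print(group_timestamps(now, clocks))  # True
```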

This is what I mean when I say the solution doesn’t need a complex time synchronization mechanism in the safe network.

6 Likes

Correct and quite obvious.

My point, though, is that the nodes’ source of time can still be traced back to these ultra-accurate time centres (sources).

Everyone sets their watches or clocks from, say, the radio, which gets its time from another secondary source of accurate time, and that secondary source could get its accuracy from yet another secondary source or even directly from one of these very accurate time centres/servers.

So my point was that even though your RFC is using the nodes’ internal clocks, the time is still traceable back to one of these ultra-accurate time centres/servers.

And yes, while your RFC proposed a non-complex sync mechanism, it still relies at its core on these complex, ultra-accurate systems, because a node PC’s time is obtained from an NTP server on the traditional web. And I suggest that as the SAFE network grows, “ntp” servers (or new ones) will crop up that sign people’s document hashes to provide this timestamp, even if your RFC is adopted, simply because the legal status of these SAFE “ntp” servers can be verified.

2 Likes

I think if we look closely, there will be more things on the nodes (or in the connections to them) than just time that rely on something more or less centralised.
The full power of the network will still rely on some of these things, as long as they have not been replaced with more decentralised versions.

Naturally we want to minimise the impurities, but it won’t be possible to start with a 100% pure system.

What we currently have problems with is accurate measurement: it is not trivial, so not every node can be expected to have it, which leads them to reach out to a specialised resource - the NTP servers.
But it is only right now that we have this problem. Time is something real (what is maybe not always so “real” is our measurement of it), and it is a fundamental part of our societies. To ignore it today simply because of this current lack of technology penetration seems short sighted.

tfa’s proposal is something I see great value in.
I think it is realistic to assume (especially if you believe in the network) that eventually the centralised nature of time measurement will transition to being decentralised.

Humans will need timestamping of data regardless of how perfect or imperfect it is with regards to decentralisation. I think the network should look to that, rather than limiting itself to the current lack in technology penetration.

Because the timestamping has to go along with the persisting of the data, and persisting is what the network does.

3 Likes

Very definitely

Definitely not. I was adding another dimension to the issue.

The RFC is not without issues either, unfortunately:

  • Providing time at the section level enables sloppy core programming, which is bad for protocols. People will call for time-related events to be built into the core (more stateful information to keep).
  • Eventually (in maybe as little as six months) we will see these accurate time servers for SAFE. It’s just a program running on a machine to do the work, or maybe even a cleverer APP that accesses these servers (or NTP servers) to do the signing.
  • The only real use case given has been timestamping of documents. Honestly, how much is this going to be needed while the system is in its early stages? Your own computer’s time will suffice for 99.9% of cases while the network is in its 1st or 2nd year.

So building in a feature that will have issues (on a conceptual level) and will be overtaken by more accurate, simpler systems is short-sighted in itself.

The system can be implemented at the APP level. Since we can deterministically prove that a particular APP does not change, once someone develops a time APP that can sign hashes, then as long as it is trusted it can be used to time-sign documents etc.

All the needs humans have for time can be implemented with a trusted APP. The sections do not need it.

Now, this does not mean I am totally against the idea in the RFC, just that it only solves a short-term issue and creates its own problems when implemented in the core code.

1 Like

I don’t get why you need a timestamp for that. An authority would just do the same thing they do in real life: sign it. After all, all I need to know is whether the document is valid (certified by that authority) or not (you could also sign it yourself to make sure the authority can’t change it).

The same goes for contracts: all parties sign them digitally. Afterwards you can make sure a contract is real by validating all the signatures.

Also, a timestamp would only guarantee that the document was created before the timestamped date, not the validity of the document; it could also have been changed before the timestamp.
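A minimal sketch of the multi-party signing just described, using Ed25519 from the `cryptography` package (names are hypothetical; no timestamp involved):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"contract text"
doc_hash = hashlib.sha256(document).digest()

# Each party (including the authority) signs the same document hash.
parties = {name: Ed25519PrivateKey.generate() for name in ("alice", "bob", "authority")}
signatures = {name: key.sign(doc_hash) for name, key in parties.items()}
public_keys = {name: key.public_key() for name, key in parties.items()}

def contract_valid(document_bytes, signatures, public_keys):
    """Valid only if every party's signature matches this exact content."""
    digest = hashlib.sha256(document_bytes).digest()
    try:
        for name, pub in public_keys.items():
            pub.verify(signatures[name], digest)
        return True
    except (InvalidSignature, KeyError):
        return False

print(contract_valid(document, signatures, public_keys))          # True
print(contract_valid(b"tampered text", signatures, public_keys))  # False
```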

3 Likes

I agree; it seems quite plausible that a global ‘date’ could be determined easily by consensus and assigned to events as they occur, to create a useful historical record.

I think we’re talking past each other here on at least a couple of points.

Time based on a consensus versus time based on centralized authority.

Make use of existing consensus infrastructure or create a new one.

Use some other, non-centralized source of time.

They don’t need the data either.
The network is there for the users, not for itself.

Use cases for needing proof of when something happened… unless SAFE somehow lessened its ambitions for how important it will be in the world, they are obvious just by looking at anything we do in society.

Overlay networks with their own consensus logic can be created for anything.
But not only would that be reinventing the wheel; the consensus logic that the network itself provides is and will be exceptional.

The nodes storing the data need to be the ones verifying when it was stored. It is the most intuitive way. Maybe there are other ways, but they all seem very contrived, just to be able to say that we don’t touch time, as if it were something contagious. I think it still needs to be shown how a piece of data could be verified to have been stored within a given time span without the storing agent being involved.

Time is just a measurement, a calculation based on some physical property. The problem with time is not founded in physics or natural law; it is founded in our current technical limitations, which make it sketchy and unreliable to work with if anything in the network were to be based on it.
But it is not to be used by the network (as it is not needed); it is just a calculation output for users to consume at their discretion.
The only reason the network would do it is that it would be the best actor for the job, i.e. it would produce the best results at the lowest cost. It is just sane.

Time is even more closely related to maths and physics than economics is, but the network has no problem getting “dirty” by being fundamentally involved with economics - like pay the developer and other quite fuzzy concepts. (What is something worth? Quite an undertaking to measure that “accurately”.)

I think it has a lot to do with preferences of the designers, about what the focus and purpose of the network should be.

But again, I am definitely not saying that the network internals should be based on time in any way. I think the fact that they will not has often been misunderstood to mean that time in any form would be detrimental even to conceive of. Those are two very different things.

4 Likes

This is not what the use cases given before (documents and contracts needing proof of when they were stored) were about. When I store a file on disk now, I don’t get a time authority to prove that time either. So yes, for storing data, if the time is desired then I agree there is no need for an “authority” to sign the time, but that is not what I was talking about when you replied after me (and at the time it seemed to be a reply to me).

This is why I didn’t dismiss the RFC, but it has issues for contract timing.

But as another poster said, contract timestamping will be ex-network anyhow.

And the RFC cannot provide this proof, since such proof has to be accurate to the second, which is often required by some systems, e.g. IoT, which will overtake humans.

The lowest-cost option is simply to use the time supplied by the person storing the data. But (SAFE) files don’t have a time at this stage anyhow (AFAIK), so there is currently no cost associated with storing time (no code and no data fields).

So yes, if we want all data chunks/objects to have a timestamp, then we need to add a field to each and every data chunk/object and get some form of consensus, e.g. via the RFC. But as far as the network is concerned it has to be just another piece of agreed data, not something used to enable sloppy protocol programming. The only need for time in protocol programming is various forms of timeout for error recovery etc.

tl;dr

I now see where you are coming from and feel that our views are not that far apart. My concern was with the “timestamping of contractual or legal documents” using the network protocol.

2 Likes