There’s a secret repository showing up in one of the devs’ GitHub accounts. It’s called “Data_Chain” and reads like something very big and new for SAFE. But ssshhh!! These are secret links that don’t deserve their own topics yet, as long as they don’t show up in the main github/Maidsafe. But still a great read ;-).
In short, this looks like a way of providing persistent data on nodes (managing clashes between different age sources). This must be what David was talking about with data persisting between test nets.
Whisper it quietly… time stamping of data is also mentioned through group consensus!
“These more reliable nodes have a vote weight of 2 within a group and it would therefore require a minimum of 3 groups of archive nodes to collude against the network. It is important to note that each group is chosen at random by the network.”
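The weighting scheme in the quote can be sketched roughly as follows. This is only an illustration of weighted group voting, with invented names and weights taken from the quote (archive nodes count double); it is not the actual routing implementation.

```python
# Hypothetical sketch of weighted group voting. Archive nodes are assumed
# to carry a vote weight of 2 and ordinary nodes a weight of 1, per the
# quoted text; everything else here is invented for illustration.
ARCHIVE_WEIGHT = 2
NODE_WEIGHT = 1

def vote_passes(archive_votes_for, node_votes_for, group_size, archive_count):
    """Return True if the weighted votes in favour exceed half of the
    group's total voting weight."""
    total_weight = (archive_count * ARCHIVE_WEIGHT
                    + (group_size - archive_count) * NODE_WEIGHT)
    weight_for = archive_votes_for * ARCHIVE_WEIGHT + node_votes_for * NODE_WEIGHT
    return 2 * weight_for > total_weight
```

So in a group of 5 with 2 archive nodes, the two archive nodes plus one ordinary node already carry a majority of the weight, which is why collusion would need whole groups of archive nodes rather than individual members.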
This is all being done in parallel and will mean we can have data proven to be network guaranteed. So it will potentially solve many issues:
Archive nodes (the nodes with long lives and good resources) - something we were looking at post launch [check past forum threads]
Recovery from a worldwide outage of the network
Facilitate ledger-based SD (I need to RFC this, but basically you should be able to add a ledger flag to any SD and it’s stored forever). Imagine you want a receipt for a safecoin transaction, then you have it, or comment history etc. Or even a ledger-based currency for businesses etc. …
Linked chains: links in chains across chains, allowing graph analysis of the network over time (without time, just entropy).
… I don’t think this even scratches the surface, but it will make offline data very secure and the network much more able to maintain very high levels of integrity, at the very least.
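The ledger-flagged SD idea above could look roughly like this. To be clear, the ledger flag is explicitly still to be RFC’d, so this is a guess at the semantics only: a flagged SD keeps every version forever instead of overwriting in place. All names here are hypothetical.

```python
# Illustrative sketch only: "ledger" is a hypothetical flag (the RFC
# mentioned above doesn't exist yet). A ledger-flagged SD keeps its
# full version history; a normal SD keeps only the latest payload.
import hashlib

class StructuredData:
    def __init__(self, name, ledger=False):
        self.name = name
        self.ledger = ledger
        self.versions = []  # full history retained when ledger=True

    def put(self, payload: bytes):
        if self.ledger:
            self.versions.append(payload)  # append-only: receipts, comment history
        else:
            self.versions = [payload]      # mutable SD: overwrite in place

    def history(self):
        """Hashes of every retained version, oldest first."""
        return [hashlib.sha256(v).hexdigest() for v in self.versions]
```

A safecoin receipt would then just be another version appended to the ledger, retrievable forever.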
There are some limitations for now, like having to present the chain to a group that can attest to the last signers of a link having been known to the network. I suspect a few more areas like this, but we will hopefully be able to document these limitations. The one I mention is recoverable though with a slight change to node startup.
But, this is a side project/play area for me (that Viv and Andreas have poked at already and will poke at several more times to find fault) that I hope to complete in the next few days, then present it again. It may fail, so in the spirit of openness it’s happening in the wild. I think it can easily show a fully decentralised blockchain-type device for many different chains of data that also interlink easily (so sidechain-type functionality) and a little bit more. I suspect the wider community will “get this” and it may help folk understand the decentralised approach to cryptographically secured data of any type.
After several reads I’m still trying to figure out the solution to this problem:
Vaults don’t store chunks as they were uploaded (PUT) by a Node. The data_managers obfuscate the chunks before they send them to a Vault.
So which hashes appear in the data_chain? The hashes of the obfuscated data, or the hashes of the chunks? And let’s say there’s a worldwide power outage. The non-persistent data is gone completely, so we have to rely on archive nodes. As the archive nodes try to reconnect, they need to get their old addresses in XOR space, connect to the old group of data_managers, and after that there needs to be a reconnect between the obfuscated chunks (in Vaults) and the real chunks (reconstructed by the data_managers). Which personas are responsible for that? Data_managers only?
I would caution against that choice of words. Satoshi really did create something immensely useful. Maidsafe is as of now still totally unproven. Everyone on this forum is a speculator of the most advanced kind.
Yes, this part is easy actually. Any group will accept an old address they have previously seen if it holds a data chain longer than any three existing members. Of course they need to hold the data as well, so they are challenged by existing archive nodes. The challenge is simple: take 1000 random data elements and prepend a random value, have the new node tell us the new hash values, and there we are, no transfer of data required.
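The salted-hash challenge described above can be sketched directly. This is a minimal illustration of the idea, not the vault code: the challenger picks random stored elements and a fresh random prefix, and the node can only produce the right hashes if it genuinely holds the data. Function and variable names are my own.

```python
# Sketch of the storage challenge: sample random stored elements,
# prepend a one-off random value, and compare salted hashes. No data
# is transferred, only small digests. Names are hypothetical.
import hashlib
import os
import random

def make_challenge(stored_names, sample_size=1000):
    """Challenger side: pick elements to test and a fresh random prefix."""
    names = random.sample(list(stored_names), min(sample_size, len(stored_names)))
    nonce = os.urandom(32)
    return names, nonce

def respond(chunk_store, names, nonce):
    """Prover side: hash nonce + chunk for each requested element."""
    return [hashlib.sha256(nonce + chunk_store[n]).digest() for n in names]

def verify(chunk_store, names, nonce, answers):
    """Challenger re-computes the salted hashes from its own copies."""
    return answers == respond(chunk_store, names, nonce)
```

Because the nonce is fresh each time, a node cannot pre-compute answers or keep only the hashes; it must keep the chunks themselves.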
Archive nodes will hold data from the beginning of time. One of the nice things about data chains is that only the last link needs to be known to the majority of the current group. From there, all data from anywhere in the network can be republished. So a data chain will start from a very wide address space (like a massive group), and the top of the chain will be data only in this group. Some of the group’s data can also be found deeper in the chain if historically it was put on early.
[Edit - so you can think of chains as cryptographic proof of data over time (entropy) and not related to xor at all. It’s a huge list of data that all goes back to a “genesis” block if that makes sense.]
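The “huge list that all goes back to a genesis block” can be sketched as a plain hash-linked chain. This shows only the linking structure (each link commits to the previous link’s hash); signatures and group membership, which the real design relies on, are deliberately omitted, and all names are mine.

```python
# Minimal sketch of a hash-linked data chain as "proof of data over
# time": each link commits to the previous link's hash, so the list
# chains back to a genesis value. Signatures are omitted here.
import hashlib

GENESIS = b"genesis"

def link_hash(prev_hash: bytes, payload: bytes) -> bytes:
    return hashlib.sha256(prev_hash + payload).digest()

def build_chain(payloads):
    """Build a chain of (payload, link_hash) pairs from a genesis value."""
    chain, prev = [], GENESIS
    for p in payloads:
        h = link_hash(prev, p)
        chain.append((p, h))
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Walk the chain from genesis, recomputing every link hash."""
    prev = GENESIS
    for payload, h in chain:
        if link_hash(prev, payload) != h:
            return False
        prev = h
    return True
```

Changing any payload anywhere breaks every later link, which is what makes the history tamper-evident without any notion of wall-clock time.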
Is this new data chain capacity, if it works out, expected to introduce any new form of latency to its own function or the network as a whole?
SAFE in concept just seems to get better and better all the time. With this and the coin, if it can be pulled off it seems to have swallowed the functionality of the bitcoin vision, but possibly improved on it quite a bit.
It should have almost zero impact on latency. It will, however, allow different “shape” nodes to exist. So capable nodes will fight to store as much data as possible to become archive nodes. Smaller nodes with less capability will try and get what they can for the small period they connect. Very small nodes may only stick with session data and not try and get any historical data at all.
So this helps with things like imbalanced node capabilities (asymmetric broadband, small disk space, low cpu etc.) where as long as a node can do the routing/network messages it can provide value and help consensus, but hold very little data, just enough for the odd reward.
I foresee a problem. Well, maybe, in this semi-fantastical scenario:
The reason (or a reason) why the Bitcoin blockchain is resistant to double-spending is that an attacker would have to recreate the blockchain back to its beginning, which is computationally infeasible.
In the event that SAFEnet collapses, there won’t be an existing chain that is linked to all the data that was on the network before the collapse, but a lot of fragments that are being pieced back together. So a well-equipped adversary might have a large number of computers ready to go in the aftermath of a collapse, and proceed to construct plausible chains with fictitious information, claiming to be the real SAFEnet.
Except that the last link has to be signed by existing known close group members. That’s not forgeable, and therefore a chain that ends there is not forgeable either. This is a key point. So in total collapse, all nodes attempt to restart to the last known network (they all know their existing close group and their own key pairs). Then total collapse is recoverable as well, as long as we can detect network collapse per node (we can, easily: the group width will be huge on startup; think of close_group distance as difficulty).
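The restart acceptance check described here can be sketched as a membership test: a presented chain is only taken seriously if the signers of its last link overlap a majority of the close group the verifying node already knows. Signature verification itself is elided, and the majority threshold is my assumption for illustration.

```python
# Sketch of the restart check: accept a chain only if the last link's
# signers intersect a majority of the close group we already know.
# Actual signature checking is omitted; threshold is an assumption.
def last_link_acceptable(last_link_signers, known_close_group,
                         quorum_fraction=0.5):
    """True if more than quorum_fraction of our known close group
    appears among the signers of the chain's last link."""
    overlap = set(last_link_signers) & set(known_close_group)
    return len(overlap) > quorum_fraction * len(known_close_group)
```

An adversary fabricating chains after a collapse would have to forge signatures from keys the restarting nodes already hold as their known close group, which is exactly the unforgeable anchor the post describes.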