That is what I was trying to achieve: a non-time-based method of aging a particular copy of a chunk, so that the network can redirect its replacement to an archive node when it hasn’t been accessed for quite a while. The count can be tuned so that archive nodes don’t get too many chunks and farmers aren’t carrying too much stuff that isn’t accessed. Eventually all 8 copies will end up in archive nodes, but if the chunk is accessed in the meantime, then the copies not yet in archive nodes will remain in farmer nodes for some time.
But as you pointed out, there is no count of accesses and no record of who or what is accessing the chunk, just whether that particular copy of the chunk has been accessed at all and how many times it has moved between vaults since the last access. It’s purely a meta value, unrelated to any ID or person and tied only to the particular copy of the chunk itself.
This is not trying to send all copies of a chunk to separate archives at once; it is more a gradual migration of stale chunks to archives.
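To make the idea concrete, here is a rough sketch in Rust of what that meta value could look like. All the names (`ChunkCopyMeta`, `MOVE_THRESHOLD`, the methods) are invented for illustration, and the threshold value is arbitrary; none of this is from the actual codebase.

```rust
/// Metadata carried by one particular copy of a chunk. Nothing about
/// who or what accessed it is recorded, only whether it was accessed
/// at all, expressed as a count of vault-to-vault moves since the
/// last GET.
struct ChunkCopyMeta {
    moves_since_last_access: u32,
}

/// Tunable: how many relocations a copy survives without being
/// fetched before it counts as stale. Arbitrary illustrative value.
const MOVE_THRESHOLD: u32 = 16;

impl ChunkCopyMeta {
    /// Any GET on this copy resets the counter, keeping it "fresh".
    fn on_access(&mut self) {
        self.moves_since_last_access = 0;
    }

    /// Each churn-driven relocation bumps the counter.
    fn on_relocate(&mut self) {
        self.moves_since_last_access += 1;
    }

    /// When the copy is re-homed, decide whether it should go to an
    /// archive node instead of an ordinary farmer vault.
    fn should_go_to_archive(&self) -> bool {
        self.moves_since_last_access >= MOVE_THRESHOLD
    }
}

fn main() {
    let mut meta = ChunkCopyMeta { moves_since_last_access: 0 };
    for _ in 0..MOVE_THRESHOLD {
        meta.on_relocate(); // chunk keeps churning with no GETs...
    }
    assert!(meta.should_go_to_archive()); // ...so it migrates to archive
    meta.on_access(); // a single GET makes it "fresh" again
    assert!(!meta.should_go_to_archive());
}
```

The only state kept is the move counter, so a popular chunk never accumulates enough moves to be archived, while a stale one drifts to an archive node as churn re-homes it.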
I’d say that when archive nodes are specified, something similar will be implemented so that archive nodes are filled with staler data rather than fresh data. Hopefully the write-once data cubes being prototyped now will be available by then (hundreds of TB, from memory, and inexpensive to run). With hardware like that, archive nodes can exist cheaply, and with so much data stored, each of these archive nodes should make plenty of coin even though each chunk is accessed rarely. Say it gets maybe 1% of the GETs per TB that a normal farmer is asked to retrieve, but with 1000 times the storage and *never* turned off, it could be earning 10 times what an ordinary farmer gets, which should cover the expense of a couple of these huge archive storage devices.
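Just to sanity-check that 10x figure (the baseline numbers are made up, only the ratios matter):

```rust
fn main() {
    // Assumed ratios from the paragraph above, purely illustrative.
    let archive_vs_farmer_storage = 1000.0; // ~1000x the storage
    let gets_per_tb_ratio = 0.01;           // ~1% of the GETs per TB

    // Relative GET rate, and so relative coin earned, ignoring the
    // extra uptime advantage of a never-switched-off archive node:
    let relative_earnings = archive_vs_farmer_storage * gets_per_tb_ratio;

    println!("archive node earns ~{}x an ordinary farmer", relative_earnings);
    // prints: archive node earns ~10x an ordinary farmer
}
```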
Oh dear, I just had an image from the original Planet of the Apes, where there were those terminals that stored mankind’s knowledge.