Proposed RFC about the problem of targeting groups. Quite different from the previous RFC, with significant changes, mixing different techniques (POW, the Age concept, relocation, …).
Very interesting RFC.
It sounds like we can make “archive” nodes now. If nodes (vaults) carry an AGE, which is a level of proven resource… then archive nodes are those beyond a certain age?
Has there been a discussion about using the node’s AGE to affect their farming rate?
If so, this would encourage farmers to create a few strong vaults instead of many weak vaults. The end result could be a more stable network.
Does higher AGE increase my farming rate?
In other words…
Instead of AGE directly affecting farming rate, it indicates the level of trust earned from the Network. The Network gives the node (vault) more responsibilities. For example, storing “more” data chunks which means more farming opportunities.
It’s like giving a new employee 1 hour of work to see how they perform… a probationary period. Current employees are given 8 hours of work. The oldest employees become managers and tend to work more than 8 hours. So while the pay per hour (farming rate) is the same, the oldest employees earn more because they work more hours.
It sounds like we can make “archive” nodes now.
Not now! It’s a discussion / proposal
I’ll have to read it several times again, as I know quite some discussion and thinking has gone into this one. What I miss after reading is the concept of relay_nodes. We need to have nodes that relay data between a group and a client. It’s not really a “wanted” job, as it means routing data without farming. I’m curious how that one got solved.
This is my own theory/speculation.
If we use node “AGE” to affect farming rate then maybe it will be something like this…
Relay Node: AGE == 0
Managed Nodes: AGE >= 1
Archive Nodes: AGE >= X
So all new vaults start as relay nodes first and “age” over time based on reliability and resources. Above-average nodes could be classified as archive nodes. If we use the Sigmoid curve, then archive nodes are 20% older than the average node.
If the average node is age 10 then archive nodes would be age 12.
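As a rough sketch of that speculation (the role names, the thresholds, and the 20% cut-off are all from the guesswork above, nothing official):

```python
# Hypothetical sketch of the age-based role split speculated above.
# The function name and the 1.2x archive threshold are assumptions,
# not taken from the RFC.

def classify_role(age, group_average_age):
    """Map a vault's age to a speculative role."""
    if age == 0:
        return "relay"            # brand-new vault, probationary
    if age >= group_average_age * 1.2:
        return "archive"          # roughly 20% older than the group average
    return "managed"

# With an average age of 10, archive status would start at age 12:
print(classify_role(0, 10))   # relay
print(classify_role(5, 10))   # managed
print(classify_role(12, 10))  # archive
```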
I think the relay role could be integrated into every vault. So normally you farm data, but now and then according to need you are asked to tunnel packets.
This could in parallel serve as a bandwidth indicator for your node, if your group knows the data amount and the time it took for you to report completion. The question is how it will be implemented: is the tunneling based on an amount of data to be transferred, or on time to serve in that role?
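The bandwidth-indicator part is just throughput arithmetic; a minimal sketch, assuming the group records the payload size and the elapsed time (the function name is mine):

```python
def estimated_bandwidth(bytes_tunnelled, seconds):
    """Crude throughput estimate in bytes per second, as a group
    might compute it from a node's tunneling work."""
    if seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return bytes_tunnelled / seconds

# 8 MiB relayed in 4 seconds -> 2 MiB/s
print(estimated_bandwidth(8 * 1024 * 1024, 4.0))  # 2097152.0
```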
dyamanaka’s proposal uses time, but I don’t think it could work, since tunneling would stop if there were not a steady stream of new farmers.
If the relay role gets some geolocation functionality to optimize ping and probable bandwidth then the integration should perhaps not happen as it could be a security risk for nodes.
It’s now a function every node must do and will be measured on continually. This RFC uses age to provide longevity and long-term proof of capability. It is kinda new, but something I have long gone on about having to require; if you go back to the Google talk on scalability you will hear me talking about infant nodes growing up and proving themselves (all the rank I jabber on about probably too much). So this is very much in that vein and puts us square back on natural design patterns. This is what appeals to me. The relay part was OK but did not solve enough of the issues simplistically or naturally (too many if conditions; there are still some of those here as well).
It also means early adopters with decent machines get to grow with the network, and thereby earn more farming rewards. As they relocate logarithmically with time (well, time being churn events), after a few churn events those nodes will be in a group for increasingly long periods. So it also means archive nodes become a natural part of the network.
The algorithm for quorum and the penalties for bad behaviour though will probably evolve themselves over time.
Network-breaking tweaks and optimizations seem inevitable. Are there any thoughts or plans to create an auto-migration mechanism, or a way for the data on the network to survive a major rewrite of the code, if an optimization is discovered that needed such a drastic switch? A routing refactor comes to mind as an example.
This is what data chains provides. Re-publishable data. A much larger issue moving forward is upgrades, but there are lots of posts about that.
It’s no wonder this network’s design is so beautiful. As you said, it’s inspired by nature. Impressive is an understatement. You’re about to secure your name in the history books. Because of you, they will likely be free of falsehoods and strategic omissions as well. Governments will stop playing these phucking games with their employers, thanks to a few thousand lines of code and the unwavering spirits of all who have helped to move this project forward.
Just a query about persistence. For a while there the idea of nodes being non-persistent was being promoted. Has this changed back to the original concept that vaults/nodes gain rank over “time” and the longer you remain on-line the more rank?
Has the security benefits of non-persistent vaults been solved elsewhere?
Just trying to ensure I am giving others correct information, thanks.
Ah, it may not be obvious, but these are non-persistent as well. On restart the node will have its age halved. So it restarts and joins its old group with no authority, but to keep a good rank (age) it will have to supply all its data to that old group (whoever wants it), and they will then send the OK-to-relocate signal, with the node’s age halved, to its new group.
So it is not persistent, but it is able to supply persistence to data. If a node does not give up its data, it starts again at age 0.
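In pseudo-Python, the restart rule described above boils down to something like this (the function and argument names are mine):

```python
def age_after_restart(age, supplied_data):
    """Age penalty on restart, per the rule described above:
    halved if the node hands its data back to its old group,
    reset to zero if it does not."""
    return age // 2 if supplied_data else 0

print(age_after_restart(8, True))   # 4 -> rejoins its new group at half age
print(age_after_restart(8, False))  # 0 -> starts again from scratch
```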
I thought that when a node leaves, the network immediately starts churn to replace its live copies. When it comes back, what can it provide of value before being relocated forward?
[edit - yes, I should have agreed (sorry): a node leaving does create churn, and data is relocated if that node had it]
New nodes will ask for data from existing nodes, a restarting node can provide this and leave the group more settled.
Also, in a mass outage (seriously large) or a full network restart, these nodes will need to re-publish data.
Lastly, if there has been data loss for any reason, it’s a way for the network to recover that data. This is really helped by data chains though, as a single node only needs the data and can prove it is valid on the network. Without data chains we always need a quorum to do this, i.e. good data looks gone but several nodes have it, yet the network won’t believe them; with data chains this is also solved.
Is there not a concern that the penalty for what could essentially be a reboot is overly harsh on an independent user, and may cause centralisation in the long term, when people are also competing for uptime?
It sounds like you already have the information within a peer group to statistically analyse a node’s behaviour and determine the probability that it will return in an expected time frame. This may defer the need to initiate churn if a node is expected to only be offline for 5 minutes or the like.
Sorry if this is shooting in the dark; I haven’t looked into the implementation of this, but hopefully it’s food for thought.
I have been thinking about a similar thing, although from the perspective of the network and how much “unnecessary” churn is forced through periodic relocation. What you suggest is also unnecessary churn if the node is expected to return shortly and the tradeoffs are deemed worthy.
Another possibility would be to not cut the rank if the node drops for short periods and infrequently, but instead, when it happens, do churn normally and re-establish the group with one extra live copy of the data. This would increase bandwidth and resiliency by 25% for that chunk. Since the churn has already happened, why not try to benefit from it, instead of just mitigating and then kicking the node forward, causing additional churn when it finds its new home?
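For what it’s worth, the 25% figure falls out of simple arithmetic if we assume the usual four live copies per chunk (an assumption on my part; the actual replication factor may differ):

```python
# Hypothetical arithmetic behind the "25% extra" claim above.
# BASE_COPIES = 4 is an assumption, not taken from the RFC.
BASE_COPIES = 4   # live copies of a chunk in normal operation
extra = 1         # one additional copy kept during the grace period
increase = extra / BASE_COPIES
print(f"{increase:.0%}")  # 25%
```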
Both are correct; however, this seems harsh but prevents a collusion attack.
This is an attack where somebody creates a website, stating their group and offering $XX for other nodes in that group to pass their keys over. It’s possible but I feel very unlikely and very difficult in this case.
So this is the reason for instant relocate on reboot. The penalty also prevents another attack where a node builds age and then restarts quickly to try and target a group to join.
If a node is allowed a grace period, so it does not lose rank and stays around providing 25% extra bandwidth for the chunks it has (with no quorum voting rights until the next relocation), it should not add an attack vector, but would still prevent the churn of a rank-halving relocation. Or am I missing something?
This is a current area of consideration here, and a nice idea. The two parts of the RFC we are not entirely happy about right now are:
- Restart penalty (which you are looking at)
- Quorum calculation (50% of age + 50% of nodes).
1 - The issue is causing lots of churn; age helps here though, as younger nodes that restart a lot will not have that much data to contribute (so less churn to worry us). The other issue is, as you point out, that very fast restarts seem harshly penalised. Perhaps if a node can restart before another churn event (or some proportion of one), it should relocate without such a harsh penalty? Worth digging deeper here. If nodes can stay in their rank longer, then there is an almost built-in archive node; if these can reboot, then we would be there. A consideration: nodes with age in the top X percentile, for instance, have no weight but can restart in place?
2 - This seems reasonable in a balanced network, but it’s not certain (nothing is). The age should be very well distributed per group, but the algorithm feels too clumsy really.
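Here is one way to read “50% of age + 50% of nodes” as a concrete check. The function name and the “both halves must pass” interpretation are my own guesses, not the RFC’s algorithm:

```python
def quorum_reached(voter_ages, group_ages):
    """Speculative reading of '50% of age + 50% of nodes':
    the voters must make up at least half the group's members
    AND hold at least half the group's total age."""
    enough_nodes = len(voter_ages) * 2 >= len(group_ages)
    enough_age = sum(voter_ages) * 2 >= sum(group_ages)
    return enough_nodes and enough_age

group = [1, 2, 4, 8]                  # ages of the four vaults in a group
print(quorum_reached([8, 4], group))  # True: 2/4 nodes, 12/15 of total age
print(quorum_reached([1, 2], group))  # False: half the nodes, only 3/15 age
```

One consequence of this reading (if correct) is that a couple of very old nodes cannot outvote the group on their own, and neither can a crowd of infants, which seems to be the intent.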
Thinking about the restart issue.
Have you thought about using the rate of restart to help calculate the loss of rank?
- a long-term node restarts once every month, so it loses very little on restart
- a long-term node restarts every week, so it loses twice the monthly amount
- a newish node restarts every hour, so it loses 50%
So established nodes can still restart regularly and only lose a few percent each restart, but young nodes, or ones that restart often, lose a large percentage. Judge “time” by the amount of benefit they are to the network between restarts.
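A rough sketch of what such a rate-scaled penalty might look like. The square-root shape and the constant are my own invention, chosen only so that the hourly/weekly/monthly examples above roughly hold:

```python
import math

def penalty_fraction(hours_since_last_restart):
    """Fraction of age lost on restart: 50% for hourly restarts,
    shrinking as the interval grows. With this shape a weekly
    restarter loses about twice what a monthly restarter does."""
    hours = max(hours_since_last_restart, 1.0)
    return 0.5 / math.sqrt(hours)

print(round(penalty_fraction(1), 3))        # 0.5    hourly  -> 50%
print(round(penalty_fraction(24 * 7), 3))   # 0.039  weekly  -> ~4%
print(round(penalty_fraction(24 * 30), 3))  # 0.019  monthly -> ~2%
```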