Yes, but not to any significant degree if the sections are relatively healthy (1 chunk in 10, 1 in 1E10, or 1 in 1E100; it’s up to you). When they start to become unhealthy, you ease into the indirection rather than having everything go off at once, analogous to slow yield and expansion versus an explosive fracture.
The general idea was to smooth out the response in the vicinity of your conditional criteria, so I apologize if I did a poor job of explaining it earlier. A simple true/false test on some criterion is the same as an analytic Heaviside step. Using a smooth sigmoidal ("soft Heaviside") function instead lets you modify the behavior with a single scalar parameter. With the smooth approach, one can easily test how a more constant level of indirection, your minimalist emergency-only indirection, and everything in between actually work out in practice. I am biased by mechanical analogy, but I think a smooth response would be beneficial regardless of the width of the transition region you end up choosing around the conditional. Hypothetically, if a section is approaching an unhealthy state / storage level under any condition, directed attack or not, the idea is that the “pressure” will start to leak off in a non-linear fashion to other sections and may buy some time for merges or other options. The indirection probability carried by incoming chunks naturally gives an indication of neighboring section health too.
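To make the idea concrete, here is a minimal sketch of the smooth criterion. Everything here is hypothetical: the health metric, the threshold, and the smoothness parameter are all placeholders for whatever the devs would actually pick; the point is just that a single scalar slides the behavior between a hard step and a gentle ramp.

```python
import math

def indirection_probability(health: float, threshold: float, smoothness: float) -> float:
    """Probability that an incoming chunk is redirected to another section.

    health     -- hypothetical scalar health metric (e.g. fraction of free storage)
    threshold  -- health level at which indirection probability crosses 50%
    smoothness -- width of the transition region; smaller values approach a hard step
    """
    # Logistic ("soft Heaviside") function centred on the threshold:
    # well below the threshold indirection ramps toward 1, well above toward 0.
    return 1.0 / (1.0 + math.exp((health - threshold) / smoothness))
```

With a tiny smoothness value this reproduces the plain true/false conditional; larger values give the more constant, always-slightly-leaky behavior described above.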
If a variety of sections are unhealthy, then at least the nodes get a bit of advance warning before the flood gates are opened. This may allow you to make other, better-informed control decisions. How leaky you set the default, and what the critical health level is, is up to the devs to determine. Maybe higher levels/probabilities of indirection in particular circumstances improve obfuscation, or have other benefits, with a negligible performance hit thanks to caching or other optimizations. I don’t know; you tell me. It’s just a brainstorming suggestion.
It also seems like indirection latency may be less of a performance hit than we might think, since no one really cares about a 2x or 4x latency hit for data that is stored once and never looked at again, or looked at once every X years. Caching handles the very popular data naturally, minute by minute. It’s just us folks in the middle who get penalized by the bad actors. Qi_ma’s idea of reversing the indirection when sections become very healthy would require additional work, but it is a rather interesting way to help out those of us in the middle. It seems like you could recover any short-term performance loss over time.
Perhaps an easy way to do it would be to have sections with health metrics above a certain level periodically look through their indirection maps and ask for a copy of each chunk to be transferred back. This could be made more intelligent by comparing the cache to the indirection map: any hit would prompt the node to request that the chunk be transferred back. The originating vault could then delete its copy, or just mark it as sacrificial, etc. (Are sacrificial chunks still in style?) I suppose this “super healthy section” condition could also follow a smooth probabilistic transition.
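A rough sketch of how that periodic reclaim pass might look, again with everything hypothetical: the `super_threshold`, the shape of the indirection map, and the cache-hit boost are invented for illustration, not taken from any existing vault code.

```python
import math
import random

def reclaim_probability(health: float, super_threshold: float, smoothness: float) -> float:
    # Smooth transition into "super healthy" reclaim behaviour: the mirror
    # image of the indirection sigmoid, centred on a higher threshold.
    return 1.0 / (1.0 + math.exp(-(health - super_threshold) / smoothness))

def maybe_reclaim(health, indirection_map, cache, rng=random.random):
    """Return the chunk ids this section asks to have transferred back.

    indirection_map -- hypothetical mapping of chunk_id -> holding section
    cache           -- set of chunk ids seen recently in the local cache
                       (a cache hit suggests the chunk is worth pulling home)
    """
    p = reclaim_probability(health, super_threshold=0.9, smoothness=0.02)
    requests = []
    for chunk_id in indirection_map:
        # Prioritise chunks that also appear in the local cache.
        boost = 2.0 if chunk_id in cache else 1.0
        if rng() < min(1.0, p * boost):
            requests.append(chunk_id)
    return requests
```

An unhealthy or merely average section (health well below the super threshold) asks for essentially nothing back; a very healthy one gradually reclaims its indirected chunks, cached ones first.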
In the example image, I’m referring to some “absolute” health measure or criterion (free storage space, etc.) minus whatever hard constraint you decide marks the start of critical operation. The probability that a chunk stays in place (which I used in the toy pretend pseudo-code example) is one minus the probability of indirection. The different colored curves are the result of changing the smoothness parameter.
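For anyone who wants to reproduce curves like those in the image, here is a toy version of that relationship. The margin (health minus the hard critical constraint) and the smoothness values are arbitrary illustration numbers; each smoothness value would correspond to one of the colored curves.

```python
import math

def p_indirect(margin: float, smoothness: float) -> float:
    # margin = health metric minus the hard critical-operation constraint;
    # negative margin means the section is past the critical point.
    return 1.0 / (1.0 + math.exp(margin / smoothness))

def p_stay(margin: float, smoothness: float) -> float:
    # Probability a chunk stays in place: one minus the indirection probability.
    return 1.0 - p_indirect(margin, smoothness)

# Sweep the smoothness parameter to see the family of curves:
for s in (0.01, 0.05, 0.2):
    row = [round(p_stay(m, s), 3) for m in (-0.5, -0.1, 0.0, 0.1, 0.5)]
    print(f"smoothness={s}: {row}")
```

All curves cross 0.5 exactly at the critical point (zero margin); the smoothness parameter only controls how abruptly they fall away on either side.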