Perpetual Auction Currency

In the current proposal it’s included in every GET response. So the more traffic your data gets, the faster the (possible) update rate of your bids.
This is an interesting aspect actually. Not yet sure if wanted or not. But it does uncover a relationship between the work as a vault, and (part of) the ramifications of bidding.

The mechanism you describe would work though. They get rewarded for going lower in price, if that brings them closer to NB (which it does in this case, since the others are supposedly bidding higher than NB; lower means anywhere between that and NB, as well as below NB down to, but not including, the same distance).

2 Likes

Ultimately, the farmers decide in the ‘take it or leave it’ model. The network lowers its buy order price for resources until too many farmers halt operation and take their resources offline. That is the lowest possible cost the network can attain without self-destructing. Some farmers will stay online no matter what the resource buy price is, which has a stabilizing effect.

Yes, because farmers could collude and demand a much higher resource ask price than necessary. This would hurt network growth and long term survival compared to a scenario where you had not given them this power. A farmer’s greed has no limit; the network’s greed can be programmed and known a priori.

In a hybrid scenario, the farmers tell the network what their minimum ask price is for their resources. It’s a threat telling the network that they will go offline if the network reduces its resource buy price below that amount. This gives the network some advance warning so it can smoothly correct farmer rewards before too many nodes go offline. Farmers could also be pressured to play fair. For example, if a node claims that it will go offline at a certain price but does not, the network could cut its age in half as a penalty for lying.

Some translations that might help the thought experiment:

Farmer’s reward = network’s buy price/offer for farmer resources

Farmer’s ask = a farmer’s minimum ask price to continue selling resources for the network’s safecoin.

11 Likes

I’m not sure about this. Could the network greed be done in a way that aligns well with user needs? It’s a clever way to frame it though… ‘network greed’, really neat and helpful perspective.

I like the hybrid idea, really stimulating stuff.

2 Likes

There’s no need to commend us.
In fact, the best way to show appreciation for our work, I feel, is to lend us a few moments of your time and be open to the idea. It’s perfectly fine, plenty good, to just run a civilised discussion. But being open to the idea, that would be the optimal thing.

None of us has decided either way, but we entertain the idea. If you think you can do that for a while, also take into account the arguments that question the feasibility of collusion, and try to come up with reasons why they hold or not, rather than only standing firm in the stance “it will happen”. Or at least delve a bit deeper into that, so as to show why you can say it with certainty.
I can say with certainty that SAFENetwork could be overtaken. But after having discussed the theory, done simulations etc., the assessment is that it would be very expensive. Fully possible, but also very expensive. We don’t dismiss the network because of it. And there certainly are motives to take it over.
The same thing applies here, with one difference I would say: we are less familiar with the magnitude of resources required for successful collusion. We know there is a cost; many of us even say it is probably quite high.

When you have decided your stance based on “people could collude”, do you then have a clear picture of just how high this cost is? I would be very interested to hear your reasoning about it. Because you probably think it is not prohibitively high, and so you think there’s some boundary. Since we say we don’t know, maybe you have a view that could make the picture clearer?

If you could lend your time for such, that would be massively appreciated, and much more desired by us than any commending.

To me the most important thing is that SAFENetwork does not end up with an inferior solution that could risk toppling it.
All the things @mav mentioned that he saw during simulations, are things I also saw when digging deep into my simulations.
It has raised concerns for both of us that the storage-as-a-proxy system is very sensitive to future changes.

If you could for a moment assume there might be something to that assessment, how would you then feel about delving a bit deeper, playing with us for a bit, and trying to view this collusion phenomenon with all arguments considered?

If you manage to show us that the cost is not high enough, that would be a great help. If you manage to show that the cost maybe is high enough, well, maybe that would be good for you as well?

Edit: If you didn’t perceive those arguments against the feasibility of collusion, hold on and let me gather them. There were a couple from various people here.

6 Likes

To me, this is the reason FOR bidding: people don’t know or fully understand the motivations of others, and when a pressure is applied, a counterforce appears. The same isn’t true for programmed operations; in fact that is why they can be gamed and manipulated, requiring continuous reworking.

Opening the network up to market operations, if done well, takes the burden off the programmers and the network.

It’s the doing well part that is the tricky bit - but these problems are tricky all around, no matter how you slice them.

3 Likes

I think one of our fundamental arguments is that it isn’t actually known a priori. It might be for the situation we envision, but not for the ones that come. The variables are too many to be captured by an algorithm.

And I think this is one of the things argued with these formulations:

and

and

and

To all of this I would like to add that the algorithms are the result of a few people’s understanding during a very limited timespan (when they designed it). While bidding taps the understanding of many people, perpetually.

3 Likes

Thanks again for the responses. I value the discussion and take the point @mav about both approaches being complex in their different ways.

The discussion and your comments about simulating the storage algorithm lead me to feel that, while valuable, there is going to be a limit to what we can learn from both discussion and simulation, and that we should be ready to test any idea hard, and ideally more than one.

WRT the network doing A/B testing, I think that is, and I would like it to be, one of our goals, so that the network is capable of running these ‘tests’ live at some point. (Not A/B testing necessarily, but adaptive in some way.) I think that is something that is being considered, but we’ll have to make do without it for some time yet.

I liked the earlier quote from Tesla about thinking clearly and doing experiments. Simulation is a form of that, and the test networks will take it forward. I guess though that with radically different models there is a lot of work in creating the test and running it, so perhaps that’s a barrier, and why we tend to focus on less labour-intensive, less costly methods.

5 Likes

Just to get a little meta: I propose this conversation itself is an example of how the idea of collusion interacts with human behavior. It would not really cost anything at all to just all agree. Yet we don’t default to that just to be part of the in-crowd (at least, not everyone all the time). Some people say, wait a sec, I have a counterpoint, and maybe there is some profit in exploring that.

In fact I don’t think science or philosophy (or maybe even art) would be able to advance if we had a tendency towards 100% collusion. There’s always gonna be someone saying ok cut off my head but I still think the earth is round not flat.

5 Likes

Let’s compare to bitcoin.

Bitcoin has the @jlpell idea of ‘network greed’. The way bitcoin is greedy is:

  • Only pay 50 coins every 10 minutes for ‘doing work’.
  • Halve that amount every 4 years (leading to a max of 21M coins).
  • Every 2 weeks adjust the work required to keep the payment rate fairly steady.
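
Two of those rules, the halving schedule and the 21M cap, can be written down in a few lines. This is a sketch for illustration, not consensus code; values are in satoshis:

```python
def block_subsidy(height, halving_interval=210_000):
    """Block subsidy in satoshis: 50 BTC at genesis, halved every 210,000 blocks."""
    halvings = height // halving_interval
    if halvings >= 64:          # after 64 halvings the subsidy is zero
        return 0
    return (50 * 100_000_000) >> halvings

# Summing each epoch's subsidy times the epoch length gives the supply cap:
total = sum(block_subsidy(e * 210_000) * 210_000 for e in range(64))
# total is just under 21,000,000 BTC
```

The point being: because the schedule is fixed, anyone can extrapolate it through time without simulating miner behaviour at all.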

We don’t really simulate the bitcoin rewards, we extrapolate through time. It’s predictable and fixed. The adjustments to a fixed rate mean behaviour can change but the economy stays pretty much as designed through time.

The behaviour happening behind the scenes can be very complex, basically impossible to simulate, but bitcoin still works. I still find that pretty amazing. For example the transaction fee market in bitcoin, that’s really crazy but somehow it seems to work.

For me there’s a mild flaw in part of the thinking which caused @oetyng and myself to develop the bidding idea: ‘storage-based-rewards is hard to simulate so it should be changed’. Bitcoin shows things that are hard to simulate can still work ‘just fine’. I’m not arguing against the bidding idea, just accepting that storage based rewards might be ‘just fine’. We don’t/can’t know. It would be silly to change from one hard-to-simulate thing (storage) to another hard-to-simulate-thing (bidding). So why am I interested in bidding even though it adds no improvement to the simulations?

If we look at the proposed ‘network greed’ in RFC-0057, I think this is what it looks like:

  • 2^32 maximum coins
  • always have spare storage (no more than 50% of nodes will be full)
  • distribute responsibility (there will be between 100-400 nodes in each section)

The big difference is we can’t easily extrapolate rewards (or network greed) through time.

Part of why it’s difficult to ‘do bitcoin style greed’ is because SAFE doesn’t know about time. If we try substituting ‘events’ for ‘time’ we are really substituting ‘behaviour’ (since events are triggered by behaviours) and we’re back to unknowns.

Could we design storage-based rewards to be more like bitcoin, more greedy, more predictable, clearer promises?

I think the work on network health parameters is a fairly good start toward a useful understanding of network greed. Payments / rewards should be based on a network that’s ‘greedy for health’. We decide beforehand what parameters would count as being unhealthy, and then design a reward algorithm to push behaviour away from those boundaries. It’s definitely possible.

I think it’s great to avoid using time and using hard boundaries in the reward mechanism. However by doing that we probably must accept that behaviour is going to be the driving force of the network.

There’s an assumption by everyone here (probably correctly) that greed will be a dominant behaviour.

Bitcoin does not have a way to change the reward amount (the ‘obvious’ greed). It only has the ‘subtle’ greed which is exclusion-through-efficiency. We’ve already seen that having no counterforce for this type of greed has caused bitcoin some issues. Participation in mining is not as distributed as perhaps many people would like to see, and that poses security and existential threats to the bitcoin network. The underlying behaviour ‘ensuring’ distributed participation is obscured.

When using storage as a way to determine reward both types of greed are available. Users have a say in both the amount of reward and the amount of exclusion. And they get to do it in a way that one type of greed generates more of the other type.

In contrast, the bidding mechanism allows different types of greed to balance each other, rather than reinforce each other.

So… does this sound reasonable? Am I accurately representing things? Is the bitcoin comparison useful? Does bidding have a benefit that’s not possible using resource-as-a-proxy?

14 Likes

What a cool idea! Just catching on to this one.

3 Likes

Not all farmers are going to be farming merely for the sake of earning coin (obviously)… even on the larger scale of farming, and especially longer term, many farmers will farm for the sake of adding stability to the network in order to secure their own data.

Given that, I think we can understand that there are going to be forces within the farming community that will seek to moderate or even drive down prices.

3 Likes

Yes.

Yes, if it is used to warn the network when nodes are likely to give up and go offline, and only if resource-as-a-proxy has the final say on reward price.

I think a missing perspective in this discussion is the lack of consideration for the network as a self-interested and self-governing market player. (Sorry if the following sounds like a broken record…) In the original farming algorithm, the network was in control of the market, and users were subservient to the whims and demands of the network. The network offers a buy price in the perpetual storage resource market and the farmers can accept that price or leave. The network has serious duties it is trying to perform: i.e. monitoring GET rates, PUT rates, and the available supply of resources in order to keep everyone’s data safe and secure. In this regard it is a trusted authority on what storage costs it can afford to pay the farmers in order to sustainably grow the network at any given moment (based on its PUT income and savings account / trust fund balance). Nodes/storage leaving the network is the ultimate truth on what the minimum sustainable GET price can be.

The PA system you proposed offers the other extreme case, where the network is subservient to the farmers. Farmers offer an ask price in the perpetual storage resource market, and the network is forced to meet their median/average demands. The farmers know nothing and care nothing about PUT or GET rates, how much safecoin is left in a section, or anything else happening with regard to the network’s “health parameters”. In this scenario, the farmers are empowered to fulfill their self-interests, which may or may not align with optimal growth and survival of the network. The farmers have the power to drain the network dry, and there is no way for the network to protect itself other than raising PUT costs, which would lead to a lower-than-optimal network growth rate.

These two extremes are edge cases of a more moderate free-market interaction between the network and the farmers. Consider a perpetual storage resource market where farmers present an asking price for their resources, and the network presents a buy offer based on its health/growth metrics. When these two prices align, storage is made available to the network and farmers receive safecoin. The problem with this scenario is that situations could arise where the network and the farmers can’t agree on a price, and so “market volume” drops to zero, meaning no GET rewards and no chunks delivered. This is an infeasible and intolerable scenario for network operation.

It is difficult to talk about either scenario without also talking about PUT costs. In all cases, the network would need to resort to increased PUT costs if there was too great an exodus of nodes from the network, or if the farmer ask price was too high (and the network savings account was too low). The difference is that node exodus is the ultimate truth on how far the network can push the farmers down in price, whereas farmer ask prices are always going to be higher. Anything greater than the minimum possible GET price will require a greater-than-minimum PUT price, which results in non-optimal network growth.

8 Likes

This was, and I still feel is, an important part of an autonomous network. Yes, bidding could still be considered part of an autonomous network, but it is yet another sizable step away from being autonomous, since the network is no longer in control of what it deems a suitable price.

As you say, the farmers are now controlling the network and telling it the reward amount, and there is definitely an avenue for farmers to manipulate the price. This is inherent to control systems, and while manipulation can be made difficult, there are always ways to get around it. The issue is how long it takes to do so.

When the pricing is based on network needs, the network can control the behaviour of the farmers as well, whereas bidding limits this ability or in the worst case makes it ineffectual.

6 Likes

Yeah I can see how this is a workable solution, and agree the network should be ‘in control’ rather than the bidders/farmers.

How do you see the departure rate being expressed by the network? Should it be a fixed target, maybe aim to have one vault depart for every two that join? As in, how do you think the network would actually make the decision to raise or lower rewards? I like that it’s based on departure rate (which is informed by farmer bidding). Have you thought of any particulars around the decision algorithm? Just really curious about the idea, looking for more info if you’ve got it 🙂

1 Like

Another simple thing to consider is network growth rate. One way to structure the GET reward and PUT cost control algorithms is to specify a targeted network growth rate from the beginning. This desired growth rate is analogous to network greed. GET and PUT prices are then manipulated via a control algorithm (ex. a simple PID controller) or non-linear optimization algorithm (ex. genetic algorithm, neural net, etc.) to attain the targeted growth rate specified at network genesis (ie. “be fruitful and multiply”).

What growth rate is optimal? Pick your poison. Linear growth, exponential growth, geometric growth etc.
I think the most natural choice would be to follow a fibonacci series with regard to resource capacity and establish different metrics for storage, bandwidth, compute, etc. We want the network to grow quickly, and natural evolution has shown fibonacci growth to be optimal in many situations.

Ex: Targeted growth of {node counts, used vault storage, elder counts, GET rates, PUT rates} per {X,Y,Z,R,S} parsec consensus events.
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, …
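
As an illustration of the control-algorithm half of this, here is a minimal textbook PID controller nudging a reward toward a targeted growth rate. All names, gains, and units below are hypothetical, not from the proposal:

```python
class PIDController:
    """Textbook PID controller; gains are illustrative, not tuned for anything."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, measured, dt=1.0):
        """Return a correction signal from the gap between target and measured."""
        error = target - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. nudge the GET reward when measured growth lags the target:
pid = PIDController(kp=0.5, ki=0.1, kd=0.05)
adjustment = pid.update(target=1.05, measured=1.02)  # positive: raise rewards
```

In this framing, the fibonacci (or any other) series above would supply the `target` value per interval of consensus events, and the controller output would move the GET reward toward it.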

You described the previous method as “storage-as-a-proxy” but it doesn’t need to be a proxy; it could just be “storage-as-storage”. The reward rates can be independent. When the network launches, perhaps only storage is accounted for and rewarded/purchased by the network. Later, a market for bandwidth is brought online with payments to farmers according to different metrics. Later again, compute and other capacities are brought online and rewarded independently.

EDIT: Another aspect of network greed is the equilibrium setpoint for how much safecoin it is willing to let circulate among the humans at any one time. IMO the network should always be seeking to own 50% of the total safecoin supply at equilibrium, but allow for transients from 10% to 90% depending on network conditions.

7 Likes

The autonomy of the network is great to the extent that it has enough information and that it can process that information in a timely manner.

Do we really know if it does or if it can?

Bidding is a form of voting that gives information to the network - perhaps the greed effect can be diluted or ‘aimed’ to provide a win-win.

This is what I don’t like about these sorts of pre-determined systems: they don’t allow the network to adapt to rapid behavioral changes, and that is a valid reason to see bidding as a better model.

To build in the level of autonomy that is capable of really coping with human gaming - it needs to be an adaptive AI capable of gaming on the same level … otherwise we need to invite humans into the system through some mechanism or mechanisms to add the adaptive layer.

Of course the basic layer here is that when the system goes out of whack, the coders implement a soft fork - but these things take a lot of time and the network can suffer a lot while waiting.

2 Likes

Missed your post earlier…

The network should never be in a state of contraction with regard to available resources. We always want it to grow (to an upper bound) or maintain its current size over some reasonable interval (X parsec consensus events?). We don’t want a contraction; this is especially true now that sections will not be allowed to merge, only split. I see network size as a multi-valued metric consisting of section node counts and section storage space. The two are loosely coupled but equally important. IF section node counts are decreasing, OR available storage space is contracting, THEN the rewards (i.e. the buy price offered to farmers by the network) should be increased, ELSE decrease the rewards. IIRC the original farming reward rate algorithm spoke to this in some fashion.
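
The IF/OR/THEN/ELSE rule above, sketched with hypothetical names and a hypothetical step size:

```python
def adjust_reward(reward, node_count_delta, storage_delta, step=0.05):
    """Raise the network's buy price when node counts or storage contract,
    otherwise lower it (the network being 'greedy' while healthy)."""
    if node_count_delta < 0 or storage_delta < 0:
        return reward * (1 + step)   # contraction: pay farmers more
    return reward * (1 - step)       # growing or steady: pay less
```

So, for example, `adjust_reward(100.0, -3, 10)` raises the reward because nodes left, while `adjust_reward(100.0, 5, 10)` lowers it because both metrics grew.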

Farmers leaving the network is a harsher but more truthful way to give information to the network.

No one said anything about not letting the network adapt @TylerAbeoJordan. That is what the optimization and control algorithms are for. A pre-determined/targeted growth rate is only the network’s ambition, what it strives for. Its reason to get out of bed in the morning and work to keep our data safe and deal with all the fuss of PUT income and GET costs. It might never be able to attain its goals, but it sure will try. There is something to be said about having unrealistic goals though. Some meta programming might allow the network to reassess the targeted growth rate if it fails to deliver more than X% of the time. Likewise, if the network detects that it is an over-achiever, it could reassess and increase its targeted growth rate.

1 Like

But all it tells you is that they are leaving - it doesn’t tell you why.

For example, a small number of farmers leaving might not seem like a big deal, but if those are the high-tier professional farmers then it may be a big deal, taking away a lot of fast/high-throughput connections… meanwhile a large majority of home farms keep rumbling along forever and don’t leave until the network stops being viable.

However, I do not seek to argue merely in favor of this particular form of bidding, as put forward in @mav and @oetyng’s proposal, but will argue in support of anything that gives more information and direction to the network, and against hard-coded solutions that do not allow flexibility. And I believe that is the aim of this proposal overall. So IMO the specifics are what need to be addressed. And perhaps in time we will see whether storage, bidding, or a hybrid solution is best.

2 Likes

One of the variations we worked with was weighting of the NBs before calculating rewards.

For example, weighting by coin scarcity was deemed useful in order to actually get net farming after launch. Without it, the unfarmed supply could stay at 85%.

The coin scarcity weight would also be the place where the target unfarmed supply is defined.

There are other weights that can be used as well. Coin and storage scarcity are weights that I used in the farming algos in SAFE Economy Explorations, informed by the various existing farming reward algos.
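
As an illustration (the function names and the exact weighting form are my own, not the proposal’s), a coin-scarcity weight could scale the NB-derived reward up while most of the supply is unfarmed, and back down as the target is approached:

```python
def scarcity_weight(unfarmed_fraction, target_unfarmed=0.5):
    """>1 while more of the supply is unfarmed than targeted, <1 once below it."""
    return unfarmed_fraction / target_unfarmed

def weighted_reward(nb, unfarmed_fraction, target_unfarmed=0.5):
    """Neighbourhood-bid (NB) reward scaled by coin scarcity."""
    return nb * scarcity_weight(unfarmed_fraction, target_unfarmed)

# At launch, with 85% of coins unfarmed and a 50% target, rewards get a 1.7x boost;
# at exactly the target unfarmed fraction, the weight is 1 and the NB passes through.
```

The `target_unfarmed` parameter is where the targeted unfarmed supply mentioned above would be defined.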

  • Do you feel that is a sufficient influence of the resource-as-a-proxy method? Do you see some other combination of these?

I would strongly disagree that it is simple.

It has been a very important part of simulations, so much has depended on that alone.

I have always asked myself, what is a desired / healthy growth rate? And every time, I think that it is not possible to say. What makes this specific growth rate desired for all the times and environments we will be facing? What would make us think that there is a specific growth rate that is apt for all times?

This is exactly the problem I have seen. It is one of the hard-coded assumptions about future values that aren’t really resilient. It may be a good approximation today, or for some periods. There is no evidence supporting that it will generally be so; rather, I would say the opposite evidence exists.

You refer to fibonacci growth. While it is definitely interesting and has its uses, I think it borders on numerology to trust such a series to reflect a desired growth rate.

I’ll give you some meat and backing for that claim (I don’t like to just state things without a backing reasoning; it is what allows for continuous traversal towards a common understanding).

The examples in the link you provide are in many cases confined. Flower petals, seed heads, pine cones, tree branches, shells, faces, fingers, the uterus, animal bodies: these are bounded shapes and objects. They have boundaries (probably defined by some gene, and they won’t grow beyond a certain size defined by the current set of available genes, so not talking about evolution here), and within that boundary they exhibit this pattern. Very different from the network and its growth rate, which has no boundary.

The closest relevant analogy comes in the example of tree branches, where this is mentioned:

Root systems and even algae exhibit this pattern.

So, algae is a population. That is a good analogy to network growth. But the application of the growth rate is flawed, because it leaves out very important things that change everything.

The algae exhibit the fibonacci growth pattern in a specific environment. Probably they are talking about a petri dish. When you put this algae (how large a population to start with?) in specific environments around the world, you will see very different growth rates. It will depend on so many things that I would have to write pages upon pages here to account for it, and I would still not have it all.

This is because the environment is not a petri dish; each environment has populations and factors, basically in themselves a universe of environments, that interact with it, and none of these environments will be static over time.

The same goes for SAFENetwork. It is not going to be placed in a petri dish. For that reason, picking a fibonacci series, or any other series, as a good growth rate for it is almost numerology. It certainly will not be a good growth rate for all the environments and times to which SAFENetwork will be exposed.

  • So, to connect to my starting statement, I would strongly disagree that the growth rate is a simple thing. Do you think my reasoning supports that statement?

I fully agree that we all, continuously, miss various perspectives. That is why we talk to each other, so it’s really great to see your input on these questions, to actually see your reasoning about them and the foundation of your stance. That’s when it becomes really productive IMO. So thank you for that.

I can see why you want the SAFENetwork to manage itself, to be autonomous, when setting the farming reward and PUT cost. It is one of the most brilliant, beautiful and powerful things about the network. That it is autonomous is its very foundation. Before I perceived any of these problems with the previous economy design, I was deeply fascinated by the idea of using pre-designed algorithms to define the economy and protect it against corrupted humanity. Deeply fascinated.

I think there are a couple of things that can be helpful to think about at this point:

  1. Is it really true that we don’t allow corrupted humanity to influence the SAFENetwork, just because we pre-define a farming algorithm based on various measurements?
  2. Is it really possible to pre-define an algorithm based on various measurements that will be resilient in the face of the ever-changing, non-predictable environments it has to be apt for?

Addressing question 1:

It’s true in a way. But I think corrupt humanity is still influencing the network, even without bidding. Every person interacting with it has some leeway for manipulation, making the network subservient to them: when to connect, when to disconnect, how many resources to provide, which resources to provide, in what way these resources are provided (with regard to latency etc.), how to act when tasked with handling messages or membership changes, etc. etc… All of this is user input. (We have provided a software, but it can easily be rewritten and distributed, i.e. user input is always possible.) All of this is corrupt humanity influencing the SAFENetwork. The difference is that we believe we have designed protective measures in the form of rules, consensus, malice detection etc. that are sufficient to keep all this corrupt human input in check.

So the difference is not that SAFENetwork previously had no corrupt human input and with bidding it would. The difference is that for all the previous input, we believe we have designed solid rules. For bidding, we do not yet believe we have designed solid rules.

Some of you say it is impossible to design such solid rules for it. Mind you, before SAFENetwork (probably even still) people said the same about the things we actually have done now. The impossible network, anyone? Because how could we hinder corrupt humans from manipulating it? Well, we believe we have done that now, haven’t we?

  • Do you think it’s a somewhat accurate description that the difference is that for the existing human input in SAFENetwork we have solid constraints which make it safe, and that for bidding we do not yet?

Addressing question 2:

As I’ve said before, we often do not know if there is a solution, but have to believe there is one in order to actually put in enough effort to find it, if it exists. So, that’s what we’ve been doing, that’s what I’ve been doing, when trying to design a good farming algo à la resource-as-a-proxy.

You suggest that we base the algo on one parameter to start with, and continuously expand it as new parameters become available in the network to measure against. So, that is kind of based on this idea: we won’t ever know if the current solution we have is actually perpetually resilient, but we trust our collective perception of the changes to guide us in improving the algos as it becomes necessary. I.e. from the start we are not trying to create perpetual resilience, but always deferring that to the future, where we trust humans to be able to take wise decisions, agree on them, and implement a proper upgrade that will serve us until the next upgrade is necessary.

It is a sort of sparser bidding system, almost like representative democracy instead of direct democracy: voting in a government that for four years carries out the objectives of the population, who don’t have much say until the next vote, when they put in place a new government.

I say that, by this, we haven’t actually changed anything in principle about human input being the guiding source for the network. We have just changed a few details in the frequency of sampling the human population, and perhaps in the piping of the samples through a not-yet-defined structure of trust in developer groups who ultimately decide exactly how it’s implemented (all of which makes it completely obscure and hard to reason about).

Also, we have introduced these high-risk events, these times when the network can be highly out of sync, and its entire existence depends on the humans agreeing on what changes should be implemented, and then on the upgrade being carried out in a good way. Times of turbulence.

(There is some ”science fiction” to come that can be applied later, AI writing the upgrades etc. But we can’t really design for that now. We have to rely on what we can do today. So, that’s unfortunately out of scope for these discussions, and basically has to be considered practically impossible for now.)

Even a bidding system would probably benefit from upgrade at some time. But it seems it would be much less dependent on it.

  • Do you think it’s a somewhat accurate analogy with the direct / representative democracy, and that the difference is in frequency of sampling the input from corrupt human population, more than in the principle of sampling it?

I’m not sure how realistic this is. Initially, it seems to be out of reach for actually basing a proposal on today. Do you have some more concrete vision about this, something we can work with to apply to your idea of a pre-determined growth rate, so that it can be resilient to future changes?


Very nice concept. Do you actually feel it would be a desirable approach, if we disregard the no-agreement situation?

  • Let’s say we find a solution for when there is no agreement, would you say this is then a good approach – maybe even better than resource-as-a-proxy?

I agree that the network would need to resort to increased PUT costs in all the cases we’ve seen so far.

I do see the reason to believe farmer ask prices will always be higher. But I don’t think it will necessarily be the case, considering all the ways rules and incentives can be designed.

This connects back with the motives for bidding exactly at, below or above NB.

I think you are saying that the dominant force will always be whatever motivates bids above NB.

  • Do you think it is practically impossible to give incentives to at-or-below NB bids in such a way that there would, over time, be no unhealthy dominance (imbalance)?

So, let me present an alternative angle on the idea of node exodus being the ultimate truth:

How about all the other possible reasons for node exodus?

Large scale technical failures, power outages, cables being cut off by governments, solar flares, etc. etc.

I can think of many, many reasons why large numbers of nodes would disconnect for reasons other than the reward price being wrong.

I think this pinpoints what we have said is a problem with storage-as-a-proxy as the sole determinant. It is a proxy, and it is not the ultimate truth of node operators’ sentiment on the reward. Acting on it as if it were risks causing very bad end results. It constitutes a constant friction and misinterpretation of actual sentiment that gives unpredictable and unwanted results.

  • What do you think of that reasoning, does it make sense?
4 Likes

This seems very complicated to me. An ever growing complexity. It’s what @mav referred to previously with this:

It becomes very difficult to reason about the outcomes, to modify this when necessary. The complexity and the difficulty to reason about it makes it vulnerable to mistakes, vulnerable to introducing unintended behaviour and consequences.

I think it is an expression of over-belief in the forever-applicability of a pre-defined method, of a calculation pattern, to stay aligned with the future. An over-belief in our ability to choose, today, good hard-coded values (which are at the foundations of what you suggest here) that will be resilient to future situation changes.

I think that, throughout the discussion in this topic, we circle around this difference in perception of these abilities of ours.

3 Likes