Exploration of a live network economy

(This post was originally in RFC 57 Safecoin revised topic, but since it is not about RFC 57, it gets its own topic).

As you might already know, the temporary farming algorithm proposed for test-networks is not suited for the live network. It is good enough for the MVP and tests and so on, but I’ve always wanted to delve into real attempts at devising an elegant and intelligent rewarding system.

An intelligent rewarding system could perhaps be described like this:

  • Gives the right incentives,
  • Has the right amount of buffering capacity for various events,
  • Has compensatory effects whose onset comes at just the right pace.

Here I start out somewhere, drop the needle on the map, and work from there.
Some things from the original Safecoin RFC are brought back in, such as recycling of coins, and some are based on the newer RFC 57.

I think what we need to do is make the network able to easily adjust the reward R and store cost C as outside markets decide on a fiat value of safecoin. Increasing the weight of data stored d improves the responsiveness in this regard, which should make it easier for the network to adapt to such market valuation changes, as it can then better incentivise people to add or remove storage on the network.


Exploration of a live network farming reward

d = data percent stored of available
u = unfarmed coins
s = sections count
m = 1 / s
n = neighbor sections' median vault count [1, 200]
x = 1 / n
p = ds + 1
g = ln(p^3.2)

Farming reward

R = gxmd(d + u + 1)

Store cost

C = 2R
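For concreteness, here is a minimal sketch of R and C as defined above (Python; the function and parameter names are mine, purely for illustration):

import math

def farming_reward(d, u, s, n):
    # R = g*x*m*d*(d + u + 1), using the definitions above:
    # d = data percent stored, u = unfarmed coins, s = section count,
    # n = neighbor sections' median vault count
    m = 1 / s
    x = 1 / n
    p = d * s + 1
    g = math.log(p ** 3.2)   # natural log, i.e. 3.2 * ln(p)
    return g * x * m * d * (d + u + 1)

def store_cost(d, u, s, n):
    # C = 2R
    return 2 * farming_reward(d, u, s, n)

For example, farming_reward(0.2, 0.9, 50, 100) gives roughly 0.00064 safecoin, the "Day of launch" figure used further down.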

General

The farming reward is paid on GETs.
The PUT payments go directly to the section of the associated data. This is a recovery of farmed coins into the unfarmed balance. (It is the first part of having the network keep a buffer for balancing dynamics.)
This requires a simple and minimal algorithm for rebalancing between sections. It can be done in many ways, for example:

Whenever a section cannot afford to reward a GET, it requests the N/3 neighbors with the highest unfarmed balances to each send it 1/4 of their unfarmed balance (where N = neighbor count).

This would supposedly be a relatively rare operation, with low impact both when it executes and on the general workload.
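A rough sketch of that rule, just to pin the idea down (Python; the dict-based section representation is invented purely for illustration, the real accounting would of course live in section consensus):

def rebalance(section, neighbors, reward):
    # If the section cannot afford the reward, ask the N/3 neighbors with the
    # highest unfarmed balances to each send over 1/4 of their unfarmed balance.
    if section['unfarmed'] >= reward:
        return  # can pay, nothing to do
    donor_count = max(1, len(neighbors) // 3)
    donors = sorted(neighbors, key=lambda nb: nb['unfarmed'], reverse=True)[:donor_count]
    for donor in donors:
        amount = donor['unfarmed'] / 4
        donor['unfarmed'] -= amount
        section['unfarmed'] += amount

Since PUT payments continuously top up each section's unfarmed balance, this should indeed only fire rarely.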


The farming reward R = gxmd(d + u + 1) can be tweaked in endless ways to achieve the desired properties; this is a first draft.
The same goes for C = 2R, where the multiplier is for now just an arbitrary value. Most consideration went into devising R.

Breakdown

Unfarmed coins percent u

The fewer there are left, the lower R should be.

Data stored percent d

The higher the percentage filled, the higher R should be.

Sections s

The more sections there are, the more valuable the network (and thus its currency) is, and the lower R becomes. Due to the split and merge rules (which bound section size), the number of sections is a good indicator of network size.

Neighbour median node count n

Approximates node count per section (it enters R through x = 1/n). R increases as the approximated number of nodes per section decreases.

Reward R

The reward unit for a rewardable operation. Currently the only such operation is a GET, and one GET is rewarded with R safecoins.

Store cost C

Every PUT operation will return 2R to the network, and thus increase unfarmed coin u and data stored d.

Motivation

Safecoin value

This reward algo assumes that network size is an indicator of Safecoin value. When few machines are running, we assume there is little economic margin associated with it. Many machines running would indicate that it is very attractive to run a machine, and thus we assume that it is lucrative, with high margins.
Additionally, we assume that a larger network is an indicator of increased adoption and breakthrough of the technology, and evidence of its larger usefulness and value. We don't need to know in what way; the sheer increase in number indicates that it is more useful in any number of ways, collectively making it more valuable to society, and thus its currency is more valuable.

This metric is essential to include in the farming algorithm so as to properly balance the payout with regard to the inherent value of the currency, and also to maintain the value of unfarmed coins. (This is the other part of the network keeping a buffer of value for various dynamics with regard to members and performance, instead of depleting it and losing that influence.)

Median section node count

The variable x = 1/n acts as a stimulant of security; through it we can motivate an inflow of new nodes when nodes for some reason are leaving the sections. When sections decrease in size (i.e. n decreases and x increases), their security also decreases; it is therefore motivated to correlate the reward R inversely with n, so as to stimulate new nodes to join.

Weights of u and d

It seems likely that u would stabilize around some value.

That might be a reason to give u a higher weight, so that its smaller fluctuations can still be reflected in the farming reward (and store cost).
Another way to look at it is that since d would probably at times see larger deviations from any stable point than u does, it is an influence on the network that needs a larger compensatory capacity, hence a greater weight on d.

After trying both, I for now settled on d.

Store cost

Having the store cost use a constant multiplier of 2 is arbitrary for the moment. Much more consideration could be put into the consequences of this, as well as of any other constant or variable used instead.

Results

The behavior of the algorithm

R is less responsive to the variation of u than that of d.
The influence of s is large at very low values, and small at very large values. This translates into a maximum of R = 6.654 safecoins when section count s = 1, node count n = 1, d and u = 1 (node is full and no coin has been farmed).

Day of launch

Let’s see what vaults could perhaps expect to earn at day of launch with this function.
Initial coins of ~10% gives u = 0.9.
We expect the number of vaults to be roughly the number of forum members, which is now 5k.
That gives s = 50, as we set n = 100.
Since it's early we set d = 0.2.
This gives R ≈ 0.00064 (640,000 nanosafes).
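Spelling out the arithmetic behind that figure: p = 0.2 * 50 + 1 = 11, g = 3.2 * ln(11) ≈ 7.67, x = 1/100, m = 1/50, so R ≈ 7.67 * 0.01 * 0.02 * 0.2 * (0.2 + 0.9 + 1) ≈ 0.00064.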

With safecoin at $1, and 1 TB of storage filled to 20% with 1 MB chunks, perhaps with this GET distribution:

10% of stored chunks get 1 GET / day
1% get 10 GETs / day
0.1% get 100 GETs / day

For 1 TB that is 300k GETs per day. With 20% filled, that is 0.2 * 300k = 60k GETs per day.
This is about 38.2 safecoins per day.
The reason to give such a high reward initially is to maximize early growth of the network, which is key to ensuring it is secure. If safecoin is at $10 at that time, it means an even higher inflow of people starting vaults.

Say that the first week sees a doubling of vaults every day (perhaps optimistic, but not entirely impossible), going from 5,000 to 640,000 vaults. That gives, for a 1 TB vault at 20% filled:

Day    Safecoins per day               Total
1             38.2                      38.2
2             24.6                      62.8
3             15                        77.8
4              9                        86.8
5              5.1                      91.9 
6              2.88                     94.78
7              1.62                     96.4 

(Now, in reality these numbers would be affected by u decreasing as coins are farmed, and d decreasing as new storage is added, but we simplify and assume they stay constant. It gives a rough picture.)
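For anyone who wants to reproduce the table, here is a quick sketch under exactly that simplification (d, u, n and the GET volume held constant, the section count doubling along with the vaults); the numbers land close to the figures above, the small differences being rounding:

import math

def reward(d, u, s, n):
    # R = g*x*m*d*(d + u + 1) from the definitions at the top
    g = 3.2 * math.log(d * s + 1)
    return g * (1 / n) * (1 / s) * d * (d + u + 1)

d, u, n = 0.2, 0.9, 100      # held constant (the simplification noted above)
gets_per_day = 60_000        # 1 TB of 1 MB chunks, 20% filled, GET distribution above
s, total = 50, 0.0

for day in range(1, 8):
    earned = gets_per_day * reward(d, u, s, n)
    total += earned
    print(f"Day {day}: {earned:5.2f} safecoins, total {total:6.2f}")
    s *= 2                   # vaults (and thus sections) double each day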

Even if a safecoin is at $10 at the time of launch, I think it is reasonable, as it would give an absolutely insane initial growth of new nodes.
The first week would then give $964 for a TB of storage provided (in reality probably less).
This seems like a very good motivator for network growth.

World wide adoption

With 100 billion nodes, R = 1 nanosafe is reached when d = 0.7 (70% filled) and u = 0.5 (50% unfarmed).
(Expecting worldwide adoption to be closer to 100 billion nodes than 10 billion, when considering IoT etc.)
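As a quick check in the model's own terms: 100 billion nodes at n = 100 means s = 10^9 sections, so p = 0.7 * 10^9 + 1, g = 3.2 * ln(p) ≈ 65, and R ≈ 65 * (1/100) * (1/10^9) * 0.7 * (0.7 + 0.5 + 1) ≈ 1 * 10^-9, i.e. one nanosafe.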

When we are storing all the data in the world, we probably have a much less intensive access pattern per vault, so let's update it to:

1% of stored chunks get 1 GET / day
0.1% get 10 GETs / day
0.01% get 100 GETs / day

For 1TB that is 30k GETs per day. With 70% filled, that is 0.7 * 30k = 21k GETs per day.

@Sotros25 made an estimation of some $4k per safecoin at absolute world dominance (link).
That would give 0.000000001 * 21000 * 4000 = $0.084 per day.

Around 10 cents per day per TB. Not really exciting.

But then on the other hand storage capacity might be much better, so a TB then could be the equivalent of a GB today. Providing the then-equivalent of today's 1 TB (i.e. 1,000 TB) would then give around $84 per day.
These are mere speculations. But we are not many orders of magnitude off.
The question is how close we need to be, and what will compensate for variations from this. Would that happen already with the way this works, or is something additional (or totally different) needed?

Problems

  • If we let the nanosafe be the smallest unit, we would need to make rewards either probabilistic or accumulative when at 100 bn nodes; approximately 1/10th of GETs would then actually pay out 1 nanosafe (see the sketch after this list).
  • Many speculations lead to the desired numbers. What compensatory effects will we see in real life? Will the economy of the farming work out with this, or is something additional/different needed?
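One way the probabilistic variant could look - just a sketch of the idea, not a worked-out design: pay the whole nanosafes of R, and pay one extra nanosafe with probability equal to the fractional remainder, so the expected payout per GET still equals R.

import random

NANOSAFE = 1e-9   # assuming nanosafe as the smallest unit

def probabilistic_payout(reward_in_safecoin):
    # Whole nanosafes are always paid; the fractional remainder becomes the
    # probability of one extra nanosafe, so the expected payout equals R.
    nanos = reward_in_safecoin / NANOSAFE
    whole = int(nanos)
    if random.random() < nanos - whole:
        whole += 1
    return whole  # payout in nanosafes

With R at, say, 0.1 nanosafe, roughly one GET in ten would actually pay out 1 nanosafe.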

Thoughts

  • Can R be too small as the network grows? With a network size of 100 billion nodes, 50% unfarmed coin u remaining and 70% used data storage d, we can still reward 1 nanosafe per GET. Would we introduce probabilistic rewarding whenever R < 1 nanosafe?
  • Could we simplify by removing node count and only using section count? Could the decrease of rewards as sections split encourage unwanted stalling of a split event? Would it be too coarse-grained to properly respond to network health?
  • How should store cost relate to farming reward?
21 Likes

Based on your system I made an ods file that anyone can play with!!! Have fun and give us an updated file with better or new estimates!

https://1drv.ms/x/s!ArEPuNx6dq7Y5RW-5TViWQbbX2nH (edit 1: updated to have doubling of the initial vaults until day 4)

Edit: I am no expert, so I just left the default number formatting, which means many variables have values like 1,543345E-5.

If anyone knows how to fix that, please let me know!

15 Likes

Hey @SmoothOperatorGR, that’s very cool! Thanks!

I would suggest not continuing the doubling rate of vaults beyond 7 days though, perhaps tapering off the curve a bit before that. I mean, I'm totally open to ideas about plausible growth rates; I'm just at the moment thinking that doubling every day only happens for the first few days or so.

Also, extending too many days with constant d and u might give a bit unrealistic results, since in reality all variables are affected by the changes in the network. So probably best to limit the days simulated with that simplified calculation.

About the numbers, don’t worry about the format, it’s perfectly readable IMO :smile: (I’m sorry if you already know this and I misinterpreted, but just in case: the notation is a shorthand to cut out the zeros, so 1,543345E-5 means 0.00001543345)

If you want to fix it you’d probably have to increase column width and perhaps also increase number of decimals shown. I don’t think it would improve readability though.

I can have a look at it when I’m at my desktop :slightly_smiling_face:

Thanks again and nice initiative!

6 Likes

@oetyng This is really interesting and great work! I was hoping someone would start a deep dive into this area.

@SmoothOperatorGR thanks for the ods!

Looking forward to playing with the numbers.

Cheers

4 Likes

updated as per request

3 Likes

I would like us to collaborate on making a spreadsheet that is as close as possible to projected reality. Please make a projection of how things will go every day from launch, and also of what the numbers would be at full estimated capacity.

It would also be nice to make different versions based on different outcome logic!

Edit 1: I would also like you to further theorize about all of the variables and how they are connected!

Thank you for your great post! I was looking for something like this and it appeared at just the right moment!

6 Likes

If we are wasteful and demand decs (decimal divisibility), 128-bit resolution of the entire safecoin supply allows for 1 micro-zepto SC transactions… :sunglasses:

P.s. I still like divs.

7 Likes

Yeah, why not have the possibility of the biggest number and the smallest division? Maybe that way some things can be calculated in the smallest amounts, like 0.000000000001 Wh costing 0.000000000000001 safecoin, so computers can calculate things in much more detail!

4 Likes

This has been a long-running discussion and there was a poll recently on the topic. I suspect that for now, test-safecoin will be nano, and during the testing phase we will get a better idea of whether it is necessary to go for full 128-bit or if 64-bit is good enough. I don’t know if there is much, if any, real trade-off though (aside from JavaScript not being able to cope with it). So it is quite possible that real safecoin may end up as 128-bit.

5 Likes

This is awesome! Really cool to see people getting into the details!!

Also worth adding: the number of new vaults depends on the disallow rule (which is not specified in this model), so there may be times when the network can’t grow very fast because it doesn’t need to. Maybe doubling the vaults in one day won’t be possible.

v data percent stored of available

RFC-0057 aims for between 0% and 50% of vaults to be full (which is not the same as data percent stored of available, but is similar). We can use this to decide roughly what valid values for v are, which also flows into vault count and section count, because the network may not accept new vaults unless they are needed. This idea is for RFC-0057 so maybe it doesn’t apply here; maybe the disallow rule will be different in this model. But the relation between total vaults and full vaults is worth considering.

GETs per day

How about PUTs per day as well? This will affect the recycling, and it affects percent stored and maybe also total vaults. It makes things more complex, but it seems necessary in order to know whether the overall number of coins is growing or shrinking (i.e. not just what the accumulated reward is).

6 Likes

2.5 exabytes of data created each day… so 2.5 trillion 1 MB PUTs… yikes.

I suppose dedup will reduce that by about 80% since most people just repeat what they hear/see from someone else. :grin:

"Over the last two years alone 90 percent of the data in the world was generated. "

That’s an interesting stat and growth rate for SAFE. What does the future look like when every two years, 90% of all data ever created, was just created?

13 Likes

lol, yeah, for years we have been looking for that no-fail HODL coin that would be assured to just keep doubling in value. Safecoin might end up being just that, if demand for data storage doubles that fast. I would have said “yeah right, there is no such thing as such a lucrative low-risk investment” otherwise. Although you gotta wonder how people will price that in. Will they be willing to pay 2x today’s value because they know it’s only 4 years until it’s 4x today’s actual use-case value? Should be very interesting (and profitable!)

3 Likes

Going off-topic here … but this is one of the reasons I’ve been a hodler for so many years. I would point people to the “Mises regression theorem” as not an absolute, but more of a sliding scale … with coins that have no inherent side use-cases on one end and those that have multiple inherent use-cases on the other. The more inherent use-cases, the more the coin will favor adoption as a form of money.

This is aside from each coin’s qualities as a form of money. So for instance, Ethereum must be used to create contracts on the Ethereum network … so it has a built-in inherent use case aside from use as money.

IMO, Safecoin not only has the best qualities as a form of money relative to other crypto’s but also will have the most number of inherent side use-cases.

All told I suspect these side use-cases will strongly bias the market (long term) in favor of the seller and thus make it very dangerous for traders to short or pump and dump safecoin.

No matter what it will be a great experiment.

I won’t reply further here on this as it’s off-topic.

9 Likes

Absolutely, I’ll take up your offer on this.

Limitations
We have to realize the limitations of every given simulation.
We have a mathematical model, which in itself always has a certain degree of error.
Then we build a simulation with it, and we must understand the ramifications of it.

For example, the first one we used, which you put into the .ods file, in practice took a snapshot of one moment and then copied it over to subsequent days, without feeding the represented changes back into the system. You could generally say about that: only for a very short simulated time span, and only for a limited range of the other parameter values, will this give us meaningful information - and that is also only within a limited range of the possible information the model could unveil.

That limited range was, for us, a basic sanity check: the numbers didn’t go completely off the charts for some chosen values. So that is a good first step. :slight_smile:

The more meaningful information we want to extract from it, the more effort in building the simulation it needs.

Orientation
First, I would aim at verifying the usefulness of the model by incrementally refining the simulation, so as to prove a specific claim about the model system with regard to the modeled system.

So this here is about “dropping the needle on the map” and working outwards from there.
The first thing would be to try to understand what the map is about: what are we even looking to achieve?
Then we must see whether where we landed on the map seems to be anywhere close to that. “Close” will also be quite an odd estimate, as with only one location on the map we don’t know the scale. So dropping another needle gives us a relation between those different locations (an analogy for models).
(Now, the previous farming reward systems actually do give us a couple of locations.)

If at any time the simulation shows that the model is lacking in some fundamental aspect, then it might be time to switch to another.
As an example:
The conclusion I have drawn from simulations of the RFC-0057 model (among them the experiments performed by @mav) is that it is lacking in at least one fundamental aspect. This just confirms what is already known (it was meant to be a temporary model, to be used for the test networks) - so it was completely expected.

Okay, so all that is a bunch of abstract talk. But we will get concrete :slight_smile:


So, I think you are talking about either tweaking the model, or choosing a new one, and so those things absolutely are both part of what I want to do - it ties in very well with what I described above.

Definitely, this is a big part of building the model. We must understand the rationale for every variable: why it is represented, to what extent we achieve the desired property of the model by including it - and what unwanted side effects it generates.

Thank you for engaging! This is one of my favorite areas to be working with, so I’m very happy to see that it is also appreciated and useful.

4 Likes

Yes, definitely. I think that when there is a need to use the (any) farming reward as a way to derive conclusions about the network evolution, this has to be accounted for, for the conclusion to be of value.
And also, if we include assumptions about the network evolution in the design of a farming reward, then we must be aware of this simplification, so that we know the ramifications of the output.

Just a small thing: I corrected the .ods file; it had the variables off by one on the rows. So it’s d = data percent stored of available; v would be the vault count in that doc.

There have been different ideas about the disallow rule and whether it is desirable or correctly designed.
I think it was @tfa who argued for changing it, on the grounds that the assumptions about its Sybil attack prevention efficiency were not correct.
I’m not sure either of those ideas was fully refuted.
My thinking is that any disallow rule that might exist must be taken into account in more advanced simulations.
However, to first get some basic orientation for a farming reward model, I think we can leave it out for the moment and try to chisel out the fundamentals of a system that is as simple and accurate as possible with regard to achieving its goals. That’s of course an iterative process, so I don’t mean to say we will get that answered before thinking about the disallow rule, but I think there might be many other things in any given model that must be roughly oriented first.

The “full vaults” part is definitely an important and interesting thing which I think we will get into. Do we want to incentivise full vaults, or some other level of storage used? How do we do that?
In RFC-0057 it is more profitable not to be full; I don’t know yet what consequences that might have, if any, or what we want to achieve by that design.

Definitely, next steps would want this represented.

2 Likes

Farming reward variables

Explanations and questions

Here I go through the variables, and reason about all of them.
I will probably be overly explanatory, but I prefer that over not being inclusive. I’m hoping that anyone can pitch in with whatever thoughts they have around any part of this.


Variables

d = data percent stored of available
u = unfarmed coins
s = sections count
m = 1 / s
n = neighbor sections' median vault count [1, 200]
x = 1 / n
p = ds + 1
g = ln(p^3.2)

Farming reward

R = gxmd(d + u + 1)

Store cost

C = 2R


d = data percent stored of available

This indicates the storage supply/demand on the network, by showing what percentage of the provided storage is used.
When we move closer to 1 (100%), demand is increasing faster than supply; when we move closer to 0, supply is increasing faster than demand.
100% filled could be true for 1 node and 1 GB, but also for 100 nodes with 100 TB each. So this variable does not take actual storage size into account.

The idea of including this value is that we want the network to adjust the amount of safecoin paid based on the supply/demand of storage, thereby stimulating already connected nodes to add or remove storage, as well as nodes to join or leave the network.
This variable presumably gives the network a certain capability of assigning value to safecoin based on scarcity of storage. The network adjusts payment for a (relatively) fixed amount of data ((0, 1] MB) based on market forces. The bounded range of possible data sizes covered by a PUT is what makes this a rough valuation of storage, in terms of safecoin per unit of storage.
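As a small illustration of the intent (how exactly the network would aggregate this is left open here; the pair representation and the figures are made up):

def data_percent_stored(vaults):
    # vaults: list of (used_bytes, offered_bytes) pairs -- purely illustrative.
    # d = used / offered over the vaults considered.
    used = sum(u for u, _ in vaults)
    offered = sum(o for _, o in vaults)
    return used / offered

# One vault offering 1 GB, fully used, gives d = 1.0 ...
print(data_percent_stored([(1e9, 1e9)]))
# ... and so do 100 vaults each offering 100 TB, fully used -- actual storage
# size is not captured, as noted above.
print(data_percent_stored([(1e14, 1e14)] * 100))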

Questions:

  • Do we properly capture supply and demand by this variable, and using it the way we do?
  • Is it desirable that the network reflects market estimation of storage value in the reward?
  • If so, is it satisfyingly reflected, or do we want to do better?
  • Are there any reasons to consider the assumptions above to be wrong?
  • Is it necessary to include this variable in the reward?
  • What do we gain by including it?
  • What do we lose by not including it?
  • Are there any unintended consequences of including it?

u = unfarmed coins

The theoretical max of ~4.3 bn coins will exist as either farmed or not yet farmed - i.e. unfarmed. (We include the ICO coins in the farmed category - they were pre-farmed.)
As this model employs recycling, a healthy network that is being actively used would never see all coins being farmed. The number of unfarmed coins could even increase at times.
This is due to the fact that we recycle the coins spent on PUTs (uploading data to the network), transferring them from farmed back to unfarmed.
By including this variable the way we do, we represent scarcity of safecoin. The reward is decreased as the level of unfarmed coins decreases - i.e. as scarcity grows.

The idea of adjusting the reward based on scarcity is partly to dampen any trend towards depletion, and it also prevents the unfarmed coin from actually depleting.
It also seems to reflect the market valuation of the coin, in the sense that if there is a higher consumption of data than of storage, it indicates that users value the coin too highly to be prepared to pay the store cost - hence a decline in the recycling of coins.
Since store cost C = 2R, if we see a downwards effect on R when the amount of unfarmed coins decreases (less is recycled while access to data is still in demand, i.e. new rewards are constantly paid from the unfarmed balance), it means we will reach an equilibrium where C is low enough for users to be prepared to spend safecoin to upload data.
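To illustrate that equilibrium argument, here is a toy feedback loop (Python). The network shape, volumes and time step are arbitrary, and the demand response - users spending a roughly fixed safecoin budget on PUTs, so that a lower C simply buys more PUTs - is invented purely for illustration; any downward-sloping demand curve gives the same qualitative picture:

import math

TOTAL_COINS = 4.3e9

def reward(d, u, s, n):
    # R = g*x*m*d*(d + u + 1), as defined above
    g = 3.2 * math.log(d * s + 1)
    return g * (1 / n) * (1 / s) * d * (d + u + 1)

d, s, n = 0.5, 100_000, 100    # network shape held fixed to isolate the u feedback
u = 0.5
GETS_PER_STEP = 1e12           # arbitrary toy GET volume per time step
PUT_BUDGET = 3.0e6             # invented: a fixed safecoin budget spent on PUTs per step

for step in range(10_001):
    r = reward(d, u, s, n)
    c = 2 * r
    outflow = GETS_PER_STEP * r          # rewards paid out of the unfarmed pool
    inflow = PUT_BUDGET                  # (PUT_BUDGET / c) PUTs, each recycling c
    u += (inflow - outflow) / TOTAL_COINS
    if step % 2500 == 0:
        print(f"step {step:5d}: u = {u:.3f}, R = {r:.2e}, C = {c:.2e}")

Starting from u = 0.5, u (and with it R and C) drifts down until the recycled inflow matches the reward outflow, and then sits there - the pool never empties.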

Questions:

  • Is there any reason we would not want to adjust reward based on scarcity of the coin?
  • Is it necessary to correlate R to scarcity of coin, when we have recycling of coin?
  • What positive/negative effects do correlation between R and scarcity of coin give when combined with recycling, and when recycling is not used?
  • Are there any reasons to consider the assumptions above to be wrong?
  • Is it necessary to include this variable in the reward?
  • What do we gain by including it?
  • What do we lose by not including it?
  • Are there any unintended consequences of including it?

s = sections count

The number of sections in the network.
Due to the bounded range of possible members in a section (given by the split & merge rules), it is a fairly good indicator of network size in terms of number of nodes.
We assume that network size is an indicator of safecoin value by this logic:
When few vaults are running, we assume there is little economic margin associated with it. Many machines running would indicate that it is very attractive to run a machine, and thus we assume that it is lucrative, with high margins.
Additionally, we assume that a larger network is an indicator of increased adoption and breakthrough of the technology, and evidence of its larger usefulness and value. We don’t need to know in what way; the sheer increase in number indicates that it is more useful in any number of ways, collectively making it more valuable to society, and thus any currency which is required for utilization of the network is more valuable.

A higher section count is desired as it supposedly increases performance and security.

  • More sections means more elders, so better distribution of consensus and workload (assuming the number of elders per section is constant, not proportional).
  • More sections means fewer coins reside in each section, so each is safer from attack and there is less need to expand.
  • More sections means more total age (more events to age from), which affects security and reward distribution.

As the reward increases when the section count drops, it also increases the motivation for new nodes to join, and therefore for the section count to increase.

The idea is to reflect market valuation of the entire network in the reward, and stimulate security.

Questions:

  • Are there any reasons to consider the assumptions above to be wrong?
  • Is it necessary to include this variable in the reward?
  • What do we gain by including it?
  • What do we lose by not including it?
  • Are there any unintended consequences of including it?

n = neighbor sections’ median node count

This is the median node count of a section's neighbor sections. It is thought to be a good enough estimate of the median node count in the entire network. The lower n is, the higher the reward R, thus increasing the motivation for an inflow of new nodes when nodes for some reason are leaving the sections. When sections decrease in size (i.e. their node count decreases), their security also decreases; it is therefore motivated to correlate the reward R inversely with n, so as to stimulate new nodes to join.
If the node count drops drastically and uniformly across neighbours, so that they begin to merge at the same time, this effect would disappear for a while, as the median node count is again high after the merges. If the node count were to continue to decrease after the merges, it would kick in again.
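For example (numbers invented): if a section's neighbours hold 80, 95, 100, 120 and 140 nodes, then n = 100 and x = 0.01; if churn brings them down to 40, 50, 55, 60 and 70, then n = 55 and x ≈ 0.018, so all else being equal R rises by roughly a factor of 1.8.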

Questions:

  • Are there any reasons to consider the assumptions above to be wrong?
  • Is it necessary to include this variable in the reward?
  • What do we gain by including it?
  • What do we lose by not including it?
  • Are there any unintended consequences of including it?

m = 1 / s and x = 1 / n

Both of these are just a different representation of s and n to make the definition of R visually simpler.


p = ds + 1 and g = ln(p^3.2)

This is just a way to manipulate the curve to look in a certain way (specifically: to get R = 1 nanosafe when - according to our guesses now - parameters might indicate world dominance).
It is based on the assumptions that it is desirable to have a specific R at a specific network size, and that with a bigger size we want R to be lower. Those assumptions have not yet been proven, I think.
Additionally, it is tied to the assumption that about 100 billion vaults would indicate absolute world dominance of this technology.
Estimates of around 75 billion IoT devices in 2025 (IoT devices installed base worldwide 2015-2025 | Statista) could indicate that 100 billion vaults is an insufficient long-term estimate of what world dominance looks like.
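For what it's worth, the 3.2 looks like it falls straight out of that calibration target. A small sketch (Python) solving for the exponent k in g = ln(p^k) = k * ln(p) such that R = 1 nanosafe at the assumed world-dominance point (s = 10^9 sections at n = 100, d = 0.7, u = 0.5):

import math

s, n, d, u = 1e9, 100, 0.7, 0.5   # assumed world-dominance point from the text
target_R = 1e-9                   # one nanosafe per GET

# R = k * ln(d*s + 1) * (1/n) * (1/s) * d * (d + u + 1)  =>  solve for k
k = target_R / (math.log(d * s + 1) * (1 / n) * (1 / s) * d * (d + u + 1))
print(k)   # ~3.19, i.e. the 3.2 used in g = ln(p^3.2)

So if the target point moves (say to a larger node count, per the IoT estimates above), this exponent is the knob that would have to move with it.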

Questions:

  • Is it desired to avoid such constants and hard coded values?
  • Is it possible to avoid it?
  • Can we make it more dynamic?
  • Is it good enough to rely on network upgrades for adjustments to these?

Summary

It seems that several of the variables try to capture market valuation in one form or another.

  • Are they in fact reflecting different aspects of valuations, or are they in the end overlapping?
  • Do we need to capture it in all those ways?
  • Does this combination of variables give additional value to the model?
  • Would a subset of these give more, same or less value to the model?
  • Are there any other aspects of market valuation we could or want to capture?

Also, we are including the concept of security, and try to stimulate it as well with the reward.

  • Do we want to stimulate security with the reward?
  • Does current use of variables do a sufficient / insufficient job at this?
  • Are there any other aspects of security we could or want to capture?

And finally:

  • Are there any other influences we would like to include in the calculation of reward and store cost?
6 Likes

I wonder if it will react properly to the fiat value of safecoin quickly dropping or rising by huge amounts, unrelated to storage or network size but instead driven by factors like the bitcoin price.

1 Like

Market valuation

It seems like we have 3 different market valuations influencing the reward.

  • d - Storage (perpetual, encrypted, backed up)
  • u - Safecoin
  • s - SAFENetwork

Scenario: Fiat value of safecoin sharply fluctuating

If the fiat value of safecoin were, for example, to rise quickly - due to the bitcoin price, let's say - one resulting behavior I think we'd expect to see is that PUTs would quickly drop, since they would be considered too expensive in fiat terms.
Another effect is that advanced farmers waiting for these market imbalances - arbitrageurs - might introduce a lot of storage capacity and/or new nodes to take advantage.
(NB 1: Introducing new storage capacity in an already running node is, I think, currently not possible without halving its age; we assume here that it can be done without age loss, since that seems to be desired.)

At any given moment, there is a constant flow of GETs and PUTs, which together take and give back to the unfarmed balance. If the PUTs suddenly drop, we will start seeing u decrease.
If storage capacity and / or new nodes are suddenly increasing, while PUTs have dropped sharply, we get a multiplied effect on price.
(NB 2: Adding new nodes while PUTs are dropping, and storage capacity is increasing, might not be possible, depending on disallow rules.)

Effects

At minimum, we would see the combined effect of d decreasing and u decreasing.
How fast depends on how agile the arbitrageurs are, and also on how responsive user PUT behavior is to the fiat value of safecoin. If we assume infinite idle arbitrage capacity and immediate user responsiveness, I would guess the reflection of the fiat price in R would be virtually instant.
In reality, PUTs resulting from chatting, cat picture uploads and similar consumer product usage could be quite slow to respond. Appliances and automation might be better suited to adjust storage consumption.
(If a disallow rule were not in place, we would additionally have the effect of new nodes joining, which would see R decrease both through the node count n in sections increasing and through the section count s increasing.)
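To get a first rough feel for the magnitudes, a small sketch (Python) of R relative to a baseline as d and u fall together; the network shape and the step sizes are made up, and this is exactly the kind of scenario that deserves a proper simulation:

import math

def reward(d, u, s, n):
    # R = g*x*m*d*(d + u + 1), as defined above
    g = 3.2 * math.log(d * s + 1)
    return g * (1 / n) * (1 / s) * d * (d + u + 1)

s, n = 100_000, 100                 # held fixed, i.e. a disallow rule keeps new nodes out
baseline = reward(0.5, 0.5, s, n)

# Arbitrageurs add capacity (d falls) while PUTs dry up (u falls):
for d, u in [(0.5, 0.50), (0.45, 0.48), (0.40, 0.46), (0.35, 0.44), (0.30, 0.42)]:
    print(f"d = {d:.2f}, u = {u:.2f}:  R/R0 = {reward(d, u, s, n) / baseline:.2f}")

Even these modest moves cut R (and thus C) roughly in half, which is the direction we want if the fiat price has risen.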

Conclusions

If this holds true, I would argue that decreasing any friction for these two would help the responsiveness of R with regard to the fiat value of safecoin.
For d it would be done by making it easy to increase the storage capacity of a node without losing age (i.e. no restart needed).
For u it would be harder, but something that could perhaps help is metered usage, like with electricity, so that apps schedule their usage to periods of lower cost.

If indeed there is some truth to the idea that the current disallow rule is not an efficient Sybil attack preventer (discussion between @tfa and @maidsafe - correct me if I'm wrong), then perhaps another advantage of removing it could be to increase R's responsiveness to fluctuations in the fiat valuation of safecoin.

I think overall, this would be an interesting scenario to simulate for various parameters, to get a better feeling for how responsive R would be.

3 Likes

A hypothetical: could this be more than 100%? E.g. if a datacenter goes offline and suddenly redundancy is not enough, the network could be considered ‘more than full’, i.e. even if more storage came online the network would still be 100% full.

Of course, ‘as measured’ it would be 100% full and not possible to go higher, but I wanted to put the idea out there for this stressed condition and see what people think of it with regard to the model in this topic.

One thing (maybe it’s just me) is to be quite clear about the difference between ‘margin’ and ‘value’. If I understand correctly, margin is like ‘profit’ or ‘excess’, whereas value is the ‘fundamental’ or ‘inherent’ benefit underlying it all. Does that sound right?

Also, by ‘few vaults are running’ do you mean the total vaults in the network? Does ‘few’ mean ‘compared to zero vaults’ or ‘compared to the number of vaults five days ago’? I’m not so clear on what ‘few’ means.

2 Likes

I don’t see why it wouldn’t be possible… unless vault sizes are standardized and hard-coded.

2 Likes