RFC 57: Safecoin Revised

Sorry, poor choice of term. No, I was not referring to the PAC proposal with “bidding prices”, only to the market change in relative prices.

Fully agree. It does also point out that the interaction of a few simple rules together has the potential to allow a complex system to emerge - just that we cannot know what properties that emergent system will have by just reasoning about the simple rules that it is based on. We have to “go up a level” and actually analyze the complex system independently to understand what properties it has.

I am just catching up on reading after a long overdue holiday and have yet to fully understand the PAC idea (I have not read that whole thread yet), but from what I have quickly understood so far it feels like the right direction - impressive work! If I am lucky enough to be able to contribute any ideas I’ll post them over on the PAC thread.

One takeaway is that it is extremely difficult to design the emergent complex behavior we desire in the final Safe Network economy from simple (or not so simple) rules, even (or especially) when we think they each individually make intuitive sense. I suspect we will need a few trial-and-error iterations over successive test net economies, with almost real value on the table for participants to make the emergent complex system “real” (perhaps a Safecoin lottery, or an open-ended “this test net Safecoin might become the real net Safecoin” promise). We would then study the emergent behavior closely while simulating transitions from a young, fast-growing economy through to middle and mature age (while simultaneously moving in and out of depressed, overheated, and manipulated-by-well-resourced-attackers markets) to know for sure whether our emergent complex Safe Network economy can transition gracefully between various stable attractors.

3 Likes

But this misses the question: what is ‘optimal’? Is it the most efficient vaults? The most diverse userbase? The lowest latency? The highest bandwidth? The most abundant electricity supply? The maximum rate of content creation? Something we don’t even know about yet?

There is no ‘optimum’ result for the network. Even a statement as simple as “a network of 1000 vaults is less optimal than a network of 2000 vaults” (i.e. optimising for pure growth of node population) is not a useful way to frame the topic.

I don’t see how ‘the network’ as an actor can define an optimum - whether that be set by the developers, config managers, farmers, users, or an AI - I just don’t see how the network can be an isolated actor with some Platonic optimum.

What is the ‘optimum’ price for CNY:USD? Or EUR:GBP? Is that a fair analogy?

4 Likes

All this talk about complexity reminded me of Gerd Gigerenzer (videos attached), whose research is about acting in the presence of complexity and uncertainty. When our knowledge is limited for either or both of those reasons, complex solutions can’t work because they require accurate knowledge of the state, which is unattainable. Instead, we need to develop simple heuristics that can work with limited and uncertain information.

To apply the above to safecoin economics: instead of understanding the whole situation (a futile attempt, sorry) or trying to foresee the future (obviously also impossible), we need a couple of simple rules that will bound the state within well-defined limits.

I would start with these two that I think I have already mentioned in this thread:

  1. price should approach infinity as free supply approaches zero
  2. reward should approach zero as free supply approaches 100%

Anything that can’t reliably achieve these will fail so there’s little use talking about more subtle details until we have a robust (trivially correct or easily provable) solution.
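As a rough sketch (not a proposal for the exact curves - the 1/x and linear shapes, the function names, and the `base` parameter are my own assumptions), the two bounds could look like:

```python
def storecost(free_fraction, base=1.0):
    """Rule 1: price grows without bound as free supply approaches zero.

    free_fraction: fraction of supply still free, in (0, 1].
    The 1/x shape is only illustrative; any curve with the same limit works.
    """
    if free_fraction <= 0:
        raise ValueError("no free supply left; price is unbounded")
    return base / free_fraction

def reward(free_fraction, base=1.0):
    """Rule 2: reward tends to zero as free supply approaches 100%."""
    return base * (1.0 - free_fraction)

# Sanity checks on the limit behavior:
assert storecost(0.01) > storecost(0.5) > storecost(0.99)
assert reward(0.99) < reward(0.5) < reward(0.01)
```

The point is only that each rule pins down a limit, not a shape; any monotone curve with the same asymptotes satisfies the bound.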


A 20-minute TED talk:

A longer talk at the Max Planck Institute:

The slides for the above: https://ethz.ch/content/dam/ethz/special-interest/gess/chair-of-sociology-dam/documents/icsd2013/0_3_gigerenzer.pdf

Another one (very quiet audio):

15 Likes

I intuitively agree with this.

To reflect on it in simple terms, humans have relatively simple drivers; the need to eat, drink, sleep, breathe, etc. However, we may desire to do much more, some of it a common desire, some of it unique. Yet we all function alongside one another, towards an implicit common goal (to stay alive, at least?), while trying to attain our own goals.

6 Likes

Well, if we’re open to interpretation here :wink: then what he is saying makes perfect sense as a basis for why we shouldn’t try to master the complex system, but leave it to the adaptive toolboxes (Gigerenzer’s main concept) of all network participants.

As you say: since our knowledge is limited, complex solutions can’t work. The least complex of them all is to tap an outside source for the information. (The security necessary for that is what people here mistake for a complication of the economy by assessing participants. But those are separate things: economy and security. The security is the same sort of security, at the same level of complexity, that we have in any other part of the system; it doesn’t add to the overall complexity of the system.) So, when comparing the complexity of economies, participant assessment is simpler than designing a universal strategy (one that actually does what it is supposed to do) for the network to execute.

You and I can interpret what Gigerenzer is saying as supporting both of the views. The reason for that is that it is a very high-level and general theory by a professor of psychology; it can be applied on many levels, which (as you can see above) can even support opposing views.

In contrast, the work of Steve Keen is very specifically about the exact thing we are talking about here. Even though Steve Keen’s theories are by no means some universal truth (there are many others out there that we can and should refer to), I would say it’s a lot more helpful and useful to discuss the specifics here and relate them to relevant studies and science on the specific topic, than to lean on some general psychology, which can be applied very arbitrarily to the problem.

Not that general psychology doesn’t have its place - it definitely does (and I can support my arguments with Gigerenzer’s theories, as you see here) - but to entirely replace the former with the latter doesn’t seem like a meticulously scientific approach.

Yep, exactly in line with what we’ve been saying. :+1:

Again, exactly what we’ve been saying. :+1:

Well, you say so, confidently. But so far, the discussion has been about exactly these simple rules you suggest.

A couple of posts above yours there is even a citation of a very specific work on the variability in the mechanisms that drive the outcome when trying to govern this. The ideas of Gigerenzer apply here as well: it would be more suitable to allow for the adaptive toolbox of entities (in this case network participants) instead of a universal strategy - which the above represents.

As Gigerenzer says:

The basic idea of the adaptive toolbox is that different domains of thought require different specialized cognitive mechanisms instead of one universal strategy

(My emphasis)


I would say there is no controversy at all - no indications whatsoever from anyone or anything I have ever read on the subject in this forum - that the two points you describe above would not be reliably achieved. They are parts of @Seneca’s proposal, my explorations of a live economy, @mav’s simulations, as well as RFC0012 and (fundamentally, even though disabled for test purposes) RFC0057.

Rather, that question has been surpassed long ago by concerns for the wider consequences of that approach.

I would be very interested to hear you address the arguments that have been provided in multitude (last one two posts above yours here) about why there is a reason for concern with that approach.

Since there is IMO absolutely no controversy that these points can be reliably achieved, I don’t agree that there is little use in discussing further. We’ve been past that point for a long time now.

4 Likes

Frankly, I’m not open to that, because your more metaphoric interpretation is not only a completely different idea from Gigerenzer’s but also goes directly against its primary, fundamental meaning.

Let me note that Gigerenzer is not much in favor of behavioral economics, so it would be a mistake to interpret his words in support thereof. I’m a bit confused because you do talk about the fallacy of assuming the macro can be predicted from the micro, and yet you seem to implicitly suggest deciphering that connection. However, Gigerenzer’s point is that dealing with complex systems successfully isn’t about understanding or modeling them. Hence, my two points were not about theorizing about how things may work in the safecoin economy but about the necessary conditions for the operation of the network.

So, I’m open to any ideas for negotiating price, but only as long as they happen within an extremely simple, rigid, and final framework that guarantees the two crucial points I outlined.

Yes, bring on external sources. Add a handful of parameters to shape the steepness when we’re far from the extremes of occupancy, or to adjust the base price for the empty state (I was wrong to demand a “zero” price for a completely empty network, but it’s unlikely to ever happen anyway). Just don’t think we can get away without setting one or two simple bounds to hold things securely within meaningful operational parameters.

3 Likes

I kind of guessed this from your way of formulating things; that’s why the winking smiley :smiling_face:

Nonetheless:

It’s not metaphoric, it’s another level of abstraction.
You apply the theory to the system, I apply it to the components of the system, which gives very different results.
There’s nothing in what Gigerenzer says that constrains the level of abstraction at which the theory is valid. On the contrary, he talks of agents as institutions or individuals, and much of the research has been carried out on individuals. I apply the theory very concretely at that level of abstraction; that’s not “more metaphorical”.
Compare to this:

They proved analytically conditions under which semi-ignorance (lack of recognition) can lead to better inferences than with more knowledge. These results were experimentally confirmed in many experiments, e.g., by showing that semi-ignorant people who rely on recognition are as good as or better than the Association of Tennis Professionals (ATP) Rankings and experts at predicting the outcomes of the Wimbledon tennis tournaments. Similarly, decisions by experienced experts (e.g., police, professional burglars, airport security) were found to follow the take-the-best heuristic rather than weight and add all information, while inexperienced students tend to do the latter.

And this is what I mean is problematic about taking such a general, high-level theory and considering it valid and true on one level of abstraction only. It might be, but neither you nor I know that in this specific case.

I don’t think so (obviously - that’s why I expressed the view). You should at least try to back the claims up, I think :slightly_smiling_face: Just saying things are a certain way doesn’t make them so.
That’s a problem when discussing this topic, I believe: some seem to consider it less important to show why their opinion holds true. It’s so hard to get anywhere for real when we do that.

That’s right. I would say the closest thing to not trying to model or understand it is to not design an ultimate strategy for the reward and store cost, and instead let them be a dynamic result of participant assessment.

It doesn’t mean I don’t personally think boundaries are useful; it’s just a logical conclusion.

I can see how it is confusing, because I mix the two theories. I wouldn’t say I’m necessarily deriving the macro from the micro; I’m still starting out at the macro level. That the method includes micro components is unavoidable, and that’s not what Steve Keen is talking about. His point is about not assuming that micro effects play out at the macro level when scaled, and that the system instead needs to be built from a macro perspective, finding the micro configurations that lead to the desired macro, without assumptions about those micro conditions.

I found it interesting that there is a theory other than the idea of the rational agent that would support the feasibility of PAC (look how closely the Wimbledon example above illustrates this), especially since some of the arguments against participant assessment (as opposed to a network universal strategy) have in essence been about disbelief in the rational agent.

See, I try to test the ways we can understand and combine the different views. I’m not locked to a certain view; I’m trying to see what views there are and what problems they have, and more so, I try to do it through open and investigative discussion. Whenever someone makes a certain claim, I try it out - regardless of whether it seems intuitively true to me personally or not.

For example, I think resource-as-a-proxy will work sufficiently well in a young or medium-young network. The problems of naively increasing reward when supply goes down, and vice versa, appear when a large part of the population has a major part of their economy (income and expenses) on the network. That’s probably going to take a while.
The problem, though, is knowing when. No one could predict the bitcoin growth curve, as you said yourself. These kinds of decisions make implicit predictions when we say “nah, that’s not going to be a problem”. Well, you don’t get to say that without having made some sort of prediction, which in essence is pure guessing.
An additional problem is: will it even be possible to switch to something like PAC (or whatever) when everything has settled with resource-as-a-proxy and everyone depends on it?

OK, great. I’m still genuinely interested to hear your ideas and thoughts on how that would concretely be done, with all concerns addressed somehow (you know, either accounted for, or shown why they don’t hold).
People with different ideas, who are open to ideas, are those who would be able to make the most progress together here, I believe.

2 Likes

It is some or all of those things, and more. Optimality is ‘defined’, more or less, by the goals and objectives we specify for the network. First, you start with a set of network objectives and decide whether each is to be minimized or maximized. Your health metrics are a good start. Even target setpoints within an acceptable range of operation can be recast as a minimization problem. These can be guided by a collection of closed-loop control algorithms, each with their own range of parameters or traits. A chosen optimization algorithm (genetic algorithms, gradient descent) adjusts the network controller parameters and chugs along day by day to achieve those goals. Then you have a closed-loop controller, guided by evolutionary optimization, that is embedded within the network computations. Ideally, the controller is given some training time during beta before being let loose with real safecoin.

A very sophisticated example of this is DeepMind’s AlphaStar neural net for StarCraft II. I’m not saying we need anything this grandiose, but it gives you a powerful example of how the network controls can be set up to perform evolutionary optimization on themselves in an asynchronous and distributed manner.

I disagree. I don’t see why you think your example is not a useful way to frame the topic. Obviously there are other objectives than pure growth of node population, but that is one example of a growth metric that can feed back to a network controller guided by evolutionary optimization. If you had specified a single objective to maximize node population and disregard all other network factors, then the network control parameters that yield 2000 vaults are, by definition, more optimal than those pushing for 1000 vaults. A more realistic example is to require that vault count be maximized, while at the same time minimizing latency, while at the same time maximizing bandwidth, while at the same time minimizing PUT cost, while at the same time minimizing GET price, while at the same time pushing the network purse to 50% of total safecoin supply, while at the same time maximizing GET rate, while at the same time maximizing PUT rate, while at the same time pushing vault size distributions or nodal ages to follow a normal distribution, while at the same time maintaining all constraints and parameter bounds, while at the same time etc…

The “Platonic optimum” set of independent control parameters is the one which maximizes or minimizes your set of goals and objectives while staying within any imposed constraints. We can’t define the optimum outcome values, nor can we define the optimum values for the input parameters, but we do need to define the max/min objectives and the constraints on input and output, and we can define desired targets if they are known.
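To make the multi-objective idea a bit more concrete, here is a toy sketch of scalarizing several objectives into one score and letting a crude optimizer adjust the controller parameters. The metric model, the weights, and the random-search optimizer are all invented for illustration (a real setup would use the genetic algorithms or gradient descent mentioned above):

```python
import random

# Hypothetical network metrics as a function of two controller parameters.
# The names and the model itself are made up purely for illustration.
def metrics(params):
    reward_rate, storecost = params
    vault_count = 1000 * reward_rate / (0.1 + storecost)  # toy growth model
    latency = 50 + 10 * storecost                          # toy latency model
    return {"vault_count": vault_count, "latency": latency}

def objective(params):
    """Scalarize 'maximize vault_count, minimize latency' with fixed weights."""
    m = metrics(params)
    return 1.0 * m["vault_count"] - 5.0 * m["latency"]

def random_search(start, steps=200, seed=0):
    """A crude stand-in for the evolutionary optimizer described above."""
    rng = random.Random(seed)
    best, best_score = start, objective(start)
    for _ in range(steps):
        # Perturb each parameter slightly, keeping it positive.
        cand = tuple(max(0.01, p + rng.uniform(-0.05, 0.05)) for p in best)
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

best, score = random_search((0.5, 0.5))
```

Only improvements are accepted, so the search can only move the scalarized score upward from the starting point; the hard part in practice is choosing the weights, which is exactly the “what is optimal?” question being debated.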

No. What is your goal - maximization or minimization of those ratios? What are the control parameters which affect those ratios? Or are you instead considering these as control parameters to maximize some other objective?

This is also a good analogy as to why any network control system will be required to have a set of objectives or goals that it continually seeks. It can’t just flounder aimlessly about. Attainment of those goals by the network defines “optimality”.

I also want to point out that it would be nice if we were all on the same page when we talk about “price”. I think we are in this thread for the most part, but not always, it seems. For the moment, can we agree that every time “price” is mentioned, the default definition is price in terms of safecoin, aka the “safe price”? If we want to discuss outside market forces, let us call it “fiat price”.

3 Likes

I’m not sure if you’re talking about optimality as if it is something attainable. It plainly isn’t. It may or may not exist but, frankly, that’s irrelevant as long as we can never learn what it is, and we can’t, not even in much simpler domains. So, just like in the markets, where the independent actors together try to find the “correct” price but never quite reach it (maybe this is what you meant?), we’ll need to depend on simple but robust heuristics. What we definitely shouldn’t seek is something like the Starcraft bot that nobody, not even its makers, can reason about.

The difference between optimality and “works fine” is that of equations and inequalities: the former are about finding the precise (“optimal”) answers while the latter are about finding the ranges within which we are okay. While the former are “neater” in theory, the latter are more useful in practice, and even more so when we’re dealing with uncertainty or complexity.

Based on your “platonic optimum” paragraph, you seem to be talking about this but call it optimality. I’m not quite sure, but please no Starcraft.

It’s fine to apply things at any levels of abstractions as long as we have guarantees for correctness (not optimality) at all of those levels.

However, if we can’t guarantee (not wish or hope for) that the system will stay within healthy operational parameters (we won’t run out of either coins or space), that’s a sign that we don’t have the right rules at the level of abstraction that’s responsible to deal with those parameters.

I’ve written about ideas concerning that here and there but I don’t have anything specific at the moment, sorry.

I would prefer a free market where vaults compete for content, seeking to maximize their earnings while also protecting the network from filling up. However, that’s not how chunks are assigned to vaults, and I have no good ideas about how to get around that. Unless…

Maybe sections could assign chunks to the N lowest-bidding vaults out of the M that are considered? Greedy vaults would remain empty, so they would be incentivized to ask a fair price; vaults that are getting full could stop receiving chunks until the others caught up; and users unwilling to pay up would not get their stuff stored.

EDIT: Clever vaults, if they couldn’t grow, would set their prices so as to always keep some free storage around for times when the rest of the vaults are also getting full, lest they miss out when the price gets higher. It’s similar to airlines that ask for more and more as they have fewer and fewer free seats left.

The strength of this model is that it needs only two components:

  • Sections need to keep track of the bids of a pool of vaults.
  • Vaults need to have a method to set the price.

At this point we have a free market, and prices will be set accordingly. Please note that chunks will get stored at the lowest possible price, and the network will be protected from getting full, as long as most vaults act according to their own interest; the precise method they use to set their prices is not very important (people will try to come up with better and better algorithms for that).
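A minimal sketch of the selection step in this model; the function name, the bid map, and the tie-breaking are my own assumptions, and `max_price` stands for the most the uploader is willing to pay:

```python
def assign_chunk(bids, n, max_price):
    """Pick the N lowest-bidding vaults out of the M considered.

    bids: {vault_id: asked_price} for the M prospective vaults.
    Returns the winning vault ids, or None if even the winners ask more
    than the uploader will pay (so the chunk is not stored).
    This is only the selection step, not a full protocol; payment rules
    and tie-breaking among equal bids are left open.
    """
    winners = sorted(bids, key=bids.get)[:n]
    if not winners or max(bids[v] for v in winners) > max_price:
        return None  # uploader unwilling to pay; chunk rejected
    return winners

bids = {"a": 3, "b": 1, "c": 7, "d": 2}
assert assign_chunk(bids, n=2, max_price=5) == ["b", "d"]
assert assign_chunk(bids, n=3, max_price=2) is None  # "a" asks 3 > 2
```

Note how the greedy vault `c` never wins in this example, which is exactly the incentive to ask a fair price described above.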

EDIT2: I think it’s just PAC without all the complications. I think I’ll just copy/paste it there. I’m sorry but I’m quite busy recently and I missed out on much. Though I even commented on that thread a few days ago :man_facepalming:t4:

We’re on the same page, I’m talking about PUT price and safecoin. @oetyng suggested, or so it seems, incorporating external information into the price and I was referring to that.

4 Likes

I’ve been using ‘storecost’ and ‘reward’ as consistently as possible instead of ‘price’, and whenever I read or write ‘price’ I take it within the context it’s being used in. If I’m talking about the usd:safe price, I try to explicitly say exchange rate. To me, ‘price’ is too easy to use ambiguously, so there’s no point trying to pin it to any one thing; some people will use it differently despite the group intention. If clarity is needed, simply don’t use the ambiguous word ‘price’; ‘storecost’ is probably the better word to use in most cases.

2 Likes

It seems to me that a network coded to maximize or minimize certain priorities may have a drive/disposition similar to that of the hypothetical “paperclip maximizer”.

I’m not a great thinker, and I rely on intuition when looking at problems I’m not knee-deep involved in … it feels to me that we can certainly create some basic set of algorithms here, but unless we intend to change it as people’s demands change (which means the developers must also find a way to determine what those demands are), then it just can’t work. Again I refer to the “Economic Calculation Problem”.

All value is subjective, I think. If the network is determining value and time based on indirect observations according to a non-AI set of hard-coded algorithms, then it shall inevitably be overwhelmed by the subjectiveness and fickleness of reality, which will alter its preferences and cause the network to poop itself … lol

Yeah, I get that - I know the network has its own sense of time, but what I am suggesting is that the network must find a way to calibrate its sense of value and time with that of the average human in the real world … yet with only an indirect and very limited measure of the real world, that calibration may be impossible.

But I dunno - this is all just my gut talking.

1 Like

If we switch to passing PUT payments directly through to the vaults that store the relevant chunks (as I hear is already the preferred paradigm), there is no need for the network to worry about prices anymore. Users can set a price they are willing to pay and vaults can set a price they are willing to accept (this latter part would only work with something like my “pick the lowest N bids of M>N prospective vaults” proposal).

We would have a free market, something we as a society already have a bit of experience with, and it would move the complexities and related pitfalls of price negotiation from the network to where they belong: the people who occupy the supply and demand sides of the service. In the meantime, the ~~government~~ network can worry about catching the ~~criminals~~ vaults who don’t abide by the contract.

1 Like

It’s not either/or. The network absolutely needs both, IMVHBCO. Choosing one or the other is like making an ambidextrous person chop off a hand because they are equally capable with either. The truth is they are twice as capable with both, and can accomplish things with both hands that are impossible with only one or the other. (Example: rub your belly and pat your head at the same time.)

1 Like

I do get that. I argue for rewarding GETs elsewhere because it is necessary to encourage opportunistic caching, and probably also for moving coins from the pool into circulation. Burning some of the PUT payments could move coins in the other direction.

2 Likes

Sorry for not directly helping, but can I have an ELI5 of what we are debating here?

1 Like

I would argue that the network needs to be fully autonomous, with a smart algo to decide the best price for PUTs and GETs.

If there is a free market, there could be a coordinated human movement to charge more for each GET, which would then lead to huge PUT prices and make the network unusable.

1 Like

There’s no best price, only a price one is willing to pay and a price one is willing to accept. The agents that do the negotiation will be the vaults, but I don’t agree that it should be done through one specific algorithm that is forced upon vaults across the world. As I said at the start, the payments offered or demanded depend on the economic preferences of the owners of the vaults, and a rigid algorithm is unlikely to be able to express that.

A solution for price negotiation is establishing a free market, as I explained a few posts (and weeks) ago, where vaults in a close group would bid for the next chunk and only the best N out of the M in the close group would get to store it and receive payment. On the other side of the transaction, uploaders would specify a maximum price they are willing to pay, and the chunk would only be accepted if the winners (the best N) in the close group demanded less.

4 Likes

I think there can be a simple algo that balances things.

As simple as:

“need space” > “give more reward for GETs” > “people will hear that they can earn good money by joining as a node with storage, and get an incentive to join”
“have lots of space and lots of new nodes trying to connect and offer more storage” > “put joining nodes on hold and decrease the PUT cost so that: 1. the storage doesn’t get bigger, 2. people get an incentive to load more data onto the network.”

Then with this simple algo it always balances, because the PUT and GET costs will always make people do what the network needs.
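The two rules above could be sketched as a feedback step like this; the thresholds and step size are made-up numbers, and only the direction of each adjustment comes from the post:

```python
def adjust(free_space_ratio, reward, put_cost,
           low=0.3, high=0.7, step=0.1):
    """Toy version of the two balancing rules (thresholds are assumptions):

    - short on space  -> raise the GET reward to attract new vaults
    - too much space  -> lower the PUT cost to attract more uploads
    """
    if free_space_ratio < low:        # "need space"
        reward *= 1 + step
    elif free_space_ratio > high:     # "have lots of space"
        put_cost *= 1 - step
    return reward, put_cost

r, p = adjust(0.1, reward=10.0, put_cost=5.0)
assert r > 10.0 and p == 5.0          # scarce space: reward went up
r, p = adjust(0.9, reward=10.0, put_cost=5.0)
assert r == 10.0 and p < 5.0          # abundant space: PUT cost went down
```

Whether such a loop stays stable once participants anticipate it is exactly the concern raised earlier in the thread about naive resource-as-a-proxy rules.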

1 Like

I disagree with that; the network should be autonomous on pricing.

Where there could be a free market is in selling safecoin: if someone wants to PUT 100GB and it costs 100 safecoins, he might go to the free market for safecoin and find the best safecoin price he can get.

But the PUT price and GET price, I argue, need to be fully autonomous and balance automatically based on network needs.

edit: that’s because the network, and only the network, knows the cost of the operations, and it has to keep a balance so that it maintains the incentives for nodes to join when it needs storage and, when it has too much space, gives people a reason to pay the PUT price and make the network bigger.

edit2: why a bigger network? Because the bigger the network, the faster, more secure, and more powerful it gets at offering its intended goals.

1 Like

I propose that there is a UBI for the vaults, plus rewards for GETs.

The algo that decides how much the UBI and GET rewards will be is based on the fact that, as the network grows, PUTs will grow in a crazy way. Once we get to the point that ALL the world uploads data to the network, the PUT income would be enough for a calculated UBI and GET rewards, because we know that the internet moves and uploads huge amounts of data.

edit: I propose a UBI because it gives anyone an incentive: they will get a set amount of safecoin just for offering storage. That is essential, because to offer storage one needs to invest in hardware and a stable internet connection, so with a UBI they know that in time they will get their money’s worth back. The GET rewards are the incentive that, in a random way, they may make money faster.

edit2: A UBI is also an incentive that will make people want to have the vault available 24/7, and the GET rewards will be an incentive for the user to get the fastest hardware and the fastest internet connection. There should also be some kind of reward for caching for the network, because the network needs caching too, for rapid speeds and response.
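A toy sketch of splitting one period’s PUT income into a flat UBI plus per-GET rewards; the `ubi_share` split and all names here are assumptions, since the post only argues that both components should exist:

```python
def distribute(put_income, vaults, get_counts, ubi_share=0.5):
    """Split a period's PUT income into a flat UBI plus per-GET rewards.

    put_income: total coins collected from PUTs this period.
    vaults: list of vault ids; get_counts: {vault_id: GETs served}.
    ubi_share and the whole scheme are assumptions for illustration.
    """
    ubi_pot = put_income * ubi_share          # flat part, paid to everyone
    get_pot = put_income - ubi_pot            # performance part, per GET
    total_gets = sum(get_counts.get(v, 0) for v in vaults) or 1
    return {
        v: ubi_pot / len(vaults) + get_pot * get_counts.get(v, 0) / total_gets
        for v in vaults
    }

payouts = distribute(100.0, ["a", "b"], {"a": 3, "b": 1})
assert payouts["a"] == 62.5   # 50/2 UBI + 50 * 3/4 GET rewards
assert payouts["b"] == 37.5   # 50/2 UBI + 50 * 1/4 GET rewards
```

The flat part covers the hardware investment mentioned above, while the per-GET part rewards fast hardware and connections; everything paid out sums back to the PUT income, so the scheme is self-funding by construction.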

1 Like