Perpetual Auction Currency

I think that was the longest message I’ve ever read on here, wow. Maybe there’s a better format for these

7 Likes

All the effort you and @mav have put into this makes me wish you were both integrated with the CORE team and could be implementing this in parallel with their existing work, so that when the time comes a few adjustments could be made depending on how it has to hook in or any dependency changes :laughing: , but alas you both probably have enough on your plates with the day job, as do most of us devs who take an interest in the SAFE Network. Appreciate the ideas you are putting forth though!

11 Likes

Ultimately an RFC. I think this one is gonna need all the effort the guys are putting in, for sure. Probably a big debate to iron it all out. I am still not 100% with it all myself yet, but I am only one person. I do feel we could test this during beta without too much effort though: have the network-alone mechanism, then user bidding, or perhaps automated vault bidding. In any case it is a load of work and I could not be more grateful to see this effort. What a community

17 Likes

Like I said @mav, way above my head, but maybe this helps a little.

https://www.youtube.com/watch?v=4kWuxfVbIaU

Btw, shouldn't bidding come at a cost, I mean staking Safecoins?

4 Likes

We’ve been thinking of developing some sort of game / test for the bidding idea but not sure yet about the direction. If anyone has ideas about how to gather data or test the bidding idea I’d love to hear.

5 Likes

Reminds me of game theory competitions in my days playing with genetic algorithms (late '80s). I never implemented this but they were interesting to learn about. (Genetic Algorithms in Search, Optimisation and Machine Learning, by Goldberg, has a very good section on it).

People would have different hand-crafted algorithms compete in a software environment, which naturally leads to automatic optimisation and evolutionary algorithm strategies. In a bidding scenario I think this is a very suitable approach, as I imagine that is exactly what would happen. Things like GAs are clearly an interesting option to try, amongst other ideas as David has already mentioned, in order to create winning strategies, or to find weaknesses in the environment itself.

I’ve long been looking for an ideal scenario to apply GAs to, and this is certainly one.

So, create an environment which can run competing algorithms within a simulation of the network's competing vaults. Automatically vary/evolve strategies and have them compete for rewards, or stress the environment using a mix of different bidding strategies and ways of varying bid strategy, including self-optimisation.

In my day computation was a limit. I nearly got to play with a Connection Machine parallel supercomputer to try and optimise a tricky oil industry problem, but today computation has obviously moved on and it may be feasible to simulate this on a PC.

It’s a big field, which is why I’m not sure bidding is a simplification - although as the network is an emergent system I think it is going to be complex regardless of the reward system.

8 Likes

Gotta give huge kudos to @oetyng and @mav for developing and thinking so deeply about this idea! Like @happybeing, it really makes me think of things I studied in my youth when I was just “doing what I love” but later was like omg why did I pay so much money to study something so useless. Now I am inspired to go water that withering bush and see if it will still produce fruit!

10 Likes

@oetyng and @mav, this is absolutely fascinating. Thanks for working through this as far as you have.

The potentials here are exciting. Again, it only becomes possible on this unique network/vault structure, so we’re definitely in terra incognita.

7 Likes

Hopefully this helps. If the Network is broadly distributed in terms of farmers at launch, then this sort of collusion is less of a concern. However, it is more likely that farming will be relatively concentrated at launch. For example, what percentage of the populace would know about and be ready to farm from day one? Think about the type of person who would be primed for this and have the resources (e.g. informational, financial, etc.) to readily participate. Here, first-mover advantages would actually apply and carry weight.

Say farming is relatively concentrated at launch. If groups arose that provided a significant chunk of farming, they could enforce their own rules regarding bidding. They may, for example, seek to artificially keep the reward/price low so as to deter others from entering the market. This assumes that farming increases in efficiency at scale. Many individuals would either stay out of farming because of the price suppression or join these pools because it increases their likelihood of seeing some reward. This is what I mean by colluding to keep the size of the pie small in order to have a bigger slice.

Put another way, a group of HBS students were asked whether they’d rather live in a world where they earned $100K and everyone else earned $50K, or one in which they earned $250K and everyone else earned $200K. They chose to live in the first world because we (erroneously) perceive wealth in relative terms.

Since Safecoin can also be exchange traded, how do market forces impact this thinking? Could the market (i.e. the mechanism of exchange) determine Safecoin’s price while the Network need only determine the conversion rate for purchasing resources and receiving rewards? In that case, the conversion rate would depend on supply/demand for Network resources, which in turn can be agnostic of market price.

Otherwise put, why can’t the Network simply control the exchange rate such that the reward/price for Network resources fluctuates based on supply/demand of said resources? In such a model, human intervention (i.e. bidding) is not necessary. This of course would require setting an initial exchange rate SAFE:PUT and laying down rules for understanding Network supply and demand. You’ve done some interesting thought experiments around that, like Polls: How much will you spend? How much storage do you need? etc and Exploration of a live network economy.

The allocation of rewards for providing Network resources could either be fixed (x supply guarantees y reward) or probabilistic (x supply provides z probability of receiving y reward). Although this approach could still see concentration in farming supply due to sheer economies of scale, it at least would remove the ability of individual entities to directly manipulate reward value and allocation.
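
A minimal sketch of those two allocation schemes (all names and numbers here are hypothetical, just to make the distinction concrete):

    /// Fixed: x units of supply guarantee a proportional reward y.
    fn fixed_reward(supplied_units: u64, reward_per_unit: u64) -> u64 {
        supplied_units * reward_per_unit
    }

    /// Probabilistic: x units of supply give probability z of winning reward y.
    fn probabilistic_reward(supplied_units: u64, total_units: u64, reward: u64, roll: f64) -> u64 {
        let win_probability = supplied_units as f64 / total_units as f64;
        if roll < win_probability { reward } else { 0 }
    }

    fn main() {
        println!("fixed: {}", fixed_reward(10, 3));
        // `roll` would have to come from some network-verifiable source of randomness.
        println!("probabilistic: {}", probabilistic_reward(10, 100, 30, 0.07));
    }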

6 Likes

I perceive there is some reticence about introducing a human political influence into the network, and even if I understand the merits of the bitcoin-like automated approach, I believe we still depend on MaidSafe’s judgement, as well as on whoever builds future updates. Honestly, I believe that allowing the network to interact with humans, merging both kinds of intelligence, will create a system that is much more adaptable and future proof.

I find the idea of using GET events to express the opinion of the farmers on the network really interesting, be it on the rewards or other matters.

This voting system could even be useful for updating the network or building some kind of governance layer.

I am in love with the Tezos upgrade-by-consensus system, so I might be biased.

8 Likes

I suspect that there need to be network-determined bounds (upper and lower) if there is to be bidding. What happens when the network comes close to the coin production limit? Bidders who monitor the global supply may well have some indication of this and bid accordingly, but those who don’t choose to track it may be taken advantage of - especially if the supply is pushed hard toward the ceiling.

Hence I think some sort of hybrid approach is needed - for the sake of giving the network the most information possible but also to conservatively manage the network.

3 Likes

Yeah, I think one interesting result of this is that your ability to participate in voting is increased the more popular data you hold.
So, the more data you hold, and the more popular it is, the more GETs you receive, and with every GET you are able to include your votes.

So, basically, the more valuable you are to the network, the faster vote updates you get to do. Quite cool IMO. [had to edit that to be more precise; it can be a very different thing]

Now, it is not entirely clear at the moment how valuable it is to have higher rates of voting. But one thing at least is that you will be able to follow market sentiment better (less delay), that way having a better chance of being close to an NB when it arrives, and thus getting higher rewards.


Reward distribution graph

I was playing around with using a Probability Density Function for reward distribution.
I made a simulation at Desmos that you can find here: https://www.desmos.com/calculator/uwmvssaism

The simulation allows you to loop through the size of a section (60-120 nodes) and watch their (somewhat) random bids and rewards plotted as (x,y)-coordinates, with the x-axis being the bid and the y-axis being the reward.
Remember that the Neighbour Bid (NB) is what they want to get close to, and the NB is then split up according to the reward distribution (the sum of all rewards plotted will be the NB).

There is a slider for the NB as well.

If you want to try a steeper or flatter distribution curve, go down to the Probability Density Function folder and adjust u with the slider.

There are a couple of other bid distributions that can be used as well, where the majority go above or below NB. The one that is used has a large part centered around NB, though still quite a few out at the edges. They all deviate at most ±10% from NB.


Here are some notes from when I implemented it in code:

    // Sorting bids into exponentially differentiated buckets:
    // take the diff between bid and NB
    // pipe it through tanh (a zero-centered "sigmoidal" function)
    // sort into buckets using the PDF function
    // the bucket represents a share of the reward
    // every participant in the bucket splits that share between them

    // The aim of using bid-NB diffs is to equally favor closeness, regardless of sign.
    // The aim of piping through tanh is to map all possible bid-NB diffs into the PDF argument range.
    // The first aim of the PDF is to make reward proportional to closeness.
    // The second aim of the PDF is to establish an exponential and continuous distribution of reward.
    // The aim of sharing in buckets is to keep bids from clustering.

    // The collective result of the above aims is that it:
    // - promotes keeping close to the common sentiment (favors passive bidders)
    // - promotes unique bids, by decreasing the reward per bidder as bids cluster in buckets (favors active bidders)
    // - promotes defectors when there is collusion
    // -- (i.e. a close participant is rewarded most when all the others are far away)

    // ***
    // Higher rewards attract more participants,
    // but skewing the highest reward away from closeness promotes bid movement - which eventually affects NB, and through that attracts or repels participants.
    // So.. it seems skewing is just an indirect way of directly weighting reward?
    // The difference is that skewing promotes those who at that time are helping the network,
    // while directly adjusting rewards for all, relatively, rewards those who are less aligned with network needs.
    // The skewing does not impact the NB as fast as the weighting does.
    // So maybe the best result is achieved by combining reward weight with distribution skew,
    // so as to rapidly affect NB, as well as promote those who are aligned with network needs.
    // (Could the combination of the two reinforce the effects too much?)
    // The bucketing is more attenuated when NB is lower.
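
For anyone who wants to play with it, here is a small self-contained sketch along the lines of those notes (the PDF, the bucket count, and the scaling constants are my own illustrative choices, not the exact values from the simulation):

    /// Standard normal probability density function.
    fn pdf(x: f64) -> f64 {
        (-(x * x) / 2.0).exp() / (2.0 * std::f64::consts::PI).sqrt()
    }

    /// Map a bid's distance from NB into a bucket index.
    /// tanh squashes any diff into (-1, 1); closeness to 0 means closeness to NB.
    fn bucket_of(bid: f64, nb: f64, buckets: usize) -> usize {
        let diff = bid - nb;
        let squashed = diff.tanh().abs(); // 0 (at NB) .. 1 (far away), sign-agnostic
        ((squashed * buckets as f64) as usize).min(buckets - 1)
    }

    /// Split the NB among bidders: each occupied bucket gets a share weighted
    /// by the PDF (closer buckets weigh more), and the members of a bucket
    /// split its share between them.
    fn rewards(bids: &[f64], nb: f64, buckets: usize) -> Vec<f64> {
        let idx: Vec<usize> = bids.iter().map(|&b| bucket_of(b, nb, buckets)).collect();
        // Weight of bucket i: the PDF sampled at the bucket's center distance.
        let weight = |i: usize| pdf(3.0 * (i as f64 + 0.5) / buckets as f64);
        // Only occupied buckets take part of the reward, so the shares sum to NB.
        let total: f64 = (0..buckets)
            .filter(|&i| idx.iter().any(|&j| j == i))
            .map(|i| weight(i))
            .sum();
        idx.iter()
            .map(|&i| {
                let members = idx.iter().filter(|&&j| j == i).count() as f64;
                nb * weight(i) / total / members
            })
            .collect()
    }

    fn main() {
        let nb = 100.0;
        let bids = [99.0, 101.0, 95.0, 110.0, 100.5];
        for (bid, reward) in bids.iter().zip(rewards(&bids, nb, 8)) {
            println!("bid {bid:>6.1} -> reward {reward:.2}");
        }
    }
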
4 Likes

Wow, you’ve been busy @oetyng. A lot to go through here since my last post. A few comments/thoughts:

The network doesn’t need to know “why?”; it only needs to know whether the farmer resources (storage, bandwidth, latency, compute, elder counts, etc.) are increasing, decreasing, or constant/steady, and what the current quantity is relative to system load or other targeted setpoints.

More is not necessarily better if it is just noise from farmers playing games. A “hard-coded” farming rate algorithm can be adaptive and flexible.

It might be fine to start with. In my view all major resource categories required to run the network should have their own reward rate. These include storage, bandwidth, latency, memory, and compute. In other words, if there is a resource proof for some farmer/vault performance trait, then the network should be offering a price for it.

True. Specifying a target growth rate from the beginning is the naive approach, but it offers a facade of predictability that is attractive to those in the crypto space, and offers a simple way to motivate the network pricing algorithms. The optimal way is to have a means of objectively computing the current network growth rate, and then vary all inputs to the pricing function in real time in order to maximize growth at this instant. In the first scenario the best you will ever achieve is what you’ve selected as your setpoint, and you’ll likely fall short of it. You may not care if your goals were high enough: “shoot for the moon, at least you’ll hit the stars… etc”. In the second case, you’re adaptively determining what the absolute best is, so “hakuna matata”. Regardless, having a bidding process driven by the farmers is not the way to make any of this work. Instead, you would want to give the bidding power to the network. The network could have a range of “ask” prices for resources, and farmers would reactively bid to accept those prices for a certain amount of network time, or leave. In a sense this is a fuzzy “take it or leave it” approach.
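
A rough sketch of that inversion (type and field names are hypothetical): the network publishes a set of asks, and a farmer's only moves are to accept one for a period of network time, or leave.

    struct Ask {
        units: u64,          // resource volume, e.g. GB of storage
        price_per_unit: u64, // safecoin offered per unit, set by the network
        duration: u64,       // network time the acceptance binds for
    }

    enum FarmerResponse {
        Accept { ask_index: usize },
        Leave,
    }

    /// A trivial farmer strategy: accept the best-paying ask that clears
    /// the farmer's own cost floor, otherwise leave.
    fn respond(asks: &[Ask], cost_floor: u64) -> FarmerResponse {
        asks.iter()
            .enumerate()
            .filter(|(_, a)| a.price_per_unit >= cost_floor)
            .max_by_key(|(_, a)| a.price_per_unit)
            .map(|(i, _)| FarmerResponse::Accept { ask_index: i })
            .unwrap_or(FarmerResponse::Leave)
    }

The “fuzzy” part is that the network offers a range rather than a single price, but the farmers never set prices themselves; they only react.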

Not true. It is biomimetic and mathematical. Consider Fibonacci’s rabbits; they are a close analogy for section splits. It’s just what happens when you have successive binary divisions with no loss (plus, as in the rabbit puzzle, a maturation delay before a new section can split again). That’s why it’s considered optimal growth in living systems. A few billion years of evolution has shown fibonacci growth to be favored for the survival of living things. No need to reinvent the wheel here for synthetic life, just include it as part of the design. From my perspective a target growth rate is how SAFE establishes its own environment. We know that network growth and size are critically important to the success of the network. Some security issues that would require a lot of effort to mitigate in a small network become insignificant for a large network. Specifying a targeted network growth rate from the beginning is a simple way to give purpose to all the pricing algorithms that determine PUT and GET costs. A crude analogy is the cruise control in an automobile. You set the desired speed, and the throttle is increased or decreased to match the wind load or hills you encounter.
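
To illustrate (a toy model of my own, purely for contrast): section counts under the rabbit-puzzle dynamics, next to pure doubling.

    fn main() {
        let (mut mature, mut young) = (1u64, 0u64); // sections
        let mut doubling = 1u64;
        for period in 0..10 {
            println!(
                "period {period}: fibonacci-style sections = {}, pure doubling = {doubling}",
                mature + young
            );
            let offspring = mature; // each mature section spawns one new section
            mature += young;        // young sections mature before they can spawn
            young = offspring;
        }
    }

The totals follow 1, 2, 3, 5, 8, …, i.e. growth by roughly the golden ratio each period, rather than doubling.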

I think that as a general rule we need to pick the right battles and always give SAFE the high ground. For example, consider two options for a perpetual auction. A) the farmers bid to determine what the farming reward should be and SAFE needs to give them what they ask for, or B) SAFE decides a range of prices at different volumes and the farmers bid to accept one of those or leave. For option A, no solid constraints will protect you from edge cases. In contrast, option B keeps SAFE in control while also maximizing farmer participation beyond the non-fuzzy take-it-or-leave-it scenario.

Yes, see above. Non-linear controls optimization, multi-objective optimization to maximize the current growth rate or other objectives, subject to the constraint that it cannot exceed a target growth rate etc. Possible to eliminate the constraint and let the network growth rate be unbounded, but might not be prudent…

No.

No. I just think a framework where the farmers have direct control over the pricing is not as beneficial to the network as one where the network directly controls the price.

None of those things matter with regard to the farming reward. The network can’t offer to go to one’s home and fix the computers or restore power (yet :wink: ). All it can do is raise the price it offers higher and higher to incentivize as much participation as possible. If those scenarios happen, farmers aren’t going to be sitting at their computers demanding more safecoin from the network before they come back online. They won’t be online, period. The network always has to be operating, waiting, keeping all the data safe and secure. Which is why it needs to be in direct control of pricing in coordination with all its other tasks, and the only farmer-provided information it can really count on is resource availability - right now.

11 Likes

I think there’s some value to knowing why a node has departed. If the network is going to look after itself it could do that best with high quality communications from the participants. How that exact messaging is done, I dunno yet. Lots of options.

Should the network only value things it can measure?

This touches on a very important point - promises. Bitcoin promises digital scarcity (in this case 21M coins max but that’s just an implementation detail). Basically everything else in the design of bitcoin stems from the promise of digital scarcity. That’s their core unique offering. The implementation of difficulty adjustment periods, mining, block times, fee market etc all exist only because of the scarcity promise.

What promises should SAFE be making? To my thinking the key promise is Perpetual Data. That’s unique to SAFE. Nothing else offers that. So the economy should be designed to give confidence to that feature. This matters because a fixed growth rate of resources is probably a stronger promise for the goal of perpetual data than a variable growth rate. I think fixed growth rate probably gives sub-optimal growth, but it does increase confidence in the promise.

Digital scarcity is another promise being made by SAFE. Is there a potential conflict between these two promises? How can we address that? Who decides?

On the topic of PAC, the promises become … weaker? stronger? It’s a really hard question to answer.

I don’t use storj or IPFS because the promise of data retention is too weak. The growth of SAFE is going to be very strongly tied to the promises it chooses to make.

I think it’s a good idea for us (both sides of the debate) to establish:

  • is fibonacci growth the right growth for SAFE?
  • would bidding evolve into fibonacci growth?
  • if bidding results in different growth why is that better or worse than fibonacci growth?

The simple argument I would start with is that data is growing exponentially, not fibonacci. So why use fibonacci growth for the network?

Just testing the waters here, should people decide the growth rate or the network? Maybe another way to ask the same question is what’s more important, cheap abundant storage or a predictable growth rate?

What are the edge cases? Genuine question.

I feel a dystopia meme is needed here…

I don’t think having the network in control is necessarily better. If the world wants to migrate to SAFE asap the network should not be able to say ‘wait a sec’.

A fixed algorithm is necessarily exclusive rather than inclusive. I lean toward inclusive every time. Yeah we’ll have to include the malicious people but I accept that (kinda the point of SAFE isn’t it).

Which framework is more beneficial to the end users? A fixed algorithm or bidding? Really tough question I know, because it’s about security as well as growth, so maybe we should also explore how fast the network can grow before that growth becomes insecure. Is slow growth more secure than fast growth? Is growth correlated to security at all? Why is fixed growth desired? This is a big zoom-out on the topic but I think it’s needed. Maybe I’ll expand on this later.

I don’t want to benefit the network, I want to benefit users. They feed into each other but in the end I have confidence that users are always in a better position to address their problems than the network is. Why do users start using the network in the first place? As a way to address their problems. The network is for the users, not the other way around.


Hopefully this is a coherent response but I’ll have a deeper think about it and come back to you with some more strongly distilled ideas :slight_smile:

6 Likes

Just a comment on this.
Ultimately we would still be in control because we update the s/w and can change the algo through updates.

Actually the network needs to be able to say “wait a sec”, because too fast an influx could make the network unstable or easier to attack. I understand you were meaning something slightly different, but yes, the network needs a measure of control over rates of increase etc.

7 Likes

Growth curves are particularly well understood in biology population stats … and probably a lot of related fields. I wonder if it is possible for the network to use some sort of curve-fitting algorithm to determine approximately where on a standard growth curve the network is, and then use that information as part of this farming reward determination mechanism?

As opposed to just using a fixed curve assumption.

E.g. taking the data up to “now” and then fitting it to a standard growth curve.
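
Something along these lines, perhaps (a crude sketch; the logistic model, the grid search, and all the numbers are just illustrative assumptions):

    // Fit a logistic growth curve N(t) = K / (1 + a * exp(-r t)) to observed
    // node counts by grid-searching (K, r), then read off where on the curve
    // the network currently sits.
    fn logistic(t: f64, k: f64, r: f64, n0: f64) -> f64 {
        let a = (k - n0) / n0;
        k / (1.0 + a * (-r * t).exp())
    }

    fn main() {
        // Hypothetical observed node counts at successive epochs.
        let observed = [100.0, 180.0, 310.0, 520.0, 800.0, 1150.0];
        let n0 = observed[0];

        let (mut best_k, mut best_r, mut best_err) = (0.0, 0.0, f64::INFINITY);
        for ki in 1..=200 {
            let k = 100.0 * ki as f64; // carrying capacity candidates
            for ri in 1..=200 {
                let r = 0.01 * ri as f64; // growth rate candidates
                let err: f64 = observed
                    .iter()
                    .enumerate()
                    .map(|(t, &n)| (logistic(t as f64, k, r, n0) - n).powi(2))
                    .sum();
                if err < best_err {
                    best_k = k;
                    best_r = r;
                    best_err = err;
                }
            }
        }
        let now = (observed.len() - 1) as f64;
        let position = logistic(now, best_k, best_r, n0) / best_k;
        println!("fit: K = {best_k}, r = {best_r:.2}, currently at {:.0}% of capacity", 100.0 * position);
    }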

5 Likes

That information is being piped through the chosen economic model. So if you want the network to know the details of exactly why a node leaves, then you need a detailed pricing structure for each characteristic (storage, bandwidth, compute, etc.).

Two types of information are available to the network: explicit and implicit. The explicit is that which is directly measurable, given a metric, or dictated in code. These are easy to work into a pricing structure. The second is implicit relationships or metadata inferred from the explicitly measured information. These are harder to quantify. I think it’s ok to say that if it can’t be numerically measured, then it doesn’t exist as far as the network is concerned. At least for starters…

Your analogy to Bitcoin was spot on here. The digital scarcity, the halvening: these are driving functions where Bitcoin has set the stage and created the environment within which market participants interact.

This is exactly what I was getting at, but you said it better. Picking a target growth rate is sub optimal, but it is transparent and easy for the layman to understand, like “the halvening”.

Weaker.

If you recall, I said “pick your poison”. Fibonacci seems reasonable, but I’m not married to it. Continuous optimization for max growth is nice, but it needs to be properly constrained. Harder to explain and not as transparent as a promise. Could be with the right marketing. The problem is that it is completely unpredictable. Setting a target growth rate that the control algorithms continuously push and pull towards is more conservative and builds trust.

Again, pick your poison. (Linear, hyperbolic, geometric, Fibonacci, exponential etc.) An exponential growth rate has interesting properties as well, but could be a hard master to satisfy. I suspect (pure conjecture) the unconstrained optimization scenario I mentioned would probably end up giving exponential or double exponential growth… at least over short periods. The point of the argument is not what the target growth curve is, but that it might be good to design the economic model around one.

An analogy is asking the question, “Should we decide what speed to set the cruise control to when driving from point A to point B down the highway?” If your answer is yes, then you have a very good idea how long it will take to reach your destination. My hypothesis is that a predictable growth rate will ensure cheap abundant storage.
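
To make the cruise-control analogy concrete, here is a toy proportional controller (the gain, the numbers, and the response model are all made up for illustration): the farming reward is the throttle, the measured growth rate is the speed, and the target growth rate is the set speed.

    fn main() {
        let target_growth = 0.05; // desired growth per epoch (the "set speed")
        let gain = 2.0;           // how hard the throttle responds to error
        let mut reward = 1.0;     // current farming reward (the "throttle")
        let mut nodes = 1000.0;

        for epoch in 0..10 {
            // Toy model: actual growth responds to the offered reward,
            // minus a headwind (churn, costs) standing in for "hills".
            let growth = 0.04 * reward - 0.01;
            nodes *= 1.0 + growth;

            let error = target_growth - growth;
            reward = (reward + gain * error).max(0.0); // adjust the throttle
            println!("epoch {epoch}: growth {:.3}, nodes {:.0}, next reward {reward:.3}", growth, nodes);
        }
    }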

  1. collusion to increase cost of storage above optimal.
  2. farmer or client stupidity/inability to consider all network responsibilities and determine a GET or PUT pricing strategy that maintains the network.

The first might be overblown in this topic, the second not so much.

I disagree. The network has a responsibility to the data it contains first, and the whims of the world second. The network needs to be able to say ‘wait a sec’ if that is what it needs to do in order to keep all the data safe and secure.

If by users you mean clients, then a ‘fixed’ algorithm will offer the lowest PUT price.

That’s why you specify a target growth rate from the beginning to constrain the optimization.

No. I think you need a middle ground that is “more than fast enough”.

Yes, absolutely. Grow, fast enough, or die.

Promises.

Not really. It is a symbiotic relationship, or more accurately described as a cybernetic symbiosis. Users and farmers are at different ends of the supply chain and the network is the middleman to end all middlemen. The farmers feed the network with a flow of resources, the devs feed it capabilities, the producers feed it content. The network feeds those resources, capabilities, and content to the clients. Safecoin flows in reverse. The network is the market maker and taker, and the only entity that observes both sides of all transactions, in addition to all other network conditions. For these reasons it needs to be the ultimate authority on PUT and GET prices to ensure its own survival in the market.

8 Likes

Sadly, I’m a bit busy these days and I don’t have time to read everything so I’m not sure if this was addressed yet, but my impression is that “bidding” as a category is a viable method when one side is buying something from another side.

In our case however, we’d have to trust a possibly incomparably stronger side with deciding how much “charity” to hand out, not even in exchange for something but as a nominal “thank you” for a GET that’s already been fulfilled.

4 Likes

I’m a bit busy as well, have a few things to respond to in this topic :slight_smile:

But I wanted to just say briefly that “bidding” is not a term I consider 100% accurate for what we’re doing here. This is a new type of interaction, in a new type of environment. For that reason, I don’t think you can say it is more or less viable based on what it has been used for previously.
When repurposing something, or inventing something, you just find the way that it could work in the new setting, and it becomes a new thing.

I would call this phenomenon more of an “estimation”. What is it that is being estimated? We are estimating what we believe everyone else believes about what everyone else believes. It is very similar to a Keynesian beauty contest.

And so, really, this is more of a contest than an auction, and as you might well know, there are no limits to what games we can create.
I feel that kind of mindset is more powerful when we try to find new ways in a new system and concept.

4 Likes

With bitcoin there’s a property of it being ‘only money’. So people who buy and sell on exchanges and never touch the blockchain are still ‘doing bitcoin’ (for their purposes). Bitcoin would be turning away unbelievable amounts of people if it wasn’t for the possibility of offchain activity.

But SAFE is not ‘only money’, it’s data, and we can’t really expect that to move offchain like with bitcoin. That’s the whole point of the network, to put everything onchain, and to really suck it all in, not leave any reason to stay on clearnet.

If growth is predefined (even flexibly predefined) there’s a chance SAFE will not become a storage layer but just a coordination layer. It will be too time consuming or expensive to get data on the network so people will use it mainly for coordinating direct data transfer between each other.

Like bitcoin hash rate? There’s no problem there, so why for us?

This is the bitcoin mining growth curve (source):

I think this curve is a) incredibly difficult to predict if you’re in 2009, both the shape and the magnitude, and b) indicative of possible growth in SAFE (i.e. uploads, downloads, storage, bandwidth).

Right… we can’t know or agree on the growth beforehand, so let’s design around predetermined growth. Sounds a bit paradoxical.

My hypothesis is that a floating growth rate will ensure cheap abundant storage. We are at an impasse…


A controversial way of framing the network-as-an-actor is: what if MaidSafe prefarmed all 2^32 coins and handed them out in some specific way to network participants? Why is replacing ‘MaidSafe’ with ‘The Network’ a better result?

7 Likes