Perpetual Auction Currency

Hey, @jlpell, I have gathered the arguments against the feasibility of collusion that were made in this topic. Did you want to formulate your opposition in the light of those as well, or shall I interpret the radio silence as a “not interested” :yum:? I’ll not bloat the topic with it in that case.

I’m working on my end to clarify the ramifications so that it is easier to reason about it.

1 Like

Always interested. Just need more time to go over the recent posts in order to provide some reasoned responses/ideas. :slight_smile:

2 Likes

Splendid! :relaxed:
We’re having guests tonight, but I’ll post tomorrow.

(Also I will answer a bunch of questions and comments that haven’t been answered in the topic yet.)

1 Like

Why collusion would be infeasible

Below follow all comments and arguments from this topic so far that in some way question or reason against the feasibility of collusion. There are no proofs here, and many assumptions, but these are at least perceptions based on various insights among people, formulated and backed up by rudimentary logic. Together they form a frame of reference for the subject, identifying possible areas where more convincing and solid substance might exist.

On the opposite side of the argument, where it is said that collusion is certain to happen, I have seen very little (if any) reasoning of this sort to support the claim.

That is not evidence of anything, but so far it seems to me that no one really has that kind of supporting argument (not even of the rudimentary kind we see below), since none has been presented here.

So, for the pleasure of everyone, I present:

Comments from this topic

6 Likes

Comments to a bunch of things said in this topic - Part I

(I hit the 32000 character limit for the first time ever on this forum, so this had to be split in 2 parts)

About looking for a solution

I want to just mention again that I probably seem heavily pro-bidding and anti-storage throughout the entire topic. I’m not really sure about either of these methods.

But it’s an approach that I adopt whenever I want to find something out: I delve deep, and I assume the conviction of someone who knows that it works, so that I can follow through as far as possible and find out everything possible about the subject. So, that’s why I’m like a wolverine on the subject and won’t let go. That’s why I get absolutely fundamental when arguing.

At some point, though, I will feel that I can’t get any further; I will back off, let go of that hell-bent conviction that there is a solution, and then be open to starting a new effort in some entirely different, even opposite, direction.

Anyway, enough about that.

Here come thoughts, comments and answers to a multitude of questions and comments from this topic.

Comments from the topic - Part I

I feel it is very much valid. Elders do extra work; it is something that costs more, if nothing else than by hogging CPU and memory that you could have used for something else. On the other hand, I wonder if it’s wise to make it something desirable? If there’s too much economic incentive to become an elder, will the assumptions on their honesty then be put under pressure? Just a thought that struck me, not in any way convinced of it.

I think it is a good idea to use both carrot and stick. Getting zero reward is still a very mild stick, there could be worse punishments… (not that I think we would want that here).

If everyone is 0.001 % off, and one is 0.0011 % off, then it’s kind of dumb to punish that. So it seems it would be better to punish being worst in combination with being on the far end of the allowed range. So, if everyone is at the far end, then you’re probably not trolling. If only you are, then… the stick.

And I think that, to make illicit manipulation expensive, it has to be a very harsh distribution. You can’t sit at the far end with your bids and rake in almost as much as those being close. The incentive mechanism must be clear and effective.

My first idea was that every subsequent position would get 1/2^n of the reward, where n = position.

And then the residual amount is given to first place.

So, 50 % for closest, then 25 % to second closest, then 12.5 % and so on.
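
To make the distribution concrete, here is a minimal sketch of that 1/2^n split (the count of ranked positions is just an arbitrary example I picked):

    // Position n (1-indexed) gets reward / 2^n; the undistributed residual
    // (reward / 2^count) is folded back into first place.
    fn positional_shares(reward: f64, count: u32) -> Vec<f64> {
        let mut shares: Vec<f64> = (1..=count).map(|n| reward / 2f64.powi(n as i32)).collect();
        let residual = reward - shares.iter().sum::<f64>();
        if let Some(first) = shares.first_mut() {
            *first += residual; // first place absorbs the leftover
        }
        shares
    }

    fn main() {
        // For 4 ranked bidders and a reward of 100: [56.25, 25.0, 12.5, 6.25]
        println!("{:?}", positional_shares(100.0, 4));
    }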

It’s still blunt and unfair to get punished for being off by the minimum possible unit, so that you end up second instead of first, for example. So combining it with distance could make that less so. On the other hand, this gets repeated many times over, so any single outcome of the game is not very important. It is the average outcome over time that matters, and in that game you’re simply better off being close.

This is a little bit off topic here, but I want to touch on it briefly anyway.

My very first impression of this, when you mentioned it to me, was that it is an absolutely amazing idea. I’ve always seen the future maintenance of the network, the upgrade process, as a bit… tricky. How can an autonomous network maintain its autonomy when it’s subject to the actions of a relatively small group of people (ie. those who are able to develop the upgrades)?

The argument is that only upgrades that a majority actually start using would come into effect. So that means that a majority of users, ie. Our Beloved Corrupt Humanity (OBCH), decides what upgrades this autonomous network will have.

That would still be the case if the changed functionality introduced by upgrades would not come into effect immediately, but as a result of continuous bidding/voting from the network participants (ie. OBCH). First of all, the added options would only be available if OBCH accepts them. Then OBCH would vote on whatever options were available.

The risk I see is that it can become completely impossible to understand what the large-scale effects could be of various small options added, when they are combined dynamically in new ways all the time. While that seems true, it’s possibly also true that the mere fact that OBCH can constantly influence the system rules also gives them the power to heal the network, in case any unintended consequences arise from some specific combination of rules that had been voted for.

OK. Moving on to something closer to this topic:

So, is the network really autonomous when OBCH decides its future? An upgrade, changing source code… that is like the most fundamental and potentially devastating influence OBCH can have.

How does this differ from bidding? That’s essentially the same thing, no? OBCH decides what source code it will use, and it decides for the network what the currency is worth. Both very powerful influences.

So, if we don’t trust OBCH to be able to do this, how can we even trust the network to be able to function and exist at all? I think it is shown here that the network has to rely on OBCH to collectively act in the best interest of the network. (Until OBCH trains an AI that can remove that responsibility from them permanently.)

  • Is this reasoning something everyone can agree on, or does anyone have some different view that might show why this reasoning is not correct, or at least not the whole truth?

I have envisioned the vault operator software UI to be very simple and easy to understand by default. It shouldn’t take any prior knowledge or technical skill to be an operator. I think everyone agrees on this. And so, for your grandma, there would be no need to know about the bidding or care about it. There would be a default strategy used by the software.

For people like you and me, we might want to at least try the stuff out, check it out (but honestly, I would go for the default strategy; I don’t aim to be a professional market sentiment analyst and try to make money on that, so I wouldn’t care about the bidding). So for us, there would be the possibility to just change the number in a field, for example (the value of your bid), or pick from a set of strategies in a drop-down list (maybe third-party strategies loaded from the network).

Professional market sentiment analysts who try to make money using their tools, education and skills (basically just working in their area of profession) would want to connect their custom software to the vault operator software, so as to automate their analysis and bidding. I would envision that the vault operator software runs a local http server and exposes an API, so that anyone can use a custom UI and / or connect custom software for automated logic. Naturally, MaidSafe would have their default vault operator UI that any beginners would use.

The current design allows NB movement with as few as one active bidder in a section. With an expected median section size of 85 (so one bidder is roughly 1.2 % of a section), and a majority of sections needed for the NB to move as a whole in the network, that means as few as (approximately) 0.5 * 1.2 % = 0.6 % of the population need to be active bidders (if they are evenly distributed). So that’s the theoretical required minimum.

In reality, we would want and need more; exactly how many there would be is hard to say. It would probably depend on how profitable it could be. So it has to be possible to make some profit being an active bidder, so as to get a couple of percent of the population to assume that role.

  • Could it be too profitable to be an active bidder? What consequences would it have?

Yeah probably. I would also assume a large (and perhaps constantly evolving) repository of free strategies to choose from.

I’m sorry, I don’t quite understand the question. Would you like to see what a likely bid would be today? I’m not sure I get what the pool of bids is about. :thinking:

With the store cost algorithm proposed here - the storecost-paying-for-predicted-future-GETs - the supposition is that a balance in supply is reached, since the network read:write ratio is constantly taken into account when calculating store cost, and thereby the two flows of currency - into the network and out to farmers - become balanced.

A weight for how much available supply of currency there is could be used to modify store cost. But since we want to modify the reward as well, we could modify both by placing this weight in the calculation of farming reward.

As an example:

The bidding is used to get the market sentiment on valuation. This value is then weighted by available currency supply to form the actual reward, so that the fewer coins there are, the lower the reward - and thus store cost - will be.
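
As a rough sketch of that weighting (the linear unfarmed-fraction weight is just an assumption for illustration; the point is only that fewer available coins should mean a lower reward):

    // Hypothetical: weight the sentiment-derived value by remaining supply.
    fn farming_reward(sentiment_value: f64, unfarmed: f64, total_supply: f64) -> f64 {
        let supply_weight = unfarmed / total_supply; // ~0.85 at launch, falling as coins are farmed
        sentiment_value * supply_weight
    }

    fn main() {
        println!("{}", farming_reward(64_500.0, 0.85, 1.0)); // early network: higher reward
        println!("{}", farming_reward(64_500.0, 0.10, 1.0)); // tight supply: lower reward
    }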

Important to note: before the balance number for currency supply has been reached (say 50 % or 10 %; both levels have been proposed), there has to be net farming for us to ever reach that level of unfarmed (since we will start out at 85 %). That means store cost has to be heavily subsidised (I prefer this, since it is a super good motivator for getting a lot of data uploaded early on) or farming reward heavily increased (in relation to store cost). This is true regardless of what farming algo is used.

Either we want to reach the balance value as fast as possible, and then balance around that value, or we want to gradually decrease the rate of approach to that number, so that we eventually end up there. The former would see a much faster rate of new coin issuance than the latter.

  • Does anyone have any compelling reasoning for either of these two options?

Basically, we want to achieve a better reward.

So, what is a better reward than the current one? I would say a better reward is one where we can be more certain that the reward will perpetually reflect the currency market value - as closely as possible. I think this is one of the most obvious achievements of this proposed system.

It is not as obvious that the following have been achieved, but they are at least intended.

A better reward, compared to the current one:

  • does not hide as much complexity
  • is not as locked to assumptions made at design time
  • is more resilient to unknown future changes
  • removes accidental complexity, such as piping of behaviour through storage
  • removes friction in the reflection of market valuation
  • removes side effects of the economy implementation
  • gives simpler and more efficient network management of the economy (less computation)
  • gives a more agile system, adapted to the broader aims of the network (computing etc.)
  • gives a currency that considers the entire value of the network, and not only the storage

Naturally, one of the first and most important things to have solid is the security.

We believe we have come a good way in designing something resistant to illicit manipulation, but we are not yet there, as we still lack knowledge of the magnitude of resources required.

I don’t believe we can ever say that it is 100 % resistant to illicit manipulation, since the designers of SAFENetwork are very clear that it cannot be said of the network itself (which is a very sound conclusion IMO). We aim to be as good as, or better than, the already existing protection. We don’t want to lower the bar.

Please, if you can point out where it has not yet been clear how any of the above is achieved, I’ll then focus on describing that.

I actually started making a contrast table, but I found it hard to be coherent and consistent in scope.

  • Does anyone have suggestions on how these systems can be contrasted in a table?

For one, this is not at all intended for Fleming. This is work initiated well ahead of a release, but it comes at this point since we are getting closer to tests using a network currency. I have wanted to work on this for years now, but it has not been close enough until now.

Since I believe (and also @dirvine has said) that it is one of the most important parts of the network, I think it is very much called for to spend a lot of time analysing and working with existing and alternative ideas.

I wouldn’t worry about pushing any dates back. MaidSafe is not going to pull in changes that are not necessary for Fleming. They have a perfectly clear view of what they need and need not do to release Fleming.

This work is actually done with the intention of saving MaidSafe a lot of time, by charting some territories ahead of time for them. If it turns out to be useful, well, that we don’t know yet, but the work will be there for them to use or not use.

Yeah, it’s quite cool IMO :joy:

I think perpetual connects to the whole idea of the network, and it is an auction, and a currency. It’s simple, and there is none of that dated and now overused “coin” - and actually no reference to cryptocurrency at all, which I think is good.

In the future, no one will be talking about cryptocurrency, just as we stopped talking about “motor cars” once horse-drawn carriages were gone - by then they were just cars. Eventually we will stop talking about electric vehicles, too, and they will simply be vehicles, since there will be no ICE (internal combustion engine) vehicles…

I think with large and ambitious projects like these, it’s important to realise how universal and global and omnipresent it is supposed to be, and go to the basics with the language, not get locked up in some niche nomenclature that will later be irrelevant.

I have more ideas on this actually… unrelated to SAFE Economy. There is one thing in specific that I think can be very useful for the ecosystem. But I will need to have the expert opinions from the MaidSafe team to be able to evolve that idea, check its soundness and so on.

It’s something that will piggyback on the work already done by SAFENetwork sections. I think it has potential to expand the possibilities for an ecosystem around the network that makes use of the value already generated by it, without the need to re-write, re-implement or re-verify. Can’t say more now, so maybe it doesn’t make much sense. But for sure, parsec (and the network) is amazing.

Yep, I think an AI developed to keep the network healthy would be the ultimate awesomeness. I’m pretty sure that will be possible one day. The complexity of StarCraft II, and Google’s AlphaStar mastering it, is a truly fascinating chapter on the way there: AlphaStar: Mastering the real-time strategy game StarCraft II - Google DeepMind

I played that game intensely myself many years ago, and absolutely loved it for its complexity, so I can really appreciate how amazing their achievement is.

This is a very interesting idea, and others have been poking around the same thing here I believe.

There are so many levels at which bidding / voting can be employed, and it all sort of connects to the idea of upgrading the system: a perpetual upgrade where users are constantly voting on the different options available for functionality. In the absence of an AI that can manage the network functionality and its upgrades, I think it is a very interesting idea.

Yeah, the same thing can be said for taking over a section. There can be collusion there as well.

A section today is said to have 7 elders. These 7 elders could contact each other and say, “Hey, let’s steal all the safecoin in the section.” Once they decided, it would take no time to do it. No real cost. Big wins in a very short time. Now, the value of their safecoin is of course at risk if people lose trust in the network. But that also applies to collusion on the bidding.

(Actually, when I look at that, it seems easier for elders in a section to collude to steal all the safecoin than to collude to move the reward through bidding - since the latter takes many, many more participants, over a much longer time, to succeed.)

I think there are factors that prevent both from happening, very similar ones actually.

12 Likes

Comments to a bunch of things said in this topic - Part II

Yeah, I think the idea is not solid: this idea that people who are willing to cheat the system would somehow be very loyal to the other cheaters when some opportunity to betray them and make short-term gains presents itself. If they are cheaters, I think it is very probable they will cheat the other cheaters, ie. defect from the collusion attempt, when the opportunity presents itself.

If you drum up a horde of cheaters of a size that can actually threaten the network, they are going to be one unruly pack, I believe. They’re going to be mercenaries; you will need to pay them for them to be loyal to you. They are loyal to no one and betray you the first moment they think they can gain more somewhere else.

If you need to pay them, well then it is very very expensive.

This is an aspect that no one who says collusion is certain to happen has even tried to reason about - and it is non-trivial to achieve.

Generally, I think that people who believe collusion is certain in this system avoid the costs entirely, and just don’t want to nuance their assumption with how the costs might influence the probability of collusion.

Everything in this network depends on collusion being expensive. Nothing is 100 % secured from collusion. So that is the fundamental question for everything in SAFENetwork: How. Expensive. Is. It.

You cannot look at such a problem without going into the details, charting all possible ways, and finding out what the magnitude of the costs really is, before making a statement either for or against the feasibility of collusion.

It’s just not a serious attempt to do meaningful work on the subject if you don’t get to the bottom of the costs. (The bottom being as far as one can go…)

That’s my opinion, it’s not serious, it’s just nay-saying or ego or knee-jerk reactions.

When I’m presented with facts, or at least sound reason, I take those and say THANK YOU, for providing me with a better view of the world. Even if those facts say that I am an idiot. (And that happens regularly; I’m sure those world-changing revelations will keep coming until my last breath.)

When there is not much attempt at reasoning, I generally find it to not be very constructive.

I think this is important as well.

I don’t believe this proposal is more complicated to implement than a predesigned algorithm for the reward. In fact, I think it is simpler, because you don’t need to spend all that time assuring the algo’s solidity. So that’s a big reason why I spend energy on this.

Of course there’s going to be a lot of time and work spent on any implementation for the SAFENetwork economy, but I do think there’s a good chance this one might be faster to implement, with a fair assumption of working well for an indeterminate time.

I’m not sure I’m fully understanding how you mean this to happen. I would be happy to know more, so if you feel like exploring this possibility further, that would be great.

Just wondering: did you mean that the door was left open for collusion in this proposal, and do you still think it is?

I think, on the contrary, that this aspect was considered, and some basic preventive measures proposed, while saying that more needs to be done. A complete 100 % proof of the utter unfeasibility of it has not been provided - just some initial reasoning about the magnitude of effort and costs required to successfully collude, merely scratching the surface of course. But, as I mention above, the same goes for collusion between elders in a section. There is currently no complete 100 % proof of the utter unfeasibility of it™. We don’t have the luxury of being able to provide such proof with regards to collusion protection, for anything in the network I believe. We can make valid or invalid, sound or unsound deductive arguments, as well as strong and weak inductive arguments, for or against the feasibility.

I would need to disagree that anyone has dismantled this proposal or found a loophole in it, if that is what you meant. Were you perhaps impressed by the confident statements that “it will happen” and “99.9 percent sure”, presented by some-would-say authorities (and not yet backed up)?

I am interested to know how the public reads these conversations, and whether they tend to base their belief on argument from authority (argumentum ab auctoritate) or on deductive and inductive argumentation.

Maybe I misunderstood what you said entirely, I apologize if so.

I hope that anything that eventually gets implemented in SAFENetwork, is only based on solid and thorough argumentation, to the best of everyone’s effort.

From https://safenetwork.tech/safecoin/#how-will-farming-work:

When a user of the network requests some data, for example by browsing a website, a number of things happen: First, the client software makes a request for the required data chunks. This message (a GET request) is then propagated across the Network and when the chunk is found there is a competition between the Vaults in that Section to deliver it to the Network where it will be routed back to the requester. The first Vault to deliver will have a chance of being rewarded with one Safecoin. This is described as a Farming Attempt.

So yes, it has for a long time been assumed that speed of delivery is an element in the farming reward.

Before PAC, I was playing around with another bidding idea, where the lowest bid would get the highest reward. One very nice property that I think would manifest is that home vault operators are heavily favored. Why? Large-scale operators compete on latency and bandwidth; they can out-compete home operators by being first to deliver.

But if the lowest bid receives the highest reward, home operators can squeeze the large-scale operators out. The large-scale operators are commercial and have to consider their operating costs. They need to offset them with the rewards, and thus require larger rewards than a home operator - at least over longer periods of time, so as not to go bankrupt. Home operators are basically already paying their costs, and only need to pay slightly more on the electricity bill for the additional hours they leave the computer on.

Thought it interesting to bring that up here as well, for the discussions.

I would love for it to be a way to favor home vault operators, via the design of the reward system.

I don’t want the commercial operators to be excluded or not viable, because it might be so that they are needed and would strengthen the network. But I want the decentralization of the network to be protected via the design.

Yes, this is an interesting thing actually. Anyone who participated early on will have LOADS more currency than later participants. Their main interest is to keep the network healthy, keep the trust in the network and the currency, keep the value of their fortunes, and keep the rewards low. I think these people (probably a majority of the community today) will be a heavy force to reckon with in preventing illicit manipulation of the bids.

Probably. This was kind of the first idea that popped up for achieving the goal.

To clarify how it’s intended to work:

It’s for every bid. Considering that you can change your bid with every GET request you respond to, if NB is on the move, it can move quite a lot during a day.

So, suppose NB is not moving at the time - say it is at 64500 during the day. You will of course only be able to deviate at most 1 % from 64500. Since NB is not moving, every bid you make will be at most 1 % from 64500.

But let’s say there is a general shift in sentiment, so many active bidders are deviating from NB, and NB starts to move. Well, at maximum speed, every NB you receive will be 1 % from the previous (in the same direction every time naturally).

The more GETs / chunk there are in the network, the faster it can move. The fewer GETs the slower it can move.

(This is something important to consider: does it constitute something desired, adding some positive properties to the system, or maybe the reverse? If negative, it could be an argument to attach bids to something other than the GET response.)
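
A tiny illustration of that maximum speed (assuming the 1 % cap compounds with every received NB):

    fn main() {
        let mut nb = 64_500.0_f64;
        for update in 1..=10 {
            nb *= 1.01; // every NB may deviate at most 1 % from the previous one
            println!("after {update:>2} max-speed updates, NB can be at most {nb:.0}");
        }
        // Ten consecutive max-speed updates reach ~71,248, about 10.5 % above the start.
    }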

Yep, absolutely. When we were first writing this proposal, we wanted to exemplify more of these, and then we decided to go for a short introduction with a very simple example, and to expand in subsequent posts.

Also, we both stated that there are many many ways to modify and improve these reward adjustments.

Interesting idea. We didn’t delve deeper into that. But IMO it is a fantastic subject to be thinking about, ie. voting / bidding on features that are already in the network.

Yeah I agree, there is so much that can be done this way.

One thing to be careful of though is what is available for voting. I mean, “kill the network” should not be an option of course, but there are less obvious things. It could become very complex in the end. Anyway, that is a bit off topic for now.

Yep, this was actually the initial idea, but for the first post a simpler example was used.

So, giving higher reward to bids below NB. Still rewarding closeness, regardless of whether a bid is above or below, but preferring lower.

The one concern I haven’t yet fully worked through is whether this would give an unnatural drop to the bottom. It seems there would be forces balancing it up.

But everything that needs a balance becomes tricky if it isn’t self-regulating, ie. if it depends on us hard-coding some ratio or expecting some certain numbers. Because without knowing how much pressure there will be on either end, I think it’s quite unlikely to strike a perfect balance (when nothing else is considered, of course)… or is it not?

But anyway, I think currently this is one of the best ways to put the greed at work on both the directions of pressure, thus keeping each other in check.

I think there is plenty of motivation for DDoSing elders as it is, for mere obstruction purposes. I think the effect of this actually speeding up NB is low, and the value of doing so is also low / hard to make use of. Any attempts to do that, it seems to me, would be dwarfed by the attempts to bring down the network, and so easily gobbled up by whatever protection prevents that from happening. Don’t know, that’s my initial feeling.

@mav already responded to this, but I wanted to clarify.

I envision this as a simple configuration. At its most basic form it’s just an input field in the node operator UI.

If you leave it blank, it will use a default strategy (like for example, always use previous NB as bid, or always use the most recent average of some selection of nodes that I think are skilled active bidders). If you fill in a value, that’s your bid that is included every time your vault responds to a GET.

Now, active bidders would most likely not want to sit by the UI with an eye on some charts on another screen, and update that value manually. Very tedious.

So, for example, the node operator software from MaidSafe would run a small local http server, and your UI (from MaidSafe, or someone else) operates against an exposed API. One of the endpoints would be /bid, where you can POST a bid update.

So, if you are an active bidder, it’s quite likely you want to employ some software to make some analysis, and automatically POST bid updates to the vault software API.
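
A minimal sketch of what such an automated update could look like (assuming the reqwest and serde_json crates, a local port of 8000 and the payload shape, all hypothetical; the /bid endpoint is the one described above):

    use std::error::Error;

    fn main() -> Result<(), Box<dyn Error>> {
        let new_bid = 64_950_u64; // whatever value the bidder's analysis produced
        let client = reqwest::blocking::Client::new();
        // POST the bid update to the local vault operator API.
        let resp = client
            .post("http://127.0.0.1:8000/bid")
            .json(&serde_json::json!({ "bid": new_bid }))
            .send()?;
        println!("vault accepted bid update: {}", resp.status());
        Ok(())
    }

Passive operators would simply never POST anything, and the default strategy would apply.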

The network is in control of the membership and the storage, right? But still, users are free to join and leave, and to connect storage of this and that size. That’s because the network manages the user input well.

The same could go for bids, I argue, so that the collective wisdom is tapped while still being managed under the constraints of the network.

Do you agree that these are similar things, and that both could be or not be managed by the network, while still being user input?

I would be interested in hearing more of how you see this system, just curious to hear of variations, there might be interesting things we can use there.

The “farmers could collude and lie…” part I think is not an absolute hindrance; I think that is just the starting point for anything in an autonomous network. And then we apply the things that make it unfeasible.

Yep, it is the basic concern I guess everyone has, me and @mav alike as we started working on this.

Many people so far have been suggesting a hybrid approach. I think that’s a very interesting path to explore.

I agree that being able to simulate it or reason about it is not necessary for it to work as intended.

I would wonder a couple of things:

  • Does Bitcoin work ‘as intended’?
  • If (when) SAFENetwork currency does not ‘work as intended’ will it give the same negative consequences as Bitcoin, or will there be other problems?
  • Can we know which these negative consequences will be?
  • Can we accept those negative consequences?

I think that if we believe the system might not ‘work as intended’ when we are not able to simulate or reason about the SAFENetwork economy, and that ‘not working as intended’ might equal negative consequences (and we do not know what they will be), then it is a very risky way of designing a system.

We could use Bitcoin as a precedent. They didn’t get it to work as intended, and they didn’t anticipate the negative consequences, and it still exists, so therefore we don’t need to either.

Seems like a risky way of aiming to produce something that needs to fulfil certain fundamentals, something that we want to be used worldwide.

If that’s theoretically the best we can do, then OK, I’ll budge and not bother more with this topic.

  • Does everyone believe that to be the case?

I think this gives a slight benefit to larger vaults. A larger vault would probably receive a higher rate of GETs, and thus have a better chance of adapting to market valuation changes by adjusting its bids - so statistically it should have a better chance of being close to NB, and earn more.

I don’t know how it affects their chances of moving the NB, compared to a vault with lower bid update rate capabilities, so I’m not including that aspect in this consideration. (Anyone having a guess?)

I think it is important that at every step in designing the SAFENetwork economy, a slight tilt towards favoring smaller vaults is weaved in.

This specific benefit might not be very important in isolation, but when all the different factors add up, the total effect could be very powerful, and we don’t want it to work against our fundamental aims for SAFENetwork.

It is possible that this is an argument to not use the GET responses as a vehicle for bidding.

But that could not be said for certain before all things have been considered in a final design, as it might be an OK compromise in the light of all aggregated factors.

13 Likes

I think that was the longest message I’ve ever read on here, wow. Maybe there’s a better format for these.

7 Likes

All the effort yourself and @mav put into this makes me wish you both were integrated with the CORE team and could be implementing this in parallel to their existing work, so that when the time comes a few adjustments could be made depending on how it has to hook in or any dependency changes :laughing: - but alas, you both probably have enough on your plates with the day job, as do most of us devs that take an interest in the SAFE Network. Appreciate the ideas you are putting forth though!

11 Likes

Ultimately an RFC. I think this one is gonna need all the effort the guys are putting in for sure. Probably a big debate to iron it all out. I am still not 100% with it all myself yet, but I am only one person. I do feel we could test this though with not too much effort during beta, so have the network alone mechanism and then a user bidding or perhaps automated vault bidding, in any case it is a load of work and I could not be more grateful to see this effort. What a community

18 Likes

Like I said @mav, way above my head, but maybe this helps a little.

The Ideal Auction - Numberphile - YouTube

Btw, should bidding not come at a cost, I mean staking SAFEcoins?

4 Likes

We’ve been thinking of developing some sort of game / test for the bidding idea but not sure yet about the direction. If anyone has ideas about how to gather data or test the bidding idea I’d love to hear.

5 Likes

Reminds me of game theory competitions in my days playing with genetic algorithms (late 80s). I never implemented this but they were interesting to learn about. (Genetic Algorithms in Search, Optimization and Machine Learning by Goldberg has a very good section on it.)

People would have different hand crafted algorithms compete in a software environment, which naturally leads to automatic optimisation and evolutionary algorithm strategies. In a bidding scenario I think this is a very suitable approach as I imagine that is exactly what would happen. Things like GAs are clearly an interesting option to try, amongst other ideas as David has already mentioned, in order to create winning strategies, or to find weaknesses in the environment itself.

I’ve long been looking for an ideal scenario to apply GAs to, and this is certainly one.

So, create an environment which can run competing algorithms within a simulation of the network environment of competing vaults. Automatically vary/evolve strategies and have them compete for rewards, or to stress the environment using a mix of different bidding strategies and ways of varying bid strategy, including self optimisation.
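
A minimal sketch of such an environment (heavily simplified assumptions on my part: the NB is a random walk, a strategy is a single “offset from NB” gene, and fitness is just closeness to NB; a real simulation would use the actual reward distribution and richer strategies):

    use rand::Rng;

    fn main() {
        let mut rng = rand::thread_rng();
        // 50 strategies, each a single gene: preferred offset from NB, within +/- 10%.
        let mut population: Vec<f64> = (0..50).map(|_| rng.gen_range(-0.1..0.1)).collect();
        let mut nb = 64_500.0_f64;

        for generation in 0..100 {
            // NB drifts at most 1% per round, standing in for market sentiment.
            nb *= 1.0 + rng.gen_range(-0.01..0.01);

            // Fitness: negative distance between a strategy's bid and the NB.
            let mut scored: Vec<(f64, f64)> = population
                .iter()
                .map(|&offset| (-(nb * offset).abs(), offset))
                .collect();
            scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());

            // Selection + mutation: keep the best half, refill with mutated copies.
            let survivors: Vec<f64> = scored.iter().take(25).map(|s| s.1).collect();
            population = survivors
                .iter()
                .chain(survivors.iter())
                .map(|&o| (o + rng.gen_range(-0.005..0.005)).clamp(-0.1, 0.1))
                .collect();

            if generation % 25 == 0 {
                println!("gen {generation}: best offset {:.5}", scored[0].1);
            }
        }
        // In this toy setup strategies converge on bidding the NB itself; more
        // interesting behavior (collusion, defection) needs a richer fitness function.
    }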

In my day computation was a limit. I nearly got to play with a Connection Machine parallel supercomputer to try and optimise a tricky oil industry problem, but today computation has obviously moved on and it may be feasible to simulate this on a PC.

It’s a big field, which is why I’m not sure bidding is a simplification - although as the network is an emergent system I think it is going to be complex regardless of the reward system.

8 Likes

gotta give a huge kudos to @oetyng and @mav for developing and thinking so deeply about this idea! Like @happybeing it really makes me think of things I studied in my youth when I was just “doing what I love” but later was like omg why did I pay so much money to study something so useless. Now I am inspired to go water that withering bush and see if it will still produce fruit!

10 Likes

@oetyng and @mav, this is absolutely fascinating. Thanks for working through this as far as you have.

The potentials here are exciting. Again, it only becomes possible on this unique network/vault structure, so we’re definitely in terra incognita.

7 Likes

Hopefully this helps. If the Network is broadly distributed in terms of farmers at launch, then this sort of collusion is less of a concern. However, it is more likely that farming will be relatively concentrated at launch. For example, what percentage of the populace would know about and be ready to farm from day one? Think about the type of person who would be primed for this and have the resources (e.g. informational, financial, etc.) to readily participate. Given this, first-mover advantages would actually apply and carry weight.

Say farming is relatively concentrated at launch. If groups arose that provided a significant chunk of farming, they could enforce their own rules regarding bidding. They may, for example, seek to artificially keep the reward/price low so as to deter others from entering the market. This assumes that farming increases in efficiency at scale. Many individuals would either stay out of farming because of the price suppression, or join these pools because it increases their likelihood of seeing some reward. This is what I mean by colluding to keep the size of the pie small in order to have a bigger slice.

Put another way, a group of HBS students were asked whether they’d rather live in a world where they earned $100K and everyone else earned $50K, or one in which they earned $250K and everyone else earned $200K. They chose to live in the first world, because we (erroneously) perceive wealth in relative terms.

Since Safecoin can also be exchange traded, how do market forces impact this thinking? Could the market (i.e. the mechanism of exchange) determine Safecoin’s price, while the Network need only determine the conversion rate for purchasing resources and receiving rewards? In which case, the conversion rate would depend on supply/demand for Network resources, which in turn can be agnostic of market price.

Otherwise put, why can’t the Network simply control the exchange rate such that the reward/price for Network resources fluctuates based on supply/demand of said resources? In such a model, human intervention (i.e. bidding) is not necessary. This would of course require setting an initial exchange rate SAFE:PUT and laying down rules for understanding Network supply and demand. You’ve done some interesting thought experiments around that, like Polls: How much will you spend? How much storage do you need? etc and Exploration of a live network economy.

The allocation of rewards for providing Network resources could either be fixed (x supply guarantees y reward) or probabilistic (x supply provides z probability of receiving y reward). Although this approach could still see concentration in farming supply due to sheer economies of scale, it at least would remove the ability of individual entities to directly manipulate reward value and allocation.

6 Likes

I perceive there is some reticence against introducing a human political influence into the network, and even if I understand the merits of the bitcoin-like automated approach, I believe we still depend on MaidSafe’s judgement, as well as that of anyone building the future updates. Honestly, I believe that allowing the network to interact with humans, merging both kinds of intelligence, will create a system that is much more adaptable and future proof.

I find the idea of using GET events to express the farmers’ opinion on the network really interesting, be it on the rewards or other matters.

This voting system could even be useful for updating the network or making some kind of governance layer.

I am in love with the Tezos upgrade-by-consensus system, so I might be biased.

10 Likes

I suspect that there need to be network-determined bounds (upper and lower) if there is to be bidding. What happens when the network comes close to the coin production limit? The bidders may have some indication of this if they monitor the global supply, and may bid accordingly, but those that don’t choose to track this may be taken advantage of - especially if the supply is pushed hard toward the ceiling.

Hence I think some sort of hybrid approach is needed - for the sake of giving the network the most information possible but also to conservatively manage the network.
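
As a tiny sketch of what such network-determined bounds could look like (deriving the ceiling from the remaining coin supply is purely an illustrative assumption, as are all the numbers):

    fn clamp_nb(bid_driven_nb: f64, unfarmed_fraction: f64) -> f64 {
        let upper = 100_000.0 * unfarmed_fraction; // network-determined ceiling, tightening as supply runs out
        let lower = upper * 0.01;                  // and a matching floor
        bid_driven_nb.clamp(lower, upper)
    }

    fn main() {
        // Near the coin production limit the ceiling tightens automatically,
        // protecting bidders who don't track global supply themselves.
        println!("{}", clamp_nb(64_500.0, 0.85)); // within bounds: 64500
        println!("{}", clamp_nb(64_500.0, 0.10)); // clamped down to 10000
    }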

3 Likes

Yeah, I think one interesting result of this is that your ability to participate in voting increases with the amount of popular data you hold.
So, the more data you hold, and the more popular it is, the more GETs you receive, and with every GET you are able to include your votes.

So, basically, the more valuable you are to the network, the faster vote updates you get to do. Quite cool IMO. [had to edit that to be more precise - it can be a very different thing]

Now, it is not entirely clear at the moment how valuable it is to have higher rates of voting. But one thing at least is that you will be able to follow market sentiment better (less delay), that way having a better chance of being close to an NB when it arrives, thus getting higher rewards.


Reward distribution graph

I was playing around with using a Probability Density Function for reward distribution.
I made a simulation at Desmos that you can find here: PAC Bid reward | Desmos

The simulation allows you to loop through the size of a section (60-120 nodes) and watch their (somewhat) random bids and rewards plotted as (x,y)-coordinates, with the x-axis being the bid and the y-axis being the reward.
Remember that the Neighbour Bid (NB) is what they want to get close to, and the NB is then split up according to the reward distribution (the sum of all rewards plotted will be the NB).

There is a slider for the NB as well.

If you want to try a steeper or flatter distribution curve, go down to the Probability Density Function folder and adjust u with the slider.

There are a couple of other bid distributions that can be used as well, where the majority go above or below NB. The one that is used has a large part centered around NB - still quite a few out at the edges though. They all deviate at most +/- 10 % from NB.


Here are some notes from when I implemented it in code:

    // Sorting bids into exponentially differentiated buckets:
    // take the diff between bid and NB
    // pipe it through tanh (a zero-centered "sigmoidal" function)
    // sort into buckets using a PDF function
    // the bucket represents a share of the reward
    // every participant in the bucket splits that share between them

    // The aim of using bid-NB diffs is to equally favor closeness, regardless of sign.
    // The aim of piping through tanh is to map all possible bid-NB diffs into the PDF argument range.
    // The first aim of the PDF is to make reward proportional to closeness.
    // The second aim of the PDF is to establish an exponential and continuous distribution of reward.
    // The aim of sharing in buckets is to keep bids from clustering.

    // The collective results of the above aims are:
    // - promotes keeping close to the common sentiment (favors passive bidders)
    // - promotes unique bids by decreasing the reward per bidder as bids cluster in buckets (favors active bidders)
    // - promotes defectors when there is collusion
    // -- (ie. a close participant is rewarded most when all the others are far away)

    // ***
    // Higher rewards attract more participants,
    // but skewing the highest reward away from closeness promotes bid movement - which eventually affects NB, and through that attracts or repels participants.
    // So... it seems skewing is just an indirect way of directly weighting reward?
    // The difference is that skewing promotes those who at that time are helping the network,
    // while directly adjusting rewards for all, relatively, rewards those who are less aligned with network needs.
    // The skewing does not impact the NB as fast as the weighting does.
    // So maybe the best result is achieved by combining reward weight with distribution skew,
    // so as to rapidly affect NB, as well as promote those who are aligned with network needs.
    // (Could the combination of the two reinforce the effects too much?)
    // The bucketing is more attenuated when NB is lower.
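
And here is a minimal, runnable sketch of that bucketing (assumptions not in the notes above: a plain Gaussian stands in for the Desmos PDF, the bucket edges and midpoints are hand-picked, diffs are scaled before tanh so that small deviations spread out, and empty buckets simply forfeit their share):

    fn main() {
        let nb = 64_500.0_f64;
        let u = 0.25; // PDF spread; smaller u = steeper, harsher distribution
        let bids = [64_100.0, 64_480.0, 64_505.0, 64_950.0, 65_200.0];

        // Diff between bid and NB, piped through tanh into (-1, 1).
        let diffs: Vec<f64> = bids.iter().map(|b| (100.0 * (b - nb) / nb).tanh()).collect();

        // Sort into buckets by |tanh(diff)|, so closeness counts regardless of sign.
        let edges = [0.05, 0.2, 0.5, 1.0];
        let bucket_of = |d: f64| edges.iter().position(|&e| d.abs() <= e).unwrap();

        // Each bucket's share of NB is proportional to the PDF at its midpoint.
        let mids = [0.025, 0.125, 0.35, 0.75];
        let pdf = |x: f64| (-x * x / (2.0 * u * u)).exp();
        let total: f64 = mids.iter().map(|&m| pdf(m)).sum();

        // Every participant in a bucket splits that bucket's share equally.
        for (i, bid) in bids.iter().enumerate() {
            let b = bucket_of(diffs[i]);
            let members = diffs.iter().filter(|d| bucket_of(**d) == b).count() as f64;
            let reward = nb * pdf(mids[b]) / total / members;
            println!("bid {bid:>7.0} -> bucket {b}, reward {reward:>8.2}");
        }
    }

With these numbers, the two bids within ~0.03 % of NB split the lion’s share, while the three far-off bids each get a pittance - the harsh distribution argued for earlier.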
4 Likes

Wow, you’ve been busy @oetyng. A lot to go through here since my last post. A few comments/thoughts:

The network doesn’t need to know “why?”; it only needs to know whether the farmer resources (storage, bandwidth, latency, compute, elder counts, etc.) are increasing, decreasing, or constant/steady, and what the current quantity is relative to system load or other targeted setpoints.

More is not necessarily better if it is just noise from farmers playing games. A “hard-coded” farming rate algorithm can be adaptive and flexible.

It might be fine to start with. In my view all major resource categories required to run the network should have their own reward rate. These include storage, bandwidth, latency, memory, and compute. In other words, if there is a resource proof for some farmer/vault performance trait, then the network should be offering a price for it.

True. Specifying a target growth rate from the beginning is the naive approach, but it offers a facade of predictability that is attractive to those in the crypto space, and offers a simple way to motivate the network pricing algorithms. The optimal way is to have a means of objectively computing the current network growth rate, and then vary all inputs to the pricing function in real time in order to maximize growth at this instant. In the first scenario the best you will ever achieve is what you’ve selected as your setpoint, and you’ll likely fall short of it. You may not care if your goals were high enough (“shoot for the moon, at least you’ll hit the stars” etc.). In the second case, you’re adaptively determining what the absolute best is, so “hakuna matata”. Regardless, having a bidding process driven by the farmers is not the way to make any of this work. Instead, you would want to give the bidding power to the network. The network could have a range of “ask” prices for resources, and farmers would reactively bid to accept those prices for a certain amount of network time, or leave. In a sense this is a fuzzy “take it or leave it” approach.

Not true. It is biomimetic and mathematical. Consider Fibonacci’s rabbits; they are a perfect analogy for section splits. It’s just what happens when you have successive binary divisions with no loss. That’s why it’s considered optimal growth in living systems. A few billion years of evolution have shown Fibonacci growth to be favored for the survival of living things. No need to reinvent the wheel here for synthetic life; just include it as part of the design. From my perspective a target growth rate is how SAFE establishes its own environment. We know that network growth and size are critically important to the success of the network. Some security issues that would require a lot of effort to mitigate in a small network become insignificant for a large network. Specifying a targeted network growth rate from the beginning is a simple way to give purpose to all the pricing algorithms that determine PUT and GET costs. A crude analogy is the cruise control in an automobile: you set the desired speed, and the throttle is increased or decreased to match the wind load or hills you encounter.
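
A toy sketch of that cruise-control idea (the gain, the target, and the linear “supply response” are all illustrative assumptions): a proportional controller nudging the offered reward so that measured growth tracks the target growth rate.

    fn main() {
        let target_growth = 0.01; // desired fractional growth per period (the "set speed")
        let gain = 100.0;         // proportional gain; a tuning assumption
        let mut reward = 1.0;     // current reward multiplier (the "throttle")
        let mut nodes = 10_000.0;

        for period in 0..10 {
            // Toy supply response: higher reward attracts proportionally more nodes.
            let growth = 0.005 * reward; // stand-in for the real, unknown response
            nodes *= 1.0 + growth;
            let error = target_growth - growth;
            reward += gain * error; // open the throttle when growth lags, ease off when it overshoots
            println!("period {period}: nodes {nodes:.0}, reward {reward:.3}");
        }
    }

In this toy run the reward converges on whatever level the (unknown in reality) supply response needs to hit the target, which is the whole point of the setpoint framing.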

I think that as a general rule we need to pick the right battles and always give SAFE the high ground. For example, consider two options for a perpetual auction. A) The farmers bid to determine what the farming reward should be and SAFE needs to give them what they ask for, or B) SAFE decides a range of prices at different volumes and the farmers bid to accept one of those or leave. For option A, no solid constraints will protect you from edge cases. In contrast, option B keeps SAFE in control while also maximizing farmer participation beyond the non-fuzzy take-it-or-leave-it scenario.
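
A toy sketch of option B (the tier volumes, prices and the farmer’s floor are all illustrative assumptions): SAFE publishes a small schedule of ask prices, and each farmer either accepts a tier or leaves - the network never pays above its own schedule.

    struct AskTier {
        volume_gb: u64, // resource volume the network wants filled at this price
        price: f64,     // reward per unit the network is willing to pay
    }

    fn main() {
        // The network's ask schedule: more volume wanted only at lower prices.
        let asks = [
            AskTier { volume_gb: 1_000, price: 1.00 },
            AskTier { volume_gb: 5_000, price: 0.90 },
            AskTier { volume_gb: 20_000, price: 0.80 },
        ];
        let farmer_floor = 0.85; // lowest price this farmer will accept

        // The farmer takes the largest tier whose price still clears their floor,
        // or leaves if none does.
        match asks.iter().filter(|t| t.price >= farmer_floor).last() {
            Some(t) => println!("accept: {} GB at {:.2}", t.volume_gb, t.price),
            None => println!("leave: no tier clears this farmer's floor"),
        }
    }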

Yes, see above. Non-linear controls optimization, multi-objective optimization to maximize the current growth rate or other objectives, subject to the constraint that it cannot exceed a target growth rate etc. Possible to eliminate the constraint and let the network growth rate be unbounded, but might not be prudent…

No.

No. I just think a framework where the farmers have direct control over the pricing is not as beneficial to the network as one where the network directly controls the price.

None of those things matter with regard to the farming reward. The network can’t offer to go to one’s home and fix the computers or restore power (yet :wink: ). All it can do is raise the price it offers higher and higher to incentivize as much participation as possible. If those scenarios happen, farmers aren’t going to be sitting at their computers demanding more safecoin from the network before they come back online. They won’t be online, period. The network always has to be operating, waiting, keeping all the data safe and secure. Which is why it needs to be in direct control of pricing, in coordination with all its other tasks, and the only farmer-provided information it can really count on is resource availability - right now.

11 Likes

I think there’s some value to knowing why a node has departed. If the network is going to look after itself it could do that best with high quality communications from the participants. How that exact messaging is done, I dunno yet. Lots of options.

Should the network only value things it can measure?

This touches on a very important point - promises. Bitcoin promises digital scarcity (in this case 21M coins max, but that’s just an implementation detail). Basically everything else in the design of bitcoin stems from the promise of digital scarcity. That’s their core unique offering. The implementation of difficulty adjustment periods, mining, block times, the fee market etc. all exists only because of the scarcity promise.

What promises should SAFE be making? To my thinking the key promise is Perpetual Data. That’s unique to SAFE. Nothing else offers that. So the economy should be designed to give confidence to that feature. This matters because a fixed growth rate of resources is probably a stronger promise for the goal of perpetual data than a variable growth rate. I think fixed growth rate probably gives sub-optimal growth, but it does increase confidence in the promise.

Digital scarcity is another promise being made by SAFE. Is there a potential conflict between these two promises? How can we address that? Who decides?

On the topic of PAC, the promises become … weaker? stronger? It’s a really hard question to answer.

I don’t use storj or IPFS because the promise of data retention is too weak. The growth of SAFE is going to be very strongly tied to the promises it chooses to make.

I think it’s a good idea for us (both sides of the debate) to establish:

  • is fibonacci growth the right growth for SAFE?
  • would bidding evolve into fibonacci growth?
  • if bidding results in different growth why is that better or worse than fibonacci growth?

The simple argument I would start with is that data is growing exponentially, not at a Fibonacci rate. So why use Fibonacci growth for the network?

Just testing the waters here, should people decide the growth rate or the network? Maybe another way to ask the same question is what’s more important, cheap abundant storage or a predictable growth rate?

What are the edge cases? Genuine question.

I feel a dystopia meme is needed here…

I don’t think having the network in control is necessarily better. If the world wants to migrate to SAFE asap the network should not be able to say ‘wait a sec’.

A fixed algorithm is necessarily exclusive rather than inclusive. I lean toward inclusive every time. Yeah we’ll have to include the malicious people but I accept that (kinda the point of SAFE isn’t it).

Which framework is more beneficial to the end users? A fixed algorithm or bidding? Really tough question I know, because it’s about security as well as growth, so maybe we should also explore how fast the network can grow before that growth becomes insecure. Is slow growth more secure than fast growth? Is growth correlated to security at all? Why is fixed growth desired? This is a big zoom-out on the topic but I think it’s needed. Maybe I’ll expand on this later.

I don’t want to benefit the network, I want to benefit users. They feed into each other but in the end I have confidence that users are always in a better position to address their problems than the network is. Why do users start using the network in the first place? As a way to address their problems. The network is for the users, not the other way around.


Hopefully this is a coherent response but I’ll have a deeper think about it and come back to you with some more strongly distilled ideas :slight_smile:

7 Likes