Dev payment rewards structure

I think it could work in an automated way, similar to EOS. MaidSafe have quite a few simulation tests under their belt now, and these could be used to spin up test networks, or even run on a live network, either by a particular kind of vault or perhaps as another persona of a vault (I think I’ve heard the term “sacrificial vault” from David before). If the core changes optimize above a certain benchmark, the code could be accepted and then adopted by a percentage of regular vaults over time. To be fair, I think David Irvine had this idea before Dan Larimer (PS: I’m not a fan of Dan). I think something like this could potentially work.
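For illustration only, here’s a rough sketch of that gating idea in Rust - the names, figures and the `run_on_test_network` helper are all invented, not anything from the MaidSafe codebase:

```rust
// Sketch: gate a core change behind a benchmark run on a sacrificial
// test network. All names and figures are invented for illustration.

/// Result of deploying a candidate build to sacrificial vaults.
struct BenchResult {
    ops_per_sec: f64,
}

/// Stand-in for spinning up a test network and measuring the build.
fn run_on_test_network(_candidate_build: &str) -> BenchResult {
    BenchResult { ops_per_sec: 1_250.0 }
}

fn main() {
    let baseline_ops_per_sec = 1_000.0; // current live-network figure
    let required_gain = 1.10;           // must beat baseline by 10%

    let result = run_on_test_network("candidate-v0.2.0");

    if result.ops_per_sec >= baseline_ops_per_sec * required_gain {
        println!("Accepted: roll out to a percentage of regular vaults");
    } else {
        println!("Rejected: below the benchmark threshold");
    }
}
```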

1 Like

The time aspect you bring up is interesting to consider.

Which type of contribution is more valuable?
A) new code with fancy features and higher performance, or
B) old code that has stood the test of time and been shown to be secure, reliable, and decent.

Perhaps some concept analogous to how nodal aging is used for farmer rewards could be used to assess a dev’s code quality and performance, taking all factors into account. Farmers have “proof of resource”, maybe coders/devs would ascribe to a “proof of improvement” or “proof of safeness” type set of metrics. Not that I can offer a suggestion as to what those might be right now…

3 Likes

OK! Thanks, will have to search for those topics where chadrickm writes about it.

This CollectiveOne you linked, I couldn’t see anything in that source code that deals with the areas I list. But I was checking through my phone, so I wasn’t 100% thorough.
But on their site I saw a video showing the step when contributions are rewarded.
Basically, it was one person who just transferred tokens to a single person, or to a group which then votes among themselves on how it should be distributed.
So that’s basically one of my naïve suggestions.
Right now, without even scratching the surface, it seems like a reasonable place to start.
But I wonder if it is a good enough solution to settle for, to ensure sustainable development and maintenance of the SAFENetwork source code.

1 Like

Let’s say MaidSafe Foundation - a charity - is the holder of the wallet to which 5% of farming rewards are sent.
They would then manage payouts to core contributors according to the distribution that they themselves have voted on.

Let’s say there are 500 contributing individuals. I doubt any single one will know about the work of all others, so they can’t possibly do any meaningful voting there.

Perhaps there should be groups formed according to the knowledge the coders have about each other’s work.

As a basic example, all contributors to routing lib would probably know at least something about each other’s work.

These groups are then allotted an amount from MS Foundation, to be distributed among them according to their own votes.

This is probably a feasible way to start it.
It’s heavily reliant on a central authority though.

1 Like

Yes. I think you’ve outlined a good baseline methodology. However, it would be nice to also brainstorm alternatives that require less human intervention…

I think this also brings up a needed app to help facilitate this, i.e. GitSafe or SafeHub. Aren’t we going to need a github clone built on Safe in order to help facilitate code commits? Maybe the SafeHub feature is lower level than an app. I could also see it as a source code repo for vaults to pull updates from. Regardless, this would reduce the human intervention to merely accepting a commit. After that, any number of metrics could be used to deliver rewards based on the qualities of the particular commit and the trust level of the dev who submitted the code.
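To make the idea concrete, here’s a hypothetical sketch of a vault polling such a repo for an approved update - `fetch_latest_approved` and the version scheme are made up for illustration:

```rust
// Sketch: a vault polling a SafeHub-style repo for an approved update.
// The helper and version scheme are hypothetical.

/// Semantic version as (major, minor, patch).
type Version = (u32, u32, u32);

/// Pretend this reads the newest commit that passed approval
/// from the on-network code repository.
fn fetch_latest_approved() -> Version {
    (0, 3, 1) // stand-in value
}

fn main() {
    let running: Version = (0, 3, 0);
    let latest = fetch_latest_approved();

    // Tuples compare lexicographically, so (0,3,1) > (0,3,0).
    if latest > running {
        println!("Newer approved build {:?} available, pulling update", latest);
    } else {
        println!("Already up to date");
    }
}
```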

2 Likes

Is this so bad?

One could argue that the code is centralised too. Everyone is running the same code, albeit maybe different versions/revisions. The code base is (currently) centralised on a (maybe distributed in future) “github” style of repository. Someone, some group, or an “AI” has to approve changes to the code before the changes can be incorporated.

So in light of this is it so bad that a foundation (with advisers who could be core devs as well) determines the payments?

1 Like

In the short term, definitely not bad, because in reality the whole development is centralised to one company, and they have the maximum level of trust from the community.

In the long term, theoretically, anything can happen. What was once a healthy government can erode into corruption.

But the solution doesn’t have to be decentralisation in the way we solve data storage in the network - even though it would be great if that were possible.
A democratic process can give it some robustness too.

Or there can be rigid standards within the foundation, as to how power is distributed and handed on.

The last suggestion is an old and proven way to handle it. But as we can see with Svenska Akademien (the Swedish academy responsible for awarding the Nobel Prize in Literature, founded 1786), even these very rigid foundations can eventually crumble under corruption. (For those who don’t know, there’s been a lot of chaos there lately, in the wake of #metoo among other things.)

5 Likes

Yes good points. It was an honest question I was asking.

Maybe a distributed group can be built up that has to come to agreement (not 100%, but a good majority) on which changes are accepted and which are not. It will be, or should be, possible to set up multi-sig wallet IDs so that a majority of sigs must agree before coin can be sent.
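As a rough sketch of that m-of-n idea (counting names rather than verifying real signatures, so purely illustrative):

```rust
// Sketch: an m-of-n approval gate before any payout can be signed.
// Purely illustrative; a real multi-sig wallet would verify signatures
// cryptographically rather than count names.

use std::collections::HashSet;

struct MultiSigWallet {
    signers: HashSet<String>, // the n authorised approvers
    threshold: usize,         // m signatures required
}

impl MultiSigWallet {
    /// A payout goes through only if a good majority of the
    /// authorised signers have signed off on it.
    fn can_release(&self, signatures: &HashSet<String>) -> bool {
        let valid = signatures.intersection(&self.signers).count();
        valid >= self.threshold
    }
}

fn main() {
    let wallet = MultiSigWallet {
        signers: ["alice", "bob", "carol", "dave", "erin"]
            .iter().map(|s| s.to_string()).collect(),
        threshold: 3, // 3-of-5 majority
    };

    let signed: HashSet<String> =
        ["alice", "carol", "dave"].iter().map(|s| s.to_string()).collect();

    assert!(wallet.can_release(&signed));
    println!("Majority reached, payment can be sent");
}
```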

4 Likes

We just have to be conscious that any evaluation of merit is subjective and can never be objectively defined.

As humans will be the primary users and their values are subjective/different, we cannot hope to distil this into a check list which can be automated. Even if it were possible, it would be gamed to destruction.

While committees have many flaws, in some cases they are unavoidable. I think some evaluations of merit can follow a guide, but it must surely always require interpretation.

That’s a valid position, but no reason for us not to try to build something that works as well as a committee, but which can avoid the kind of problems where committees get ‘gamed’, or simply stop functioning equitably.

The problems we see in many areas of administration, governance and corporate operations arise because the systems we build to deal with this kind of problem don’t scale well. As things grow, those within such systems lose touch with the people affected by them, and those in control - committee, councillors, government, corporation - end up with power but can’t see the effects of their actions, let alone manage to obtain the desired outcomes. They may then descend into short-term selfish gain, nepotism etc. because they can’t be held accountable.

What if the kind of ideas that gave rise to SAFE, and indeed to Project Decorum, can show that there are things that scale and retain accountability, or perhaps can work on a smaller scale without being taken over (à la Facebook, political parties, company directors etc.)?

One of the reasons things centralise is apathy, and also lack of time. Not everyone is inclined to get involved in governance, but it seems we are willing to create and maintain social relationships, so if technology like Facebook can harness that for its purposes, it seems at least possible we can use it for governance such as this.

That is a long shot, but it feels right, and it’s a hell of an opportunity to improve how society functions. This would be a good use case to trial those ideas I think, though there are others.

5 Likes

It of course still boils down to how we approve changes to the core code. It is here that we can also determine the worth of the developer’s work to the network, and from that her/his payment is determined.

2 Likes

Couldn’t we decouple these two operations? As a first step, a committee is solely responsible for approving code, but certain objective measures would determine the payment scheme automatically. This would seem to lessen the tendency towards corruption, since the committee wouldn’t necessarily be able to determine the rewards that will be earned by a code commit, due to the run-time implications.
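A minimal sketch of that decoupling, with invented metrics and weights - the committee only sets the `approved` flag, and the payout falls out of the measurements:

```rust
// Sketch: approval and payment decoupled. A committee flips the
// `approved` bit; the reward is then computed purely from objective,
// machine-measured quantities. Weights are made-up placeholders.

struct CommitMetrics {
    approved: bool,     // set by the human committee
    perf_gain_pct: f64, // measured benchmark improvement
    bugs_closed: u32,   // long-standing issues the commit resolves
    loc_touched: u32,   // size of the change (not its worth!)
}

/// Reward in (hypothetical) token units. The committee never
/// chooses this number; it falls out of the measurements.
fn reward(m: &CommitMetrics) -> f64 {
    if !m.approved {
        return 0.0;
    }
    10.0 * m.perf_gain_pct + 50.0 * m.bugs_closed as f64
        + 0.01 * m.loc_touched as f64
}

fn main() {
    let one_liner_big_fix = CommitMetrics {
        approved: true,
        perf_gain_pct: 0.0,
        bugs_closed: 1, // the long-standing intermittent bug
        loc_touched: 1,
    };
    println!("payout: {:.2}", reward(&one_liner_big_fix));
}
```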

This may sound far-fetched, but I still think with a little thought we could set up a system where the network would decide what code to approve, and how much to reward, all on its own. It’s an autonomous network after all, right? :wink: Self-adaptation/evolution anyone?

3 Likes

While the payment amount could be decoupled/automated, the actual fact of payment is dependent on the code being accepted into the code base.

But how is the “worth” of the code development to be determined? Can this be automated early on? I expect this would require some sort of AI that has been educated by previous human decisions on the worth of code changes.

For instance, a one-line code change could be trivial (say a UI change to an input object’s position) - small payment - OR it could be major if it fixed a very long-standing intermittent bug in one of the protocols - larger payment.

How does an automated system work that out when the differences are not so obvious?

This is definitely a goal that David has expressed a few times now. It will not be easy, and I doubt it will be possible early on. The automated testing of a new version may be a lot easier, and that may provide input to what payment is due; this then may eventually help determine an automated system of code acceptance. But as always, any automated code acceptance could allow a previously unseen attack vector to be introduced deliberately. At least having a group of approvers who do code review has a better chance of catching it.

We will see what develops in that area. At this stage I’d be saying that code approval has to be done by some (decentralised?) group early on and the payment determined from the worth of the changes approved. Now this could be a separate group that is informed by the testing/approving group.

2 Likes

TLDR;
This is the heart of the matter, isn’t it? In order to do this somewhat objectively we would need to have a set of objective performance metrics and constraints. @mav has started to list some of these in another thread (Network Health Metrics - #7 by jlpell - Routing - Safe Dev Forum). After a single code commit, the network would need to run through a set of diagnostics to see if the metrics have improved. How one defines “improvement” with regard to multiple objectives is often handled by way of “Pareto optimality”.
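A small sketch of what a Pareto-dominance check over such metrics could look like - the metric list here is invented, not @mav’s actual set:

```rust
// Sketch: "improvement" across several network health metrics via
// Pareto dominance. A candidate is accepted only if it is at least
// as good on every metric and strictly better on at least one.

/// Higher is better for every entry, e.g.
/// [throughput, churn tolerance, routing efficiency].
type Metrics = [f64; 3];

fn pareto_dominates(candidate: &Metrics, baseline: &Metrics) -> bool {
    let no_worse = candidate.iter().zip(baseline).all(|(c, b)| c >= b);
    let strictly_better = candidate.iter().zip(baseline).any(|(c, b)| c > b);
    no_worse && strictly_better
}

fn main() {
    let baseline: Metrics = [100.0, 0.95, 0.80];
    let candidate: Metrics = [110.0, 0.95, 0.82];
    assert!(pareto_dominates(&candidate, &baseline));
    println!("Commit improves Pareto optimality; keep it");
}
```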

Earlier I mentioned “dev-age” as analogous to nodal-age, for scaling a dev’s reward in an analogous way to how farmers are rewarded for GETs (this is a bad idea if taken too literally, because it would place too much strain on devs to always be a dev’ing resource); instead, a finer-grained approach could be used for the code commits themselves. In that case each commit to the code base would be analogous to a vault node joining the network, and would start out at a “codal-age” of 0. Any commits that lead to increased Pareto optimality would slowly increase their codal-age; commits that hurt Pareto optimality would get rejected.
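And a sketch of how a commit’s “codal-age” might tick up, with invented thresholds:

```rust
// Sketch: "codal-age" for commits, by analogy with node ageing.
// A commit joins at age 0 and ages only while it keeps surviving
// Pareto checks on the testing network. Thresholds are invented.

struct Commit {
    id: String,
    codal_age: u32,
}

impl Commit {
    /// Called after each round of sandboxed testing.
    /// Returns false when the commit should be ejected.
    fn assess(&mut self, improved_pareto: bool) -> bool {
        if improved_pareto {
            self.codal_age += 1;
            true
        } else {
            false // hurt Pareto optimality: reject outright
        }
    }

    /// "Elder code": old enough to be offered to real vaults.
    fn is_elder(&self) -> bool {
        self.codal_age >= 5
    }
}

fn main() {
    let mut c = Commit { id: "abc123".into(), codal_age: 0 };
    for _ in 0..5 {
        c.assess(true);
    }
    assert!(c.is_elder());
    println!("{} promoted to elder code", c.id);
}
```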

An “unstable” or “testing” sand-boxed portion of the network could be used to check these commits; users who want to help with this could install a “teacher” or “learner” persona. Perhaps a single commit needs to become an “elder code” in order to be considered stable, and when a certain number of new elders appear, a general update is made available for the human node operators if they want to update on the real network.

This still doesn’t necessarily solve the problem of time-delayed security flaws and other sneaky malicious code, or hard-to-spot security holes introduced unintentionally. In what I described above, these harder problems fall under the “constraint” category… as in, there is a general constraint on code commits that “no code is allowed to insert a security flaw”. A first look at a commit by the human committee could help filter these until the AI takes over :robot: . I also think there are a lot of general security tests that could be programmed in early on to help the human overseers, such as checks for no out-of-band communications, communication audits, and other generally recognized good security checks/practices that the humans are going to naturally check for. Having these specified for all to see may allow them to be “gamed”, but maybe that’s where some randomization and indirection comes in, since all of the teachers/learners should be able to run a random version of the code, yet nothing is allowed to break.

In principle, it would seem that we should be able to construct a methodology that evolves the code base in a manner analogous to how the network operates and grows, therefore keeping the source code “safe”. I see the SafeGit commit database as essentially analogous to datachains… with catastrophic code bugs caught at a late stage being analogous to a network reboot (but ideally caught early enough by the “teachers” so as not to require a non-analogous network reboot :wink: ).

2 Likes

I’m citing myself here.
We can have code that determines some things about what new code does. For example measuring performance improvement.
But that is a tiny fraction of all things that new code might be intended to do. Tiny tiny fraction.
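To make that concrete, here is roughly the whole extent of what a machine can judge today - a timing harness comparing old against new, with everything else the commit was meant to do invisible to the check (all code here is illustrative):

```rust
// Sketch: about all we can judge mechanically today - time the old
// code, time the new code, compare. Whether the change is correct,
// safe, or worth anything to users stays invisible to this check.

use std::time::Instant;

/// Average seconds per run over `runs` repetitions.
fn time_it<F: Fn()>(f: F, runs: u32) -> f64 {
    let start = Instant::now();
    for _ in 0..runs {
        f();
    }
    start.elapsed().as_secs_f64() / runs as f64
}

fn main() {
    // Two stand-in implementations of the same computation.
    let old_impl = || { let _ = (0..10_000u64).sum::<u64>(); };
    let new_impl = || { let n = 9_999u64; let _ = n * (n + 1) / 2; };

    let old_t = time_it(old_impl, 1_000);
    let new_t = time_it(new_impl, 1_000);

    println!("speedup: {:.1}x", old_t / new_t);
}
```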

Perhaps we’ll get there someday, but we’re pretty far from it currently.

So the mix you propose @jlpell, is what we have left, if the goal is to maximise automation and decentralisation.

The possibilities for implementing a smart system that reaches more or less the same goals are still vast, IMO.

Node age and so on is actually a model of what we already do in groups and societies. The huge benefit with the nodes in SAFENetwork, what makes them so reliable, is that they never do anything else than that, and things can be constrained with math and cryptography. Humans have an endless repertoire and anything can happen. There is no math (not at that level…) keeping them within bounds.

But as long as the humans in a group are within bounds, they do the same type of work. As work is done, the node (person) proves its value, age increases, and trust and responsibility increase.

So what we can do is to design the framework for this, and consider the implementation details something that can be switched out. There is probably a formal set of steps that can be abstracted, and then an implementation that is currently feasible could be used for each step.
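One way to picture that framework in Rust: the formal steps as a trait, with whatever implementation is currently feasible slotted in behind it (all names invented):

```rust
// Sketch: the "formal set of steps" as a trait, with today's feasible
// implementation slotted in and swappable later. Step names invented.

/// The abstract pipeline: whatever implements this can evaluate a
/// proposed change, whether it's a human committee or, someday,
/// something more automated.
trait ChangeEvaluator {
    fn review(&self, diff: &str) -> bool; // approve the code itself
    fn value(&self, diff: &str) -> f64;   // judge its worth
}

/// The implementation that is currently feasible: people.
struct HumanCommittee;

impl ChangeEvaluator for HumanCommittee {
    fn review(&self, _diff: &str) -> bool {
        true // stands in for a real human code review
    }
    fn value(&self, diff: &str) -> f64 {
        diff.len() as f64 * 0.1 // placeholder judgment
    }
}

fn process(evaluator: &dyn ChangeEvaluator, diff: &str) {
    if evaluator.review(diff) {
        println!("approved, worth {:.1}", evaluator.value(diff));
    }
}

fn main() {
    // Later, a benchmark-driven evaluator could replace this
    // without the surrounding framework changing at all.
    process(&HumanCommittee, "fn fix() {}");
}
```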

There are so many options out there that we must start by drawing up some boundaries, so that we can get the space defined. That is tedious work: start very high level and extensively map out the options - without getting tempted to go too far into details - and, level by level, break down the problem and draw up the map of options on every branch. As a candidate is found, run back up the branch, anchor it, and include it in the set of preconditions for evaluating all the other levels. And so on, iteration by iteration, running up and down the branches and levels, until a final solution starts to become more and more concrete and solid.

The simplest version is quite easy to draw:

  • MaidSafe developers do all the network development for all time to come. This solution already has everything needed to ensure the quality of the network development.
    Current staff ensures that new staff is of the right quality. The staff ensures that the code is of the right quality. And voilà, good code emerges. (Risks and downsides not mentioned.)

The most wanted version is impossible (maybe always will be impossible):

  • A decentralised AI develops all new code.

So, the target is somewhere in between here it seems to me…

2 Likes

I would imagine this would be done by having a set of experienced “SAFEnet core” developers mentoring/supervising the less experienced developers. So it’s more organic, and does not tie Maidsafe down to this role. It can also be decentralised and vary in size.

This mentoring group could also be approving the changes. So while David’s dream of a network that can test newly developed code and update itself autonomously is coming into being, the mentor group can do the approving and guide the updating of the network.

I do not think David was envisioning the network developing its own code at all ever.

3 Likes

I admire the depths you are digging into this, but I must pick on this example! :slight_smile:

If the code was not on a critical path, whether it runs fast or slow is largely academic. While it may be nice for it to run faster, does it deliver more value if it does? If so, by what proportion?

This problem has its roots in philosophy, not technology. We can’t use maths to make something which is subjective become objective.

Perhaps a committee (whatever shapes that takes) could highlight areas which need improvement and push them out to tender. This would include a reflection of relative value that the improvements would provide, compared to other areas.

AI may be able to act like a committee of humans at some point, but this remains some way off. Simulating human analysis and judgement of value is a very hard problem to solve - even humans don’t agree on the subjective.

1 Like

Perhaps a combination of technical advisors (senior developers, etc.) and user/business advisors (product owners, stakeholders, etc.) can work together to define and rank features. Perhaps these could be put in a backlog, which is pretty much what Agile development is all about. Put a price on story points, have multiple scrum/dev teams of 1:n in size (i.e. ad hoc), and you have something that could work quite well.
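A toy sketch of pricing such a backlog, with an invented per-point rate:

```rust
// Sketch: pricing a backlog the Agile way - a flat (hypothetical)
// rate per story point, summed per team on completion.

struct Story {
    title: &'static str,
    points: u32,
}

fn main() {
    let rate_per_point = 25.0; // tokens per story point, invented
    let sprint = [
        Story { title: "vault update persona", points: 8 },
        Story { title: "routing perf fix", points: 3 },
    ];

    let payout: f64 = sprint.iter()
        .map(|s| s.points as f64 * rate_per_point)
        .sum();

    for s in &sprint {
        println!("{} ({} pts)", s.title, s.points);
    }
    println!("team payout: {:.0} tokens", payout);
}
```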

2 Likes

I don’t know what I do to make it sound like I propose these things :smile: I am specifically saying that:

  1. AI-written code is impossible (and maybe always will be) at more than a trivial level
  2. Judging code value with math has not been invented
  3. Performance is a tiny, tiny part of what you want when coding.
3 Likes

I’m not so sure of those statements. I think AI is beginning to make advances which automate the code creation to levels significantly beyond the tools we’re used to (which are generally machine code generators, libraries, design pattern implementors such as a file system/database, syntax checkers and so on).

One such advance I posted somewhere here: coding up UI from a wireframe mock-up. Another I saw recently suggests code changes to github repos to fix bugs, based on changes it has seen in other repos. Pieces of the puzzle will, I think, be the first step, and then, as with Google Translate, step changes will be made by fitting those pieces together. I don’t believe there’s anything special or magical about human intelligence, though I’m not saying AI is going to match it any time soon. I just would not rule it out.

Even without that, evolutionary software techniques can be applied in ways that achieve the kind of adaptive improvements we might wish for.

Not saying this is imminent, and I’m aware there are some tricky problems in this area. For example, AI ‘cheating’ of reward functions is a classic, but I think as with other things SAFE we can look for inspiration in nature, and I do think that this is on David’s radar. He’s already coded up a genetic algorithm in Rust for example.

On the subject of AI, though not for discussion on this topic (so reply elsewhere if necessary): I read this last night, which is in line with my thinking - that once a certain point is reached we will not be in control, and will not be able to determine what AI does, or even whether it just goes badly wrong and destroys itself while turning the entire universe into paper clips, or leaps off into a super-intelligent dimension with goodness knows what consequences for life on earth.

Demis Hassabis, who runs Google’s DeepMind, once designed a video game called Evil Genius in which you kidnap and train scientists to create a doomsday machine so you can achieve world domination.

https://www.newyorker.com/magazine/2018/05/14/how-frightened-should-we-be-of-aim

1 Like