Dev payment rewards structure


I assume you mean #1 only, because #2 really isn’t invented yet, and #3, well, I guess that is my professional opinion, but code is so much more than performance, even though performance is not unimportant.

And even with #1, I think you agree with me in your writing, as you say it is not imminent (i.e. it does not exist today, i.e. it is impossible [now], which is implicit in “maybe always will be”).

Neither am I. That’s why I say “maybe will always be impossible”.

So, is there really anything about those statements you disagree with? :smile:
I have a hard time getting my point through in this topic :sweat_smile:


What I disagree with is the impression which I think those statements give: that this is a dead end.

I think it is far from that, but stating definitively that x is “impossible” gives the impression that the dev payment structure can’t be tackled using automation and AI. I think that’s very possible, and something to be explored rather than closed off.

Which is why I didn’t say the statements are wrong or that I disagree with them, but that I’m not sure of them (i.e. I’m not finding them useful).


Hm, then you are reading into it things other than what I intend to say, and, in my own opinion, have in fact been saying (did you read everything I wrote in the topic?).
Still, you are talking about statements in the plural, which makes it hard for me to understand which of them you mean. Can you be more specific, please? Really, regarding #2: unless we have some facts showing this has been invented, how would merely stating that it doesn’t exist today give the impression of a dead end? It doesn’t exist. That means precisely that and nothing more. Things don’t exist until they do.
The same goes for #1, since both you and I say that it doesn’t exist today. You want to read into what I say that it is a dead end, possibly because of the word “impossible”. But the context implies that it refers to right now. So again: how would stating that it doesn’t exist today give the impression of a dead end? #3 isn’t related at all.
So all of this leads me to think (with some help from you, though you could have been much more specific if this is the case) that you are focusing on the word “impossible” out of its context.

It seems a bit ungenerous to make that interpretation, given the context and my clarification of it (that I use the word “maybe” precisely because I do NOT rule it out). But I guess thanks for stating that you do not find what I say there useful. That is (somewhat) direct :slight_smile: (“somewhat” because, honestly again, what of it do you refer to? “All of it”? That doesn’t really make sense in this case. And if it’s just #1, well, then see above).


It seems you have missed what it is I set out to do here. (EDIT: if your impression was that I considered AI a dead end, which is all I refer to. All of the other stuff you mentioned was just interesting input.)

This is methodological work.
I began by listing the two ends of the solution space as I see it now, and stated that what we would most of all want is AI-developed code.
Then I merely stated that this is currently not feasible. (We can’t set up an AI to finish SAFENetwork now; or rather, it is so far from feasible that it is nowhere near any suggestion I have heard, at least. No one would seriously try it.)
That’s how we reason about the current boundaries of the solution space.


Agreed. I have a reasonable amount of experience with these methods, so I can’t help but look at solutions to this subject from that perspective / bias.

What you are describing is equivalent to committee-based design/selection of the objective functions and constraints that should be included in the evolutionary optimization routine.


I must really suck at expressing myself :joy: causing two diametrically opposed notions about what I’m saying.

  1. No, I do not see AI code solving all possible feature development that the network might need for quite some time. We are quite far from it now.
  2. No, I do not rule it out at all, because of the advances being made, and, well, I don’t normally rule out things that are not bound by physical law (and even that law is just our current understanding of it).


I brought some of my own perspective on this interesting area to the topic, and explained that this was prompted by your statements, since clarified further. I suggest we leave that now and get back to the topic? I’d be more interested in exploring ways of addressing the OP, although more as a spectator, as I don’t have much more to add myself.


This topic is (literally) a big one, so there will be quite a lot of text here: lengthy churning on dispersed, more or less related topics (some less than others), if we’re going to get anywhere [in this topic (figuratively)] :slight_smile:

Sooo… Brainstorming about a detail:
There are problems with consensus driven by human beings, in that we can come to stalemates on important decisions, which render us impotent to act. (See the UN for example, or any democracy, in varying degrees of severity.)
There are some highly beneficial sides to having a (wise and pure-hearted) leader with final say when it comes to the progress of a group effort. It’s more or less the current situation for many projects (Ethereum, for example).

I intentionally left that discussion out of previous comments, but it isn’t this that is bad. The bad part comes from the long-term risk of centralisation as things grow, because power corrupts and absolute power corrupts absolutely. This is just reiterating what everyone here knows.

Switching over to a human-based consensus system comes at the cost of that efficiency, with the aim of eliminating the mentioned risk. It brings a new set of risks; one of the big ones is deadlocks. How do you break out of one if all decisions go via this consensus? Projects can die this way. So it certainly requires a lot of thinking to at least avoid the higher end of the risk spectrum. I mean, there are simple technical solutions (majority rule, and tiebreakers of various sorts), but ties and strong opinions lead to conflicts and tensions, which can become problematic for progress.
Another source of risk is the qualification of participants, and yet another is actual participation.

All of this text just serves as a base for discussion and brainstorming; it doesn’t make any statements on good directions.


I think this is where we can look to nature. We don’t see these problems in natural systems (deadlocks, corruption, etc.) because nature has evolved robust mechanisms that work well at their various scales.

The problem with humans is that the mechanisms we have are social and tend to operate well in small groups, but not at scale. Since the advent of culture (the passing down of accumulated knowledge, technology and ways of organising ever larger numbers of people), which is what allows coordination of human activity at scale, things haven’t worked well at all.

This problem becomes mirrored in the systems that we design, because they are designed using cultural mindsets, such as central control, simplistic human ideals, perfect consensus, etc.

When we look to nature we don’t see those mechanisms. Can we put a value on the work of one bee in a hive, on the queen, on the drones, etc.? Does nature attempt to do that? Clearly not. We might attempt to in order to create an abstract model, or to predict behaviours, but any numbers we came up with would have dubious meaning in relation to the complexity that is really operating.

This is why, for example, it is hard for us to come up with protocols such as Scuttlebutt, which are less precise in some ways, but much more robust in others.

As designers we are taught to think linearly, in terms of one person writing a piece of code, and that leads us to designs which operate in a centralised way: a single flow of control and execution, which creates problems when such synchronous systems interact or are applied at scale.

Whereas nature simply doesn’t work that way. If one bee ‘fails’, the hive will not be affected. If the queen dies, the whole hive responds, rather than relying on a chain of command that might get deadlocked, or fail to reach a decision because one or two crucial bees must handle this eventuality in a particular way. Nature is often less direct, seemingly less efficient, but it gets there, even at scale. And it doesn’t put a doomsday weapon in a suitcase, in the hands of a single being.

So, for example, framing this problem as trying to calculate the value of a piece of code is probably unhelpful. We think that way for the reasons explained, but it frames the problem in a way which is hard or impossible to solve, and even if we accept its limitations, it is likely to produce solutions which are brittle and contain unforeseen vulnerabilities. So, how to do it differently?

I don’t know, so I’m just thinking out loud here… maybe, instead of trying to value a piece of code in a mathematical or formulaic fashion, try to come up with levers that incentivise better things and punish worse things, and not care whether one person’s code gets rewarded exactly fairly relative to another’s. Life isn’t fair; that’s just not how anything works. It’s a human ideal which actually gets in the way of doing things, so a bit of a red herring.

So I guess the lesson there is to be aware of our assumptions and the way we think (sequential, centralised, fair) and look at how other systems operate for inspiration. It’s hard, but fun IMO :slight_smile:


Yes! I was alluding to something like it with this:

So, what I meant is that there could be a multitude of factors that increase the likelihood of getting the reward. These factors could be likes from peers (on commits, comments, RFCs etc.), commit frequency, age of participation, and so on: a list that could be extended, modified and tweaked.
Then there is a farming algorithm, kind of exactly like that for Safecoin. It could constitute up to 100% of the reward. (A committee could decide on part of the distribution.)
What a committee could decide on is the factors for reward and their relative weights.
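To make the idea concrete, here is a minimal sketch of such a committee-tunable weighted score. The factor names and weights are entirely hypothetical examples, not a proposed final list:

```python
# Sketch of a committee-tunable weighted reward score.
# Factor names and weights are hypothetical placeholders; a committee
# would extend, modify and tweak this list over time.

# Committee-decided factors and their relative weights (sum to 1.0).
WEIGHTS = {
    "peer_likes": 0.2,
    "commit_frequency": 0.3,
    "participation_age": 0.5,
}

def reward_likelihood(factors: dict) -> float:
    """Combine normalised factor scores (each in 0..1) into a single
    likelihood-of-reward score, also in 0..1."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

# Example: a dev with many likes, moderate commit rate, short history.
score = reward_likelihood({
    "peer_likes": 0.9,
    "commit_frequency": 0.5,
    "participation_age": 0.1,
})
print(round(score, 2))  # 0.2*0.9 + 0.3*0.5 + 0.5*0.1 = 0.38
```

The score would then feed into the farming-style algorithm as a probability modifier, with the committee only ever touching the weight table, not the payout mechanism itself.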

Related to another topic

(Actually, that ties in with what I suggested in another post about generalising rewards, so that plugin features in the network could report standardised points and all use the same reward pool: the network. But I wonder how such a plugin could be allowed without risking the pool…)


After thinking about this a little while, I think it is important NOT to link a developer identity to the rewards algorithm. It seems that “likes” from peers would be too easy to game. (At least, this is what I’m thinking right now…) Isn’t it better to have rewards based only on code commits, code age, code usage rates, etc.? If we think of a code snippet in a way analogous to a farmer, then the code is rewarded for doing its job (calling a function, computing something) in the same way a farmer is rewarded for serving a GET request on a chunk.

I wonder if we could use some kind of debugger/profiler metric to identify which functions are used, and how often, to compute rewards. I don’t know if this is possible, but it would be pretty interesting if one could split the (release-optimised) compiled binaries into 100-byte chunks and then register how many times each chunk is executed. Each chunk might have a set of hashes associated with it that refer to the developers responsible for that chunk. Ideally, these would be generated automatically. There is likely a better way than this chunk-based profiling; the basic idea is just to have a level of indirection between the code as written and the code as compiled, and a system that is analogous to farmers and GET rewards. However, things get rather difficult considering all of the third-party Rust libraries that are pulled in…
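The bookkeeping side of that idea could be sketched as below. This is a toy model only: the `ChunkLedger` name, the author mapping, and the simulated executions are all made up for illustration, and the genuinely hard part (instrumenting a real optimised binary so chunk executions can be counted) is not addressed:

```python
# Toy sketch of chunk-level execution accounting: 100-byte chunks,
# each mapped to the hashes of the devs responsible, with a hit
# counter incremented whenever the chunk "executes". The actual
# instrumentation of a compiled binary is the unsolved part.

import hashlib
from collections import Counter

CHUNK_SIZE = 100  # bytes, as in the 100-byte chunk idea above

def split_into_chunks(binary: bytes) -> list:
    return [binary[i:i + CHUNK_SIZE] for i in range(0, len(binary), CHUNK_SIZE)]

class ChunkLedger:
    def __init__(self, binary: bytes, authors_per_chunk: list):
        self.chunks = split_into_chunks(binary)
        # Hypothetical mapping: chunk index -> set of dev ID hashes.
        self.authors = [
            {hashlib.sha256(a.encode()).hexdigest() for a in authors}
            for authors in authors_per_chunk
        ]
        self.hits = Counter()

    def record_execution(self, chunk_index: int):
        self.hits[chunk_index] += 1

    def rewardable_hits(self) -> dict:
        """Credit each dev hash with the hits on their chunks,
        analogous to a farmer being credited per GET served."""
        credit = Counter()
        for idx, count in self.hits.items():
            for dev_hash in self.authors[idx]:
                credit[dev_hash] += count
        return dict(credit)

# Usage: a 250-byte "binary" gives three chunks; simulate executions.
ledger = ChunkLedger(bytes(250), [{"alice"}, {"alice", "bob"}, {"bob"}])
ledger.record_execution(0)
ledger.record_execution(1)
ledger.record_execution(1)
print(len(ledger.chunks))  # 3
```

A payout round would then drain the credit counters, much like farming rewards accrue per served GET.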

It would be important to be able to test the performance of code early on, in order to kill off commits that try to game the system by calling functions that serve no purpose other than to send rewards their way. The basic idea is that code which is more ‘fit’ and doing actual work will have higher performance than code running infinite loops on a sneaky dev’s function.