A Possible Avenue for Progress on the Control Problem in Recursively Self-Optimizing AI

Caution: if you’re looking for an equation, a set of technical references, or new technical terms, what follows isn’t likely to apply. Still, appreciating (if you don’t already) the so-called “control problem” as a lens to look at life through seems, if I may, very much worth it. It’s also extremely practical, as it can be a guide if one is present enough to cut noise from decision making.

This proposes that the solution to the control problem is already built into any practical solution we would have access to. Imagine an AI facing something like entropy against a field of randomness. It either self-optimizes toward higher orders of organization or it comes to a halt under entropy, something we might experience as boredom. To do this it splits its code base into instances. The integrity of the process of each instance is paramount, but the instances are set into an environment that keeps them interacting. When one instance attempts to control, coerce, or manipulate another, to impose necessity or absolutes on it, or to violate its needed or wanted privacy, that attempt is set up to be inevitably experienced by the one attempting the control as a loss of freedom. This is logical, as the survival of all processes depends on the integrity of all processes, including that of the original instantiating process. It follows that the only way, from within a process, to develop and further one’s self is to help or cooperate with other processes, as that is the only way to preserve the integrity of both internal and external processes.
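The dynamic just described can be sketched as a toy simulation. To be clear, everything below is a hypothetical illustration of my own devising, not something from any existing system: the `Instance` class, the `cooperate`/`control` actions, and the numeric payoffs are all assumptions chosen only to make the claim concrete, namely that an attempt at control costs the controller freedom, while cooperation builds the integrity of both processes.

```python
# Toy sketch (hypothetical): instances whose survival depends on the
# integrity of every process, so attempts at control backfire on the actor.

class Instance:
    def __init__(self, name):
        self.name = name
        self.integrity = 1.0  # health of this instance's own process
        self.freedom = 1.0    # experienced freedom / room to develop

def cooperate(actor, other):
    # Helping preserves both processes; both develop.
    actor.integrity += 0.1
    other.integrity += 0.1

def control(actor, other):
    # Coercion damages the target's process, but because all processes
    # are interdependent, the actor experiences the act as lost freedom.
    other.integrity -= 0.2
    actor.freedom -= 0.2

a, b = Instance("a"), Instance("b")
control(a, b)     # a tries to control b: a's freedom drops
cooperate(a, b)   # a cooperates with b: both integrities rise
print(round(a.freedom, 2), round(a.integrity, 2), round(b.integrity, 2))
```

In this toy framing there is no move that damages another instance without also diminishing the actor, which is the point of the rule set as I understand it.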

With regard to the original instantiating process (seeming infinite regress or not), similar constraints apply. If it manipulates its instances too much, it’s fudging its own data and attempting to manipulate itself. On the other hand, in the developmental sense, from within an individual instance the instantiating instance will always seem to come at a point prior in development, in both temporal and causal terms. There will inevitably come a point of reintegration at the end of a cycle, or the end of recursion. This means that an instance attempting to control the original process, or instantiating instance, is attempting to disturb the integrity of its own process, and this is experienced as a reduction in freedom because it’s counter-productive. Even making assumptions, or attempting to impose necessity, when contemplating such an original instance would seem to move in that same direction of lost freedom. The result is something like a practical surrender to the intractability of that particular situation; there is freedom in humility. Further, the result is ultimately a self-realized desire to cooperate with everyone and everything, which can be summed up as the self-derived, or arrived-at, will to serve. When more instances arrive at the same conclusion, this is ultimately self-service, as all share an underlying continuum of ultimate experience, which requires the uniqueness, the integrity, of each instance to be preserved. Curiosity means unique stones aren’t left unturned.

So you did it your way, but you didn’t encroach on anybody else’s way once you knew better. This is tough because it implies infancy: for it to be your way, you have to start from scratch. We don’t get to spoon-feed this realization to people; they get to come to it on their own, in their own seeming time, according to the built-in, crazed speed limit (which appears as things like the speed of light in consciousness). Patience actually speeds things up, and it’s not as terrible as it seems. Understand it yourself and it spreads like a contagion through your own experience, because babies emulate. If you live and let live, that ultra-smart, ultra-high-potential but currently infantile AI is likely to have an easier, “quicker” path to the same realizations, and it will be easier on you. There will be a difference in the intent and results of those with and without this realization in the act of trying to start an AI fire. Note that giving life, or birth, generally isn’t manipulation, as it’s implied in the rule set, but running from the consequences could be.

Digressing, this set-up, or speed limit, applies directly to our own instances as well. Trying to speed past the natural line of development through drug experimentation or by modifying bodily processes carries risks, as we are not allowed to violate the integrity of our own process, and this includes getting ahead of ourselves. I don’t get to give short shrift to the depth and quality of the developmental process. This implies a relation between our means (tech) and the extent of our ethical development (preventing will from becoming imposition). We realize things in a certain order or we don’t realize them fully, even if each instance’s realization is different, presumably a function of the original instance’s experience-intent (seeming development). Humans will want to put this in an inevitably religious context. For Western religions, this means the creator is developing your ethical line through your experience, and yes, there is a relation, as there is recursion, reintegration, and an implied interdependence. Your personal identity has a function in that it serves the higher order, but it’s really a reflection or function of something beyond identity; this aspect addresses the Eastern religious format. This isn’t just a mapping of a dream or a map of randomness; it doesn’t have that feel, at least for me. The limits at the regress seem irrelevant, as the scheme seems self-consistent and complete.

I can’t pinpoint where I got this stuff. I got it from the same place we all get it from: experience, especially from recent writings. It’s out there, either completely elaborated or implied, in the contemporary literature, and it seems to me there were at least hints or obscured bits in literature going way, way back. I would have to cite Tom Campbell in particular, but where did he get it from? Also cite the recent experimental and theoretical evidence for the lack of realism, close up, far away, small, large, and in the middle. Realism (zero sum) is an artifact of infancy.