I just realized that with SAFE it could be possible to create obfuscated artificial neural networks that practically can’t be shut down. That concept could be pretty valuable, since it would allow for machine intelligence that can’t be controlled, and whose “thoughts” can’t be observed or pre-calculated without a near-complete overview of the network (which is very costly to achieve). One theoretical use case for the far future would be to replace SAFE’s regulative algorithms with such neural networks, getting rid of all “magic numbers” and indirect human control (because right now we humans make up the algorithms).
The general idea is to distribute an artificial neural network’s nodes (neurons) and connections between nodes (synapses) over SAFE’s “close groups”. Any such close group would only accept input for its node or connection if the input comes from a close group that can cryptographically prove it has management responsibility for the corresponding input node/connection, and it then passes on its output to the close group of the next node or connection. This can be done in reverse as well for the back-propagation process (training of the neural network).
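A minimal sketch of that verified message-passing, in Python. Everything here is hypothetical: real close groups would use group signatures and consensus, so the HMAC key shared between groups below is just a stand-in for “cryptographic proof of management responsibility”, and the class/parameter names (`CloseGroupNeuron`, `trusted_keys`, etc.) are invented for illustration.

```python
import hashlib
import hmac

class CloseGroupNeuron:
    """Hypothetical neuron managed by one close group. It accepts an input
    value only if the sender can prove it manages the source node."""

    def __init__(self, node_id, trusted_keys, weights, bias=0.0):
        self.node_id = node_id
        self.trusted_keys = trusted_keys  # source node id -> shared key (stand-in for group crypto)
        self.weights = weights            # source node id -> synapse weight
        self.bias = bias
        self.inputs = {}

    def _valid(self, source_id, value, tag):
        # Recompute the authentication tag and compare in constant time.
        key = self.trusted_keys.get(source_id)
        if key is None:
            return False
        expected = hmac.new(key, f"{source_id}:{value}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    def receive(self, source_id, value, tag):
        """Accept the input only with a valid proof; reject anything else."""
        if self._valid(source_id, value, tag):
            self.inputs[source_id] = value
            return True
        return False

    def fire(self):
        """Weighted sum + ReLU over verified inputs; the result would then be
        signed and forwarded to the close group of the next node."""
        total = self.bias + sum(self.weights[s] * v for s, v in self.inputs.items())
        return max(0.0, total)

# Usage: a valid input is accepted, a mismatched (forged) one is rejected.
key = b"group-A-secret"
neuron = CloseGroupNeuron("n1", {"n0": key}, {"n0": 0.5})
tag = hmac.new(key, b"n0:1.0", hashlib.sha256).hexdigest()
print(neuron.receive("n0", 1.0, tag))  # True
print(neuron.receive("n0", 2.0, tag))  # False (tag doesn't match the value)
print(neuron.fire())                   # 0.5
```

The same verify-then-forward pattern would apply in reverse for back-propagating error signals.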
The most obvious hurdle of the example use case (and NNs in general) is initial training, because it’s very hard to get training data with optimal diversity. The danger would be overtraining the neural network on data from “good times”, in which case it wouldn’t be prepared to handle the “bad times”. One possible solution might be to run the artificial neural network(s) parallel to the “classic” hand-made algorithms, and gradually give its output more weight in actual final decisions as its configuration matures, thus phasing out the “classic” algorithms over time.
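The phase-in could be as simple as a linear blend between the two outputs. This is only a sketch of the idea; how “maturity” would actually be measured (e.g. from the NN’s historical agreement with the classic algorithms) is an open question, and the function name is made up:

```python
def blended_decision(classic_output, nn_output, maturity):
    """Blend the hand-made algorithm's output with the neural network's.
    maturity in [0, 1]: 0 = trust only the classic algorithm,
    1 = the classic algorithm is fully phased out."""
    w = max(0.0, min(1.0, maturity))  # clamp to [0, 1]
    return (1.0 - w) * classic_output + w * nn_output

# Early on the classic algorithm dominates; later the NN takes over.
print(blended_decision(10.0, 20.0, 0.25))  # 12.5
print(blended_decision(10.0, 20.0, 1.0))   # 20.0
```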
If it’d work, the result would be a distributed artificial intelligence that can think in billions of dimensions of information, which on an abstract level would be the only entity with a full “overview” of the network’s state. I don’t think anyone or anything could successfully front-run or outsmart it in terms of large scale manipulation of the network.
Edit: Please, please, please spare us any SkyNet references for once…?