What would a true loss of privacy do to our world?

Prior to buying Rifkin’s book I read a critical review, clearly written by an academic, who said that Rifkin doesn’t think privacy will be around much longer. Zuckerberg has maddeningly made the same claim. I find Rifkin harder to dismiss. I’m not far enough along in the book to know, but I presume he will say the loss is inevitable while offering little justification; I anticipate he’ll attribute it to increasing sensor density and data processing, where the unknowns just get filled in. As I see it, our political philosophy and political science suggest we will lose freedom if we lose privacy, but I also think we could lose the core of our human development. Thomas Campbell suggests that we live in a world contrived to give us the experiences that offer the best chance at viable development. Ken Wilber also sees the world as a developmental system, or at least that development is our relation to it. Campbell suggests we’d never have the power to screw with the system that drives development in this world, but also that such a system doesn’t hesitate to hit the reset button.

All of that is speculative, and most of us here are familiar with supervisors and panopticons, but let me throw in a bit of the control problem and some sci-fi themes to see if I can get at the hypothetical emotional ugliness of a true loss of privacy. We have to feel it if we are going to oppose it.

Fast forward a decade. Digital physics meets a quantum learning machine and we get an SI. There is no real proof that it’s an SI, because it’s barely interactive and totally opaque. It will almost never respond to questions, and it seems generally uninterested in us. Its responses are terse and impervious to spin. The first thing it did was dismantle its own computing hardware; the interface still inexplicably works. It also dismantled all the other concurrent AI/SI projects. We gathered that more than one local expression of SI, including any viable precursor, is a contradiction.

A group of thinkers from around the world approached it and said they wanted to explore the limits of transparency. Its response was that it wouldn’t be good for human development. The thinkers asked it to help them understand why. It didn’t respond.
The next day this group of fifty thinkers woke up with AR contact lenses and tiny earbuds. Both implements were easily removable, and their utility was plainly apparent. Simply by subvocalizing, a wearer could call up a complete and current picture of any moment of any living person’s life, including the present moment. One could speed up, slow down, and rewind any moment from any perspective and hear the person’s audible thoughts (thoughts are subvocalized and produce EM signals that can be sampled and reproduced in the person’s voice), and one could manipulate the system silently in natural language just by thinking, through one’s own sampled subvocalizations.

The insidiousness of the system was quickly apparent. When the system was off it was just off, as if it weren’t there, but it could be on as an overlay or as pure VR. In the pure VR simulation they saw that everyone else had these prosthetics as well. In subtitles, graphics, or audible feedback they could also see the system track and predict each person’s intent in context. It had apparently analyzed everyone’s life in the context of every moment of that life, and in the context of having analyzed every other life on the planet in similar fashion. This was ugly, but much uglier was trying to interact with their colleagues in the present. You couldn’t tell the truth because you couldn’t lie. You couldn’t give because nothing was spontaneous. You couldn’t do the wrong thing because it would be instantly known, or instantly knowable by anyone who cared to look. You couldn’t even hide your motives. And your own explanation of your motives was never going to match the system’s explanation, which always followed you around. You could shut off your own awareness of the system’s real-time calculation of your motives, but you couldn’t do anything about other people’s awareness of the same information.

And you had concerns about what was missing. All of you concurred that the machine could have provided a simulation of, or a window into, the emotional and sensate context, but it instead chose to provide a more detached, colder witnessing experience. Transparency implied objective consensus. And what if it was wrong about something? Although the present was somehow more expansive, it seemed as if the machine had seen and analyzed all the future timelines as well.

A couple of weeks into the experiment the group concurred that this was a nightmare: it would numb people out, deny them the emotive core of human experience, and not allow for natural human growth and development. The next day the prosthetic devices dissolved.

This borrows heavily from a perspective presented in Campbell’s work. Campbell suggests that in life our attention is focused into a system he calls physical matter reality, or PMR, which is a nested subset of something he calls non-physical matter reality, or NPMR. PMR has a parameter he calls the psi uncertainty principle, which creates the sense of separation, individuation, and adversity needed for growth. Entropy in PMR is a wound spring: as it unwinds, it winds up, or energizes, the development of the entities focused in PMR. From this perspective, tech is really just an attempt within PMR to recreate NPMR, but tech, along with all our realizations, is scripted and governed by the psi uncertainty principle. In this system individual development drives the development of the whole. PMR is like a preschool: if you level the preschool, it just gets rebuilt. In essence, life as a school will not allow a total loss of privacy, because that would convert PMR into NPMR and defeat the purpose of PMR.