Hey folks. I have not looked into the code for the self-authentication protocol, but below is an excerpt from The Moral Character of Cryptographic Work. It sounds like a very interesting proposition, and implementing this methodology during self-authentication for key control could be very useful in hindering key analysis.

Bigkey cryptography. Let me next describe some recent work by Mihir Bellare, Daniel Kane, and me that we call bigkey cryptography. [134]

The intent of bigkey cryptography is to allow cryptographic operations to depend on enormous keys, megabytes to terabytes long. We want our keys so long that it becomes infeasible for an adversary to exfiltrate them. Yet using such a bigkey mustn't make things slow. This implies that, with each use, only a small fraction of the bigkey's bits will be inspected.

The basic idea is not new: the concept is usually referred to as security in the bounded-retrieval model. [135] But our emphasis is new: practical and general tools, with sharp, concrete bounds. We have no objection to using the random-oracle model to achieve these ends.

Suppose you have a bigkey 𝐊. You want to use it for some protocol P that has been designed to use a conventional-length key K. So choose a random value R (maybe 256 bits) and hash it to get some number p of probes into the bigkey:

i_1 = H(R, 1)
i_2 = H(R, 2)
…
i_p = H(R, p) .

Each probe i_j points into 𝐊: it's a number between 1 and |𝐊|. So you grab the p bits at those locations and hash them, along with R, to get a derived key K:

K = H(R, 𝐊[i_1], …, 𝐊[i_p]) = XKEY(𝐊, R) .

Where you would otherwise have used the protocol P with a shared key K, you will now use P with a shared bigkey 𝐊 and a freshly chosen R, this determining the conventional key K = XKEY(𝐊, R).
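The probe-and-hash derivation above can be sketched in a few lines. This is only an illustrative sketch of my own: SHA-256 stands in for the random oracle H, the probe encoding and the bit-string representation of the bigkey are my assumptions, and indices are 0-based rather than the paper's 1 to |𝐊|:

```python
import hashlib

def H_int(*parts: bytes) -> int:
    """Stand-in random oracle: hash length-prefixed parts, read digest as an int."""
    h = hashlib.sha256()
    for part in parts:
        h.update(len(part).to_bytes(4, "big") + part)
    return int.from_bytes(h.digest(), "big")

def xkey(bigkey_bits: str, R: bytes, p: int) -> bytes:
    """Derive a conventional key from p probed bits of the bigkey."""
    n = len(bigkey_bits)
    # i_j = H(R, j): each probe selects one position in the bigkey
    probes = [H_int(R, j.to_bytes(4, "big")) % n for j in range(1, p + 1)]
    probed = "".join(bigkey_bits[i] for i in probes)  # K[i_1], ..., K[i_p]
    # K = H(R, K[i_1], ..., K[i_p])
    return hashlib.sha256(R + probed.encode()).digest()
```

A real implementation would address individual bits inside a byte array rather than use a character string, but the shape is the same: per use, only p positions of the bigkey are ever read.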

We show that the derived key K is indistinguishable from a uniformly random key even if the adversary gets R and can learn lots of information about the bigkey 𝐊. The result is quantitative, measuring how good the derived key is as a function of the length of the bigkey, the number of bits leaked from it, the number of probes p, the length of R, and the number of random-oracle calls.

At the heart of this result is an information-theoretic question we call the subkey-prediction problem. Imagine a random key 𝐊 that an adversary can export ℓ < |𝐊| bits of information about. After that leakage, we select p random locations into 𝐊, give those locations to the adversary, and ask the adversary to predict those p bits. How well can it do?
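To get a feel for the game, here is a toy simulation, entirely my own construction, of the naive adversary that simply stores the first ℓ bits of the key and guesses fairly on probes that land outside its stored prefix. With half the key leaked, each probe is answered correctly with probability 1/2 + 1/2 · 1/2 = 3/4, so the naive win rate over p probes is about (3/4)^p:

```python
import random

def subkey_prediction_trial(n: int, leak: int, p: int, rng: random.Random) -> bool:
    """One round: adversary knows the first `leak` bits, must predict p probed bits."""
    key = [rng.randrange(2) for _ in range(n)]
    probes = [rng.randrange(n) for _ in range(p)]
    # Known positions are answered exactly; unknown positions are coin flips.
    guess = [key[i] if i < leak else rng.randrange(2) for i in probes]
    return guess == [key[i] for i in probes]

rng = random.Random(0)
n, p, trials = 1024, 8, 20000
wins = sum(subkey_prediction_trial(n, n // 2, p, rng) for _ in range(trials))
print(wins / trials)  # close to (3/4)**8, about 0.10
```

The naive strategy thus costs the adversary about 0.415 bits of uncertainty per probe; the paper's point is that the optimal adversary does better, getting this down to roughly 0.156 bits per probe, but not dramatically better.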

It turns out that the adversary can do better than just recording ℓ bits of the key 𝐊 and hoping that lots of probes fall there. But it can't do much better. Had nothing been leaked to the adversary, ℓ = 0, then each probe would contribute about one bit of entropy to the random variable the adversary must guess. But if, say, half the key is leaked, ℓ ≤ |𝐊|/2, each probe will now contribute about 0.156 bits of entropy. [136] The adversary's chance of winning the subkey-prediction game will be bounded by something that's around 2^(−0.156p). One needs about p = 820 probes for 128-bit security, or twice that for 256-bit security.
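The probe counts quoted above follow directly from the per-probe entropy figure; a quick sanity check (the 0.156 bits-per-probe value is taken from the text, everything else is just arithmetic):

```python
import math

bits_per_probe = 0.156  # entropy per probe when half the bigkey has leaked
probes_needed = {target: math.ceil(target / bits_per_probe) for target in (128, 256)}
print(probes_needed)  # roughly 820 probes for 128-bit security, twice that for 256-bit
```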

I think that the subkey-prediction problem, and the key-encapsulation algorithm based on it, will give rise to nice means for exfiltration-resistant authenticated encryption and pseudorandom generators. [137] In general, I see bigkey cryptography as one tool that cryptographers can contribute to make mass surveillance harder.

Sorry if it's off target, but I thought someone might find it interesting, even if the SAFE Network is using other obfuscation methodologies.

The entire paper can be read at http://web.cs.ucdavis.edu/~rogaway/papers/moral-fn.pdf. This excerpt was from page 33 if you want to check the references.