
On the Beale Cipher, Part II (and Other Book Ciphers)

April 28th, 2010

Last time I talked about extending the usual frequency analysis of the (first) Beale cipher to better understand which letters the individual numbers may represent.  I have said before that Markov chains are immensely applicable everywhere; the Beale cipher seems to be no exception.  The idea came to me last month, a couple of years after reading Singh's book, while I was happily lazing about watching a couple of seagulls fish out of Manzanillo's ocean.  I had also read Snell's book, and a particular description of how Markov himself thought about Markov chains intrigued me: apparently he counted the transitions from vowels to consonants and from consonants to vowels in a book, worked out his theory, and showed that the long-term fractions of vowels and consonants predicted by the chain stabilized to the book's actual ratio.

This same idea can be applied, I think, to the Beale cipher.  Suppose we know a priori how transitions occur between the encoding letters of the key, which we surmise to be the first letter of each word.  Say, for example, the key is the Declaration of Independence (which in fact is the key for Beale cipher 2):

"When in the Course of human events it becomes necessary..." and each first letter encodes for a particular number.   We can see that W transitions to I, I to T, T to C, and so:

W->I->T->C->O->H->E-> etc.  By finding the proportion of times any letter, say W, transitions to each of the other letters, we've got ourselves a transition probability matrix.  In this abbreviated example we see that W transitions to I one hundred percent of the time; its transition probability vector would be represented by a 1 in the position of the letter I and 0 everywhere else.  In effect, we are assuming a random variable that takes states represented by the letters of the alphabet, and that can transition to any other letter, or stay where it is, with a given probability.
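To make the counting concrete, here is a minimal Octave sketch of how such a count matrix might be built from the first letters of a key (illustrative only; my actual script files are messier, and key_text here is just the opening fragment):

```matlab
% Build a 26x26 matrix counting transitions between the first letters
% of consecutive words in a key text (illustrative fragment only).
key_text = 'When in the Course of human events it becomes necessary';
words = strsplit(lower(key_text));             % split the key into words
firsts = cellfun(@(w) w(1), words) - 'a' + 1;  % first letters as indices 1..26
C = zeros(26, 26);                             % transition counts
for k = 1:numel(firsts) - 1
  C(firsts(k), firsts(k + 1)) = C(firsts(k), firsts(k + 1)) + 1;
end
```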

Of course, the longer the encoding key, the better: most of the 26x26 transition probability matrix can then be filled without rows of zeros.  A zero row is in effect what happens with a letter like A in the abbreviated example above, since we observe no transition out of it.  Counting the whole of the transitions in the Declaration of Independence, the only letters that never transition are X, Y, and Z; to bypass the difficulty of a transition probability matrix whose rows do not sum to 1, I have made X, Y, and Z's rows sum to 1 by assuming that those states transition to every letter of the alphabet in equal proportion.
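In code, the normalization plus the uniform fix for empty rows might look like this (a sketch, assuming C is the count matrix from above):

```matlab
% Turn the counts into a transition probability matrix. Rows with no
% observed transitions (X, Y, Z in the DOI) get a uniform distribution.
row_sums = sum(C, 2);
P = zeros(26, 26);
for i = 1:26
  if row_sums(i) == 0
    P(i, :) = 1 / 26;                 % assumed equal-proportion transitions
  else
    P(i, :) = C(i, :) / row_sums(i);  % observed proportions
  end
end
```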

Written this way, the Declaration of Independence transition probability matrix becomes a regular transition probability matrix with a stable nth power.  In other words, we can raise the matrix to an arbitrary power, say 3000 or 5000, without fear of it diverging or producing weird numbers.  In fact, the Declaration of Independence transition probability matrix (DOITPM for short) stabilizes to the fourth decimal digit after about 15 powers.  The reason we care about the powers of the TPM is that the nth power represents the probability of transitioning to a particular letter after n transitions!  Thus, where the DOITPM alone represents the probability of, say, W transitioning to I at the first step (the next cipher number down), the DOITPM to the 3rd power represents the probability of W jumping to another letter after three steps (three cipher numbers down).
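One can verify the stabilization numerically with something like the following (a sketch; P is the TPM built above):

```matlab
% Compare successive powers of the TPM until they agree to four
% decimal places; for the DOITPM this happens at around the 15th power.
Pn = P;
for n = 2:100
  Pn_next = Pn * P;
  if max(abs(Pn_next(:) - Pn(:))) < 1e-4
    fprintf('Stabilized at power %d\n', n);
    break;
  end
  Pn = Pn_next;
end
```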

Of course, calculating the DOITPM to any power is taxing or impossible by hand: I downloaded Octave (which I earnestly recommend if, like me, you don't have the $5,000+ for a Matlab license) and built a few script files to do all of this.

The very cool thing about this approach is that virtually all of points 1-6 from my last post are taken care of.  In particular, if one has a prior belief about the probability that a given number in the cipher is a particular letter (a probability vector), that belief can be propagated forward using the TPM and its powers.  As an example, we may have the prior belief that the cipher's 1 is a first letter, with probability vector equal to the frequencies of first letters in the English language.  That vector P times the DOITPM gives us a probability vector for the cipher's 2 (one step forward), P times the DOITPM to the 2nd power gives the probability vector for the cipher's 3 (two steps forward), and so on.
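In Octave the propagation is a one-liner per step (a sketch; first_letter_freqs stands in for a 1x26 row vector of English first-letter frequencies, which you would have to supply):

```matlab
% Propagate a prior belief about the cipher's 1 forward through the chain.
p0 = first_letter_freqs;   % assumed 1x26 row vector summing to 1
p1 = p0 * P;               % probability vector for the cipher's 2
p2 = p0 * P^2;             % probability vector for the cipher's 3
pn = p0 * P^15;            % ~stationary: beliefs 15+ steps out look alike
```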

On the other hand, we may surmise that, say, the cipher's 64 has the standard letter frequencies of the English language.  We can propagate that belief forward to 65, 66, 67, etc., until about 15 numbers after our original (since the DOITPM stabilizes, all steps after about the fifteenth are the same).

We need not use the above frequencies, although they do seem reasonable "first-guess" beliefs.  If we suspect for any reason that a particular letter, say T, is the cipher's 1, we can propagate that belief forward and get good results for the cipher's 2, 3, and even 4.  After 4, things begin to stabilize toward the long-term proportions, which is only natural, since our uncertainty about letters farther down increases.  If we are good at crosswords and a little bit lucky, we can determine that 2 is a particular letter; we can then modify its probability vector and propagate the new belief forward... and so on until the whole text is deciphered.

We can also propagate partial beliefs: if we suspect the cipher's 1 is either a T or a W with equal probability, but there is a slight chance it could be any other letter, our probability vector for that number might assign 0.45 each to T and W and about 0.004 (more precisely, 0.1/24) to each of the remaining letters, so that the vector sums to 1.  As always, this belief can be propagated forward, and with luck we can determine more letters based on frequency guesses.
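Constructing such a partial-belief vector is straightforward (again a sketch):

```matlab
% Partial belief: T or W with probability 0.45 each, the remaining
% 0.1 of probability mass spread evenly over the other 24 letters.
p = ones(1, 26) * (0.1 / 24);
p(['t', 'w'] - 'a' + 1) = 0.45;   % indices 20 (T) and 23 (W)
% sum(p) == 1; propagate as before: p * P, p * P^2, ...
```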

Computationally, with the very inefficient script files I have built (I freely admit I am no programmer, and I wrote them somewhat quickly so the ideas wouldn't die before seeing the light of day, so there are a lot of unnecessary steps and redundancies) and my 2-core machine, it takes something like 20 minutes to process the whole cipher and propagate every number's initial probability vector by proceeding in layers.  One example of an inefficiency: even though the DOITPM stabilizes at about the 15th power, I got ambitious about squeezing out even the slightest changes and sometimes calculate it up to the 2000th power and beyond, rather than caching the 15th power and reusing it.  Thus, every time I want to modify a probability vector for a particular number of the cipher, I have to wait 20+ minutes for it to finish recalculating.  It would be nice to have a gazillion computers working in a grid, though.
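The obvious fix, which I haven't implemented yet, is to cache the stabilized power once and reuse it (sketch):

```matlab
% Since the DOITPM is stable past ~15 powers, compute that matrix once
% and reuse it for every step beyond the fifteenth.
P15 = P^15;
% For n >= 15, p0 * P15 agrees with p0 * P^n to four decimal places.
```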

Nevertheless, as I have mentioned, one has to proceed in layers; by this I mean the following.  Say you have an initial belief about the cipher's 1.  We can propagate this belief all the way up to the cipher's 2000.  But say you also have an initial probability vector for the cipher's 6.  At 6, the probability "waves" begin to collide.  It would be ideal if there were a way to combine the discrete probability waves into a more certain probability vector (kind of like a Kalman filter for discrete distributions).  I could not think of one (although I want this to be the subject of a next post), or at least not one that would give me a more certain wave, so I opted to choose the wave whose entropy was closer to 0.  Since we have 26 "boxes" with different proportions of "balls" in them (the respective probabilities of being a particular letter), we can use information entropy to pick the vector that carries the most certain belief, namely the one with the least entropy.  Thus, if 6's prior contains less entropy than 1's propagated vector, I stick with 6's vector, and vice versa.  I do this with all vectors at every single position (which is also part of why it takes about 20 minutes to process).
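The entropy comparison itself is simple (a sketch; p and q stand for two colliding probability vectors at the same cipher position):

```matlab
% Shannon entropy of a probability vector, treating 0*log(0) as 0.
entropy = @(v) -sum(v(v > 0) .* log2(v(v > 0)));
% Keep whichever colliding "wave" carries the more certain belief.
if entropy(q) < entropy(p)
  keep = q;
else
  keep = p;
end
```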

Of course, if Beale number 1 was encoded with a key whose transition probabilities are similar to the DOI's, which would be a reasonable assumption if we think Beale encoded the cipher using a text from his time (stylistically similar, etc.), then we can use the DOITPM to attempt a decoding of it.  If instead we believe that he used a text he himself wrote, an analysis of Beale number 2's deciphered text could yield a TPM that could crack it.  Lastly, if we believe that Beale number 1 was encoded with a key very much like any other book in the world, we could again build such a TPM and attempt a decipherment.

In my next post, or possibly upon revision of this one, I will post an .xls file containing the DOITPM and the probability vectors of each number in Beale cipher 1, having assumed as prior beliefs that the cipher's 1 and 71 have the frequencies of first letters in English, and that all the rest have the standard frequencies of English letters.  I also intend to post a similar .xls file, but containing the TPM of a typical book.

The new probability vectors obtained in this fashion for each number of the cipher differ markedly from the typical letter frequencies.  This makes sense, because the frequencies now depend on the transitions of the letters in the key itself.  It may be the reason some cryptographers have disputed, on statistical grounds, that the cipher is written in English; I think it is written in English, only the proportions of the letters in the cipher change because of the particular proportions of the first letters of words in the key.

If anyone is interested in the script files, I may post them too: they are .m files, so you can run them in either Octave or Matlab.  Leave a message below.

Otherwise, if anyone is interested in hooking up computers to process this (or donating money so I can buy several :)), also leave a message below or contact me using the contact form.  Thanks!