
On revolutionizing the whole of Linear Algebra, Markov Chain Theory, Group Theory... and all of Mathematics

April 22nd, 2010

I have been so remiss about writing here lately!  I'm so sorry!  There are several good reasons for this, believe me.  Among them: (1) I have been enthralled with deciphering a two-hundred-year-old code, the Beale cipher, Part I, with no substantial results except several good ideas that I may yet pursue and expound on here soon.  But this post is not intended to be about that.  (2) My computer died around December, I got a new one, and I hadn't downloaded TeX; I used this as an excuse not to write up proofs from Chapter 1 of Munkres's Topology, and so I have added none.  I slap myself for this (some of the problems are really boring, although I have to admit they are enlightening in some ways, which is part of the reason why I began doing them in the first place).  (3) The drudgery of day-to-day work, which is soooo utterly boring that it leaves me little time for "fun," or math stuff, with my attention constantly hogged by every possible distraction at home, etc.  Anyway.

For a few months now I have been reading a lot about Markov chains because they have captured my fancy recently (they are so cool), and in fact they tie in to a couple of projects I've been working on or thinking about.  I even wrote J. Laurie Snell because a chapter in his book (the one on Markov chains) was excellent, with plenty of amazing exercises that I really enjoyed.  In looking over that book and a Schaum's outline, a couple of questions came into my head and I just couldn't let go of them; I even sort of had to invent a concept, which I want to describe here.

So, in my interpretation of what a Markov chain is, and really with zero rigor: suppose you have  n < \infty  states, and position yourself at state  i .  In the next time period, you are allowed to change state if you want, and you will jump to another state  j  (possibly  i  itself) with probability  p_{ij}  (starting from  i ).  These probabilities can be neatly summarized in a finite  n \times n  matrix, with each row being a discrete distribution of your jumping probabilities, so that each row sums to 1.  I think it was Kolmogorov who extended the idea to an infinite matrix, but we must be careful with the word "infinite": the number of states is still countable, so the probabilities are summarized by a countably infinite  \infty \times \infty  matrix.  Keen as you are, dear reader, you know I'm setting up a question: what would an uncountably infinite transition probability matrix look like?  No one seems to be thinking about this, or at least I couldn't find any literature on the subject.  So here are my thoughts:
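
To make the finite setup concrete, here is a minimal sketch in Python (my own illustration, not from any of the sources above) of a made-up 3-state chain: the row-sums-to-1 structure, n-step probabilities as matrix powers, and a simulated trajectory.

    import numpy as np

    # A hypothetical 3-state chain: row i is the discrete distribution of
    # jumps out of state i, so every row sums to 1.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])
    assert np.allclose(P.sum(axis=1), 1.0)

    # n-step transition probabilities are just matrix powers of P.
    P5 = np.linalg.matrix_power(P, 5)
    print(P5[0])   # distribution after 5 steps, starting from state 0

    # Simulate a trajectory: from state i, jump to j with probability p_{ij}.
    rng = np.random.default_rng(0)
    state, path = 0, [0]
    for _ in range(10):
        state = rng.choice(3, p=P[state])
        path.append(state)
    print(path)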

The easiest answer is to consider a state  i  to be any of the real numbers in an interval, say  [0,1] , and to imagine that such a state can change to any other state on that real interval (which, as we may know from analysis, is homeomorphic to any other closed, bounded interval).  The jumps are summarized by a continuous probability distribution on  [0,1] , whose integral is again 1; a good candidate is a beta density, such as  6x(1-x) , with parameters (2,2).  I think we can "collect" such probability distributions continuously on  [0,1] \times [0,1] : a transition probability patch, as I've been calling it.  It turns out that it becomes important, if patches are going to be of any use in the theory, to be able to raise a patch to powers (akin to raising matrices to powers), to multiply patches by (function) vectors and other tensors, and to extend the common matrix algebra to conform to patches; but this is merely a mechanical problem, as I describe in the following pdf.  (Comments are very welcome, preferably here on the site!)

CSCIMCR
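
Short of reading the pdf, here is a quick numerical sketch (mine, and only a toy) of the defining property of a patch: the continuous analogue of "each row sums to 1" is that for every fixed  x , the cross-section  y \mapsto p(x,y)  integrates to 1 over  [0,1] .  Below, every cross-section is the Beta(2,2) density  6y(1-y)  mentioned above, like a matrix whose rows are all identical; a patch whose cross-sections vary with  x  is checked the same way.

    import numpy as np

    def p(x, y):
        # Beta(2,2) density in y; this toy patch happens not to depend on x.
        return 6.0 * y * (1.0 - y)

    ys = (np.arange(2000) + 0.5) / 2000   # midpoints of [0,1]
    for x in [0.0, 0.25, 0.5, 1.0]:
        # Midpoint rule; since [0,1] has length 1, the integral is just
        # the mean of the density over the midpoints.
        print(x, p(x, ys).mean())         # ~1.0 for every x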

As you may be able to tell, I've managed to go quite a long way with this, so that patches conform reasonably to a number of discrete Markov chain concepts, including a patch version of the Chapman-Kolmogorov equations; but having created patches, there is no reason why we cannot extend the idea to "patchixes," or continuous matrices on  [0,1] \times [0,1] , without the restriction that each row cross-section integrate to 1; in fact it seems possible to define identity patchixes (patches), and, in further work (hopefully work I'll be involved in), kernels, images, eigenvalues and eigenvectors of patchixes, commuting patchixes, commutator patchixes, and a slew of group-theoretical concepts.
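
For the patch version of Chapman-Kolmogorov, the natural guess (and I assume the pdf does something equivalent) is to replace the matrix-product sum with an integral:  p^{(2)}(x,z) = \int_0^1 p(x,y)\,p(y,z)\,dy .  Discretized on a grid, this is ordinary matrix multiplication scaled by the grid spacing, as this sketch with a toy patch of my own shows; note the row integrals stay at 1 after squaring.

    import numpy as np

    m = 400
    grid = (np.arange(m) + 0.5) / m    # midpoints of [0,1]
    dy = 1.0 / m

    # Toy patch p(x,y) = 1 + 0.5(2x-1)(2y-1): strictly positive, and for
    # each fixed x the cross-section in y integrates to exactly 1.
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    P = 1.0 + 0.5 * (2*X - 1) * (2*Y - 1)
    print(P.sum(axis=1)[:3] * dy)      # row integrals, all ~1

    # Patch "squaring" via the continuous Chapman-Kolmogorov composition:
    # a scaled matrix product approximates the integral over y.
    P2 = (P @ P) * dy
    print(P2.sum(axis=1)[:3] * dy)     # still ~1, so P2 is again a patch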

Having defined a patchix, if we think of the values of the patchix as the coefficients in front of, say, a polynomial, can we not imagine a new "polynomial" object that runs through exponents of, say,  x  continuously on  [0,1] , with each term being "added" to the next?  (Consider, for example, something like  \sum_i g(i)x^i, i \in [0,1] , which, read continuously, is in effect the integral  \int_0^1 g(t)\,x^t\,dt .)  I think these are questions worth asking, even if they are a little bit crazy, and I do intend to explore them some, even if it later turns out to be a waste of time.
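
Read as an integral, the continuous "polynomial" is easy to play with numerically.  A last sketch (again mine, with an arbitrary choice of  g ): when  g \equiv 1 , the object  \int_0^1 x^t\,dt  has the closed form  (x-1)/\ln x , which gives a sanity check on the quadrature.

    import numpy as np

    ts = (np.arange(4000) + 0.5) / 4000    # midpoints of [0,1]

    def cont_poly(g, x):
        # f(x) = integral over [0,1] of g(t) * x^t dt, by the midpoint rule.
        return np.mean(g(ts) * x**ts)

    for x in [0.5, 2.0, 10.0]:
        approx = cont_poly(lambda t: np.ones_like(t), x)
        exact = (x - 1.0) / np.log(x)      # closed form when g is constant 1
        print(x, approx, exact)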