### Archive

Archive for the ‘Group Theory’ Category

## On Patch by Patch Products, Part II - (RWLA,MCT,GT,AM Part IV)

Last time I talked about a concept I invented based on my studies of Markov chains.  Patches are, essentially, "continuous matrices" (surfaces on $[0,1] \times [0,1]$) with the property that they integrate to 1 with respect to $x$ for every $y$, in analogy to the row-sum requirement in the usual Markov matrix treatment.  I dubbed such objects "patches," and explained a way to construct them.  In my previous post, I began thinking that patches seem to be very special, in the sense that self patch powers can represent the state of a liquid in time, if we allow ourselves to be a little imaginative.  Let's say that we disturb a uniformly distributed patch into an initial state, the initial state patch, like this:

$p(x,y) = 1 - \cos(2 \pi x) \cos(2 \pi y)$

It is easy to see that if we integrate with respect to $x$ the result is 1, so it is indeed a patch.  (I also constructed this function by letting $f_1(x) = \cos(2 \pi x)$, $g_1(y) = \cos(2 \pi y)$, $f_2(x) = 1$, and calculated that $g_2(y) = 1$ using the technique I talked about here.)

Let's say we have depressed the liquid at the four corners and center of the confined space (necessarily a cube of dimensions $1 \times 1 \times h$), essentially giving it energy. Next, calculate the patch powers (as described in my previous posts).  Interestingly, if we map the patch powers of such a liquid, they converge to a steady state, just as Markov matrices would:

$p(x,y) = 1 - \cos(2 \pi x) \cos(2 \pi y)$

$p_2(x,y) = 1 + \frac{\cos(2 \pi x) \cos(2 \pi y)}{2}$

$p_3(x,y) = 1 - \frac{\cos(2 \pi x) \cos(2 \pi y)}{4}$

$p_4(x,y) = 1 + \frac{\cos(2 \pi x) \cos(2 \pi y)}{8}$

$p_5(x,y) = 1 - \frac{\cos(2 \pi x) \cos(2 \pi y)}{16}$

$p_6(x,y) = 1 + \frac{\cos(2 \pi x) \cos(2 \pi y)}{32}$

The evolution in time of this particular patch is easy to guess (although I should, technically, prove this by induction... I do in my next post):

$p_t(x,y) = 1 - \frac{\cos(2 \pi x) \cos(2 \pi y)}{(-2)^{t-1}}$

for $t \in \mathbb{Z}^+$, letting this parameter represent both time and the patch power.  Of course, if we integrate any of these with respect to $x$, the result is 1, and so they are, indeed, patches.
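To corroborate the pattern numerically, here is a minimal sketch (my own, not from the original posts) that iterates the patch product $r(x,t) = \int_0^1 p(1-y,t)\,q(x,y)\,dy$ on a grid with the trapezoid rule and compares each power against the closed form above:

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))            # trapezoid quadrature weights on [0,1]
w[0] = w[-1] = 0.5 / (n - 1)

def p(xx, yy):
    # initial patch p(x, y) = 1 - cos(2*pi*x) * cos(2*pi*y)
    return 1.0 - np.cos(2 * np.pi * xx) * np.cos(2 * np.pi * yy)

X, Y = np.meshgrid(x, x, indexing="ij")  # P[i, j] = p(x_i, y_j)
P = p(X, Y)
P_flip = p(1.0 - X, Y)                   # P_flip[j, k] = p(1 - y_j, t_k)

def patch_step(Q):
    # r(x, t) = integral_0^1 p(1 - y, t) q(x, y) dy, discretized as
    # R[i, k] ~ sum_j Q[i, j] * w_j * P_flip[j, k]
    return Q @ (w[:, None] * P_flip)

Pt = P
for t in range(2, 7):
    Pt = patch_step(Pt)
    exact = 1.0 - np.cos(2 * np.pi * X) * np.cos(2 * np.pi * Y) / (-2.0) ** (t - 1)
    assert np.max(np.abs(Pt - exact)) < 1e-10    # matches the closed form
    assert np.max(np.abs(w @ Pt - 1.0)) < 1e-10  # every power is still a patch
```

The trapezoid rule is spectrally accurate here because every integrand is a trigonometric polynomial that is periodic on $[0,1]$, so even this coarse grid reproduces the closed form essentially to machine precision.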

I would like to state in a different post the conditions under which a steady state is achievable; my suspicion is that, in analogy to Markov chains, a steady state is reached if the patch is non-zero everywhere on $[0,1] \times [0,1]$ for some power (and all higher powers), a property called regularity in that context.  Of course, I would also like to be able to calculate the steady state as easily as can be done with discrete Markov chains (I was afraid that, in this particular example, I wouldn't be able to achieve steady state because the initial patch has zeroes at the corners and center).  It's pinned as one of my to-dos.  At any rate, the fact that there are patches that converge to a state (a 2D surface), and, specifically, that can converge to the uniform distribution surface, suggests that such systems, from the viewpoint of physics, must dissipate energy, and here the concept of entropy enters.  Of course, from a probabilistic point of view, entropy in this sense is non-existent; patches merely describe the probability of movement to another "position" on each $y$ fiber.

There are of course patches that do not converge to the uniform surface distribution, but to other types: in my previous post, the patch I constructed converged to a plane that is tilted in the unit cube.  I wonder whether this might have a physical interpretation relating it to gravity: the liquid experiences a uniform acceleration (gravity) normal to the (converging) plane, which of course says, in physical terms, "the cube is tilted."  Again, from the probabilistic point of view, the concept of gravity is an explanatory link to physics, but the end state arises without its acting on the fluid at all!

There are fun topological considerations too: the fact that we can do this on the unit cube does not preclude us from doing it in, for example, a unit cylinder (a cup or mug!), provided we can find the appropriate retract-into-the-square function and vice versa.  I think this might make it very interesting to map movement in all kinds of containers.

I have already talked about a couple of potential avenues in Group Theory, which I would also really like to pursue further at some point.

As in other posts, another possible area of investigation is the evolution of the surface over smaller intervals of time.  In previous Markov treatments, I was able to link discrete representations of Markov chains to continuous-time differential equations.  It would be immensely interesting to see whether patches, in this light, converge to partial differential equation representations.  Which leads me to the last point, regarding Navier-Stokes turbulent flow (which I admit I know very little about), and a potential link to its differential equation representation:

Here is why I think turbulent flow could be explained by generalizing patches a little, to "megapatches" (essentially 3D patches, or tensors): now we can think not of a 2D surface converging in time, but a 3D one.  A water sphere in space (I once saw a cool video on this and was left thoroughly fascinated) or a water balloon being poked could be understood this way, for example, so that the movement of water throughout the flexible container could be similarly traced (by mapping the probability of movement within the container)!  I need to flesh this out a little more, but I think it's also potentially very interesting.

These studies make me ask myself, again: what is the relationship between stochastic processes and deterministic representations?  They seem to be too intimately linked to be considered separate.

## On Patch by Patch Products - (RWLA,MCT,GT,AM Part III)

In my previous post, I described the concept of a "patchix" and of a special kind, the "patch." I described how to multiply a continuous function on $[0,1]$ by a patch(ix). Today I want to talk about how to multiply a patch by a patch and certain properties of it.

In my description in my informal paper, I basically said that in order to (right) multiply a patchix by a patchix, say $p(x,y)$ with $q(x,y)$, we would have to send $p(x,y) \rightsquigarrow p(1-y,t)$ and then integrate as:

$r(x,t) = \int_0^1 p(1-y,t) \cdot q(x,y) dy \rightsquigarrow r(x,y)$

If the patchix is furthermore special, so that both $\int_0^1 q(x,y) dx = \int_0^1 p(x,y) dx = u(y) = 1$, the uniform distribution on $[0,1]$ (so that $u$ of any fiber is 1), then $p(x,y)$ and $q(x,y)$ are "patches." I want to show that, when we "patchix multiply" two patches, we obtain another one. Here's why: assume $p(x,y)$ and $q(x,y)$ are patches. Then the resultant $r(x,t)$ is a patch too if $\int_0^1 r(x,t) dx = u(t) = 1$. Thus:

$\int_0^1 r(x,t) dx = \int_0^1 \int_0^1 p(1-y,t) \cdot q(x,y) dy dx$

If there is no issue with absolute convergence of the integrals (as there shouldn't be in a patch), by the Fubini theorem we can exchange the order of integration:

$\int_0^1 \int_0^1 p(1-y,t) \cdot q(x,y) dx dy = \int_0^1 p(1-y,t) \int_0^1 q(x,y) dx dy$

The inner integral evaluates to $u(y) = 1$ by hypothesis. Then $\int_0^1 p(1-y,t) dy = u(t) = 1$, because the substitution $x \rightsquigarrow 1-y$ (so $dx \rightsquigarrow -dy$, with the orientation reversed) turns this into $\int_0^1 p(x,t) dx = 1$, which holds by the patch property of $p$. Nicely, we have just proven closure of patches.
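The closure argument is easy to check numerically as well. The sketch below (mine, for illustration) forms the patch product of the two patches that appear in these posts and verifies that every fiber of the result still integrates to 1; the tolerances reflect the trapezoid rule's $O(h^2)$ error on the non-periodic polynomial factor:

```python
import numpy as np

n = 801
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))          # trapezoid weights on [0,1]
w[0] = w[-1] = 0.5 / (n - 1)
X, Y = np.meshgrid(x, x, indexing="ij")

# two patches from these posts
def p(xx, yy):
    return 1.0 - np.cos(2 * np.pi * xx) * np.cos(2 * np.pi * yy)

def q(xx, yy):
    return xx**2 * yy**3 + xx * (2.0 - 2.0 * yy**3 / 3.0)

# both factors are patches: every y-fiber integrates to 1 over x
assert np.allclose(w @ p(X, Y), 1.0, atol=1e-6)
assert np.allclose(w @ q(X, Y), 1.0, atol=1e-6)

# patch product r(x, t) = integral_0^1 p(1 - y, t) q(x, y) dy
P_flip = p(1.0 - X, Y)                 # P_flip[j, k] = p(1 - y_j, t_k)
R = q(X, Y) @ (w[:, None] * P_flip)    # R[i, k] ~ r(x_i, t_k)

# closure: the product is again a patch
assert np.allclose(w @ R, 1.0, atol=1e-4)
```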

Because this strongly suggests that patches may form a group (as may patchixes with other properties), I want to attempt to show associativity, identity and inverses of patches in my next post (and of other patchixes with particular properties).

For now, I'm a little more interested in solving a concrete example by calculating self-powers. In my last post, I constructed the following patch:

$p(x,y) = x^2 y^3 + x \left( 2 - \frac{2 y^3}{3} \right)$

To calculate the second power, send $p(x,y) \rightsquigarrow p(1-y,t)$. I get, in expanded form, from my calculator:

$p(1-y,t) = t^3 y^2 - \frac{4 t^3 y}{3} + \frac{t^3}{3} - 2y + 2$

so that

$p_2(x,t) = \int_0^1 p(1-y,t) \cdot p(x,y) dy = \frac{29 x}{15} + \frac{t^3 x}{90} + \frac{x^2}{10} - \frac{t^3 x^2}{60}$

Last, let's send $p_2(x,t) \rightsquigarrow p_2(x,y) = \frac{29 x}{15} + \frac{y^3 x}{90} + \frac{x^2}{10} - \frac{y^3 x^2}{60}$

We can corroborate that this is a patch by integrating

$\int_0^1 p_2(x,y) dx = \int_0^1 \frac{29 x}{15} + \frac{y^3 x}{90} + \frac{x^2}{10} - \frac{y^3 x^2}{60} dx = 1$ which is indeed the case.

To calculate the third power, we compute:

$p_3(x,t) = \int_0^1 p(1-y,t) \cdot p_2(x,y) dy = \frac{1741 x}{900} - \frac{t^3 x}{5400} + \frac{59 x^2}{600} + \frac{t^3 x^2}{3600}$

Then, send $p_3(x,t) \rightsquigarrow p_3(x,y) = \frac{1741 x}{900} - \frac{y^3 x}{5400} + \frac{59 x^2}{600} + \frac{y^3 x^2}{3600}$

Again, we can corroborate that this is a patch by

$\int_0^1 p_3(x,y) dx = \int_0^1 \frac{1741 x}{900} - \frac{y^3 x}{5400} + \frac{59 x^2}{600} + \frac{y^3 x^2}{3600} dx = 1$

which is the case.
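The calculator work above can be reproduced symbolically. Here is a small SymPy sketch (my own, assuming SymPy is available) that implements the patch product $p_{k+1}(x,t) = \int_0^1 p(1-y,t)\,p_k(x,y)\,dy$ and recovers $p_2$ and $p_3$ exactly:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# the first patch from the previous post
p = x**2 * y**3 + x * (2 - sp.Rational(2, 3) * y**3)

def patch_power(q):
    """One patch product: r(x, t) = integral_0^1 p(1 - y, t) q(x, y) dy,
    followed by the relabeling t -> y."""
    # build p(1 - y, t): rename y -> t first, then x -> 1 - y,
    # so the freshly introduced y is not substituted again
    flipped = p.subs(y, t).subs(x, 1 - y)
    r = sp.integrate(sp.expand(flipped * q), (y, 0, 1))
    return sp.expand(r.subs(t, y))

p2 = patch_power(p)
p3 = patch_power(p2)

# matches the expressions computed above
assert sp.simplify(p2 - (sp.Rational(29, 15)*x + y**3*x/90
                         + x**2/10 - y**3*x**2/60)) == 0
assert sp.simplify(p3 - (sp.Rational(1741, 900)*x - y**3*x/5400
                         + sp.Rational(59, 600)*x**2 + y**3*x**2/3600)) == 0
# and each power is still a patch
assert sp.integrate(p2, (x, 0, 1)) == 1
assert sp.integrate(p3, (x, 0, 1)) == 1
```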

Here's a contour 3D plot of $p(x,y), p_2(x,y), \ldots, p_7(x,y)$: in other words, the 7-step time evolution of the patch (the "brane").  By looking at the plot, you can probably begin to tell what I'm trying to get at: the patch evolution shows how a fluid could evolve in time (its movement, oscillation), provided the appropriate first-patch generator can be found for a particular movement.  If patches mirror Markov "thinking," the fact that a patch will eventually settle to its long-term stable distribution means that this (patch) treatment, when applied to the physical world, takes into account some sort of "entropy," or loss of energy of the system.  Also some sort of "viscosity," is my belief.  The patch evolution nicely and inherently captures all relevant physical properties.  I will continue to explore this in my next post, I think.

The above image has been scaled differently for the different functions so that they can be better seen as they converge.  In my next post, I would like to expound on the evolution of the following first-patch:

## On revolutionizing the whole of Linear Algebra, Markov Chain Theory, Group Theory... and all of Mathematics

I have been so remiss about writing here lately!  I'm so sorry!  There are several good reasons for this, believe me.  Among them: (1) I have been enthralled with deciphering a two-hundred-year-old code, the Beale cipher part I, with no substantial results except several good ideas that I may yet pursue and expound on here soon.  But this post is not intended to be about that.  (2) My computer died around December, I got a new one, and I hadn't installed TeX; I used this as an excuse not to write proofs from Munkres's Topology chapter 1, and so I have added none.  I slap myself for this (some of the problems are really boring, although they are enlightening in some ways, I have to admit, part of the reason why I began doing them in the first place). (3) The drudgery of day-to-day work, which is soooo utterly boring that it leaves me little time for "fun," or math stuff, with my attention being constantly hogged by every possible distraction, at home, etc.  Anyway.

For a few months now I have been reading a lot about Markov chains because they have captured my fancy recently (they are so cool), and in fact they tie in to a couple of projects I've been working on or thinking about.  I even wrote J. Laurie Snell because a chapter in his book (the one on Markov chains) was excellent, with plenty of amazing exercises that I really enjoyed.  In looking over that book and a Schaum's outline, a couple of questions came to my head and I just couldn't let go of these thoughts; I even sort of had to invent a concept that I want to describe here.

So, in my interpretation of what a Markov chain is, and really with zero rigor: suppose you have $n < \infty$ states, and position yourself at state $i$.  In the next time period, you are allowed to change state if you want, and you will jump to another state $j$ (possibly $i$ itself) with probability $p_{ij}$ (starting from $i$).  These probabilities can be neatly summarized in a finite $n \times n$ matrix, with each row being a discrete distribution of your jumping probabilities, and therefore each row sums to 1.  I think it was Kolmogorov who extended the idea to an infinite matrix, but we must be careful with the word "infinite," as the number of states is still countable, and so they are summarized by an $\infty \times \infty$ countably infinite matrix.  Keen as you are, dear reader, you know I'm setting up a question: what would an uncountably infinite transition probability matrix look like?  No one seems to be thinking about this, or at least I couldn't find any literature on the subject.  So here are my thoughts:

The easiest answer is to consider a state $i$ to be any of the real numbers in an interval, say $[0,1]$, and to imagine that such a state can change to any other state on that real interval (which is isomorphic to any other connected closed interval of the same type, as we may know from analysis).  This is summarized by a continuous probability distribution on $[0,1]$, whose integral is again 1; a good candidate is a beta density, such as $6x(1-x)$, with parameters $(2,2)$.  I think we can "collect" such probability distributions continuously on $[0,1] \times [0,1]$: a transition probability patch, as I've been calling it.   It turns out that it becomes important, if patches are going to be of any use in the theory, to be able to raise a patch to powers (akin to raising matrices to powers), to multiply patches by (function) vectors and other tensors, and to extend the common matrix algebra to conform to patches; but this is merely a mechanical problem, as I describe in the following pdf.  (Comments are very welcome, preferably here on the site!)

CSCIMCR
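As a tiny illustrative sketch (mine, not from the pdf), the simplest such "collection" assigns the same beta$(2,2)$ landing density to every departure state $y$, giving the admittedly trivial patch $p(x,y) = 6x(1-x)$; the grid check below confirms that every $y$-fiber integrates to 1:

```python
import numpy as np

n = 1001
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))      # trapezoid weights on [0,1]
w[0] = w[-1] = 0.5 / (n - 1)

X, Y = np.meshgrid(x, x, indexing="ij")
P = 6.0 * X * (1.0 - X)            # p(x, y) = 6x(1-x): the same beta(2,2)
                                   # landing density for every state y

fiber_mass = w @ P                 # integral_0^1 p(x, y) dx, for each y
assert np.allclose(fiber_mass, 1.0, atol=1e-5)
```

A non-trivial patch would let the density's shape vary with $y$, which is exactly what the construction in these posts provides.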

As you may be able to tell, I've managed to go quite a long way with this, so that patches conform reasonably to a number of discrete Markov chain concepts, including a patch version of the Chapman-Kolmogorov equations.  But having created patches, there is no reason why we cannot extend the idea to "patchixes," or continuous matrices on $[0,1] \times [0,1]$, without the restriction that each row cross-section integrate to 1; in fact it seems possible to define identity patchixes (patches), and, in further work (hopefully I'll be involved in it), kernels, images, eigenvalues and eigenvectors of patchixes, commuting patchixes, commutator patchixes, and a slew of group-theoretical concepts.

Having defined a patchix, if we think of the values of the patchix as the coefficients in front of, say, a polynomial, can we not imagine a new "polynomial" object that runs through exponents of $x$ continuously over $[0,1]$, with each term being "added" to the next? (Consider, for example, something like $\sum_i g(i)x^i$, $i \in [0,1]$.)  I think these are questions worth asking, even if they are a little bit crazy, and I do intend to explore them some, even if it later turns out to be a waste of time.
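As a sanity check that such an object can be given meaning, the natural reading of a "continuous sum" over exponents is an integral. Taking the simplest coefficient function, $g(i) = 1$ (my choice for illustration), the object collapses to an ordinary elementary function:

```latex
\int_0^1 x^i \, di
  \;=\; \left[ \frac{x^i}{\ln x} \right]_{i=0}^{i=1}
  \;=\; \frac{x - 1}{\ln x}, \qquad x \in (0,1),
```

which extends continuously to the endpoints, taking the value $0$ at $x = 0$ and $1$ at $x = 1$. So at least in this simplest case the "continuous polynomial" is a perfectly well-behaved function on $[0,1]$.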
