
On Patchixes and Patches - or Pasqualian Matrixes - (RWLA,MCT,GT,AM Part II)

For the last few months I have been thinking about several very curious properties of patchixes and patches (mentioned here); in particular, having studied patch behavior in a "continuous Markov chain" context, and while drinking a bowl of cereal and observing the interesting movement of the remaining milk, it hit me: a patch could certainly describe milk movement at particular time steps.  It is my hope to elucidate this concept a little better here today.  In particular, I think I have discovered a new way to describe waves and oscillations, or rather, "cumulative movement where the amount of liquid is constant" in general; I honestly believe that this new way and the old way converge in the limit (based on my studies, here and here, of discrete Markov chains at the limit of tiny time steps, so that time is continuous), although it is a little unclear to me how at the moment.  It is my hope that this new way not only paves the way for a new and rich field of research, but also clarifies studies in, for example, turbulence, and, maybe one day, Navier-Stokes related concepts.  This last part may sound a little lofty and ambitious, but an approach in which, for example, vector fields of force or velocity must be described for every particle and position of space, with complicated second- and third-order partial derivatives, is itself somewhat ambitious and lofty, and often prohibitive for finding exact solutions; perhaps studying particle accumulations through a method of approximation, rather than individual particles, is the answer.

I want to attempt to describe the roadmap that led me to the concept of a patchix (pasqualian matrix) in the first place; it was in the context of discrete Markov chains.  Specifically, I thought that, as we study linear algebra, for a function or transformation  T(\textbf{v}) , where  \textbf{v} is an n-vector with  n entries (finite),  T can be described succinctly by an  n \times n matrix.  Such a matrix then converts  \textbf{v} into another n-dimensional vector, say  \textbf{w} .  This field is very well studied, of course: in particular, invertible transformations are very useful, and many matrixes can be used to describe symmetries, so that they underlie Group Theory:

 \textbf{v} \xrightarrow{T} \textbf{w}

Another useful transformation concept resides in  l_2 , the space of sequences whose squared lengths (dot product with itself) converge, which was used, for example, by Heisenberg in quantum mechanics, as I understand it.  For example, the sequence  (x_1, x_2, \ldots) can be transported to another  (y_1, y_2, \ldots) via  T , as by  T(x_1, x_2, \ldots) = (y_1, y_2, \ldots) .  Key here is the fact that  x_1^2 + x_2^2 + \ldots converges, so that  \sqrt{x_1^2 + x_2^2 + \ldots} , the norm, is defined.  Also, the dot product  x_1 y_1 + x_2 y_2 + \ldots converges, by the Cauchy-Schwarz inequality, since both sequences have finite norm.  Importantly, however, this information points in the direction that a transformation matrix with an infinite number of entries could be created for  T to facilitate computation, so that indeed a sequence is taken into another in this space in a manner that is easy and convenient.  I think this concept was used by Kolmogorov in extending Markov matrices as well, but I freely admit I am not very versed in mathematical history.  Help in this regard is much appreciated.
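As a small numerical aside (my own sketch, not anyone's historical construction), one can truncate such an infinite matrix and watch it act on a square-summable sequence; here I take the right-shift operator on  l_2 as the example:

```python
import numpy as np

# A minimal sketch: truncate the "infinite matrix" of the right-shift operator
# T(x_1, x_2, ...) = (0, x_1, x_2, ...) on l_2 to its first N rows and columns.
N = 1000
x = 1.0 / np.arange(1, N + 1)   # x_n = 1/n, a square-summable sequence
T = np.eye(N, k=-1)             # ones on the subdiagonal: the truncated shift
y = T @ x                       # y = T(x) = (0, x_1, x_2, ...)

print(np.linalg.norm(x))        # ~ pi/sqrt(6) ~ 1.2825, the l_2 norm
print(np.linalg.norm(y))        # the shift preserves the norm (up to truncation)
print(x @ y)                    # the dot product converges too (~ 1 - 1/N here)
```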

In a function space such as  C^{\infty}[0,1] , the inner product of, say,  f(x) with  g(x) is also defined, as  \langle f(x), g(x) \rangle = \int_0^{1} f(x) \cdot g(x) dx : the point-wise products of the functions, summed convergently by the integral.  Then the norm of  f(x) is  \sqrt{\langle f(x), f(x) \rangle} = \sqrt{\int_0^{1} f(x)^2 dx} .  The problem is, of course, that there is no convenient "continuous matrix" that produces the transform  T(f(x)) = g(x) , although transforms of a kind can be achieved through a discrete matrix, if its entries act on, say, the coefficients of a (finite) polynomial.  Thus we can transform polynomials into other polynomials, but this is limiting in scope in many ways.
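To make that limited, discrete route concrete, here is a small sketch (the transformation, differentiation, and the degree bound are my own choices, purely for illustration): a matrix acting on the coefficient vectors of polynomials of degree at most 3:

```python
import numpy as np

# Coefficients stored as [a_0, a_1, a_2, a_3] for a_0 + a_1 x + a_2 x^2 + a_3 x^3.
# D maps the coefficients of f to the coefficients of f'.
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

f = np.array([5., 0., 1., 2.])  # f(x) = 5 + x^2 + 2x^3
g = D @ f                       # g(x) = f'(x) = 2x + 6x^2
print(g)                        # [0. 2. 6. 0.]
```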

The idea is that we transform a function to another by point-wise reassignment: continuously.  Thus the concept of a patchix (pasqualian matrix) emerges; we need only mimic the mechanical motions we go through when conveniently calculating any other matrix product.  Take a function  f(x) defined continuously on  [0,1] , and send  x \rightsquigarrow 1-y so that  f(1-y) is now aligned with the y-axis.  From another viewpoint, consider  f(1-y) as  f(1-y,t) , so that, at any value of  t , the cross-section looks like  f .  Define a patchix  p(x,y) on  [0,1] \times [0,1] .  Now "multiply" the function (actually a patchix itself, from the different viewpoint) with the patchix, as  \int_{0}^{1} f(1-y) \cdot p(x,y) dy = g(x) , to obtain  g(x) .  The patchix has transformed  f(x) \rightsquigarrow g(x) , as we wanted.  I think there are profound implications from this simple observation; one may now consider, for example, inverse patchixes (or how to get from  g(x) back to  f(x) ), identity patchixes, and, along with these, what it may mean, as crazy as it sounds, to solve an infinite (dense) system of equations; powers of patchixes and what they represent; eigenpatchixvalues and eigenfunctionvectors; group theoretical concepts such as the symmetry groups that patchixes may give rise to, etc.
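Here is a minimal numerical sketch of this "patchix multiplication" (the helper name patchix_apply and the particular  f and  p are mine, chosen only for illustration):

```python
from scipy.integrate import quad

def patchix_apply(f, p, x):
    """Numerically evaluate g(x) = integral_0^1 f(1 - y) * p(x, y) dy."""
    val, _ = quad(lambda y: f(1.0 - y) * p(x, y), 0.0, 1.0)
    return val

f = lambda x: x**2              # the function to be transformed
p = lambda x, y: x + y          # an arbitrary patchix on [0,1] x [0,1]

# By hand, g(x) = x/3 + 1/12 here; the quadrature agrees point by point:
for x in (0.0, 0.5, 1.0):
    print(x, patchix_apply(f, p, x))   # 0.0833..., 0.25, 0.4166...
```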

As much as that is extremely interesting to me, and I plan on continuing with my own investigations, my previous post and informal paper considered the implications of multiplying functions by functions, functions by patchixes, and patchixes by patchixes.  Actually, I considered special kinds of patchixes  p(x,y) : those having the property that, for any specific value  y_c \in [0,1] ,  \int_0^1 p(x,y_c) dx = 1 .  Such special patchixes I dubbed patches (pasqualian special matrixes), and I went on to attempt an extension of a Markov matrix and its concept into a Continuous Markov Patch, along with the logical extension of the Chapman-Kolmogorov equation, by first defining (discrete) patch powers (this basically means "patchix multiplying" a patch with itself).  The post can be found here.

So today what I want to do is continue the characterization of patches that I started.  First of all, emulating some properties of the Markov treatment, I want to show how we can multiply a probability distribution (function) "vector" by a patch to obtain another probability distribution function vector.  Now, this probability distribution is special, in the sense that it doesn't live on all of  \mathbb{R} but on  [0,1] .  A beta distribution, such as  B(2,2) = 6(x)(1-x) , is the type that I'm specifically thinking about.  So suppose we have a function  b(x) , which we must first convert to  b(1-y) in preparation to multiply by the patch.  Suppose then that the patch is  p(x,y) , with the property that, for any specific  y_c ,  \int_0^1 p(x,y_c) dx = 1 .  Now, the "patchix multiplication" is done by

 \int_0^1 b(1-y) \cdot p(x,y) dy

and is a function of  x .  We can show that this is indeed a probability distribution function vector by integrating over every infinitesimal change in  x and seeing whether it all adds up to one, like this:

 \int_0^1 \int_0^1 b(1-y) \cdot p(x,y) dy dx

If there is no issue with the absolute convergence of the integrals, there is no issue with the order of integration, by Fubini's theorem, so we have:

 \int_0^1 \int_0^1 b(1-y) \cdot p(x,y) dx dy = \int_0^1 b(1-y) \int_0^1 p(x,y) dx dy

Now, for the inner integral,  p(x,y) adds up to 1 for any choice of  y , so the inner integral is in effect a uniform distribution on  [0,1] with value 1 (i.e., for any choice of  y \in [0,1] , the value of the integral is 1).  Thus we have, in effect,

 \int_0^1 b(1-y) \int_0^1 p(x,y) dx dy = \int_0^1 b(1-y) \cdot u(y) dy = \int_0^1 b(1-y) (1) dy

since  u(y) = 1 for any choice of  y in  [0,1] ; and that last integral we know is 1, because  b is by hypothesis a probability density on  [0,1] .
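As a numerical sanity check of this Fubini argument, here is a quick sketch (the patch below is my own invented example, chosen so that  \int_0^1 p(x,y) dx = 1 for every  y ):

```python
import numpy as np
from scipy.integrate import dblquad

b = lambda x: 6.0 * x * (1.0 - x)                           # B(2,2) density on [0,1]
p = lambda x, y: 1.0 + (2.0 * x - 1.0) * np.sin(np.pi * y)  # integrates to 1 in x for every y

# Total mass of the transformed function; dblquad expects func(y, x).
mass, _ = dblquad(lambda y, x: b(1.0 - y) * p(x, y), 0.0, 1.0, 0.0, 1.0)
print(mass)   # ~ 1.0, as the argument above predicts
```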

Here's a specific example:  Let's declare  b(x) = 6(x)(1-x) and  p(x,y) = x + \frac{1}{2} .  Of course, as required,  \int_0^1 p(x,y) dx = \int_0^1 x + \frac{1}{2} dx = (\frac{x^2}{2} + \frac{x}{2}) \vert^1_0 = 1 .  So then  b(1-y) = 6(1-y)(y) , and by "patchix multiplication"

 \int_0^1 b(1-y) \cdot p(x,y) dy = \int_0^1 6(1-y)(y) \cdot \left(x + \frac{1}{2} \right) dy = x + \frac{1}{2}

Thus, via this particular patch, the function  b(x) = 6(x)(1-x) \rightsquigarrow c(x) = x + \frac{1}{2} , point by point.
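For anyone who wants to retrace this example symbolically, a short sketch (assuming sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')
b = 6 * (1 - y) * y               # this is b(1 - y) for b(x) = 6x(1 - x)
p = x + sp.Rational(1, 2)         # the patch p(x, y) = x + 1/2

g = sp.simplify(sp.integrate(b * p, (y, 0, 1)))
print(g)                          # x + 1/2
print(sp.integrate(g, (x, 0, 1))) # 1, so g is again a density on [0, 1]
```

Which brings me to my next point.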

If  p(x,y) is really solely a function of  x , then it follows that  b(x) \rightsquigarrow p(x) : any initial probability distribution becomes the patch's own distribution (viewing the patch as a function of a single dimension, rather than two).  Here's why:

 \int_0^1 b(1-y) \cdot p(x,y) dy = \int_0^1 b(1-y) \cdot p(x) dy = p(x) \int_0^1 b(1-y) dy = p(x)

Of course, I think patches that are genuinely functions of both  x and  y are a lot more interesting.  There arises a problem in constructing them, however.  For example, let's assume that we can split  p(x,y) = f(x) + g(y) .  Forcing our requirement that  \int_0^1 p(x,y) dx = 1 for any  y \in [0,1] means:

 \int_0^1 p(x,y) dx = \int_0^1 f(x) dx + g(y) \int_0^1 dx = \int_0^1 f(x) dx + g(y) = 1

which certainly implies that  g(y) = 1 - \int_0^1 f(x) dx is a constant, since the integral is a constant.  Thus it follows that  p(x,y) = p(x) is a function of  x alone.  Then we may try  p(x,y) = f(x) \cdot g(y) .  Forcing our requirement again,

 \int_0^1 p(x,y) dx = \int_0^1 f(x) \cdot g(y) dx = g(y) \int_0^1 f(x) dx = 1

means that  g(y) = \frac{1}{\int_0^1 f(x) dx} is, again, a constant, and  p(x,y) = p(x) once more.  Clearly the function interactions should be more complex; let's try something like  p(x,y) = f_1(x) \cdot g_1(y) + f_2(x) \cdot g_2(y) .  Then

 \int_0^1 p(x,y) dx = g_1(y) \int_0^1 f_1(x) dx + g_2(y) \int_0^1 f_2(x) dx = 1

so that determining three of the functions determines the last one; say,

 g_2(y) = \frac{1-g_1(y) \int_0^1 f_1(x) dx}{\int_0^1 f_2(x) dx} , which is, in fact, a function of  y .

Let's construct a patch in this manner and see its effect on a  B(2,2) .  Let  f_1(x) = x^2 , and  g_1(y) = y^3 , and  f_2(x) = x , so that

 g_2(y) = \frac{1 - g_1(y) \int_0^1 f_1(x) dx}{\int_0^1 f_2(x) dx} = \frac{1 - y^3 \int_0^1 x^2 dx}{\int_0^1 x dx} = \frac{1 - \frac{y^3}{3}}{\frac{1}{2}} = 2 - \frac{2y^3}{3}

and  p(x,y) = x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) .

So now the "patchix product" is

 \int_0^1 6(1-y)(y) \cdot \left(x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) \right) dy = \frac{x^2}{5} + \frac{28x}{15} 

which is a probability distribution on the interval  [0,1] ; as a check, we can integrate it with respect to  x to obtain 1.  Thus the probability distribution function  6(x)(1-x) is carried, point by point, as  6(x)(1-x) \rightsquigarrow \frac{x^2}{5} + \frac{28x}{15} , which, quite frankly, is very amusing to me!
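Here is a short symbolic sketch retracing this construction and the resulting transformation (again assuming sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Build the patch from f1, g1, f2; g2 is forced by the normalization requirement.
f1, g1, f2 = x**2, y**3, x
g2 = (1 - g1 * sp.integrate(f1, (x, 0, 1))) / sp.integrate(f2, (x, 0, 1))
p = sp.expand(f1 * g1 + f2 * g2)

print(sp.simplify(sp.integrate(p, (x, 0, 1))))  # 1 for every y, as required
b = 6 * (1 - y) * y                             # b(1 - y) for b(x) = 6x(1 - x)
image = sp.expand(sp.integrate(b * p, (y, 0, 1)))
print(image)                                    # x**2/5 + 28*x/15
print(sp.integrate(image, (x, 0, 1)))           # 1
```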

From an analytical point of view, it may be interesting or useful to see what happens to the uniform distribution on  [0,1] when it's "patchix multiplied" by the patch above.  We would have:

 \int_0^1 u(y) \cdot \left(x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) \right) dy = \int_0^1 (1) \cdot \left(x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) \right) dy = \frac{x^2}{4} + \frac{11x}{6} 

so that  u(x) \rightsquigarrow \frac{x^2}{4} + \frac{11x}{6} .  (As a check,  \int_0^1 \left(\frac{x^2}{4} + \frac{11x}{6}\right) dx = \frac{1}{12} + \frac{11}{12} = 1 .)
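And the uniform case checks out the same way (a sympy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
p = x**2 * y**3 + x * (2 - sp.Rational(2, 3) * y**3)  # the patch built above

image = sp.expand(sp.integrate(1 * p, (y, 0, 1)))     # u(1 - y) = 1 on [0, 1]
print(image)                                          # x**2/4 + 11*x/6
print(sp.integrate(image, (x, 0, 1)))                 # 1
```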

In my next post, I want to talk in more detail about the "patchix multiplication" not of a probability distribution vector on [0,1] by a patch, but of a patch by a patch, which is the basis of (self) patch powers: with this I want to begin a discussion of how we can map oscillations and movement in a different way, so that perhaps we can trace my cereal milk's movement in time.

  1. EastwoodDC
    October 11th, 2010 at 14:54 | #1

    This reminds me of something, but I can't quite put my finger on it. I'll post it for you, if I can figure out just what it is.

  2. EastwoodDC
    October 11th, 2010 at 15:08 | #2

    I was thinking of a copula, but in retrospect I'm not sure this has any relevance other than squeezing a probability distribution into the [0,1] range.
    http://en.wikipedia.org/wiki/Copula_(statistics)

  3. October 11th, 2010 at 17:17 | #3

    Very interesting -- I think I *am* talking about a copula, of a type (the patch does sum to a uniform distribution at the margin). I'll read it carefully... Thanks!!!!!!!

