## On Patchixes and Patches - or Pasqualian Matrixes - (RWLA,MCT,GT,AM Part II)

For the last few months I have been thinking about several very curious properties of patchixes and patches (mentioned here); in particular, having studied patch behavior in a "continuous Markov chain" context, and while drinking a bowl of cereal and observing the interesting movement of the remaining milk, it hit me: a patch could certainly describe milk movement at particular time steps. It is my hope to elucidate this concept a little better here today. In particular, I think I have discovered a new way to describe waves and oscillations, or rather, "cumulative movement where the amount of liquid is constant" in general; in my honest belief, this new way and the old way converge in the limit (based on my studies, here and here, of discrete Markov chains at the limit of tiny time steps, so that time is continuous), although it is a little unclear to me how at the moment. It is my hope that this new way not only paves the way for a new and rich field of research; I also foresee it clarifying studies in, for example, turbulence, and, maybe one day, Navier-Stokes related concepts. This last part may sound a little lofty and ambitious, but an approach in which, for example, vector fields of force or velocity must be described for every particle and position of space, with complicated second- and third-order partial derivatives, is itself somewhat ambitious and lofty, and often prohibitive for finding exact solutions; perhaps studying particle accumulations through a method of approximation, rather than individual particles, is the answer.

I want to attempt to describe the roadmap that led me to the concept of a patchix (pasqualian matrix) in the first place; it was in the context of discrete Markov chains. Specifically, I thought that, as we study in linear algebra, a transformation $T(\mathbf{x}) = \mathbf{y}$, where $\mathbf{x}$ is an $n$-vector with (finitely many) entries, can be described succinctly by an $n \times n$ matrix. Such a matrix then converts $\mathbf{x}$ into another $n$-dimensional vector, say $\mathbf{y}$. This field is very well studied, of course: in particular, invertible transformations are very useful, and many matrixes can be used to describe symmetries, so that they underlie Group Theory.

Another useful transformation concept resides in $\ell^2$, the space of sequences whose lengths squared (dot product with itself) converge, which was used, for example by Heisenberg, in quantum mechanics, as I understand it. For example, the sequence $\{a_i\}$ can be transported to another, $\{b_j\}$, via an infinite array of entries $m_{i,j}$, as by $b_j = \sum_i a_i \, m_{i,j}$. Key here is the fact that $\sum_i a_i^2$ converges, so that $\|a\| = \sqrt{\sum_i a_i^2}$, the norm, is defined. Also the dot product of two such sequences converges (why? by the Cauchy-Schwarz inequality, it is bounded by the product of the norms). Importantly, however, this information points in the direction that a transformation matrix could be created for $\ell^2$ to facilitate computation, with an infinite number of entries, so that indeed a sequence is taken into another in this space in a manner that is easy and convenient. I think this concept was used by Kolmogorov in extending Markov matrices as well, but I freely admit I am not very versed in mathematical history. Help in this regard is much appreciated.
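To make this concrete numerically, here is a little Python sketch of my own (the sequence and the operator are just assumed for illustration): the sequence $a_i = 1/i$ is square-summable, so its norm is defined; a shift operator (an "infinite matrix" with ones just above the diagonal, applied here directly rather than stored) takes it to another $\ell^2$ sequence, and the dot product of the two converges as Cauchy-Schwarz promises.

```python
import numpy as np

# Finite truncation of an l^2 sequence: a_i = 1/i, whose squared sum
# converges to pi^2/6.
N = 10000
i = np.arange(1, N + 1)
a = 1.0 / i

norm_a = np.sqrt(np.sum(a**2))       # the l^2 norm of a is defined
b = np.roll(a, -1)                   # shift operator: b_j = a_{j+1}
b[-1] = 0.0
norm_b = np.sqrt(np.sum(b**2))

dot = np.dot(a, b)                   # converges; bounded by norm_a * norm_b
print(norm_a**2, dot <= norm_a * norm_b)
```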

In a function space such as $L^2[0,1]$, the inner product of, say, $f(x)$ with $g(x)$ is also defined, as $\langle f, g \rangle = \int_0^1 f(x)\,g(x)\,dx$: point-wise continuous multiplications of the functions, summed absolutely convergently (which results from the integral). Then the norm of $f$ is $\|f\| = \sqrt{\int_0^1 f^2(x)\,dx}$. The problem is of course that there is no convenient "continuous matrix" that results in the transform $f \mapsto g$, although transforms of a kind can be achieved through a discrete matrix, if its coefficients represent, say, the coefficients of a (finite) polynomial. Thus we can transform polynomials into other polynomials, but this is limiting in scope in many ways.
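Numerically, the inner product and norm come out of a simple midpoint sum; here is a sketch of my own (the choice of $f$, a Beta(2,2) density, is just an illustrative assumption):

```python
import numpy as np

# The L^2 inner product <f, g> = integral of f(x) g(x) over [0, 1],
# computed with a midpoint Riemann sum.
n = 100000
x = (np.arange(n) + 0.5) / n          # midpoints of [0, 1]
f = 6 * x * (1 - x)                   # an example function (a density)
one = np.ones_like(x)

inner = np.sum(f * one) / n           # <f, 1> = integral of f = 1 here
norm_f = np.sqrt(np.sum(f * f) / n)   # ||f|| = sqrt(integral of f^2) = sqrt(1.2)
print(inner, norm_f)
```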

The idea is that we transform a function to another by point-wise reassignment: continuously. Thus the concept of a patchix (pasqualian matrix) emerges; we need only mimic the mechanical motions we go through when conveniently calculating any other matrix product. Take a function $f(x)$ defined continuously on $[0,1]$. From another viewpoint, consider $f$ as $F(x,y) = f(x)$, so that, at any value of $y$, the cross-section looks like $f(x)$. Define a patchix $p(x,y)$ on $[0,1] \times [0,1]$. Now "multiply" the function (actually a patchix itself, from the different viewpoint) with the patchix as

$$g(y) = \int_0^1 f(x)\,p(x,y)\,dx$$

to obtain $g(y)$, now aligned with the $y$-axis. The patchix has transformed $f$ as we wanted. I think there are profound implications from this simple observation; one may now consider, for example, inverse patchixes (or how to get back to $f$ from $g$), identity patchixes, and along with these one must consider what it may mean, as crazy as it sounds, to solve an infinite (dense) system of equations; powers of patchixes and what they represent; eigenpatchixvalues and eigenfunctionvectors; group-theoretical concepts such as the symmetry groups the patchixes may give rise to, etc.
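These mechanical motions discretize nicely; here is a minimal Python sketch of my own (grid size and example functions are assumptions) in which, on an $n$-point grid, the patchix becomes an ordinary $n \times n$ matrix and "patchix multiplication" becomes a row-vector-times-matrix product weighted by $dx$:

```python
import numpy as np

# Discretized "patchix multiplication": g(y) = integral of f(x) p(x, y) dx
# becomes f @ P * dx on a midpoint grid.
n = 1000
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx            # midpoint grid on [0, 1]

f = 6 * x * (1 - x)                      # a function "vector", indexed by x
P = np.tile(2 * x, (n, 1))               # the patchix p(x, y) = 2y; rows indexed by x

g = f @ P * dx                           # g(y), the transformed function

print(np.max(np.abs(g - 2 * x)))         # g recovers the profile 2y (small error)
```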

As much as that is extremely interesting to me, and I plan on continuing with my own investigations, my previous post and informal paper considered the implications of multiplying functions by functions, functions by patchixes, and patchixes by patchixes. Actually I considered special kinds of patchixes $p(x,y)$: those having the property that, for any specific value of $x$, $\int_0^1 p(x,y)\,dy = 1$. Such special patchixes I dubbed *patches* (pasqualian special matrixes), and I went on to attempt an extension of a Markov matrix and its concept into a Continuous Markov Patch, along with the logical extension of the Chapman-Kolmogorov equation, by first defining (discrete) patch powers (this basically means "patchix multiplying" a patch with itself). The post can be found here.

So today what I want to do is continue the characterization of patches that I started. First of all, emulating some properties of the Markov treatment, I want to show how we can multiply a probability distribution (function) "vector" by a patch to obtain another probability distribution function vector. Now this probability distribution is special, in the sense that it doesn't live in all of $\mathbb{R}$ but in $[0,1]$. A beta distribution, such as $f(x) = 6x(1-x)$, is the type that I'm specifically thinking about. So suppose we have such a function $f(x)$, which we think of as a row "vector" indexed continuously by $x$, in preparation to multiply by the patch. Suppose then the patch is $p(x,y)$, with the property that, for any specific $x$, $\int_0^1 p(x,y)\,dy = 1$. Now, the "patchix multiplication" is done by

$$g(y) = \int_0^1 f(x)\,p(x,y)\,dx$$

and $g$ is a function of $y$. We can show that this is indeed a probability distribution function vector by taking the integral over every infinitesimal change in $y$, and seeing if it adds up to one, like this:

$$\int_0^1 g(y)\,dy = \int_0^1 \int_0^1 f(x)\,p(x,y)\,dx\,dy$$

If there is no issue with absolute convergence of the integrals, there is no issue with the order of integration, by the Fubini theorem, so we have:

$$\int_0^1 g(y)\,dy = \int_0^1 f(x) \left( \int_0^1 p(x,y)\,dy \right) dx$$

Now for the inner integral: $\int_0^1 p(x,y)\,dy$ adds up to 1 for any choice of $x$, so the whole artifact is in effect a uniform distribution in $x$ with value 1 (i.e., for any choice of $x$ in $[0,1]$, the value of the inner integral is 1). Thus we have, in effect,

$$\int_0^1 g(y)\,dy = \int_0^1 f(x) \cdot 1\,dx = \int_0^1 f(x)\,dx$$

and that last integral we know is 1 by hypothesis.
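This argument can also be checked numerically; in this sketch of my own, any positive kernel normalized along its $x$-sections behaves as a patch (the kernel here is random, purely for illustration), and the patchix product of a density with it still integrates to 1:

```python
import numpy as np

# Build an arbitrary patch by normalizing a positive kernel so every
# x-section integrates to 1 in y; then total probability is conserved.
n = 500
d = 1.0 / n
x = (np.arange(n) + 0.5) * d

rng = np.random.default_rng(0)
Q = rng.random((n, n)) + 0.1                 # any positive kernel q(x, y)
P = Q / (Q.sum(axis=1, keepdims=True) * d)   # now each x-section integrates to 1

f = 6 * x * (1 - x)                          # a density on [0, 1]
g = f @ P * d                                # g(y) = integral of f(x) p(x, y) dx
print(g.sum() * d)                           # total probability, approximately 1
```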

Here's a specific example. Let's declare, for concreteness, $f(x) = 6x(1-x)$ and $p(x,y) = 2y$. Of course, as required, $\int_0^1 2y\,dy = 1$. So then, by "patchix multiplication,"

$$g(y) = \int_0^1 6x(1-x) \cdot 2y\,dx = 2y \int_0^1 6x(1-x)\,dx = 2y$$

Thus, via this particular patch, the function $f(x) = 6x(1-x)$ became $g(y) = 2y$, point by point. Which brings me to my next point.

If the patch $p(x,y)$ is really solely a function of $y$, then it follows that any initial probability distribution becomes the patch's own distribution (viewed as a function of a single dimension, rather than two). Here's why:

$$g(y) = \int_0^1 f(x)\,p(y)\,dx = p(y) \int_0^1 f(x)\,dx = p(y)$$

I think, of course, a lot more interesting are patches that are in fact functions of both $x$ and $y$. There arises a problem in constructing them. For example, let's assume that we can split $p(x,y) = g(x)\,h(y)$. Forcing our requirement that $\int_0^1 p(x,y)\,dy = 1$ for any $x$ means:

$$g(x) \int_0^1 h(y)\,dy = 1$$

which certainly implies that $g(x)$ is a constant, since the integral is a constant. Thus it follows that $p(x,y)$ is a function of $y$ alone. Then we may try $p(x,y) = g(x) + h(y)$. Forcing our requirement again,

$$\int_0^1 \left( g(x) + h(y) \right) dy = g(x) + \int_0^1 h(y)\,dy = 1$$

means that $g(x) = 1 - \int_0^1 h(y)\,dy$ is, again, a constant, and once more $p(x,y)$ is a function of $y$ alone. Clearly the function interactions should be more complex; let's say something like $p(x,y) = g(x)\,h(y) + j(x)\,k(y)$. Forcing the requirement gives

$$g(x) \int_0^1 h(y)\,dy + j(x) \int_0^1 k(y)\,dy = 1$$

so that determining three of the functions determines the last one; say,

$$j(x) = \frac{1 - g(x)\int_0^1 h(y)\,dy}{\int_0^1 k(y)\,dy},$$

which is, in fact, a function of $x$.
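Here is a little numeric sketch of this construction, with my own assumed choices of $g$, $h$, and $k$, and $j$ solved from the requirement; every $x$-section of the resulting patch integrates to 1:

```python
import numpy as np

# Pick g, h, k freely; j is then forced by
# g(x) * integral(h) + j(x) * integral(k) = 1.
n = 2000
y = (np.arange(n) + 0.5) / n
h = 2 * y                      # integral of h over [0, 1] is 1
k = 6 * y * (1 - y)            # integral of k over [0, 1] is 1
H = h.sum() / n                # numerical integral of h
K = k.sum() / n                # numerical integral of k

def g(x):
    return x**2

def j(x):
    return (1 - g(x) * H) / K  # determined by the other three functions

# check: each x-section of p(x, y) = g(x)h(y) + j(x)k(y) integrates to 1
checks = [(g(xv) * h + j(xv) * k).sum() / n for xv in (0.0, 0.25, 0.7, 1.0)]
print(checks)
```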

Let's construct a patch in this manner and see its effect on a probability distribution. For concreteness, let $g(x) = x^2$, and $h(y) = 2y$, and $k(y) = 6y(1-y)$, so that

$$j(x) = \frac{1 - x^2 \int_0^1 2y\,dy}{\int_0^1 6y(1-y)\,dy} = 1 - x^2$$

and $p(x,y) = 2x^2 y + 6(1-x^2)\,y(1-y)$.

So now, with $f(x) = 6x(1-x)$, the "patchix product" is

$$g(y) = \int_0^1 6x(1-x) \left[ 2x^2 y + 6(1-x^2)\,y(1-y) \right] dx = \frac{3}{5}\,y + \frac{21}{5}\,y(1-y)$$

which *is* a probability distribution on the interval $[0,1]$ and, as a matter of check, we can integrate with respect to $y$ to obtain 1. Thus the probability distribution function is carried, point by point, as $6x(1-x) \mapsto \frac{3}{5}y + \frac{21}{5}y(1-y)$, which, quite frankly, is very amusing to me!
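As a numeric check of a worked example of this kind (the concrete patch and density here are my own illustrative choices), the discretized patchix product matches the closed form and integrates to 1:

```python
import numpy as np

# Patch p(x, y) = 2x^2*y + 6(1 - x^2)*y*(1 - y) applied to the
# Beta(2,2) density 6x(1 - x); expect (3/5)y + (21/5)y(1 - y).
n = 1000
d = 1.0 / n
x = (np.arange(n) + 0.5) * d
X, Y = np.meshgrid(x, x, indexing="ij")
P = 2 * X**2 * Y + 6 * (1 - X**2) * Y * (1 - Y)

f = 6 * x * (1 - x)
g = f @ P * d                                    # the patchix product, in y
expected = (3 / 5) * x + (21 / 5) * x * (1 - x)  # closed form on the same grid
print(np.max(np.abs(g - expected)), g.sum() * d)
```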

From an analytical point of view, it may be interesting or useful to see what happens to the uniform distribution on $[0,1]$ when it's "patchix multiplied" by the patch above. We would have:

$$g(y) = \int_0^1 1 \cdot \left[ 2x^2 y + 6(1-x^2)\,y(1-y) \right] dx = \frac{2}{3}\,y + 4y(1-y)$$

so that $u(x) = 1 \mapsto \frac{2}{3}y + 4y(1-y)$.
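And the corresponding numeric check for the uniform case (same illustrative patch as in the sketch above, again an assumption of mine):

```python
import numpy as np

# The patch 2x^2*y + 6(1 - x^2)*y*(1 - y) applied to the uniform
# density u(x) = 1; expect (2/3)y + 4y(1 - y).
n = 1000
d = 1.0 / n
x = (np.arange(n) + 0.5) * d
X, Y = np.meshgrid(x, x, indexing="ij")
P = 2 * X**2 * Y + 6 * (1 - X**2) * Y * (1 - Y)

u = np.ones(n)
g = u @ P * d
expected = (2 / 3) * x + 4 * x * (1 - x)
print(np.max(np.abs(g - expected)))
```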

In my next post, I want to talk in more detail about the "patchix multiplication" of, not a probability distribution vector on $[0,1]$ by a patch, but of a patch by a patch, which is the basis of (self) patch powers: with this I want to begin a discussion of how we can map oscillations and movement in a different way, so that perhaps we can trace my cereal-milk movement in time.

This reminds me of something, but I can't quite put my finger on it. I'll post it for you, if I can figure out just what it is. I was thinking of a Copula, but in retrospect I'm not sure this has any relevance other than squeezing a probability distribution into the [0,1] range. http://en.wikipedia.org/wiki/Copula_(statistics)

Very interesting -- I think I *am* talking about a copula of a type (the patch does sum to a uniform distribution at the margin). I'll read it carefully... Thanks!!