## On Patch(ix)es as Kernels of Integral Transforms (RWLA,MCT,GT,AM Part VII)

[This post is ongoing; as I think of a few things, I will write them down too]

So just a couple of days ago I was asked by a student to give a class on DEs using Laplace transforms, and it was while researching this that I realized that what I've been describing, converting a probability distribution on $[0,1]$ to another, is in effect a transform (minus the transform pair, which it was unclear to me how to obtain, corresponding perhaps to inverting the patch(ix)). The general form of **integral transforms** is, according to my book *Advanced Engineering Mathematics*, 2nd ed., by Michael Greenberg, p. 247:

$$F(s) = \int_a^b f(t) \, K(t, s) \, dt$$

where $K(t, s)$ is called the **kernel** of the transform, and this looks an awful lot like function by patch(ix) "multiplication," which I described as:

$$b(x) = \int_0^1 a(y) \, p(x, y) \, dy$$

as you may recall. In the former context $p(x, y)$ looks like a kernel, but here the result is a function of $x$ rather than of $y$, and I sum across $y$. To rewrite patch(ix)-multiplication as an integral transform, it would seem we need to rethink the patch position on the $xy$ plane, but it seems easy to do (and we do in number 1 below!).

In this post I want to (eventually be able to):

1. Formally rewrite my function-by-patch(ix) multiplication as a "Pasquali" integral transform.

If we are to modify patch multiplication to match the integral transform guideline, simply think of $p(x, y)$ as oriented a bit differently, yielding the fact that $\int_0^1 p(x, y) \, dy = 1$ for any choice of $x$. Then, for a probability distribution $a(x)$ in $[0,1]$, the integral transform is

$$B(y) = \int_0^1 a(x) \, p(x, y) \, dx$$

Now $p(x, y)$ is indeed then a kernel.
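As a quick numerical sanity check, here is a minimal sketch of the transform $B(y) = \int_0^1 a(x) \, p(x, y) \, dx$ computed by midpoint Riemann sums. The patch $p$ below is a hypothetical example I made up for illustration; its only virtue is that $\int_0^1 p(x, y) \, dy = 1$ for every $x$:

```python
import numpy as np

# Midpoint grid on [0,1]; Riemann sums stand in for the integrals.
n = 500
t = (np.arange(n) + 0.5) / n
dt = 1.0 / n
X, Y = np.meshgrid(t, t, indexing="ij")   # x along axis 0, y along axis 1

# Hypothetical patch: an x-weighted mixture of the densities 2y and 2(1-y),
# so the slice at each fixed x is a probability density in y.
p = X * (2 * Y) + (1 - X) * (2 * (1 - Y))

# Each slice integrates to 1 in y: the reoriented patch normalization.
assert np.allclose((p * dt).sum(axis=1), 1.0, atol=1e-6)

# A probability distribution a(x) on [0,1]: the triangular density 2x.
a = 2 * t

# The transform B(y) = ∫ a(x) p(x,y) dx.
B = (a[:, None] * p).sum(axis=0) * dt

# B is again a probability distribution on [0,1].
print((B * dt).sum())   # ≈ 1.0
```

For this particular patch, $B$ works out to the density $\tfrac{2}{3} + \tfrac{2}{3} y$, which indeed integrates to $1$: probability distributions map to probability distributions.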

2. Extend function-by-patch multiplication to probability distributions and patches on all of $\mathbb{R}$ and $\mathbb{R}^2$, respectively.

When I began thinking about probability distributions, I restricted them to the interval $[0,1]$ and a patch on $[0,1] \times [0,1]$, to try to obtain a strict analogy of (continuous) matrices with discrete matrices. I had been thinking for a while that this need not be the case, but when I glanced at the discussion of integral transforms in my Greenberg book, and particularly the one on the Laplace transform, I realized I could have done it right away. Thus, we can redefine patch multiplication as

$$B(y) = \int_{-\infty}^{\infty} a(x) \, p(x, y) \, dx$$

with

$$\int_{-\infty}^{\infty} p(x, y) \, dy = 1 \quad \text{for any choice of } x.$$

3. Explore the possibility of an inverse-patch via studying inverse-transforms.

3a. Write the patch-inverse-patch relation as a transform pair.

4. Take a hint from the Laplace and Fourier transforms to see what new insights can be obtained on patch(ix)es (or vice-versa).

Vice-versa: well, one of the things we realize first and foremost is that integral transforms are really an extension of the concept of matrix multiplication: if we create a matrix "surface" and multiply it by a "function" vector, we obtain another "function," and the kernel (truly a continuous matrix) is exactly our patch connecting the two. Can we not think now of discrete matrices (finite, infinite) as "samplings" of such surfaces? I think so. We can also combine kernels with kernels (as I have done in previous posts), much as we can combine matrices with matrices. I haven't really seen a discussion exploring this in books, which is perhaps a bit surprising. At any rate, recasting this "combination" shouldn't be much of a problem, and the theorems I proved in previous posts should still hold, because the new notation represents rigid motions of the kernel, yielding new kernel spaces that are isomorphic to the original.
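To make the "sampling" idea concrete, here is a minimal sketch using a hypothetical patch of my own devising (nothing canonical about it, beyond each slice at fixed first variable being a density in the second): sampling the surface on an $n \times n$ midpoint grid and scaling by the grid spacing gives a row-stochastic (Markov-like) matrix, and the sampled matrices multiply just as the kernels combine via $q(x, z) = \int_0^1 p(x, y) \, r(y, z) \, dy$:

```python
import numpy as np

# Midpoint grid on [0,1].
n = 400
t = (np.arange(n) + 0.5) / n
dt = 1.0 / n
X, Y = np.meshgrid(t, t, indexing="ij")

# Hypothetical patches p(x, y) and r(y, z) (the same surface, for simplicity),
# normalized so each slice at fixed first variable is a density in the second.
p = X * (2 * Y) + (1 - X) * (2 * (1 - Y))
r = p.copy()

# Sampling the surface and scaling by dt yields a row-stochastic matrix:
# a finite "sampling" of the continuous matrix.
P = p * dt
assert np.allclose(P.sum(axis=1), 1.0, atol=1e-6)

# Combining kernels, q(x, z) = ∫ p(x, y) r(y, z) dy, is then just the
# matrix product of the samples.
Q = P @ r          # ≈ q(x, z) evaluated on the grid

# Closed form of the combined kernel for this particular p and r.
q = (4*X*Y + 2*X*(1 - Y) + 2*(1 - X)*Y + 4*(1 - X)*(1 - Y)) / 3
assert np.allclose(Q, q, atol=1e-3)
```

The final assert checks the sampled product against the exact combined kernel, so combining kernels and multiplying their matrix samplings really do commute (up to quadrature error).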