Archive

Posts Tagged ‘patchix’

On Infinite Term Functions with Converging Integrals, Part II

May 5th, 2014

This post is a continuation of a previous one. I have developed several preparatory claims and theorems: the first and second claims show that a particular collection of finite polynomial functions have area 1 on the interval [0,1], and are hence also Pasquali patches.  The third and fourth show that any finite sum of such functions also has a converging integral on the same interval.  The corollaries show that this is not the case if the sum is infinite.  A fun way to summarize this information is developed soon after, and these observations, though simple, lead us to classify all Pasquali patches which are functions of x alone, and therefore all stationary/limiting/stable surfaces, eigenfunctions, or wavevectors (from the quantum-mechanics point of view).

Claim 1. Take f_i(x) = (i+1) x^i with i = 0 \ldots n.  Then \int_0^1 f_i(x) \, dx = 1, \forall i.

Proof (by induction, using the definition of integration).  We show first that \int_0^1 f_0(x) \, dx = 1. The expression equals

\int_0^1 x^0 \, dx = \int_0^1 1 \, dx = x \left. \right\vert_0^1 = 1

We assume the claim holds for the kth element, \int_0^1 f_k(x) \, dx = 1, although we readily know by the definition of integration that it is true, since

\int_0^1 (k+1) x^k \, dx = x^{k+1} \left. \right\vert_0^1 = 1^{k+1} = 1

The exact same definition argument applies to the (k+1)th element, and

\int_0^1 (k+2) x^{k+1} \, dx = x^{k+2} \left. \right\vert_0^1 = 1^{k+2} = 1
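As a quick numerical sanity check of the claim (a sketch, assuming Python with scipy is available; not part of the proof):

from scipy.integrate import quad

# Each f_i(x) = (i+1) x^i should have unit area on [0, 1].
for i in range(6):
    area, _ = quad(lambda x, i=i: (i + 1) * x**i, 0, 1)
    print(i, area)  # ~1.0 for every i, up to quadrature error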

Claim 2. The functions f_i(x) = (i+1) x^i with i = 0 \ldots n are Pasquali patches.

Proof.  A Pasquali patch is a function p(x,y) so that \int_0^1 p(x,y) dx = 1.  Let p(x,y) = f_i(x).  Since by Claim 1 \int_0^1 f_i(x) \, dx = 1, \forall i = 0 \ldots n, applying the definition shows that the f_i(x) = (i+1) x^i are Pasquali patches \forall i = 0 \ldots n .

Claim 3.  The finite polynomial g(x) = \sum_{i=0}^n (i+1) x^{i} has area n+1 on the interval [0,1].

Proof.  We are looking for

\int_0^1 \sum_{i=0}^n (i+1) x^i \, dx

The sum is finite, so it converges, and there is no issue exchanging the order of the sum and the integral. Thus:

\sum_{i=0}^n \int_0^1 (1+i) x^i \, dx =\sum_{i=0}^n \left( x^{i+1} \left. \right\vert_0^1 \right) = \sum_{i=0}^n 1^{i+1} =\sum_{i=0}^n 1 = n+1

Claim 4. Pick n distinct functions from the pool f_i(x) = (i+1) x^i.  For example, pick f_3(x), f_5(x), and f_7(x).  Create the function h(x) = \sum_i f_i(x), the sum running over the picked indices.  Then \int_0^1 h(x) \, dx = n.

Proof by induction.  Since by Claim 2 all f_i(x) are Pasquali patches, their integral on the interval is 1 (Claim 1).  Picking 1 function from the pool thus gives an integral of 1 on the interval.  Suppose that picking k functions contributes k units to the integral on the interval. Now pick k+1 functions.  The first k functions contribute k units to the integral on the interval, and the 1 additional function contributes 1 unit more.  Thus k+1 functions contribute k+1 units to the integral on the interval.
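The example picks can be checked numerically as well (again a sketch assuming scipy):

from scipy.integrate import quad

picks = [3, 5, 7]                                  # the example f_3, f_5, f_7
h = lambda x: sum((i + 1) * x**i for i in picks)   # h = f_3 + f_5 + f_7
area, _ = quad(h, 0, 1)
print(area)  # ~3.0: one unit of area per picked Pasquali patch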

Corollary 1. The infinite polynomial a(x) = \sum_{i=0}^\infty (i+1) x^i diverges in area on the interval [0,1].

Proof.  Take

\int_0^1 \left( \lim_{n \to \infty} \sum_{i=0}^n (1+i) x^i \right) \, dx =\lim_{n \to \infty} \int_0^1\sum_{i=0}^n (1+i) x^i \, dx

Exchanging the order of limit and integral here is justified by the monotone convergence theorem, since the partial sums are nonnegative and increasing on the interval.  By Claim 3 the inner integral equals n+1, and

\lim_{n \to \infty} n+1 = \infty

Corollary 2.  The infinite polynomial a(x) - h(x) diverges in area on the interval [0,1].

Proof.  Take the limit

 \lim_{n \to \infty} \left[ a(x) - h(x) \right]

Taking n to infinity applies to a(x) only, which we know diverges by Corollary 1.  The same limit has no effect on h(x), as the sum it is composed of is finite and its area adds up to an integer constant, say m.  We conclude that any infinite collection of terms of f_i(x) diverges in area, even when a finite number of them may be absent from the sum.

And now sushi.

Corollary 3.  The infinite polynomial a(x) - b(x) diverges in area on the interval [0,1], where a(x), b(x) are infinite polynomials constructed by sums of functions picked from the pool f_i(x) = (i+1) x^i and with no repetitions. (Note that the difference of these two infinite polynomials must also be infinite.)

Proof. Since a(x) - b(x) is an infinite polynomial, its integral will be an infinite sum of ones, since the functions it contains are f_i(x), these are Pasquali patches (Claim 2), and there are no repetitions.  Such an infinite sum of ones clearly diverges.

Remark 1.  We can view what we have learned in the claims from a slightly different vantage point.  Create the infinite identity matrix

 I = \left[ \begin{array}{cccc} 1 & 0 & 0 & \ldots \\ 0 & 1 & 0 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right]

Next create the following polynomial differential vector

 D =\left[ \begin{array}{c} 1 \\ 2x \\ 3x^2 \\ \vdots \end{array} \right]

It is clear that

 \int_0^1 I_i \cdot D \, dx =1

for all rows  i of  I .  We can omit the little  i because this definition applies to all rows and:

 \int_0^1 I \cdot D \, dx = \int_0^1 D \, dx= \left[ \begin{array}{c} 1 \\ 1 \\ \vdots \end{array} \right] = \bf{1}

This of course summarizes Claims 1 and 2.  Next, define the matrix J consisting of rows which are finite sums of rows of I (so that each row of J contains a finite number of ones, namely n of them, coming from n picked rows of I).  Claims 3 and 4 are summarized in the statement

 \int_0^1 J\cdot D \, dx = S

where S is the vector consisting of the sum of the rows of J, which, since it is made up of a finite number of ones at each row, adds up to a constant integer at each row:

S = \left[ \begin{array}{c} n_1 \\ n_2 \\ \vdots \end{array} \right]

Finally, the corollaries can be summarized by creating a matrix K consisting of rows with either a finite number of zeroes (and an infinite number of ones), or an infinite number of zeroes but an infinite number of ones as well.  It is clear then that

 \int_0^1 K\cdot D \, dx = \infty
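The whole remark can be played with concretely by truncating I, J and D (a sketch assuming sympy; the truncation size N and the sample row of J are arbitrary choices):

import sympy as sp

x = sp.symbols('x')
N = 6
D = sp.Matrix([(i + 1) * x**i for i in range(N)])  # truncated D
I = sp.eye(N)                                      # truncated identity
J_row = sp.Matrix([[1, 0, 1, 0, 1, 0]])            # a sample row of J with n = 3 ones

print([sp.integrate((I[i, :] * D)[0], (x, 0, 1)) for i in range(N)])  # [1, 1, 1, 1, 1, 1]
print(sp.integrate((J_row * D)[0], (x, 0, 1)))                        # 3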

Remark 2. The cool thing about this notation is that it gives us power to conclude several interesting things.  For example, scaling the matrices I and J by a constant t shows convergence of the integral on the interval \left[ 0,1 \right] of every one of the scaled sums represented by the rows of such matrices.  Thus:

Corollary 4. Let I^* = t \cdot I and J^* = t \cdot J, where t is a scaling factor.  Then the area on the interval from 0 to 1 of each of the infinitely many polynomials represented by I^* \cdot D and J^* \cdot D converges.

Proof.  On the one hand, we have 

 \int_0^1 I^* \cdot D \, dx = \int_0^1 t \cdot I \cdot D \, dx =t \left( \int_0^1 I \cdot D \, dx \right) = t \cdot \bf{1}

 On the other hand,

 \int_0^1 J^* \cdot D \, dx =\int_0^1 t \cdot J\cdot D \, dx = t \left( \int_0^1 J\cdot D \, dx \right) = t \cdot S =\left[ \begin{array}{c} t \cdot n_1 \\ t \cdot n_2 \\ \vdots \end{array} \right]

Remark 3. Next consider the infinite-matrix formed by convergent sequences (at the sum) at each row,

A = \left[ \begin{array}{cccc} \vdots & \vdots & \vdots & \vdots \\ 1 & \frac{1}{2^2} & \frac{1}{3^2} & \ldots \\ \vdots & \vdots & \vdots & \vdots \end{array} \right]

Depicted are the reciprocals of the squares, whose sum we know converges (the Basel problem), simply for illustration; in general, any convergent sequence may occupy the ith row of A.  We have

 \int_0^1 A_i\cdot D \, dx = \sum_j a_{i,j}

is convergent by definition.  The cool thing is we can easily prove in one swoop that all scaled sequences will also converge at the sum (and that the infinite polynomials A \cdot D, whose coefficients are the rows of A, have converging area in the interval from 0 to 1).

Corollary 5. Let A^* = t \cdot A, where t is a scaling factor.  Then the area in the interval from 0 to 1 of each of the infinitely many polynomials represented by the entries of A^* \cdot D converges.

Proof.  For all i we have

 \int_0^1 A^*_i \cdot D \, dx = \int_0^1 t \cdot A_i \cdot D \, dx = t \left( \int_0^1 A_i \cdot D \, dx \right) = t \cdot \sum_j a_{i,j}

which converges, being a constant multiple of a convergent sum.

All of these small and obvious observations lead to this:

Claim 5. The Grand Classification Theorem of Limiting Surfaces (A General and Absolutely Complete Classification of Pasquali patches which are functions of x alone).  All Pasquali patches which are functions of x alone (and therefore possible limiting surfaces) take the form

 p(x) = \frac{A_i \cdot D}{\sum_j a_{i,j}}

with, naturally, \sum_j a_{i,j} \neq 0 .

Proof. We have that, since such  p(x) is a Pasquali patch, it must conform to the definition.  Thus 

 \int_0^1 p(x) \, dx = \int_0^1 \frac{A_i \cdot D}{\sum_j a_{i,j}} \, dx = \frac{\int_0^1 A_i \cdot D \, dx}{\sum_j a_{i,j}} = \frac{\sum_j a_{i,j}}{\sum_j a_{i,j}} = 1

shows this is indeed the case.  To show that "all" Pasquali patches that are functions of x alone are of the form of p(x), we argue by contradiction.  Suppose that there is a Pasquali patch that is a function of x alone which does not take the form of p(x).  It cannot be a finite polynomial, since A_i was defined to be that matrix formed by all sequences convergent at the sum at each row, which can be scaled any which way we like, and this includes sequences with a finite number of nonzero coefficients.  But it cannot be an infinite polynomial either, by the same definition of A_i , which includes every infinite sequence for which \sum_j a_{i,j} is convergent.  Thus it would have to be a polynomial formed by dotting divergent sequences (at the sum), but all such have been happily excluded from the definition of A .

Remark 4.  Thus, EVERY convergent series has an associated Pasquali patch (which is solely a function of  x), and vice versa, covering the totality of the Pasquali patch functions of x universe and the convergent series universe bijectively.

Remark 5.  Notice how the definition takes into account Taylor polynomial coefficients (thus all analytic functions are included) and those that are not (even those that are as yet unclassified), and all sequences which may be scaled by a factor as well.

Claim 6. Let f(x) be Maclaurin-expandable, the expansion being valid on an interval containing [0,1], so that

 f(x) = \sum_{n=0}^\infty \frac{f^n(0) x^n}{n!}

Then

 \sum_{n=0}^\infty\frac{f^n(0)}{(n+1)!} = \int_0^1 f(x) \, dx

Proof.  

\int_0^1 f(x) \, dx = \int_0^1 A_i \cdot D \, dx

for some row i of A.  Such a row would have to be of the form

 A_i = \left[ \begin{array}{cccc} f(0) & \ldots & \frac{f^n(0)}{n! (n+1)} & \ldots \end{array} \right]

 Then the integral

\int_0^1 A_i \cdot D \, dx = \sum_j a_{i,j} =\sum_{n=0}^\infty \frac{f^n(0)}{n! (n+1)} = \sum_{n=0}^\infty \frac{f^n(0)}{(n+1)!}

Remark 6. Notice that all such Maclaurin-expandable functions converge in area (have stable area) in the interval from 0 to 1, a remarkable fact.

Example 1.  Take

f(x) = e^x = \sum_{n=0}^\infty \frac{x^n}{n!}

 By applying Claim 6, it follows that

 \sum_{n=0}^\infty \frac{1}{(n+1)!} = \int_0^1 e^x \, dx = e - 1
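Numerically (a sketch; plain Python suffices):

import math

s = math.fsum(1 / math.factorial(n + 1) for n in range(20))
print(s, math.e - 1)  # both ~1.718281828..., in agreement with Claim 6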

Remark 7. Now we have a happy way to construct (any and all) Pasquali patches which are functions of x alone, merely by taking a sequence which is convergent at the sum.
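For instance, feeding the recipe the Basel sequence a_j = 1/(j+1)^2 (a sketch assuming sympy; the truncation order is an arbitrary choice, and the truncated patch is exactly normalized by construction):

import sympy as sp

x = sp.symbols('x')
N = 50                                                # truncation order
a = [sp.Rational(1, (j + 1)**2) for j in range(N)]    # convergent at the sum
p = sum(a[j] * (j + 1) * x**j for j in range(N)) / sum(a)
print(sp.integrate(p, (x, 0, 1)))                     # exactly 1: a Pasquali patch in x alone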

Remark 8. Quantum mechanically, we now know all possible shapes that a stationary (limiting) eigen wavevector can take.

Remark 9. This gives us extraordinary power to calculate convergent sums via integration, as the next examples show.  It also gives us extraordinary power to express any number as an infinite sum, for example.

 

Compendium of Claims and Proofs, Including New Ones, Part I

December 3rd, 2012

I've condensed this exceptional mathematical wisdom here, which is still transforming as I organize and jot down ideas.

Part I v16 (latest, but very unorganized after @Dynamics)

Part I v15

Part I v14

Part I v13

Part I v12

Part I v11

Part I v10

Part I v9

Part I v8

Part I v7

Part I v6

Part I v5

Part I v4

Part I v3

On Eigen(patch(ix))values, II - (RWLA,MCT,GT,AM Part IX)

March 22nd, 2011

So remember my little conjecture from last time, that the number of patch(ix) (kernel) eigenvalues would depend on the number of x terms that composed it?  I started working it out by writing all the expressions and trying to substitute them, and I got sums of sums of sums and it became nightmarish; since math is supposed to be elegant, I opted for a different track.  A little proposition did result, but I'm not sure yet if it means what I want it to mean. Haha.

If you recall, last time we figured that

 B_1 = \frac{B_2 \sum_{i=0}^\infty f_2^i(1-y)G_1^{i+1}(y)\vert_0^1}{\lambda - \sum_{i=0}^\infty f_1^i(1-y)G_1^{i+1}(y)\vert_0^1}

and

 B_2 = \frac{B_1 \sum_{i=0}^\infty f_1^i(1-y) G_2^{i+1}(y)\vert_0^1}{\lambda - \sum_{i=0}^\infty f_2^i(1-y) G_2^{i+1}(y)\vert_0^1}

Let's rename the sums by indexing over the subscripts, so that

 \begin{array}{ccc} C_{1,1} & = &\sum_{i=0}^\infty f_1^i(1-y)G_1^{i+1}(y)\vert_0^1 \\ C_{1,2} & = &\sum_{i=0}^\infty f_1^i(1-y) G_2^{i+1}(y)\vert_0^1 \\ C_{2,1} & = &\sum_{i=0}^\infty f_2^i(1-y)G_1^{i+1}(y)\vert_0^1 \\ C_{2,2} & = &\sum_{i=0}^\infty f_2^i(1-y) G_2^{i+1}(y)\vert_0^1 \end{array}

Renaming therefore the constants we get:

 B_1 = \frac{B_2 C_{2,1}}{\lambda - C_{1,1}}

and

 B_2 = \frac{B_1 C_{1,2}}{\lambda - C_{2,2}}

Last time we substituted one equation into the other to figure out additional restrictions on \lambda .  A faster way to do this is to notice:

 \left( \lambda - C_{1,1} \right)B_1 = B_2 C_{2,1}

and

 \left( \lambda - C_{2,2} \right) B_2 = B_1 C_{1,2}

If we multiply these two expressions we get

 \left( \lambda - C_{1,1} \right)\left( \lambda - C_{2,2} \right) B_1 B_2 = B_1 B_2 C_{1,2} C_{2,1}

 Finally, dividing out both B_1, B_2 we arrive at the quadratic expression in \lambda from before:

 \left( \lambda - C_{1,1} \right)\left( \lambda - C_{2,2} \right) = C_{1,2} C_{2,1}

Now.  Let's posit that, instead of  a(x) = B_1 f_1(x) + B_2 f_2(x) we have  a^*(x) = B_1 f_1(x) + B_3 f_3(x) .  Then by all the same arguments we should have an expression of  B_1 that is the same, and an expression of  B_3 that is:

 \left( \lambda - C_{3,3} \right) B_3 = B_1 C_{1,3}

with the similar implication that

 \left( \lambda - C_{1,1} \right)\left( \lambda - C_{3,3} \right) = C_{1,3} C_{3,1}

An  a^{**}(x) = B_2 f_2(x) + B_3 f_3(x) would give the implication

 \left( \lambda - C_{2,2} \right)\left( \lambda - C_{3,3} \right) = C_{2,3} C_{3,2}

If we are to multiply all similar expressions, we get

 \left( \lambda - C_{1,1} \right)^2\left( \lambda - C_{2,2} \right)^2 \left( \lambda - C_{3,3} \right)^2 = C_{1,2} C_{2,1}C_{1,3} C_{3,1}C_{2,3} C_{3,2}

or

 \left( \lambda - C_{1,1} \right) \left( \lambda - C_{2,2} \right) \left( \lambda - C_{3,3} \right) = \sqrt{C_{1,2} C_{2,1}C_{1,3} C_{3,1}C_{2,3} C_{3,2}}

In other words, we want to make a pairwise argument to obtain the product of the \lambda -expressions as a polynomial in \lambda .  Next I'd like to show the proposition:

 \left( \lambda - C_{1,1}\right) \cdot \left( \lambda - C_{2,2} \right) \cdot \ldots \cdot \left( \lambda - C_{n,n} \right) = \sqrt[n-1]{\prod_{\forall i, \forall j, i \neq j}^n C_{i,j}}

and for this I want to begin with a combinatorial argument.  On the left hand side, the number of pairwise comparisons we can make depends on the number of \lambda factors of the \lambda polynomial (or, the highest degree of the \lambda polynomial).  That is to say, we can make \binom{n}{2} pairwise comparisons, or \frac{n!}{(n-2)!2!} = \frac{n (n-1)}{2} comparisons.  Now, I don't know whether anyone has ever noticed this, but this last simplified part looks exceptionally like Gauss's sum of consecutive integers (the triangular numbers), so in other words, this last part is in effect \sum_{i=1}^{n-1} i , which I find very cool, because we have just shown, quite accidentally, the equivalence:

 \binom{n}{2} = \binom{n}{n-2} = \sum_{i=1}^{n-1} i
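A tiny numerical check of this equivalence (a sketch in plain Python; math.comb needs Python 3.8+):

from math import comb

print(all(comb(n, 2) == comb(n, n - 2) == sum(range(1, n)) for n in range(2, 30)))  # True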

The way I actually figured this out is by noticing that, in our pairwise comparisons, say for the 3rd-degree-polynomial-in- \lambda case, by writing the pairwise comparisons first of the  (\lambda - C_{1,1}) products, then of the  (\lambda - C_{2,2}) (in other words, ordering logically all  \binom{3}{2} products), there were 2 of the first and 1 of the second (and none of the  (\lambda - C_{3,3}) ).  If we do the same for the 4th-degree, there are 3 of the  (\lambda - C_{1,1}) , 2 of the  (\lambda - C_{2,2}) , and 1 of the  (\lambda - C_{3,3}) , with none of the  (\lambda - C_{4,4}) .  In other words, the  \binom{4}{2} pair-products could be written as the sum of the cardinality of the groupings:  3 + 2 + 1 .

Now Gauss's sum of integers formula is already known to work in the general case (just use an inductive proof, e.g.), so the substitution of it into the binomial equivalence needs no further elaboration: it generalizes automatically for all  n .

So if we are to multiply all pairwise comparisons, notice there will be  n - 1 products of each  \lambda -factor: there are  n - 1 products belonging to the  (\lambda - C_{1,1}) grouping (because this first grouping has n-1 entries, from the Gauss formula equivalence), there are  n - 2 products belonging to the  (\lambda - C_{2,2}) PLUS the one already counted in the  (\lambda - C_{1,1}) grouping, for a total of, again,  n - 1 .  The  kth grouping  (\lambda - C_{k,k}) has  n - k products listed for itself PLUS one for each of the previous k - 1 groupings, for a total of  n - k + k - 1 = n - 1, and the (k+1)th grouping  (\lambda - C_{k+1, k+1}) has  n - (k+1) products listed for itself PLUS one for each of the previous  k groupings, for a total of n - (k+1) + k = n - 1.  We are left in effect with:

 (\lambda - C_{1,1})^{n-1} \cdot(\lambda - C_{2,2})^{n-1} \cdot \ldots \cdot (\lambda - C_{n,n})^{n-1}

The right hand side of each pairwise comparison was nothing more than the simple product on the cross indexes of  C , so it's not difficult to argue then that, if we multiply  \binom{n}{2} such pairs, we get  \prod_{\forall i, \forall j, i \neq j}^n C_{i,j} .  We then take the (n-1)th root on both sides of the equation.

Since the  n + 1 case follows the same basic structure of the argument, we are done with proving our proposition.
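The counting step can also be confirmed symbolically for a small case, say n = 3 (a sketch assuming sympy; the C_{i,i} appear as plain symbols here):

import sympy as sp

lam, c11, c22, c33 = sp.symbols('lambda c11 c22 c33')
pairs = [(c11, c22), (c11, c33), (c22, c33)]   # the binom(3,2) = 3 pairings

lhs = sp.Integer(1)
for a, b in pairs:
    lhs *= (lam - a) * (lam - b)               # product of all pairwise left-hand sides
rhs = ((lam - c11) * (lam - c22) * (lam - c33))**2   # each factor n - 1 = 2 times

print(sp.expand(lhs - rhs) == 0)  # True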

What I want this proposition to mean may be very different from what it actually means; I'm hopeful nevertheless, but I agree that it requires a bit of further investigation.  As I hinted before, I would like that

 \left( \lambda - C_{1,1}\right) \cdot \left( \lambda - C_{2,2} \right) \cdot \ldots \cdot \left( \lambda - C_{n,n} \right) = \sqrt[n-1]{\prod_{\forall i, \forall j, i \neq j}^n C_{i,j}}

with, for example,  n = 3 represent the constraint on the eigen(patch(ix))values of  a^\circ = B_1 f_1(x) + B_2 f_2(x) + B_3 f_3(x) or, if not that, maybe of  a^\circ_\star = a(x) + a^*(x) + a^{**}(x) = 2B_1 f_1(x) + 2 B_2 f_2(x) + 2 B_3 f_3(x) , which brings into question the superposition of functions and its effect on the eigenvalues.  I may be wildly speculating, but hey!  I don't really know better!  I'll do a few experiments and see what shows.

On Eigen(patch(ix))values - (RWLA,MCT,GT,AM Part VIII)

March 16th, 2011

So in the continuation of this series, I have been thinking long and hard about the curious property of the existence of eigen(patch(ix))values that I have talked about in a previous post.  I began to question whether such eigen(patch(ix))values are limited to a finite set (much as in finite matrices) or whether there was some other fundamental insight, like, if 1 is an eigen(patch(ix))value, then all elements of  \mathbb{R} are too (or all of  \mathbb{R} minus a finite set).  In my latest attempt to understand this, the question comes down to, using the "star" operator, whether

 a(x) \star p(x,y) = \lambda a(x)

has discrete values of  \lambda or, "what values can lambda take for the equation to be true," in direct analogy with eigenvalues when we're dealing with discrete matrices.  I am not using yet "integral transform notation" because this development seemed more intuitive to me, and thus I'm also limiting the treatment to "surfaces" that are smooth and defined on  [0,1] \times [0,1] , like I first thought of them. Thus, the above equation translates to:

 \int_0^1 a(1-y) p(x,y) dy = \lambda a(x)

and, if we recall our construction of the patch (or patchix if we relax the assumption that integrating with respect to x is 1)  p(x,y) = f_1(x) g_1(y) + f_2(x) g_2(y) :

 \begin{array}{ccc} \lambda a(x) & = &\int_0^1 a(1-y) \left(f_1(x) g_1(y) + f_2(x) g_2(y) \right) dy \\ & = & f_1(x) \int_0^1 a(1-y) g_1(y) dy + f_2(x) \int_0^1 a(1-y) g_2(y) dy \\ & = & B_1 f_1(x) + B_2 f_2(x) \end{array}

where  B_1, B_2 are constants.  It is very tempting to divide by  \lambda , as

 a(x) = \frac{B_1}{\lambda} f_1(x) + \frac{B_2}{\lambda} f_2(x)

must hold provided  \lambda \neq 0 .  So we have excluded an eigen(patch(ix))value right from the start, which is interesting.

We can systematically write the derivatives of  a(x) , as we're going to need them if we follow the algorithm I delineated in one of my previous posts (NB: we assume a finite number of derivatives or periodic ones, or infinite derivatives such that the subsequent sums we'll write are convergent):

 \begin{array}{ccc} a(x) & = & \frac{B_1}{\lambda} f_1(x) + \frac{B_2}{\lambda} f_2(x) \\ a'(x) & = & \frac{B_1}{\lambda} f'_1(x) + \frac{B_2}{\lambda} f'_2(x) \\ a''(x) & = & \frac{B_1}{\lambda} f''_1(x) + \frac{B_2}{\lambda} f''_2(x) \\ \vdots & \vdots & \vdots \\ a^k(x) & = & \frac{B_1}{\lambda} f^k_1(x) + \frac{B_2}{\lambda} f^k_2(x) \\ \vdots & \vdots & \vdots \end{array}

provided, as before,  \lambda \neq 0 .  We want to calculate the constants  B_1, B_2 , to see if they are restricted in some way by a formula, and we do this by integrating by parts as we did in a previous post to obtain the cool "pasquali series." Thus, we have that if  B_1 = \int_0^1 a(1-y) g_1(y) dy , the tabular method gives:

 \begin{array}{ccccc} \vert & Derivatives & \vert & Integrals & \vert \\ \vert & a(1-y) & \vert & g_1(y) & \vert \\ \vert & -a'(1-y) & \vert & G_1^1(y) & \vert \\ \vert & a''(1-y) & \vert & G_1^2(y) & \vert \\ \vert & \vdots & \vert & \vdots & \vert \end{array}

and so,

 \begin{array}{ccc} B_1 & = & \int_0^1 a(1-y) g_1(y) dy \\ & = & a(1-y) G_1^1(y) \vert_0^1 + a'(1-y) G_1^2(y) \vert_0^1 + \ldots \\ & = & \sum_{i = 0}^\infty a^i(1-y) G_1^{i + 1} \vert_0^1 \end{array}

if we remember the alternating sign of the multiplications, and we are allowed some leeway in notation.  Ultimately, this last bit means:  \sum_{i=0}^\infty a^i(0) G_1^{i+1}(1) - \sum_{i=0}^\infty a^i(1) G_1^{i+1}(0) .

Since we have already explicitly written the derivatives of  a(x) , the  a^i(0), a^i(1) derivatives can be written as  \frac{B_1}{\lambda} f_1^i(0) + \frac{B_2}{\lambda} f_2^i(0) and  \frac{B_1}{\lambda} f_1^i(1) + \frac{B_2}{\lambda} f_2^i(1) respectively.

We have then:

 B_1 = \sum_{i=0}^\infty \left( \frac{B_1}{\lambda} f_1^i(0) + \frac{B_2}{\lambda} f_2^i(0) \right) G_1^{i+1}(1) - \sum_{i=0}^\infty \left( \frac{B_1}{\lambda} f_1^i(1) + \frac{B_2}{\lambda} f_2^i(1) \right) G_1^{i+1}(0)

Since we aim to solve for  B_1 , multiplying by  \lambda makes things easier, and also we must rearrange all elements with  B_1 in them, so we get:

 \lambda B_1 = B_1 \sum_{i=0}^\infty \left( f_1^i(0) G_1^{i+1}(1) - f_1^i(1) G_1^{i+1}(0) \right) + B_2 \sum_{i=0}^\infty \left( f_2^i(0) G_1^{i+1}(1) - f_2^i(1) G_1^{i+1}(0) \right)

Subtracting the common term from both sides and factoring out the constant we endeavor to solve for, we get:

 \left( \lambda - \sum_{i=0}^\infty \left( f_1^i(0) G_1^{i+1}(1) - f_1^i(1) G_1^{i+1}(0) \right) \right) B_1 = B_2 \sum_{i=0}^\infty \left(f_2^i(0) G_1^{i+1}(1) - f_2^i(1) G_1^{i+1}(0) \right)

or

 B_1 = \frac{B_2 \sum_{i=0}^\infty f_2^i(1-y) G_1^{i+1}(y) \vert_0^1}{\lambda - \sum_{i=0}^\infty f_1^i(1-y) G_1^{i+1}(y) \vert_0^1} = \frac{B_2 D}{\lambda - C}

A similar argument for  B_2 suggests

 B_2 = \frac{B_1 \sum_{i=0}^\infty f_1^i(1-y) G_2^{i+1}(y) \vert_0^1}{\lambda - \sum_{i=0}^\infty f_2^i(1-y) G_2^{i+1}(y) \vert_0^1} = \frac{B_1 E}{\lambda - F}

where the new constants introduced emphasize the expectation that the sums converge.  Plugging one into the other we get:

 B_1 = \frac{\left( \frac{B_1 E}{\lambda - F} \right) D}{\lambda - C} = \frac{B_1 E D}{(\lambda - F) (\lambda - C)}

and now we seem to have additional restrictions on lambda:  \lambda \neq F and  \lambda \neq C .  Furthermore, the constant  B_1 drops out of the equation, suggesting these constants can be anything we can imagine (all of  \mathbb{R} without restriction), but then we have the constraint:

 (\lambda - F)(\lambda - C) = ED

which is extraordinarily similar to its analogue in finite matrix or linear algebra contexts.  Expanding suggests:

 \lambda^2 - (F + C) \lambda + (CF - ED) = 0

which we can solve by the quadratic equation of course, as:

 \lambda_{1,2} = \frac{(F + C) \pm \sqrt{(F-C)^2 + 4ED} }{2}

So not only is  \lambda not equal to a few values, it is incredibly restricted to two of them.

So here's a sort of conjecture, and a plan for the proof.  The number of allowable values of  \lambda equals the number of x terms  a(x) (or  p(x,y) ) carries.  We have already shown the base case; we need only show the induction step, that the statement for  k terms implies it for  k+1 terms.
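Here is one quick numerical experiment supporting the conjecture (a minimal sketch assuming numpy, with a made-up two-term patchix; f_1, g_1, f_2, g_2 below are arbitrary choices): discretizing the operator a(x) \mapsto \int_0^1 a(1-y) p(x,y) \, dy on an N-point grid turns it into an N \times N matrix of rank 2, so at most two eigenvalues are numerically nonzero.

import numpy as np

N = 400
t = (np.arange(N) + 0.5) / N                  # midpoint grid on [0, 1]
f1, g1 = np.sin(np.pi * t), 2 * t             # hypothetical f_1, g_1
f2, g2 = t**2, np.cos(t)                      # hypothetical f_2, g_2
p = np.outer(f1, g1) + np.outer(f2, g2)       # p(x,y) = f_1(x)g_1(y) + f_2(x)g_2(y)

# On the midpoint grid, 1 - t_k equals t_{N-1-k} exactly, so a(1-y) amounts
# to reversing the y axis; dividing by N supplies the quadrature weight.
K = p[:, ::-1] / N
lams = np.linalg.eigvals(K)
lams = lams[np.argsort(-np.abs(lams))]
print(np.round(lams[:4], 6))                  # two clearly nonzero eigenvalues, the rest ~0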

On Patch(ix)es as Kernels of Integral Transforms (RWLA,MCT,GT,AM Part VII)

February 7th, 2011

[This post is ongoing, as I think of a few things I will write them down too]

So just a couple of days ago I was asked by a student to give a class on DEs using Laplace transforms, and it was in my research that I realized that what I've been describing, converting one probability distribution on [0,1] into another, is in effect a transform (minus the transform pair, which it was unclear to me how to obtain, corresponding perhaps to inverting the patch(ix)).  The general form of integral transforms is, according to my book Advanced Engineering Mathematics, 2nd ed., by Michael Greenberg, p. 247:

 F(s) = \int_a^b f(t) K(t,s) dt

where  K(t,s) is called the kernel of the transform.  This looks an awful lot like a function by patch(ix) "multiplication," which I described as:

 b(x) = \int_0^1 a(1-y) p(x,y) dy

as you may recall.  In the former context  p(x,y) looks like a kernel, but here  a(1-y) is a function of  y rather than of  x , and I sum across  y .  To rewrite patch(ix)-multiplication as an integral transform, it would seem we need to rethink the patch position on the xy plane, but it seems easy to do (and we do in number 1 below!).

In this post I want to (eventually be able to):

1. Formally rewrite my function-by-patch(ix) multiplication as a "Pasquali" integral transform.

If we are to modify patch multiplication to match the integral transform guideline, simply think of  p(t,s) as oriented a bit differently, yielding the fact that  \int_0^1 p(t,s) ds = 1 for any choice of  t .  Then, for a probability distribution  b(t) in [0,1], the integral transform is  B(s) = \int_0^1 b(t) p(t,s) dt .  Now  p(t,s) is indeed then a kernel.
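A small numerical illustration of this transform (a sketch assuming scipy; the kernel p(t,s) = 1 + (2t-1)(2s-1) and the density b are made-up examples satisfying the requirements):

from scipy.integrate import quad

p = lambda t, s: 1 + (2 * t - 1) * (2 * s - 1)   # int_0^1 p(t,s) ds = 1 for every t
b = lambda t: 6 * t * (1 - t)                    # a probability density on [0, 1]

B = lambda s: quad(lambda t: b(t) * p(t, s), 0, 1)[0]   # the transform
total, _ = quad(B, 0, 1)
print(total)  # ~1.0: the transform of a density is again a density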

2. Extend a function-by-patch multiplication to probability distributions and patches on all  \mathbb{R} and  \mathbb{R}^2 , respectively.

When I began thinking about probability distributions, I restricted them to the interval [0,1] and a patch on  [0,1] \times [0,1] , to try to obtain a strict analogy of (continuous) matrices with discrete matrices.  I had been thinking for a while that this need not be the case, but when I glanced at the discussion of integral transforms in my Greenberg book, and particularly the one on the Laplace transform, I realized I could have done it right away.  Thus, we can redefine patch multiplication as

 B(s) = \int_{-\infty}^{\infty} b(t) p(t,s) dt

with

 \int_{-\infty}^{\infty} p(t,s) ds = 1
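As an illustration of the extended definition (a sketch assuming scipy; the Gaussian kernel p(t,s) = \phi(s - t) is a made-up example satisfying the unit-integral requirement for every t):

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

p = lambda t, s: norm.pdf(s - t)                 # standard normal density in s, centered at t
b = lambda t: norm.pdf(t, loc=1.0, scale=0.5)    # a density on all of R

B = lambda s: quad(lambda t: b(t) * p(t, s), -np.inf, np.inf)[0]
total, _ = quad(B, -np.inf, np.inf)
print(total)  # ~1.0; here B is in fact the N(1, 0.5^2 + 1^2) density (convolution of normals)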

3. Explore the possibility of an inverse-patch via studying inverse-transforms.

3a. Write the patch-inverse-patch relation as a transform pair.

4. Take a hint from the Laplace and Fourier transforms to see what new insights can be obtained on patch(ix)es (or vice-versa).

Vice-versa: well, one of the things we realize first and foremost is that integral transforms are really an extension of the concept of matrix multiplication: if we create a matrix "surface" and multiply it by a "function" vector we obtain another "function," and the kernel (truly a continuous matrix) is exactly our path connecting the two.  Can we not think now of discrete matrices (finite, infinite) as "samplings" of such surfaces?  I think so.  We can also combine kernels with kernels (as I have done in previous posts) much as we can combine matrices with matrices.  I haven't really seen a discussion exploring this in books, which is perhaps a bit surprising.  At any rate, recasting this "combination" shouldn't be much of a problem, and the theorems I proved in previous posts should still hold, because the new notation represents rigid motions of the kernel, yielding new kernel spaces that are isomorphic to the original.