On Riding the Wave

So here it is, the pinnacle of my research effort thus far. I'll start with the definitions:

Definition 1. Let f(x,y) and g(x,y) be surfaces so that f,g \colon [0,1] \times [0,1] \to \mathbb{R}. The star operator \star \colon [0,1]^2 \times [0,1]^2 \to [0,1]^2 takes two surfaces and creates another in the following way:

 \left( f(x,y), g(x,y) \right) \rightsquigarrow \left( f(1-y, z), g(x,y) \right) \rightsquigarrow h(x,z) \rightsquigarrow h(x,y)

with the central transformation being defined by \diamond \colon [0,1]^2 \times [0,1]^2 \to [0,1]^2

 f(1-y,z) \diamond g(x,y) = \int_{0}^{1} f(1-y,z) g(x,y) \, dy = h(x,z)

and the last transformation that takes h(x,z) \rightsquigarrow h(x,y) we will call j \colon [0,1]^2 \to [0,1]^2. Thus

 \boxed{f(x,y) \star g(x,y) = j \left( f(1-y, z) \diamond g(x,y) \right) = j \left( \int_0^1 f(1-y,z) g(x,y) \, dy \right)}
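Before moving on, here is a minimal computational sketch of the star operator, assuming Python with sympy; the helper name star and the two example surfaces are my own illustrative choices and are not part of the formal development.

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def star(f, g):
        # Definition 1: h(x, z) = integral_0^1 f(1 - y, z) * g(x, y) dy,
        # followed by j, which relabels z back to y.
        f_shifted = f.subs(y, z).subs(x, 1 - y)        # f(1 - y, z)
        h = sp.integrate(f_shifted * g, (y, 0, 1))     # the diamond step: h(x, z)
        return sp.expand(h.subs(z, y))                 # the j step: h(x, y)

    # Example: star two simple surfaces on the unit square.
    print(star(x + y, x * y))    # a new surface in x and y: x*y/2 + x/6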

Definition 2. Define a continuous, bounded surface p(x,y), with p \colon [0,1] \times [0,1] \to \mathbb{R}^+ \cup \{0\}, and let \int_0^1 p(x,y) \, dx = 1 hold regardless of the value of y. In other words, integrating such a surface with respect to x yields the uniform probability distribution u(y), with u \colon [0,1] \to \{1\}. We will call this a strict Pasquali patch; it is intimately related to probability notions. With p \colon [0,1] \times [0,1] \to \mathbb{R} instead, we have the more general definition of a Pasquali patch.
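For a concrete instance, p(x,y) = 1 + (x - 1/2) y is continuous, bounded, nonnegative on the unit square (it is at least 1/2 there), and integrates to 1 in x for every y, so it is a strict Pasquali patch. A quick check in the same sympy sketch style as above (the particular patch is my own illustrative choice):

    import sympy as sp

    x, y = sp.symbols('x y')
    p = 1 + (x - sp.Rational(1, 2)) * y      # candidate strict Pasquali patch
    print(sp.integrate(p, (x, 0, 1)))        # 1, independent of y, as required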

Construction 1. Let p(x,y) = \sum_{i = 1}^n f_i(x) \cdot g_i(y) = \mathbf{f}(x) \cdot \mathbf{g}(y), a function consisting of a finite sum of products of a function of x with a function of y. In the spirit of conciseness, we omit the transpose symbology, understanding the first vector in the dot product as a row vector and the second as a column vector. Writing F_i = \int_0^1 f_i(x) \, dx, integrating p with respect to x gives \sum_{i=1}^n F_i g_i(y); requiring this to equal 1 for every y and solving for the last function of y shows that p(x,y) is a Pasquali patch provided

g_n(y) = \frac{1- \sum_{i = 1}^{n-1} g_i(y) F_i}{F_n} = \frac{1 - \mathbf{g}_{n-1}(y) \cdot \mathbf{F}_{n-1}}{F_n}

and F_n \neq 0. Thus, we may choose n-1 arbitrary functions of x, n-1 arbitrary functions of y, and an nth function of x with F_n \neq 0, and

p(x,y) = \sum_{i = 1}^{n-1} f_i(x) \cdot g_i(y) + f_n(x) \cdot \frac{1 - \sum_{i = 1}^{n-1} g_i(y) F_i}{F_n} = \mathbf{f}_{n-1}(x) \cdot \mathbf{g}_{n-1}(y) + f_n(x) \cdot \frac{1 - \mathbf{g}_{n-1}(y) \cdot \mathbf{F}_{n-1}}{F_n}

 Normalizing the nth function of x by its integral, f_n^*(x) = f_n(x)/F_n, we may write the normalized version as:

 \boxed{ p(x,y) = \left( \mathbf{f}_{n-1}(x) - f_n^*(x) \cdot \mathbf{F}_{n-1} \right) \cdot \mathbf{g}_{n-1}(y) + f_n^*(x)}

and again observe that the unit contribution to the integral of the Pasquali patch is provided by f_n^*(x), so that F_n^* = 1.
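To make the construction concrete, here is a sketch with n = 2, assuming sympy; the particular choices of f_1, g_1 and f_2 below are mine and purely illustrative.

    import sympy as sp

    x, y = sp.symbols('x y')

    # Construction 1 with n = 2: one arbitrary pair (f1, g1) plus a normalizer f2.
    f1, g1 = x, y**2                     # arbitrary choices
    f2 = x**2                            # must have a nonzero integral F2
    F1 = sp.integrate(f1, (x, 0, 1))
    F2 = sp.integrate(f2, (x, 0, 1))
    f2_star = f2 / F2                    # normalized so that F2* = 1

    # Normalized form: p(x, y) = (f1(x) - f2*(x) F1) g1(y) + f2*(x)
    p = (f1 - f2_star * F1) * g1 + f2_star
    print(sp.simplify(sp.integrate(p, (x, 0, 1))))   # 1 for every y: a Pasquali patch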

Claim 1. Pasquali patches constructed as in Construction 1 are closed under the star operator.

Proof. To make the proof clear, let us relabel the normalized version of Construction 1 as

 p(x,y) = \overbrace{\left( \mathbf{f}_{n-1}(x) - f_n^*(x) \cdot \mathbf{F}_{n-1} \right)}^{\mathbf{P}_x(x)} \cdot \overbrace{\mathbf{g}_{n-1}(y)}^{\mathbf{P}_y(y)} + f_n^*(x)

noting that \int_0^1 \mathbf{P}_x(x) \, dx = 0 by construction, which lets us manipulate the equation more simply, and

q(x,y) = \mathbf{Q}_x(x) \cdot \mathbf{Q}_y(y) + h_n^*(x)

with F_n^* = 1 and H_n^* = 1. Then

 \begin{array}{ccc} p(x,y) \star q(x,y) & = & j \left( \int_0^1 \left( \mathbf{P}_x(1-y) \cdot \mathbf{P}_y(z) + f_n^*(1-y) \right) \cdot \left( \mathbf{Q}_x(x) \cdot \mathbf{Q}_y(y) + h_n^*(x) \right) \, dy \right) \\ & = & \left[ \alpha \cdot \mathbf{Q}_x(x) + 0 \cdot \{\beta \cdot h_n^*(x)\} \right] \cdot \mathbf{P}_y(y) + \gamma \cdot \mathbf{Q}_x(x) \cdot \mathbf{1} + h_n^*(x) \end{array}

with \alpha = \int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{Q}_y(y) \, dy, \beta = \int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{1} \, dy = 0, and \gamma = \int_0^1 f_n^*(1-y) \cdot \mathbf{Q}_y(y) \, dy. We are not so much concerned with the exact form of the resultant star product as with its structure. Observe that

 r(x,y) = \overbrace{ \alpha \cdot \mathbf{Q}_x(x)}^{\mathbf{R}_x^a(x)} \cdot \overbrace{\mathbf{P}_y(y)}^{\mathbf{R}_y^a(y)} + \overbrace{\gamma \cdot \mathbf{Q}_x(x)}^{\mathbf{R}^b_x(x)} \cdot \overbrace{\mathbf{1}}^{\mathbf{R}_y^b(y)} + h_n^*(x)

can be folded back into function vectors \mathbf{R}_x(x) and \mathbf{R}_y(y). Thus the structure of Construction 1 functions is preserved when we star one with another, showing closure. Moreover, since Construction 1 functions are Pasquali patches, r(x,y) is a Pasquali patch as well, as can be seen by integrating across x:

 \int_0^1 r(x,y) \, dx = \int_0^1 0 \cdot \{ \alpha \cdot \mathbf{Q}_x(x) \cdot \mathbf{P}_y(y) \} + 0 \cdot \{\gamma \cdot \mathbf{Q}_x(x) \cdot \mathbf{1} \} + h_n^*(x) \, dx = 1

and the unit contribution is given by h_n^*(x). \qed
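The closure argument can be spot-checked symbolically. A minimal sketch, assuming sympy, with the star helper from the sketch after Definition 1 repeated here and two illustrative Construction 1 patches of my own choosing:

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def star(f, g):
        # j( integral_0^1 f(1 - y, z) g(x, y) dy ), as in Definition 1
        return sp.expand(sp.integrate(f.subs(y, z).subs(x, 1 - y) * g, (y, 0, 1)).subs(z, y))

    # Two Construction 1 patches: P_x, Q_x integrate to 0; f_n*, h_n* integrate to 1.
    p = (x - sp.Rational(1, 2)) * y**2 + 2 * x        # P_x = x - 1/2, P_y = y^2, f_n* = 2x
    q = (x**2 - sp.Rational(1, 3)) * y + 3 * x**2     # Q_x = x^2 - 1/3, Q_y = y,  h_n* = 3x^2

    r = star(p, q)
    print(r)                                          # the resulting surface, expanded
    print(sp.simplify(sp.integrate(r, (x, 0, 1))))    # 1: r is again a Pasquali patch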

Claim 2. Pasquali patches constructed as in Construction 1 have powers (here p_n denotes the n-fold star product p \star p \star \cdots \star p):

 p_n(x,y) = \alpha^{n-1} \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \sum_{i=0}^{n-2} \alpha^{i} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x)

with

 \alpha = \int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{P}_y(y) \, dy = \mathbf{P}_x(x) \star \mathbf{P}_y(y) = \mathrm{str} \left[ \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) \right]

and

 \gamma = \int_0^1 f_n^*(1-y) \cdot \mathbf{P}_y(y) \, dy = f_n^*(x) \star \mathbf{P}_y(y)

Proof by induction. First, using the formula with n = 1, observe that

p(x,y) = \alpha^0 \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + f_n^*(x)

and the second power

 p_2(x,y) = \alpha \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x)

which is exactly what we expect from Construction 1 and Claim 1. Next, assume the formula holds for the kth power, so that

 p_k(x,y) = \alpha^{k-1} \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \sum_{i=0}^{k-2} \alpha^i \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x)

Let us examine p_{k+1}(x,y) = p_k(x,y) \star p(x,y)

 p_{k+1}(x,y) = j \left( \int_0^1 \left( \alpha^{k-1} \cdot \mathbf{P}_x(1-y) \cdot \mathbf{P}_y(z) + \sum_{i=0}^{k-2} \alpha^i \cdot \gamma \cdot \mathbf{P}_x(1-y) \cdot \mathbf{1} + f_n^*(1-y) \right) \cdot \left( \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + f_n^*(x) \right) \, dy \right)

term by term. The first term dotted with the first is:

 \alpha^{k-1} \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) \cdot \underbrace{\int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{P}_y(y) \, dy}_\alpha = \alpha^k \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y)

The first term dotted with the last is:

 \alpha^{k-1} \cdot \mathbf{P}_y(y) \cdot f_n^*(x) \cdot \underbrace{\int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{1} \, dy}_\beta = 0 \cdot \{ \alpha^{k-1} \cdot \mathbf{P}_y(y) \cdot f_n^*(x) \cdot 0 \}

The last term dotted with the first is:

 \mathbf{P}_x(x) \cdot \int_0^1 f_n^*(1-y) \cdot \mathbf{P}_y(y) \, dy = \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1}

The last term dotted with the last is:

 f_n^*(x) \cdot \int_0^1 f_n^*(1-y) \, dy = f_n^*(x)

The middle term dotted with the first is:

 \sum_{i=0}^{k-2} \alpha^i \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \underbrace{\int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{P}_y(y) \, dy}_\alpha = \sum_{i=0}^{k-2} \alpha^{i+1} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1}

Finally, the middle term dotted with the last vanishes:

\sum_{i=0}^{k-2} \alpha^{i} \cdot \gamma \cdot f_n^*(x) \cdot \underbrace{\int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{1} \, dy}_\beta = 0 \cdot \{ \sum_{i=0}^{k-2} \alpha^{i} \cdot \gamma \cdot f_n^*(x) \cdot 0 \}

Putting all this information together we get:

\begin{array}{ccc} p_{k+1}(x,y) & = & \alpha^k \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + \sum_{i=0}^{k-2} \alpha^{i+1} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x) \\ & = & \alpha^k \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \left( \sum_{i=0}^{k-2} \alpha^{i+1} + 1 \right) \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x) \\ & = & \alpha^k \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \sum_{i=0}^{k-1} \alpha^{i} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x) \end{array}


which is precisely the claimed formula for the (k+1)st power, completing the induction. \qed
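The closed form can also be spot-checked symbolically. A sketch, assuming sympy, with the star helper repeated from the sketch after Definition 1 and an illustrative Construction 1 patch of my own choosing, comparing p \star p \star p against the formula for the third power:

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def star(f, g):
        # j( integral_0^1 f(1 - y, z) g(x, y) dy ), as in Definition 1
        return sp.expand(sp.integrate(f.subs(y, z).subs(x, 1 - y) * g, (y, 0, 1)).subs(z, y))

    # Illustrative Construction 1 patch: P_x integrates to 0, f_n* integrates to 1.
    Px, Py, fn = x - sp.Rational(1, 2), y**2, 2 * x
    p = Px * Py + fn

    alpha = sp.integrate(Px.subs(x, 1 - y) * Py, (y, 0, 1))   # -1/12 here
    gamma = sp.integrate(fn.subs(x, 1 - y) * Py, (y, 0, 1))   # 1/6 here

    n = 3
    p3_formula = alpha**(n - 1) * Px * Py + sum(alpha**i * gamma for i in range(n - 1)) * Px + fn
    p3_starred = star(star(p, p), p)                          # p starred with itself twice
    print(sp.simplify(p3_starred - p3_formula))               # 0: the two agree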

Claim 3. It follows that

 p_\infty(x) = \frac{\gamma}{1-\alpha} \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x)

provided \lvert \alpha \rvert < 1 and \gamma is bounded; these two conditions are necessary and sufficient for such a limiting surface to exist (the convergence criterion). Furthermore, we check that this limit is indeed a Pasquali patch.

Proof. The steady-state limit, if it exists, is

 p_\infty(x) = \lim_{n \to \infty} p_n(x,y) = \lim_{n \to \infty} \left[ \alpha^{n-1} \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \sum_{i=0}^{n-2} \alpha^{i} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x) \right]

Next, the steady-state limit must be a function of x alone, so the terms involving y must vanish in the limit; it follows that \lvert \alpha \rvert < 1. This bound on \alpha is precisely the condition for convergence of the geometric series:

 \sum_{i=0}^\infty \alpha^i = \frac{1}{1-\alpha}

and

p_\infty(x) = 0 \cdot \{ \lim_{n \to \infty} \left[ \alpha^{n-1} \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) \right] \} + \lim_{n \to \infty} \left[ \sum_{i=0}^{n-2} \alpha^{i} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} \right] + f_n^*(x)

gives the desired result:

p_\infty(x) = \frac{\gamma}{1-\alpha} \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x)

As a check, we integrate across x to corroborate that the limit satisfies the definition of a Pasquali patch:

 \int_0^1 p_\infty(x) \, dx = 0 \cdot \{ \frac{\gamma}{1-\alpha} \cdot \int_0^1 \mathbf{P}_x(x) \cdot \mathbf{1} \, dx \} + \int_0^1 f_n^*(x) \, dx = 1

\qed
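Continuing the same illustrative example (assuming sympy; the star helper and the sample patch are repeated from the earlier sketches and remain my own choices), the closed form for p_\infty can be computed and compared against a high star power:

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def star(f, g):
        # j( integral_0^1 f(1 - y, z) g(x, y) dy ), as in Definition 1
        return sp.expand(sp.integrate(f.subs(y, z).subs(x, 1 - y) * g, (y, 0, 1)).subs(z, y))

    Px, Py, fn = x - sp.Rational(1, 2), y**2, 2 * x   # illustrative Construction 1 patch
    p = Px * Py + fn

    alpha = sp.integrate(Px.subs(x, 1 - y) * Py, (y, 0, 1))   # -1/12, so |alpha| < 1
    gamma = sp.integrate(fn.subs(x, 1 - y) * Py, (y, 0, 1))   # 1/6

    p_inf = gamma / (1 - alpha) * Px + fn
    print(sp.integrate(p_inf, (x, 0, 1)))             # 1: the limit is a Pasquali patch

    p8 = p
    for _ in range(7):
        p8 = star(p8, p)                              # p_8 by repeated starring
    print(sp.N((p8 - p_inf).subs({x: sp.Rational(3, 10), y: sp.Rational(7, 10)})))
    # tiny (on the order of alpha**7): the powers converge to p_inf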

Claim 4. 

 p_\infty(x) = \frac{\gamma}{1-\alpha} \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x)

is the eigenfunction corresponding to eigenvalue \lambda = 1 of a Construction 1 function, and of each of its powers independently.

Proof. An eigenfunction  e(x) has the property

e(x) \star h(x,y) = \lambda e(x)

where \lambda is the eigenfunction's corresponding eigenvalue. The claim is more ambitious: we will show that p_\infty(x) \star p_n(x,y) = 1 \cdot p_\infty(x) for any n \in \mathbb{Z}^+. The left-hand side is

 j \left( \int_0^1 \left( \frac{\gamma}{1-\alpha} \cdot \mathbf{P}_x(1-y) \cdot \mathbf{1} + f_n^*(1-y) \right) \cdot \left( \alpha^{n-1} \cdot \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + \sum_{i=0}^{n-2} \alpha^{i} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x) \right) \, dy \right)

Observe that dotting the first term with the middle and last terms produces \beta, which annihilates those contributions, so the only relevant term is the first dotted with the first:

 \frac{\gamma}{1-\alpha} \cdot \alpha^{n-1} \cdot \mathbf{P}_x(x) \cdot \underbrace{\int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{P}_y(y) \, dy}_\alpha = \frac{\gamma}{1-\alpha} \cdot \alpha^{n} \cdot \mathbf{P}_x(x) \cdot \mathbf{1}

 The second term dotted with the first produces:

 \alpha^{n-1} \cdot \mathbf{P}_x(x) \cdot \underbrace{\int_0^1 f_n^*(1-y) \cdot \mathbf{P}_y(y) \, dy}_\gamma = \alpha^{n-1} \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1}

 The second term dotted with the middle term gives:

 \sum_{i=0}^{n-2} \alpha^i \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} \cdot \int_0^1 f_n^*(1-y) \, dy = \sum_{i=0}^{n-2} \alpha^i \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1}

and the second term dotted with the last gives

 \int_0^1 f^*_n(1-y) \cdot f^*_n(x)\, dy = f_n^*(x)

Factoring gives

 \left( \frac{\alpha^n}{1-\alpha} + \alpha^{n-1} + \sum_{i=0}^{n-2} \alpha^i \right) \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x) = \left( \frac{\alpha^n}{1-\alpha} + \sum_{i=0}^{n-1} \alpha^i \right) \cdot \gamma \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x)

The parenthetical part of this last formulation is equivalent to

\begin{array}{ccc} \frac{\alpha^n}{1-\alpha} + \frac{1-\alpha}{1-\alpha} \cdot \sum_{i=0}^{n-1} \alpha^i & = & \frac{1}{1-\alpha} \left( \alpha^n + \sum_{i=0}^{n-1} \alpha^i - \sum_{i=0}^{n-1} \alpha^{i+1} \right) \\ & = & \frac{1}{1-\alpha} \left( \sum_{i=0}^n \alpha^i - \sum_{i=1}^n \alpha^i \right) \\ & = & \frac{1}{1-\alpha} \end{array}

within the bounds already established for \alpha, and the result of the star product is

\frac{\gamma}{1-\alpha} \cdot \mathbf{P}_x(x) \cdot \mathbf{1} + f_n^*(x) = p_\infty(x)

as we wanted to show. \qed
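The eigenvalue-1 property can be spot-checked symbolically for the running example (assuming sympy; the helper and sample patch are repeated from the earlier sketches and are my own illustrative choices):

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def star(f, g):
        # j( integral_0^1 f(1 - y, z) g(x, y) dy ), as in Definition 1
        return sp.expand(sp.integrate(f.subs(y, z).subs(x, 1 - y) * g, (y, 0, 1)).subs(z, y))

    Px, Py, fn = x - sp.Rational(1, 2), y**2, 2 * x   # illustrative Construction 1 patch
    p = Px * Py + fn

    alpha = sp.integrate(Px.subs(x, 1 - y) * Py, (y, 0, 1))
    gamma = sp.integrate(fn.subs(x, 1 - y) * Py, (y, 0, 1))
    p_inf = gamma / (1 - alpha) * Px + fn

    p2 = star(p, p)
    print(sp.simplify(star(p_inf, p) - p_inf))    # 0: eigenvalue 1 for p itself
    print(sp.simplify(star(p_inf, p2) - p_inf))   # 0: and for its second power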

Claim 5. 

 e(x) = A \cdot \mathbf{P}_x(x)

where A is a constant, is the eigenfunction corresponding to eigenvalue \lambda = \alpha of

p(x,y) = \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + f_n^*(x)

Proof. The eigenfunction equation is suggestive of what we must do to prove the claim:  e(x) \star p(x,y) = \lambda e(x). We must show that, starring the eigenfunction with p(x,y), we obtain \alpha times the eigenfunction.

Thus:

\begin{array}{ccc} e(x) \star p(x,y) & = & A \cdot \mathbf{P}_x(x) \star \left( \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + f_n^*(x) \right) \\ & = & A \cdot \mathbf{P}_x(x) \star \left( \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) \right) + 0 \cdot \{ A \cdot \mathbf{P}_x(x) \star f_n^*(x) \} \\ & = & A \cdot \mathbf{P}_x(x) \cdot \underbrace{\mathbf{P}_x(x) \star \mathbf{P}_y(y)}_\alpha \\ & = & \alpha \cdot A \cdot \mathbf{P}_x(x) \\ & = & \alpha \cdot e(x)\end{array}


\qed
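The same kind of spot-check works for the second eigenpair, taking A = 1 (assuming sympy; same illustrative patch and helper as in the earlier sketches):

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def star(f, g):
        # j( integral_0^1 f(1 - y, z) g(x, y) dy ), as in Definition 1
        return sp.expand(sp.integrate(f.subs(y, z).subs(x, 1 - y) * g, (y, 0, 1)).subs(z, y))

    Px, Py, fn = x - sp.Rational(1, 2), y**2, 2 * x   # illustrative Construction 1 patch
    p = Px * Py + fn

    alpha = sp.integrate(Px.subs(x, 1 - y) * Py, (y, 0, 1))
    e = Px                                            # eigenfunction candidate, with A = 1
    print(sp.simplify(star(e, p) - alpha * e))        # 0: e(x) star p(x,y) = alpha * e(x)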
