Author Archive

On Clarifying Some Thoughts Regarding Functions in the Unit Interval

Here's my attempt to clarify notions a bit. I still have many more examples to include.

We focus our attention on the restricted space $[ 0,1] \times \mathbb{R}$ and polynomial functions of finite type (finite degree)

or infinite type

such that the area under the curve is bounded:

More specifically, we look at a subset of these functions that are well-behaved in the sense that they possess no discontinuities and are infinitely differentiable, thus the change in notation from $w_{\left( \infty \right)}(x)$ to $\omega_{\left( \infty \right)}(x)$ (double-u to omega).

By the Fundamental Theorem of Calculus, integration usually depends on two boundary points so that

In the particular case of integration in the unit interval and $\omega_{\left( \infty \right)}(x)$, it solely depends on one, the upper bound, since the lower bound is 0:

Also, powers of 1 are equal to 1, so in effect $\Omega(1)$ is a simple sum of polynomial coefficients. We want to tease out from this $\Omega(1)$ valuable area information, as follows:
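As a quick sketch of this idea (the notation below is my own, assuming the polynomial is written $\omega(x) = \sum_i c_i x^i$), evaluating the antiderivative at the upper bound $x = 1$ collapses every power to 1, so the area reduces to the weighted coefficient sum $\sum_i c_i/(i+1)$:

```python
from fractions import Fraction

def area_unit_interval(coeffs):
    """Area under sum_i c_i x**i on [0, 1].  Integrating term by term and
    evaluating the antiderivative at x = 1 (where every power equals 1)
    leaves the weighted coefficient sum: sum_i c_i / (i + 1)."""
    return sum(Fraction(c) / (i + 1) for i, c in enumerate(coeffs))

# omega(x) = 1 + 2x + 3x^2: each term contributes exactly 1 to the area.
print(area_unit_interval([1, 2, 3]))  # -> 3
```

Using exact rationals keeps the coefficient bookkeeping free of floating-point noise.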

Definition
(June 26, 2017)
Define the finite differentiator by

and the infinite differentiator by

Notice the following:

Claim
(June 26, 2017)

Proof

We may now rewrite

where $v_{\left( \infty \right)}$ is a vector of constants that describes how the area accumulates:

Since the $\omega_{\left( \infty \right)}(x)$ were defined to have bounded area, it is clear that the sum $\sum_i v_i$ must converge in the (countably) infinite case.

The equation is incredibly insightful because it provides us with a bijective map between convergent sums (finite, infinite) and polynomials $\omega_{\left( \infty \right)}(x)$. Furthermore, it tells us that there exists a class of infinite polynomial functions, namely $\omega_\infty(x)$, that have stable, bounded area in the unit interval, despite their infinite polynomial representation. In contrast, it also tells us that there exists a class of infinite polynomial functions with unstable, unbounded areas in the unit interval (such as would have divergent $\sum_i v_i$).

Our main objective is to understand how probability distributions transform in the unit interval, so it seems natural to limit the realm of possibilities to those $\omega_{\left( \infty \right)}(x)$ for which $\sum_i v_i = 1$. Let us call this set $\mathbf{\Omega}\left(\mathbb{Z}^+,1\right)$. Unfortunately not all elements of the set are probability distributions in the unit interval, since this definition still includes functions that cross the x-axis and are negative for portions of the domain. What does seem clear is that the set of probability distributions is a subset: $\mathbf{\Omega}\left(\mathbb{Z}^+,1\right) \supset \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}\right)$.

Although eventually we would like to analyze the complete geometry of $\omega_{\left( \infty \right)}(x)$ that are probability distributions, we may want to focus on a \emph{core} subset that allows us to understand essential transformation properties, as per our main objective. To construct it, observe that each entry of $D_{\left( \infty \right)}(x)$ is non-negative while $x$ lies in the interval $[0, 1]$. Thus, if we require that each entry $v_i$ of the vector $v_{\left( \infty \right)}$ be non-negative, the dot product $v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x)$ will also be non-negative in the unit interval.

Therefore, we have that $\omega_{\left( \infty \right)}(x) = v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x)$ with $v_i \geq 0$ for all $i$ are probability distributions in the unit interval and define the core subset: $\mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}, v_i \geq 0 \right) \subset \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}\right) \subset \mathbf{\Omega}\left(\mathbb{Z}^+,1\right)$.

Observe that vector $v_{\left( \infty \right)}$ itself can be interpreted as a discrete probability distribution. Thus from the core subset emerges a bijective correspondence between discrete probability distributions and continuous, bounded ones in the interval $[0,1]$.
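As a numerical sketch of this correspondence: the basis $D_{\left( \infty \right)}(x)$ is not written out above, so I am assuming the natural choice $D_i(x) = (i+1)x^i$, whose entries each integrate to 1 on $[0,1]$. Under that assumption, any discrete distribution $v$ yields a continuous $\omega$ with unit area:

```python
# Hypothetical sketch: assume D_i(x) = (i + 1) * x**i, so each basis entry
# integrates to exactly 1 over [0, 1].  Then omega = v . D has unit area
# whenever the discrete distribution v sums to 1.

def omega(v, x):
    """Evaluate v . D(x) with the assumed basis D_i(x) = (i + 1) x**i."""
    return sum(vi * (i + 1) * x**i for i, vi in enumerate(v))

def area(v, steps=100_000):
    """Midpoint-rule estimate of the area of omega over [0, 1]."""
    h = 1.0 / steps
    return h * sum(omega(v, (k + 0.5) * h) for k in range(steps))

v = [0.5, 0.3, 0.2]       # a discrete probability distribution
print(round(area(v), 6))  # -> 1.0: the continuous side has unit area too
```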

There are essentially two ways of constructing vectors $v_{\left( \infty \right)}$ that will produce $\omega_{\left( \infty \right)}(x)$ in the core subset.

Construction
(June 26, 2017)
Pick any finite or countably infinite vector $u_{\left( \infty \right)}$, such that its entries $u_i \geq 0$. Then define $v_{\left( \infty \right)} = \left[ \frac{u_i}{\sum_i u_i} \right]$ and $v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x)$ lies in the core subset, provided $\sum_i u_i \neq 0$ and converges.

Proof
Since all $u_i \geq 0$ and, by assumption, $\sum_i u_i \neq 0$, it follows that $\sum_i u_i > 0$, and thus $v_i = \frac{u_i}{\sum_i u_i} \geq 0$. This is one of the conditions that define the core subset. Another is that the sum $\sum_i v_i$ must equal one. This is easily checked:

We may call the previous construction a \emph{normalization} procedure of vectors with positive entries.
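A minimal sketch of this normalization procedure, with `normalize` as a hypothetical helper name:

```python
def normalize(u):
    """Normalization construction: scale a nonnegative vector u so that its
    entries sum to 1.  Requires sum(u) > 0 (and convergence in the
    countably infinite case, which a finite list has trivially)."""
    s = sum(u)
    if s <= 0:
        raise ValueError("entries must be nonnegative with a nonzero sum")
    return [ui / s for ui in u]

print(normalize([3, 1, 4]))  # entries stay nonnegative and now sum to 1
```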

Construction
(June 26, 2017)
Suppose we want to construct $v$ with a finite number of entries $n$. Pick the first $n-1$ entries so that $v_i \in [0,1]$ for $i < n$. Let the last element be $v_n = 1 - \sum_{i < n} v_i$, because it is constrained. If $v_n < 0$, repeat the procedure and stop when $v_n \geq 0$.

Notice that we may permute the position of the constrained element as we wish. To construct $v_\infty$, let the constrained element be in the first position (or any indexable position); the sum in the constraint is then a convergent infinite sum over the unconstrained elements.
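The finite version of this construction can be sketched with rejection of negative constrained entries (the function name is my own):

```python
import random

def constrained_v(n, seed=0):
    """Draw n - 1 free entries uniformly from [0, 1], set the constrained
    last entry to 1 minus their sum, and retry whenever it is negative."""
    rng = random.Random(seed)
    while True:
        free = [rng.uniform(0, 1) for _ in range(n - 1)]
        last = 1 - sum(free)
        if last >= 0:
            return free + [last]

v = constrained_v(3)
print(v, sum(v))  # all entries nonnegative; total is 1 up to float rounding
```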

This last construction is the one we choose to focus on, because it gives us a visual way of understanding the core subset.

Example
(June 27, 2017)
Suppose we have $\omega(x) = \left[ \begin{array}{cc} a & \overline{1-a} \end{array} \right] \cdot D(x)$. The last element is the constrained element, which we denote by an overline to avoid confusion. We can construct the one-dimensional vector space parametrized as $a \cdot \left[ 1 \right]$ that describes the entirety of possibilities. The core subset consists of those elements for which $a \in [0,1]$. Polynomials in the unit interval up to linear terms are included.

Example
(June 27, 2017)
A more interesting example arises when we consider $\omega(x) = \left[ \begin{array}{ccc} a & b & \overline{ 1-a-b} \end{array} \right] \cdot D(x)$. If we forget about the constraint because it is fixed, this means we are in a two-dimensional vector space, parametrized by $a$ and $b$: $a \cdot \left[\begin{array}{c} 1 \\ 0 \end{array} \right] + b \cdot \left[\begin{array}{c} 0 \\ 1 \end{array} \right]$. Because polynomials up to parabolic terms are included, we will name this space the \emph{parabolic} set.

To see the geometry of the core subset, we look at the extreme values: if $a$ is zero, then $b$ can be at most 1; conversely, if $b$ is zero, then $a$ can be at most 1. Finally, if the constrained entry is set to zero, then $b = 1 - a$. The core subset is therefore represented by an isosceles triangle and its interior.
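Assuming again the hypothetical basis $D(x) = [1,\ 2x,\ 3x^2]$, one can spot-check that the three vertices of the triangle give non-negative densities on the unit interval:

```python
def omega_parabolic(a, b, x):
    """v = [a, b, 1 - a - b] dotted with the assumed basis [1, 2x, 3x**2]."""
    return a + b * 2 * x + (1 - a - b) * 3 * x**2

# Vertices of the triangle a >= 0, b >= 0, a + b <= 1:
for a, b in [(0, 0), (1, 0), (0, 1)]:
    vals = [omega_parabolic(a, b, k / 100) for k in range(101)]
    print((a, b), min(vals) >= 0)  # True: nonnegative across [0, 1]
```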

Definition
(June 28, 2017)
Within this context, let us draw up a few definitions.

• A discrete transform is a function $T\colon \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) \to \mathbf{\Omega}\left(\mathbb{Z}^+,1\right)$, $T(v_{\left( \infty \right)}) = v_{\left( \infty \right)} \cdot A_{\left( \infty \right)}$, such that a matrix $A_{\left( \infty \right)}$ with discrete entries acts on $v_{\left( \infty \right)}$.
• A continuous transform is a continuous path in the space $\mathbf{\Omega}\left(\mathbb{Z}^+,1\right)$, described by parametrizing entries of $v_{\left( \infty \right)}$.
• An open path is one that connects two endpoints (a beginning and an end), such that the end is in the closure of the path (but not necessarily in the path).
• A closed path is one without a beginning or an end and is not a single point. In two-dimensional space it encloses an area.
• A core transform is a discrete transform that has the property of closure and therefore takes a vector in the core subset to another in the core subset. In the continuous case, the path lies within the core subset.

Example [Discrete Transform]
(July 5, 2017)
Define the discrete transform in the \emph{parabolic} set:

Observe that

takes

or, in other words,

Although in this particular case the transform took an element in the core subset to another in the core subset, the transform is not closed in the core subset (although it is in $\mathbf{\Omega}\left(\mathbb{Z}^+,1\right)$, due to the constraint that the area equal 1):

here the transform takes an element in the designated isosceles triangle to one outside it.

Because all $v_i$ of $v_{\left(\infty\right)}$ are nonnegative and sum to 1, rendering it in effect a discrete probability distribution, there is an obvious mechanism that ensures the closure of the transform (in $\mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \{ 0 \}, v_i \geq 0 \right)$), taking elements in the core subset to elements in the core subset.

Example [Discrete Regular Markov Matrix Core Transform]
(July 5, 2017)
Define the discrete regular Markov matrix

such that $\sum_j a_{i,j} = 1$ for each row $i$, consistent with the row-vector convention $v \cdot M$ (the regularity property means that some power of $M$ has all positive entries). For vectors $v$ in the \emph{core parabolic} set, the transforms

are closed in the \emph{core parabolic} set for $n = \left\{1, 2, \ldots \right\}$ and including the transform defined by

which is in the closure of the set of powers of $M$. A known property of $M^\infty$ is that its rows are identical (and sum to 1):

It follows that any vector in the core subset is taken to $\left[ \begin{array}{ccc} m_1 & m_2 & \overline{m_3} \end{array} \right]$, which itself lies in the core subset:

From this, we can conclude that we can design a core transform that takes \emph{any} element of the core subset to a specific target, simply by repeating the target vector in each row of the transform matrix.

Because the entrywise difference between successive powers of $M$ approaches zero

the collection of vectors $\mathbb{T} = \left\{v, T_1(v), T_2(v), \ldots, T_k(v), \ldots \right\}$ traces a discrete, point-by-point (often oscillating) path starting at the initial vector $v$ and ending at $T_\infty(v)$. In our considerations, we may choose to include the endpoint $T_\infty(v)$ or not, depending on whether we include the matrix $M^\infty$ or not. This arises from the notion that $M^\infty$ is in the closure of the collection $\mathbb{M} = \left\{ M^1, M^2, \ldots, M^k, \ldots \right\}$. Naturally, and by extension, $T_\infty(v)$ is in the closure of the collection $\mathbb{T}$.
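A numerical illustration with a made-up regular Markov matrix (rows sum to 1, matching the row-vector convention $v \cdot M$): high powers of $M$ have nearly identical rows, and any starting vector is carried to that common row.

```python
# A made-up 3x3 regular Markov matrix: nonnegative entries, rows sum to 1,
# matching the row-vector convention T(v) = v . M used above.

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def vec_mat(v, A):
    """Row vector times matrix."""
    return [sum(vi * row[j] for vi, row in zip(v, A))
            for j in range(len(A[0]))]

M = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

P = M
for _ in range(60):  # P approximates M^infinity after many multiplications
    P = mat_mul(P, M)

print(P[0])                          # the common limiting row m
print(vec_mat([1.0, 0.0, 0.0], P))   # any core vector is carried to m
```

With NumPy installed, `numpy.linalg.matrix_power(M, 61)` gives the same result in one call.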

Take

and $v = \left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right]$ so that

Example [Open Path Core Transform]
(July 7, 2017)
Again, in the core parabolic subset, an open path going from $\left[ \begin{array}{ccc} 0 & 0 & \overline{1} \end{array} \right] \cdot D(x)$ to $\left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right] \cdot D(x)$ can be constructed using the parameter $\theta$:

with $\theta \in \left[ 0,1\right)$ or $\theta \in \left[ 0,1\right]$.

Example [Circular Path Core Transform]
(July 7, 2017)
The largest circular path, in the core parabolic subset, is:

for $\theta \in \left[0,1 \right)$ or $\theta \in \left[ 0,1 \right]$.

On Shifted Legendre Polynomial Coefficients

So here it is: the shifted Legendre polynomial coefficients are:

with $k \leq n$ and $m = 1 \ldots n$, so that

I'd love to hear your ideas for a proof. Mine is a particularly thorny, meticulous one. I may be off on the sign, but the absolute value is right. Let me know!


On Convergent Sequences

So here's a question that I've been preoccupied with for the last couple of days. Suppose I have a sequence whose sum converges (if you like, a geometric sequence, say $1/2, 1/4, 1/8, \ldots$, whose sum is 1). If one were to multiply the elements of the sequence by their corresponding indices, would the sum still converge? In this case, I'm concerned with $1/2 \cdot 1,\ 1/4 \cdot 2,\ 1/8 \cdot 3, \ldots$ and the convergence of its sum. I wish to explore whether there are particular circumstances in which this is indeed the case.

I'll leave the question open-ended for a bit. I have solved this via derivative considerations, I think (I do not mean it as a pun: I think I have solved this by considering derivatives of functions).
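For the curious, here is the derivative-based sketch I believe is being hinted at: differentiating the geometric series $\sum_n x^n = 1/(1-x)$ term by term (valid for $|x| < 1$) gives $\sum_n n x^{n-1} = 1/(1-x)^2$, so $\sum_{n \geq 1} n x^n = x/(1-x)^2$, which at $x = 1/2$ equals 2.

```python
def weighted_partial_sum(x, terms):
    """Partial sum of sum_{n>=1} n * x**n, the index-weighted series."""
    return sum(n * x**n for n in range(1, terms + 1))

x = 0.5
closed_form = x / (1 - x) ** 2   # from differentiating the geometric series
print(weighted_partial_sum(x, 100))  # partial sum, already very close to 2
print(closed_form)                   # -> 2.0
```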


On Naturally Arising Differential Equations

So if you have been following the argument a bit, it turns out that

is the transition probability propagator, starting at time $t = 1$, of a probability distribution, say $c_0(x)$ at $t = 0$, on the interval $x = 0$ to $1$. A question that I tried to answer was how zeros are propagated, whether on the propagator or on the probability distribution, which led to theorems that I dubbed "Shadow Casting": in that context, it turns out that a zero, if found on the propagator, remains in place until infinity, and via the propagator it appears on the probability distribution we want to propagate as well (therefore casting a "shadow"). I hadn't thought of the following approach until recently, and I haven't worked it out completely, but it connects to the theory of Ordinary Differential Equations, which seems interesting to me. Here's the argument:

Suppose we focus on $p(x,y,1)$ for the time being, and wish to find the zeros on the transition probability surface.  Thus we seek $p(x,y,1) = 0$ and suppose $y(x)$ is an implicit function of $x$. We have

Now let $\mathbf{P}_y(y)$ be a collection derived from $y(x)$, so that, for example,

and I think we have successfully created a link to ODEs.  To find the zeros on the surface (and other time surfaces of the propagator) we stick in the correct $\alpha$ and solve, using the familiar methods (solve the homogeneous equation and the particular solution via sin-cos-exponential solutions, variation of parameters, power series, etc.).

I'm working out the specifics, for example including the constraints we know on $f_n^*(x)$ or $\mathbf{P}_x(x)$. Perhaps this approach will help us broaden the spectrum of differential equations we can solve, by linking them via Shadow Casting.

It may seem basic, but I think there is some untapped power here.

Additionally, I have been working on clarifying some thoughts on polynomials that converge in area in the interval from 0 to 1, but all those details tend to be a bit drab and I keep having trouble focusing.  Nevertheless, there is a lot of clarity that I have been able to include, and it is now in the newest draft of "Compendium".  By the way, I renamed it. It is now "Compendium of Claims and Proofs on How Probability Distributions Transform".  There's still soooo much more to do.

Here it is! part-i-v28

On Riding the Wave

So here it is, the pinnacle of my research effort thus far. I'll start with the definitions:

Definition 1. Let $f(x,y)$ and $g(x,y)$ be surfaces so that $f,g \colon [0,1] \times [0,1] \to \mathbb{R}$. The star operator $\star \colon [0,1]^2 \times [0,1]^2 \to [0,1]^2$ takes two surfaces and creates another in the following way:

with the central transformation being defined by $\diamond \colon [0,1]^2 \times [0,1]^2 \to [0,1]^2$

and the last transformation that takes $h(x,z) \rightsquigarrow h(x,y)$ we will call $j \colon [0,1]^2 \to [0,1]^2$. Thus

Definition 2. Define a continuous, bounded surface $p(x,y)$, with $p \colon [ 0, 1 ] \times [ 0, 1 ] \to \mathbb{R}^+ \cup \{0\}$, and let $\int_0^1 p(x,y) \, dx = 1$ be true regardless of the value of $y$. In other words, integrating such a surface with respect to $x$ yields the uniform probability distribution $u(y)$, $u \colon [0,1] \to \{1\}$. We will call this a strict Pasquali patch, and such is intimately related to probability notions. With $p \colon [ 0, 1 ] \times [ 0, 1 ] \to \mathbb{R}$, we have a more general definition for a Pasquali patch.
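The defining property is easy to verify numerically. The patch below is a hypothetical example of my own: $p(x,y) = 1 + (2x-1)\,q(y)$, which works because $\int_0^1 (2x-1)\,dx = 0$, with $|q| \leq 1/2$ keeping the surface non-negative.

```python
import math

def q(y):
    """Any bounded shape with |q| <= 1/2 keeps the patch nonnegative."""
    return 0.5 * math.sin(2 * math.pi * y)

def p(x, y):
    """Hypothetical strict Pasquali patch: 1 + (2x - 1) * q(y)."""
    return 1 + (2 * x - 1) * q(y)

def integral_over_x(y, steps=10_000):
    """Midpoint-rule integral of p(., y) over [0, 1]; exact here up to
    rounding, since p is affine in x."""
    h = 1.0 / steps
    return h * sum(p((k + 0.5) * h, y) for k in range(steps))

for y in (0.0, 0.25, 0.9):
    print(round(integral_over_x(y), 6))  # -> 1.0 for every y
```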

Construction 1. Let $p(x,y) = \sum_{i = 1}^n f_i(x) \cdot g_i(y) = \mathbf{f}(x) \cdot \mathbf{g}(y)$, a function which consists of a finite sum of pairs of functions of $x$ and $y$. For conciseness, we omit the transpose symbology, understanding the first vector listed in the dot product as a row vector and the second as a column vector. Then $p(x,y)$ is a Pasquali patch provided

and $F_n \neq 0$. Thus, we may choose $n-1$ arbitrary functions of $x$,  $n-1$ arbitrary functions of $y$, an $n$th function of $x$ so that $F_n \neq 0$, and

We may write the normalized version as:

and again observe that the unit contribution to the integral of the Pasquali patch is provided by $f_n^*(x)$, so that $F_n^* = 1$.

Claim 1. Pasquali patches constructed as by Construction 1 are closed.

Proof. To make the proof clear, let us relabel the normalized version of Construction 1 as

with $\int_0^1 \mathbf{P}_x(x) \, dx = 0$ so as to manipulate the equation more simply, and

with $F_n^* = 1$ and $H_n^* = 1$. Then

with $\alpha = \int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{Q}_y(y) \, dy$, $\beta = \int_0^1 \mathbf{P}_x(1-y) \cdot \mathbf{1} \, dy = 0$, and $\gamma = \int_0^1 f_n^*(1-y) \cdot \mathbf{Q}_y(y) \, dy$. We are not so much concerned with the form of the resultant star product as with its structure. Observe

can be folded back into function vectors $\mathbf{R}_x(x)$ and $\mathbf{R}_y(y)$. Thus the structure of Construction 1 functions is preserved when we multiply one by another, showing closure. Of course the property of Construction 1 being Pasquali patches means $r(x,y)$ is closed under that property, and so is a Pasquali patch also, as can be seen when we integrate across $x$:

and the unit contribution is given by $h_n^*(x)$. \qed

Claim 2. Pasquali patches constructed as by Construction 1 have powers:

with

and

Proof by Induction. First, using the formula observe

and the second power

which is exactly what we expect from the definition of Construction 1 and Claim 1. Next, let us assume that the formula works and

Let us examine $p_{k+1}(x,y) = p_k(x,y) \star p(x,y)$

term by term upon dotting. The first dot the first term is:

The first dot the last term is:

The last dot the first term is:

The last dot the last term is:

The middle dot the first term is:

Finally, the middle dot the last term vanishes:

Putting all this information together we get:

\qed

Claim 3. It follows that

and $\lvert \alpha \rvert < 1$, $\gamma$ bounded, both conditions necessary and sufficient to establish that such a limiting surface indeed exists (convergence criterion). Furthermore, we check that this is indeed a Pasquali patch.

Proof. To reach a steady state limit,

Next, the steady state limit must be solely functions of $x$, so the functions of $y$ must vanish at the limit. Thus, it follows that $\lvert \alpha \rvert < 1$. We have now established bounds on $\alpha$, which happen to be exactly the radius of convergence of the geometric series:

and

gives the desired result:

As a check, we integrate across $x$ to corroborate the definition of Pasquali patch:

Claim 4.

is the eigenfunction corresponding to eigenvalue $\lambda = 1$ of all Construction 1 functions, through each power independently.

Proof. An eigenfunction $e(x)$ has the property

where the eigenfunction's   corresponding eigenvalue is $\lambda$.  The claim is more ambitious, and we will show that $p_\infty(x) \star p_n(x,y) = 1 \cdot p_\infty(x)$ for any $n \in \mathbb{Z}^+$.  The left-hand side is

Observe that the first term dotted with the middle and last terms produces $\beta$, which annihilates those contributions, so that the only relevant term is the first dot the first:

The second term dot the first produces:

The second term and the second:

and the second by the last term gives

Factoring gives

The parenthetical part of this last formulation is equivalent to

within the bounds already established for $\alpha$, and the result of the star product is

as we wanted to show. \qed

Claim 5.

where $A$ is a constant, is the eigenfunction corresponding to eigenvalue $\lambda = \alpha$ of

Proof. The eigenfunction equation is suggestive of what we must do to prove the claim: $e(x) \star p(x,y) = \lambda e(x)$. We must show that, starring the eigenfunction with $p(x,y)$, we obtain $\alpha$ times the eigenfunction.

Thus:

\qed

Part I v27