
Archive for the ‘Infinite Sums’ Category

On Clarifying Some Thoughts Regarding Functions in the Unit Interval

July 10th, 2017

Here's my attempt to clarify notions a bit. I still have a lot more examples to include.

We focus our attention on the restricted space [0,1] \times \mathbb{R} and on polynomial functions of finite type (finite degree)

 w(x) = a_0 + a_1 \cdot x + a_2 \cdot x^2 + \ldots + a_n \cdot x^n

or infinite type

 w_\infty(x) = b_0 + b_1 \cdot x + b_2 \cdot x^2 + \ldots + b_k \cdot x^k + \ldots

such that the area under the curve is bounded:

 \int_0^1 w_{\left( \infty \right)}(x) \, dx = \tilde{A}

More specifically, we look at a subset of these functions that are well-behaved in the sense that they possess no discontinuities and are infinitely differentiable; hence the change in notation from w_{\left( \infty \right)}(x) to \omega_{\left( \infty \right)}(x) (double-u to omega).

By the Fundamental Theorem of Calculus, a definite integral generally depends on two boundary points, so that

 \int_a^b f(x) \, dx = F(b) - F(a)

In the particular case of \omega_{\left( \infty \right)}(x) integrated over the unit interval, the value depends on only one of them, the upper bound: taking the antiderivative \Omega with zero constant term, the lower bound contributes \Omega(0) = 0:

\int_0^1 \omega_{\left( \infty \right)}(x) \, dx = \Omega(1)

Also, since every power of 1 equals 1, \Omega(1) is in effect a simple sum of the antiderivative's coefficients: for w(x) = a_0 + a_1 x + \ldots + a_n x^n,

 \Omega(1) = a_0 + \frac{a_1}{2} + \frac{a_2}{3} + \ldots + \frac{a_n}{n+1}

We want to tease out from this \Omega(1) valuable area information, as follows:

Definition
(June 26, 2017)
Define the finite differentiator by

 D(x) = \left[ \begin{array}{c} 1 \\ 2x \\ 3x^2 \\ \vdots \\ (n+1) x^n \end{array} \right]

and the infinite differentiator by

 D_\infty(x) = \left[ \begin{array}{c} 1 \\ 2x \\ 3x^2 \\ \vdots \\ (k+1) x^k \\ \vdots \end{array} \right]

Notice the following:

Claim
(June 26, 2017)

 \int_0^1 D_{\left( \infty \right)}(x) \, dx = \mathbf{1}

Proof

 \int_0^1 D_{\left( \infty \right)}(x) \, dx = \left[ \begin{array}{c} \int_0^1 1 \, dx \\ \int_0^1 2x \, dx \\ \vdots \\ \int_0^1 (k+1) x^k \, dx \\ \vdots \end{array} \right] = \left[ \begin{array}{c} \left. x \right\vert_0^1 \\ \left. x^2 \right\vert_0^1 \\ \vdots \\ \left. x^{k+1} \right\vert_0^1 \\ \vdots \end{array} \right] = \mathbf{1}
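
A quick symbolic spot-check of this claim for the first few entries of the differentiator (a minimal sketch in Python with sympy; the cutoff at five entries is arbitrary):

    import sympy as sp

    x = sp.symbols('x')
    # Each entry of D(x) is (k+1) x^k; its integral over [0, 1] is x^(k+1) evaluated at 1, i.e. 1.
    print([sp.integrate((k + 1) * x**k, (x, 0, 1)) for k in range(5)])
    # [1, 1, 1, 1, 1]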

We may now rewrite

 \omega_{\left( \infty \right)}(x) = v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x)

where v_{\left( \infty \right)} is a vector of constants that describes how the area accumulates:

 \int_0^1 \omega_{\left( \infty \right)}(x) \, dx = \int_0^1 v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) \, dx = v_{\left( \infty \right)} \cdot \int_0^1 D_{\left( \infty \right)}(x) \, dx = v_{\left( \infty \right)} \cdot \mathbf{1} = \sum_i v_i

Since \omega_{\left( \infty \right)}(x) were defined to have bounded areas, it is clear that the sum \sum_i v_i must converge in the (countably) infinite case.
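
As a numerical sanity check (a minimal sketch with numpy and scipy; the vector v below is an arbitrary choice, not one from the text), the area of v \cdot D(x) over [0,1] does come out to \sum_i v_i:

    import numpy as np
    from scipy.integrate import quad

    v = np.array([0.5, 0.25, 0.125, 0.125])  # arbitrary example vector

    def omega(x):
        # omega(x) = v . D(x), where the k-th entry of D(x) is (k+1) x^k
        k = np.arange(len(v))
        return np.dot(v, (k + 1) * x**k)

    area, _ = quad(omega, 0.0, 1.0)
    print(area, v.sum())  # both print 1.0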

This identity is insightful because it provides us with a bijective map between convergent sums (finite or infinite) and polynomials \omega_{\left( \infty \right)}(x). Furthermore, it tells us that there exists a class of infinite polynomial functions, namely \omega_\infty(x), that have stable, bounded area in the unit interval despite their infinite polynomial representation. In contrast, it also tells us that there exists a class of infinite polynomial functions with unstable, unbounded area in the unit interval (those with divergent \sum_i v_i ).

Our main objective is to understand how probability distributions transform in the unit interval, so it seems natural to limit the realm of possibilities to those \omega_{\left( \infty \right)}(x) for which \sum_i v_i = 1. Let us call this set \mathbf{\Omega}\left(\mathbb{Z}^+,1\right). Unfortunately, not all elements of the set are probability distributions in the unit interval, since this definition still includes functions that cross the x-axis and are negative on portions of the domain. What does seem clear is that the set of probability distributions is a subset:  \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) \supset \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}\right) .

Although eventually we would like to analyze the complete geometry of the \omega_{\left( \infty \right)}(x) that are probability distributions, we may want to focus on a \emph{core} subset that allows us to understand essential transformation properties, as per our main objective. In order to construct it, observe that each entry of D_{\left( \infty \right)}(x) is non-negative while x lies in the interval [0, 1]. Thus, if we require that each entry v_i of the vector v_{\left( \infty \right)} be non-negative, the dot product v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) will also be non-negative in the unit interval.

Therefore, the functions \omega_{\left( \infty \right)}(x) = v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) with v_i \geq 0 for all i are probability distributions in the unit interval, and they define the core subset:  \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}, v_i \geq 0 \right) \subset \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}\right) \subset \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) .

Observe that vector v_{\left( \infty \right)} itself can be interpreted as a discrete probability distribution. Thus from the core subset emerges a bijective correspondence between discrete probability distributions and continuous, bounded ones in the interval [0,1].

There are essentially two ways of constructing vectors v_{\left( \infty \right)} that will produce \omega_{\left( \infty \right)}(x) in the core subset.

Construction
(June 26, 2017)
Pick any finite or countably infinite vector u_{\left( \infty \right)} whose entries satisfy u_i \geq 0. Then define v_{\left( \infty \right)} = \left[ \frac{u_i}{\sum_j u_j} \right], and v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) lies in the core subset, provided \sum_i u_i converges and is nonzero.

Proof
Since all u_i \geq 0 and \sum_i u_i \neq 0, it follows that \sum_i u_i > 0, and thus  v_i = \frac{u_i}{\sum_j u_j} \geq 0. This is one of the conditions that define the core subset. Another is that the sum  \sum_i v_i must equal one. This is easily checked:

 \sum_i v_i = \sum_i \frac{u_i}{\sum_j u_j} = \frac{\sum_i u_i}{\sum_j u_j} = 1


We may call the previous construction a \emph{normalization} procedure of vectors with positive entries.
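
A minimal sketch of this normalization procedure in Python (the function name and the sample input are my own):

    import numpy as np

    def normalize(u):
        """Scale a nonnegative vector so that its entries sum to 1."""
        u = np.asarray(u, dtype=float)
        s = u.sum()
        if s == 0:
            raise ValueError("the entries must not all be zero")
        return u / s

    v = normalize([3.0, 1.0, 0.0, 4.0])
    print(v, v.sum())  # nonnegative entries summing to 1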

Construction
(June 26, 2017)
Suppose we want to construct v with a finite number of entries n. Pick n-1 values v_i \in [0,1] for i < n. Let the last element be the constrained one, v_n = 1 - \sum_{i<n} v_i. If v_n < 0, repeat the procedure, stopping once v_n is nonnegative.

Notice that we may permute the position of the constrained element as we wish. To construct v_\infty, let the constrained element sit in the first position (or any indexable position); the sum in the constraint is then an infinite convergent sum over the elements that are not the constrained element.

This last construction is the one we choose to focus on, because it gives us a visual way of understanding the core subset.
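
Here is a minimal sketch of the second construction in the finite case, with the constrained element in the last position (the function name and the rejection loop are my own rendering of the "repeat until nonnegative" step):

    import numpy as np

    def constrained_v(n, seed=0):
        """Pick v_1..v_{n-1} uniformly in [0,1]; retry until v_n = 1 - sum is nonnegative."""
        rng = np.random.default_rng(seed)
        while True:
            free = rng.uniform(0.0, 1.0, size=n - 1)
            last = 1.0 - free.sum()
            if last >= 0.0:
                return np.append(free, last)

    v = constrained_v(3)
    print(v, v.sum())  # a point in the core subset; entries sum to 1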

Example
(June 27, 2017)
Suppose we have \omega(x) = \left[ \begin{array}{cc} a & \overline{1-a} \end{array} \right] \cdot D(x). The last element is the constrained element, which we denote by an overline to avoid confusion. We can construct the one-dimensional vector space parametrized as a \cdot \left[ 1 \right] that describes the entirety of possibilities. The core subset consists of those elements for which a \in [0,1]. Polynomials in the unit interval up to linear terms are included.

Example
(June 27, 2017)
A more interesting example arises when we consider \omega(x) = \left[ \begin{array}{ccc} a & b & \overline{ 1-a-b} \end{array} \right] \cdot D(x). If we set the constraint aside, since it is fixed by the other entries, we are in a two-dimensional vector space parametrized by a and b:  a \cdot \left[\begin{array}{c} 1 \\ 0 \end{array} \right] + b \cdot \left[\begin{array}{c} 0 \\ 1 \end{array} \right] . Because polynomials up to parabolic terms are included, we will name this space the \emph{parabolic} set.

To see the geometry of the core subset, we look at the extreme values: if a = 0, then b can be at most 1; if b = 0, then a can be at most 1; and if the constrained entry is set to zero, then b = 1 - a. The core subset is thus represented by the isosceles right triangle with vertices (0,0), (1,0), and (0,1), together with its interior.

Definition
(June 28, 2017)
Within this context, let us draw up a few definitions.

  • A discrete transform is a function T\colon \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) \to \mathbf{\Omega}\left(\mathbb{Z}^+,1\right), T(v_{\left( \infty \right)}) = v_{\left( \infty \right)} \cdot A_{\left( \infty \right)}, in which a matrix A_{\left( \infty \right)} of constant entries acts on v_{\left( \infty \right)}.
  • A continuous transform is a continuous path in the space \mathbf{\Omega}\left(\mathbb{Z}^+,1\right), described by parametrizing the entries of v_{\left( \infty \right)}.
    • An open path is one that connects two endpoints (a beginning and an end), such that the end is in the closure of the path (but not necessarily in the path).
    • A closed path is one without a beginning or an end and is not a single point. In two-dimensional space it encloses an area.
  • A core transform is a discrete transform that has the property of closure and therefore takes a vector in the core subset to another in the core subset. In the continuous case, the path lies within the core subset.

Example [Discrete Transform]
(July 5, 2017)
Define the discrete transform in the \emph{parabolic} set:

 T(v) = v \cdot \left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right]

Observe that

 T\left( \left[ \begin{array}{cc} \frac{1}{4} & \frac{1}{4} \end{array} \right] \right) = \left[ \begin{array}{cc} \frac{1}{4} & \frac{1}{2} \end{array} \right]

takes

 \left[\begin{array}{ccc} \frac{1}{4} & \frac{1}{4} & \overline{\frac{1}{2}} \end{array} \right] \cdot D(x) \rightsquigarrow \left[\begin{array}{ccc} \frac{1}{4} & \frac{1}{2} & \overline{\frac{1}{4}} \end{array} \right] \cdot D(x)

or, in other words,

 \frac{1}{4} + \frac{1}{2} x + \frac{3}{2}x^2 \rightsquigarrow \frac{1}{4} + x + \frac{3}{4} x^2

Although in this particular case the transform took an element of the core subset to another element of the core subset, the transform is not closed in the core subset (although it is closed in \mathbf{\Omega}\left(\mathbb{Z}^+,1\right), since the constrained entry keeps the area equal to 1):

 T\left( \left[ \begin{array}{cc} 1 & 0 \end{array} \right] \right) = \left[ \begin{array}{cc} 1 & 1 \end{array} \right]

here the transform takes an element in the designated isosceles triangle to one outside it (the constrained entry becomes 1 - 1 - 1 = -1 < 0).
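
In code, a small sketch of the two cases above (the membership helper is my own):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

    def in_core_parabolic(v):
        # (a, b) lies in the core parabolic subset iff a >= 0, b >= 0,
        # and the constrained entry 1 - a - b is also >= 0 (the triangle).
        a, b = v
        return a >= 0 and b >= 0 and 1 - a - b >= 0

    for v in (np.array([0.25, 0.25]), np.array([1.0, 0.0])):
        Tv = v @ A
        print(v, "->", Tv, "stays in core:", in_core_parabolic(Tv))
    # [0.25 0.25] -> [0.25 0.5 ] stays in core: True
    # [1. 0.]     -> [1. 1.]     stays in core: False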

Because all entries v_i of v_{\left(\infty\right)} are nonnegative and sum to 1, rendering it in effect a discrete probability distribution, there is a natural mechanism that guarantees the closure of the transform (in \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \{ 0 \}, v_i \geq 0 \right) ), taking elements in the core subset to elements in the core subset: multiplication by a Markov matrix.

Example [Discrete Regular Markov Matrix Core Transform]
(July 5, 2017)
Define the discrete regular Markov matrix

 M = \left[ \begin{array}{ccc} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{array}\right]

such that  \sum_j a_{i,j} = 1 for every row i (M is regular when some power of it has all entries strictly positive, which is what guarantees the limiting behavior below). For vectors v in the \emph{core parabolic} set, the transforms

T_n(v) = v \cdot M^n

are closed in the \emph{core parabolic} set for n \in \left\{1, 2, \ldots \right\}, including the transform defined by

 \lim_{n \to \infty} M^n = M^\infty

which is in the closure of the set of powers of M. A known property of M^\infty is that its rows are identical (and sum to 1):

 M^\infty = \left[ \begin{array}{ccc} m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \end{array} \right], m_1 + m_2 + m_3 = 1

It follows that any vector in the core subset is taken to  \left[ \begin{array}{ccc} m_1 & m_2 & \overline{m_3} \end{array} \right] , which itself lies in the core subset:

 T(v) = \left[ \begin{array}{ccc} v_1 & v_2 & \overline{v_3} \end{array} \right] \cdot \left[ \begin{array}{ccc} m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \end{array} \right] = \left(v_1 + v_2 + \overline{v_3} \right) \cdot \left[ \begin{array}{ccc} m_1 & m_2 & m_3 \end{array} \right] = \left[ \begin{array}{ccc} m_1 & m_2 & \overline{m_3} \end{array} \right]

From this, we can conclude that we can design a core transform taking \emph{any} element of the core subset to one specific target, simply by repeating the target vector in every row of the transform matrix.

Because every entry of M^n approaches the corresponding entry of M^\infty,

 \lim_{n \to \infty} \left( M^\infty - M^n \right) = 0

the collection of vectors \mathbb{T} = \left\{v, T_1(v), T_2(v), \ldots, T_k(v), \ldots \right\} traces a discrete, jumping (often oscillating) path starting at the vector v and ending at T_\infty(v). We may choose to include the endpoint T_\infty(v) or not, depending on whether we include the matrix M^\infty. This reflects the fact that M^\infty is in the closure of the collection \mathbb{M} = \left\{ M^1, M^2, \ldots, M^k, \ldots \right\} ; naturally and by extension, T_\infty(v) is in the closure of the collection \mathbb{T}.

Take

 \begin{array}{ccc} M & = & \frac{1}{5} \left[\begin{array}{ccc} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 1 \end{array} \right] \\ M^2 & = & \frac{1}{25} \left[\begin{array}{ccc} 9 & 8 & 8 \\ 8 & 9 & 8 \\ 8 & 8 & 9 \end{array} \right] \\ M^3 & = & \frac{1}{125} \left[\begin{array}{ccc} 41 & 42 & 42 \\ 42 & 41 & 42 \\ 42 & 42 & 41 \end{array} \right] \\ & \vdots & \\ M^\infty & = & \frac{1}{3} \left[\begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right] \end{array}

and v = \left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right] so that

 \begin{array}{ccc} v = \left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right] & \iff & f_0(x) = 1 \\ T_1(v) = \frac{1}{5} \left[ \begin{array}{ccc} 1 & 2 & \overline{2} \end{array} \right] & \iff & f_1(x) = \frac{1}{5} + \frac{4}{5} x + \frac{6}{5} x^2 \\ T_2(v) = \frac{1}{25} \left[ \begin{array}{ccc} 9 & 8 & \overline{8} \end{array} \right] & \iff & f_2(x) = \frac{9}{25} + \frac{16}{25} x + \frac{24}{25} x^2 \\ T_3(v) = \frac{1}{125} \left[ \begin{array}{ccc} 41 & 42 & \overline{42} \end{array} \right] & \iff & f_3(x) = \frac{41}{125} + \frac{84}{125} x + \frac{126}{125} x^2 \\ & \vdots & \\ \\ T_\infty (v) = \frac{1}{3} \left[ \begin{array}{ccc} 1 & 1 & \overline{1} \end{array} \right] & \iff & f_\infty(x) = \frac{1}{3} + \frac{2}{3}x + x^2 \end{array}
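
A short numpy sketch reproducing the iterates above (the printed values should match the fractions listed):

    import numpy as np

    M = np.array([[1, 2, 2],
                  [2, 1, 2],
                  [2, 2, 1]]) / 5.0
    v = np.array([1.0, 0.0, 0.0])

    Tv = v.copy()
    for n in range(1, 4):
        Tv = Tv @ M                        # T_n(v) = v . M^n
        coeffs = Tv * np.array([1, 2, 3])  # coefficients of f_n(x) = T_n(v) . D(x)
        print(n, Tv, coeffs)

    # The iterates approach T_inf(v) = [1/3, 1/3, 1/3], i.e. f_inf(x) = 1/3 + 2/3 x + x^2:
    print(v @ np.linalg.matrix_power(M, 50))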

Example [Open Path Core Transform]
(July 7, 2017)
Again, in the core parabolic subset, an open path going from  \left[ \begin{array}{ccc} 0 & 0 & \overline{1} \end{array} \right] \cdot D(x) to  \left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right] \cdot D(x) can be constructed using the parameter \theta:

 \left[ \begin{array}{ccc} \theta & 0 & \overline{1 - \theta} \end{array} \right] \cdot D(x) \iff f_\theta(x) = \theta + 3 \cdot \left(1 - \theta \right)x^2

with  \theta \in \left[ 0,1\right) or  \theta \in \left[ 0,1\right] .
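
A one-line sympy check that every point of this path has unit area (a sketch):

    import sympy as sp

    x, theta = sp.symbols('x theta')
    f = theta + 3 * (1 - theta) * x**2
    print(sp.simplify(sp.integrate(f, (x, 0, 1))))  # 1, for every theta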

Example [Circular Path Core Transform]
(July 7, 2017)
The largest circular path in the core parabolic subset is the incircle of the triangle, centered at \left(1 - \frac{1}{\sqrt{2}}, 1 - \frac{1}{\sqrt{2}}\right) with radius 1 - \frac{1}{\sqrt{2}}:

 \left[ \begin{array}{ccc} \left(1 - \frac{1}{\sqrt{2}} \right) \cdot \left( \cos(\theta) + 1 \right) & \left(1 - \frac{1}{\sqrt{2}} \right) \cdot \left( \sin(\theta) + 1 \right) & 1 - \left(1 - \frac{1}{\sqrt{2}} \right) \cdot \left( \cos(\theta) + \sin(\theta) + 2 \right) \end{array} \right] \cdot D(x)

for \theta \in \left[0, 2\pi \right) or \theta \in \left[ 0, 2\pi \right].
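
A quick numerical check that the path stays inside the triangle, i.e. that all three entries remain nonnegative around the circle (a sketch):

    import numpy as np

    r = 1 - 1 / np.sqrt(2)  # inradius of the triangle with vertices (0,0), (1,0), (0,1)
    theta = np.linspace(0, 2 * np.pi, 1000)
    a = r * (np.cos(theta) + 1)
    b = r * (np.sin(theta) + 1)
    c = 1 - a - b            # the constrained entry
    print(a.min(), b.min(), c.min())  # all >= 0 up to rounding; the circle touches each edge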

On Convergent Sequences

November 13th, 2016

So here's a question that I've been preoccupied with over the last couple of days. Suppose I have a sequence whose sum converges, say the geometric sequence 1/2, 1/4, 1/8, \ldots, whose sum is 1. If one were to multiply each element of the sequence by its corresponding index, would the sum still converge? In this case, I'm concerned with 1/2 \cdot 1, 1/4 \cdot 2, 1/8 \cdot 3, \ldots and the convergence of its sum. I wish to explore whether there are particular circumstances in which this is indeed the case.

I'll leave the question open-ended for a bit. I have solved this via derivative considerations, I think (no pun intended... I think I have solved it by considering derivatives of functions).
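
For the record, here is the standard derivative trick such considerations usually rely on (a sketch of one possible route, not necessarily the one meant): differentiate the geometric series term by term inside its radius of convergence,

 \sum_{k=0}^\infty x^k = \frac{1}{1-x} \quad \Rightarrow \quad \sum_{k=1}^\infty k \, x^{k-1} = \frac{1}{(1-x)^2} \quad \Rightarrow \quad \sum_{k=1}^\infty k \, x^k = \frac{x}{(1-x)^2}, \qquad \vert x \vert < 1

so at x = 1/2 the index-weighted series 1/2 \cdot 1 + 1/4 \cdot 2 + 1/8 \cdot 3 + \ldots converges to \frac{1/2}{(1/2)^2} = 2.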

On Naturally Arising Differential Equations

October 18th, 2016

So if you have been following the argument a bit, it turns out that

 p(x,y,t) = \alpha^{t-1} \mathbf{P}_x(x) \cdot \mathbf{P}_y(y) + f_n^*(x)

is the transition probability propagator, starting at time t = 1, of a probability distribution, say c_0(x) at t = 0, on the interval x = 0 to 1. A question that I tried to answer was how zeros are propagated, via the propagator or at the probability distribution, which led to theorems that I dubbed "Shadow Casting": in that context, it turns out that a zero, if found on the propagator, remains in place until infinity, and via the propagator it appears on the probability distribution we want to propagate as well (therefore casting a "shadow"). I hadn't thought of the following approach until recently, and I haven't worked it out completely, but it connects to the theory of Ordinary Differential Equations, which seems interesting to me. Here's the argument:

Suppose we focus on p(x,y,1) for the time being and wish to find the zeros on the transition probability surface. Thus we seek p(x,y,1) = 0 and suppose y(x) is an implicit function of x. We have

 p(x,y,1) = 0 = \mathbf{P}_x(x) \cdot \mathbf{P}_y(y(x)) + f_n^*(x)

Now suppose \mathbf{P}_y(y(x)) is a collection of derivatives of y(x) (up to order n-1), so that, for example,

 \mathbf{P}_y(y(x)) = \left[ \begin{array}{c} y(x) \\ y^{\prime}(x) \\ \vdots \\ y^{(n-1)}(x) \end{array} \right]

and I think we have successfully created a link to ODEs. To find the zeros on the surface (and on the propagator's other time surfaces), we plug in the correct \alpha and solve, using the familiar methods (solve the homogeneous equation and find a particular solution via sine-cosine-exponential guesses, variation of parameters, power series, etc.).
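
As a toy illustration of the link (the choices \mathbf{P}_x = \left[ 1, 0, 1 \right] and f_n^*(x) = -x below are made up for the sketch, not taken from the Compendium), the zero condition becomes y(x) + y^{\prime\prime}(x) = x, which can be handed to sympy:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Toy instance of P_x . P_y(y(x)) + f*(x) = 0 with hypothetical
    # P_x = [1, 0, 1] and f*(x) = -x, i.e. y(x) + y''(x) - x = 0.
    ode = sp.Eq(y(x) + y(x).diff(x, 2) - x, 0)
    print(sp.dsolve(ode, y(x)))
    # -> Eq(y(x), C1*sin(x) + C2*cos(x) + x)  (homogeneous + particular solution)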

I'm working out the specifics, for example including the constraints we know on f_n^*(x) or \mathbf{P}_x(x). Perhaps this approach will help us broaden the spectrum of differential equations we can solve, by linking them via Shadow Casting.

It may seem basic, but I think there is some untapped power here.

Additionally, I have been working on clarifying some thoughts on polynomials whose area converges on the interval from 0 to 1, but all those details tend to be a bit drab and I keep having trouble focusing. Nevertheless, there is a lot of clarity that I have been able to include, and it is now in the newest draft of "Compendium". By the way, I renamed it: it is now "Compendium of Claims and Proofs on How Probability Distributions Transform". There's still soooo much more to do.

Here it is! part-i-v28

More On Convergent-in-Area Polynomials in the Interval [0,1]

October 9th, 2014

So remember last time I wrote about the rich algebraic structure of polynomials that have convergent area in the interval [0,1]? Turns out there is a lot that can be done, even more than I imagined. I have already begun to sketch out the properties and still have a long way to go, but at least I have defined operations on these polynomials that I think can be very useful.

Here's my newest version of Compendium where I have added these things.

Part I v23

V19 of Compendium

August 12th, 2014

I have added version 19 of Compendium! Seems like we've come a long way!  I feel like up to around page 35ish it is pretty solid.  Will still work on tying it to QM more directly.

Part I v19