On Claims Derived from Shifted Legendre Polynomial Coefficient Formulas

January 7th, 2018

A few combinatorial identities... perhaps you can think of some proofs? Hint: I did not approach these by induction:

General Multiplicative Notions


 \binom{n}{k-1} \binom{n+k-1}{k-1} = k n \left( \frac{1}{n-k} \right) \left(\frac{1}{n-k+1} \right) \prod_{m \neq n, k} \left| \frac{n}{m-k} -1 \right|

for n \neq k and n \neq k - 1; here and throughout this post, the index m runs over 1, \ldots, n (omitting the indicated values).


 \binom{2n - 1}{n-1} = \binom{2n - 1}{n} = \prod_{m \neq n} \frac{2n-m}{n-m}


 \binom{2n}{n} = \left(n + 1 \right) \prod_{m \neq n} \frac{2n - m + 1}{n - m + 1} = \prod_{m} \frac{2n - m + 1}{n - m + 1} = \prod_m \frac{2n - m + 1}{m}

Corollary: Binomial Multiplicative Formula

 \binom{n}{k} = \prod_{m = 1}^k \frac{n-m+1}{k-m+1} = \prod_{m = 1}^k \frac{n-m+1}{m}
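The corollary doubles as the standard overflow-friendly way to compute binomial coefficients. A sketch in Python (the language is my choice, not the post's): after the m-th factor, the running product is exactly \binom{n}{m}, so the integer division never truncates.

```python
from math import comb  # only used to cross-check the product formula

def binom(n: int, k: int) -> int:
    """C(n, k) as the product of (n - m + 1) / m for m = 1..k.

    After the m-th factor the running value is exactly C(n, m),
    so the integer division is always exact."""
    result = 1
    for m in range(1, k + 1):
        result = result * (n - m + 1) // m
    return result

assert all(binom(n, k) == comb(n, k)
           for n in range(20) for k in range(n + 1))
```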


 2 \prod_{m \neq n} \frac{2n - m}{n - m} = \left( n + 1 \right) \prod_{m \neq n} \frac{2n - m + 1}{n - m + 1}

Catalan Numbers


 \frac{1}{n+1} \binom{2n}{n} = \prod_{m \neq n} \frac{2n - m + 1}{n - m + 1}


 \prod_{m = 2}^n \frac{n+m}{m} = \prod_{m \neq n} \frac{2n - m + 1}{n - m + 1} = \frac{2}{n+1} \prod_{m \neq n} \frac{2n - m}{n - m}
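Taking the index m to run over 1, \ldots, n (skipping the excluded values), the identities above can be machine-checked with exact rational arithmetic. A Python sketch, not a proof; the helper P is mine:

```python
from fractions import Fraction
from math import comb, prod

def P(n, f, skip=()):
    """Product of f(m) over m = 1..n, skipping the indices in `skip`."""
    return prod((f(m) for m in range(1, n + 1) if m not in skip),
                start=Fraction(1))

for n in range(2, 12):
    # the two central-binomial products and the Catalan product
    assert P(n, lambda m: Fraction(2*n - m, n - m), skip=(n,)) == comb(2*n - 1, n - 1)
    assert P(n, lambda m: Fraction(2*n - m + 1, n - m + 1)) == comb(2*n, n)
    assert (n + 1) * P(n, lambda m: Fraction(2*n - m + 1, n - m + 1),
                       skip=(n,)) == comb(2*n, n)
    # the first identity, for every admissible k (k = n is excluded)
    for k in range(1, n):
        lhs = comb(n, k - 1) * comb(n + k - 1, k - 1)
        rhs = (Fraction(k * n, (n - k) * (n - k + 1))
               * P(n, lambda m: abs(Fraction(n, m - k) - 1), skip=(n, k)))
        assert lhs == rhs
```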

On the Remarkable Fact that a Sequence with Convergent Sum when Dotted with the Harmonic Sequence Yields a New Convergent Sum

December 3rd, 2017

This is kind of incredible: take a sequence with convergent sum (here and below, s < \pm \infty is shorthand for "the sum s converges")

 \left[a_i \right] \cdot \mathbf{1} = \sum_i a_i < \pm \infty

and let

 H = \left[ \begin{array}{c} 1 \\ \frac{1}{2} \\ \frac{1}{3} \\ \vdots \\ \frac{1}{k} \\ \vdots \end{array} \right]

 It turns out that

 \left[ a_i \right] \cdot H < \pm \infty
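A numerical illustration of the claim (not a proof; the particular sequence and the Python sketch are my choices), using a_i = (-1)^{i+1}/\sqrt{i}, whose sum converges but not absolutely:

```python
import math

# a_i = (-1)^(i+1) / sqrt(i): the sum converges by the alternating
# series test, though not absolutely; the claim says [a_i] . H converges too
def partial(N):
    return sum((-1) ** (i + 1) / math.sqrt(i) / i for i in range(1, N + 1))

# the partial sums of [a_i] . H settle down: the tail past N/2 is tiny
assert abs(partial(200_000) - partial(100_000)) < 1e-7
```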

A corollary of this is that if we define

 H^q = \left[ \begin{array}{c} 1 \\ \frac{1}{2^q} \\ \frac{1}{3^q} \\ \vdots\\ \frac{1}{k^q} \\ \vdots \end{array} \right]


 \left[ a_i \right] \cdot H^q < \pm \infty

With this we can prove that the p-series for p>2 converges.  Take the known fact that

 \sum_i \frac{1}{i^2} < \pm \infty \iff \left[ \begin{array}{cccccc} 1 &\frac{1}{2^2}&\frac{1}{3^2}&\ldots&\frac{1}{i^2}&\ldots\end{array}\right]\cdot\mathbf{1} < \pm \infty


 \left[ \begin{array}{cccccc} 1 &\frac{1}{2^2}&\frac{1}{3^2}&\ldots&\frac{1}{i^2}&\ldots\end{array} \right] \cdot H = \sum_i \frac{1}{i^{2+1}} < \pm \infty

Clearly, q-fold application of H yields:

 \left[ \begin{array}{cccccc} 1 &\frac{1}{2^2}&\frac{1}{3^2}&\ldots&\frac{1}{i^2}&\ldots\end{array} \right] \cdot H^q = \sum_i \frac{1}{i^{2+q}} < \pm \infty

Next define

 H^{-q} = \left[\begin{array}{c} 1 \\ 2^q \\ 3^q \\ \vdots \\ k^q \\ \vdots \end{array} \right]


Since

 \left[ \begin{array}{cccccc} 1 &\frac{1}{2^2}&\frac{1}{3^2}&\ldots&\frac{1}{i^2} & \ldots\end{array} \right] \cdot H^{-1} = \sum_i \frac{1}{i}

diverges, it seems clear that H, viewed as a function (say, h) on sequences with convergent sum, is not surjective: \left[ \frac{1}{i^2} \right] lies in that space, but its preimage \left[ \frac{1}{i} \right] does not.

This clears up the question I had about whether a sequence with convergent sum, dotted with H^{-1}, remains convergent (answer: not generally).

Let me know if you are interested in a proof (which does not rely on the Comparison Test).
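For a concrete contrast between H and H^{-1}, here is a quick numerical sketch (Python, my choice) with the sequence \left[ \frac{1}{i^2} \right]: partial sums stabilize under H and keep growing under H^{-1}:

```python
import math

def dot(N, q):
    """Partial sum of [1/i^2] . H^q  =  sum_{i <= N} 1 / i^(2 + q)."""
    return sum(1.0 / i ** (2 + q) for i in range(1, N + 1))

# q = 1: the partial sums stabilise (the limit is zeta(3) ~ 1.2021)
assert abs(dot(20_000, 1) - dot(10_000, 1)) < 1e-7

# q = -1: the harmonic series; partial sums keep growing like log N
growth = dot(1_000_000, -1) - dot(100_000, -1)
assert abs(growth - math.log(10)) < 0.01
```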

On Clarifying Some Thoughts Regarding Functions in the Unit Interval

July 10th, 2017

Here's my attempt to clarify these notions a bit.  I have yet to include more examples.

We focus our attention on the restricted space [0,1] \times \mathbb{R} and polynomial functions of finite type (finite degree)

 w(x) = a_0 + a_1 \cdot x + a_2 \cdot x^2 + \ldots + a_n \cdot x^n

or infinite type

 w_\infty(x) = b_0 + b_1 \cdot x + b_2 \cdot x^2 + \ldots + b_k \cdot x^k + \ldots

such that the area under the curve is bounded:

 \int_0^1 w_{\left( \infty \right)}(x) \, dx = \tilde{A}

More specifically, we look at the subset of these functions that are well behaved, in the sense that they possess no discontinuities and are infinitely differentiable; hence the change in notation from w_{\left( \infty \right)}(x) to \omega_{\left( \infty \right)}(x) (double-u to omega).

By the Fundamental Theorem of Calculus, a definite integral generally depends on two boundary points, so that

 \int_a^b f(x) \, dx = F(b) - F(a)

In the particular case of integrating \omega_{\left( \infty \right)}(x) over the unit interval, it depends solely on the upper bound: choosing the antiderivative with zero constant term, the lower bound contributes \Omega(0) = 0:

\int_0^1 \omega_{\left( \infty \right)}(x) \, dx = \Omega(1)

Also, powers of 1 are equal to 1, so in effect \Omega(1) is a simple sum of polynomial coefficients. We want to tease out from this \Omega(1) valuable area information, as follows:

(June 26, 2017)
Define the finite differentiator by

 D(x) = \left[ \begin{array}{c} 1 \\ 2x \\ 3x^2 \\ \vdots \\ (n+1) x^n \end{array} \right]

and the infinite differentiator by

 D_\infty(x) = \left[ \begin{array}{c} 1 \\ 2x \\ 3x^2 \\ \vdots \\ (k+1) x^k \\ \vdots \end{array} \right]

Notice the following:

(June 26, 2017)

 \int_0^1 D_{\left( \infty \right)}(x) \, dx = \mathbf{1}


 \int_0^1 D_{\left( \infty \right)}(x) \, dx = \left[ \begin{array}{c} \int_0 ^ 1 1 \, dx \\ \int_0^1 2x \, dx \\ \vdots \\ \int_0^1 (k+1) x^k \, dx \\ \vdots \end{array} \right] = \left[ \begin{array}{c} \left. x \right\vert_0^1 \\ \left. x^2 \right\vert_0^1 \\ \vdots \\ \left. x^{k+1} \right\vert_0^1 \\ \vdots \end{array} \right] = \mathbf{1}

We may now rewrite

 \omega_{\left( \infty \right)}(x) = v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x)

where v_{\left( \infty \right)} is a vector of constants that describes how the area accumulates:

 \int_0^1 \omega_{\left( \infty \right)}(x) \, dx = \int_0^1 v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) \, dx = v_{\left( \infty \right)} \cdot \int_0^1 D_{\left( \infty \right)}(x) \, dx = v_{\left( \infty \right)} \cdot \mathbf{1} = \sum_i v_i

Since the \omega_{\left( \infty \right)}(x) were defined to have bounded area, it is clear that the sum \sum_i v_i must converge in the (countably) infinite case.

The equation is incredibly insightful because it provides us with a bijective map between sequences with convergent sum (finite or infinite) and the polynomials \omega_{\left( \infty \right)}(x). Furthermore, it tells us that there exists a class of infinite polynomial functions, namely \omega_\infty(x), that have stable, bounded area in the unit interval, despite their infinite polynomial representation. In contrast, it also tells us that there exists a class of infinite polynomial functions with unstable, unbounded area in the unit interval (those with divergent \sum_i v_i ).
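A small numerical sanity check of the identity \int_0^1 \omega \, dx = \sum_i v_i (a Python sketch; the vector v and the helper names are arbitrary choices of mine):

```python
def omega(v, x):
    """omega(x) = v . D(x) = sum_i v_i * (i + 1) * x**i."""
    return sum(vi * (i + 1) * x ** i for i, vi in enumerate(v))

def riemann_area(v, n=100_000):
    """Midpoint Riemann sum approximating the integral of omega over [0, 1]."""
    return sum(omega(v, (j + 0.5) / n) for j in range(n)) / n

# the area under the curve equals the plain coefficient sum  sum_i v_i
v = [0.25, 0.25, 0.5]
assert abs(riemann_area(v) - sum(v)) < 1e-8
```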

Our main objective is to understand how probability distributions transform in the unit interval, so it seems natural to limit the realm of possibilities to those \omega_{\left( \infty \right)}(x) for which \sum_i v_i = 1. Let us call this set \mathbf{\Omega}\left(\mathbb{Z}^+,1\right). Unfortunately not all elements of the set are probability distributions in the unit interval, since this definition still includes functions that cross the x-axis and are negative for portions of the domain. What does seem clear is that the set of probability distributions is a subset:  \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) \supset \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}\right) .

Although eventually we would like to analyze the complete geometry of \omega_{\left( \infty \right)}(x) that are probability distributions, we may want to focus on a \emph{core} subset that allows us to understand essential transformation properties, as per our main objective. In order to construct it, observe that each entry of D_{\left( \infty \right)}(x) is non-negative for x in the interval [0, 1]. Thus, if we require that each entry v_i of the vector v_{\left( \infty \right)} be non-negative, the dot product v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) will also be non-negative in the unit interval.

Therefore, we have that \omega_{\left( \infty \right)}(x) = v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) with v_i \geq 0 for all i are probability distributions in the unit interval and define the core subset:  \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}, v_i \geq 0 \right) \subset \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \left\{ 0 \right\}\right) \subset \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) .

Observe that vector v_{\left( \infty \right)} itself can be interpreted as a discrete probability distribution. Thus from the core subset emerges an injection between discrete probability distributions and continuous, bounded ones in the interval [0,1].

There are essentially two ways of constructing vectors v_{\left( \infty \right)} that will produce \omega_{\left( \infty \right)}(x) in the core subset.

(June 26, 2017)
Pick any finite or countably infinite vector u_{\left( \infty \right)} whose entries satisfy u_i \geq 0. Then define v_{\left( \infty \right)} = \left[ \frac{u_i}{\sum_i u_i} \right], and v_{\left( \infty \right)} \cdot D_{\left( \infty \right)}(x) lies in the core subset, provided \sum_i u_i converges and is nonzero.

Since all u_i \geq 0 and the sum is nonzero, it follows that \sum_i u_i > 0, and thus  v_i = \frac{u_i}{\sum_i u_i} \geq 0. This is one of the conditions that define the core subset. Another is that the sum  \sum_i v_i must equal one. This is easily checked:

 \sum_i v_i = \sum_i \frac{u_i}{\sum_i u_i} = \frac{\sum_i u_i}{\sum_i u_i} = 1

We may call the previous construction a \emph{normalization} procedure of vectors with positive entries.
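The normalization procedure, sketched in Python for finite integer vectors for simplicity (the function name and the sample inputs are mine, for illustration):

```python
from fractions import Fraction

def normalize(u):
    """Send u (nonnegative entries, nonzero sum) to v = [u_i / sum(u)],
    so that sum(v) = 1 and v . D(x) lies in the core subset."""
    s = sum(u)
    if s <= 0:
        raise ValueError("entries must be nonnegative with a nonzero sum")
    return [Fraction(ui, s) for ui in u]

v = normalize([3, 1, 0, 4])
assert sum(v) == 1 and all(vi >= 0 for vi in v)
```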

(June 26, 2017)
Suppose we want to construct v with a finite number of entries n. Pick the first n-1 entries v_i, i < n, each in [0,1]. The last element is constrained: let v_n = 1 - \sum_{i < n} v_i. If v_n < 0, repeat the procedure, stopping once v_n is nonnegative.

Notice that we may permute the position of the constrained element as we wish. To construct v_\infty, let the constrained element occupy the first (or any indexable) position; the sum in the constraint is then an infinite convergent sum over the unconstrained elements.

This last construction is the one we choose to focus on, because it gives us a visual way of understanding the core subset.

(June 27, 2017)
Suppose we have \omega(x) = \left[ \begin{array}{cc} a & \overline{1-a} \end{array} \right] \cdot D(x). The last element is the constrained element, which we denote by an overline to avoid confusion. We can construct the one-dimensional vector space parametrized as a \cdot \left[ 1 \right] that describes the entirety of possibilities. The core subset consists of those elements for which a \in [0,1]. Polynomials in the unit interval up to linear terms are included.

(June 27, 2017)
A more interesting example arises when we consider \omega(x) = \left[ \begin{array}{ccc} a & b & \overline{ 1-a-b} \end{array} \right] \cdot D(x). If we set the constrained entry aside, since it is fixed by the others, we are in a two-dimensional vector space, parametrized by a and b:  a \cdot \left[\begin{array}{c} 1 \\ 0 \end{array} \right] + b \cdot \left[\begin{array}{c} 0 \\ 1 \end{array} \right] . Because polynomials up to parabolic terms are included, we will name this space the \emph{parabolic} set.

To see the geometry of the core subset, we look at the extreme values: if a is zero, then b can be at most 1; conversely, if b is zero, then a can be at most 1. Finally, if the constrained entry is set to zero, then  b = 1 - a. The core subset is represented by an isosceles right triangle and its interior.

(June 28, 2017)
Within this context, let us draw up a few definitions.

  • A discrete transform is a function T\colon \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) \to \mathbf{\Omega}\left(\mathbb{Z}^+,1\right), T(v_{\left( \infty \right)}) = v_{\left( \infty \right)} \cdot A_{\left( \infty \right)}, such that a matrix A_{\left( \infty \right)} with discrete entries acts on v_{\left( \infty \right)}.
  • A continuous transform is a continuous path in the space \mathbf{\Omega}\left(\mathbb{Z}^+,1\right), described by parametrizing entries of v_{\left( \infty \right)}.
    • An open path is one that connects two endpoints (a beginning and an end), such that the end is in the closure of the path (but not necessarily in the path).
    • A closed path is one without a beginning or an end and is not a single point. In two-dimensional space it encloses an area.
  • A core transform is a discrete transform that has the property of closure and therefore takes a vector in the core subset to another in the core subset. In the continuous case, the path lies within the core subset.

Example [Discrete Transform]
(July 5, 2017)
Define the discrete transform in the \emph{parabolic} set:

 T(v) = v \cdot \left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right]

Observe that

 T\left( \left[ \begin{array}{cc} \frac{1}{4} & \frac{1}{4} \end{array} \right] \right) = \left[ \begin{array}{cc} \frac{1}{4} & \frac{1}{2} \end{array} \right]


 \left[\begin{array}{ccc} \frac{1}{4} & \frac{1}{4} & \overline{\frac{1}{2}} \end{array} \right] \cdot D(x) \rightsquigarrow \left[\begin{array}{ccc} \frac{1}{4} & \frac{1}{2} & \overline{\frac{1}{4}} \end{array} \right] \cdot D(x)

or, in other words,

 \frac{1}{4} + \frac{1}{2} x + \frac{3}{2}x^2 \rightsquigarrow \frac{1}{4} + x + \frac{3}{4} x^2

Although in this particular case the transform took an element in the core subset into another in the core subset, the transform is not closed in the core subset (although it is in \mathbf{\Omega}\left(\mathbb{Z}^+,1\right) due to the constraint that the area equal to 1):

 T\left( \left[ \begin{array}{cc} 1 & 0 \end{array} \right] \right) = \left[ \begin{array}{cc} 1 & 1 \end{array} \right]

here the transform takes an element in the designated isosceles triangle to one outside it.
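The worked example above can be replayed in a few lines (a Python sketch; the helper names are mine):

```python
from fractions import Fraction

A = [[1, 1],
     [0, 1]]  # acts on the free coordinates (a, b); the third entry is constrained

def transform(v, A):
    """Row vector times matrix: T(v) = v . A."""
    return [sum(vi * A[i][j] for i, vi in enumerate(v))
            for j in range(len(A[0]))]

def poly_coeffs(v):
    """Coefficients of v_full . D(x), where v_full appends the constrained 1 - sum(v)."""
    v_full = list(v) + [1 - sum(v)]
    return [vi * (i + 1) for i, vi in enumerate(v_full)]

v = [Fraction(1, 4), Fraction(1, 4)]
w = transform(v, A)
assert w == [Fraction(1, 4), Fraction(1, 2)]
assert poly_coeffs(v) == [Fraction(1, 4), Fraction(1, 2), Fraction(3, 2)]
assert poly_coeffs(w) == [Fraction(1, 4), Fraction(1), Fraction(3, 4)]

# the transform is not closed in the core subset: [1, 0] escapes the triangle
assert 1 - sum(transform([Fraction(1), Fraction(0)], A)) < 0
```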

Because all entries v_i of v_{\left(\infty\right)} are nonnegative and sum to one, rendering it in effect a discrete probability distribution, there is an obvious mechanism that ensures the closure of a transform (in \mathbf{\Omega}\left(\mathbb{Z}^+,1, \mathbb{R}^+ \cup \{ 0 \}, v_i \geq 0 \right) ), taking elements in the core subset to elements in the core subset.

Example [Discrete Regular Markov Matrix Core Transform]
(July 5, 2017)
Define the discrete regular Markov matrix

 M = \left[ \begin{array}{ccc} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{array}\right]

such that each row sums to one,  \sum_j a_{i,j} = 1 (regularity means that some power of M has all positive entries). For vectors v in the \emph{core parabolic} set, the transforms

T_n(v) = v \cdot M^n

are closed in the \emph{core parabolic} set for n \in \left\{1, 2, \ldots \right\}, including the transform defined by

 \lim_{n \to \infty} M^n = M^\infty

which is in the closure of the set of powers of M. A known property of M^\infty is that its rows are identical (and sum to 1):

 M^\infty = \left[ \begin{array}{ccc} m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \end{array} \right], m_1 + m_2 + m_3 = 1

It follows that any vector in the core subset is taken to  \left[ \begin{array}{ccc} m_1 & m_2 & \overline{m_3} \end{array} \right] , which itself lies in the core subset:

 T(v) = \left[ \begin{array}{ccc} v_1 & v_2 & \overline{v_3} \end{array} \right] \cdot \left[ \begin{array}{ccc} m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \\ m_1 & m_2 & m_3 \end{array} \right] = \left(v_1 + v_2 + \overline{v_3} \right) \cdot \left[ \begin{array}{ccc} m_1 & m_2 & m_3 \end{array} \right] = \left[ \begin{array}{ccc} m_1 & m_2 & \overline{m_3} \end{array} \right]

From this, we can conclude that we can design a core transform that takes \emph{any} element in the core subset to a specific target, simply by repeating the target vector in every row of the transform matrix.

Because every entry of M^n approaches the corresponding entry of M^\infty,

 \lim_{n \to \infty} \left( M^\infty - M^n \right) = 0

the collection of vectors \mathbb{T} = \left\{v, T_1(v), T_2(v), \ldots, T_k(v), \ldots \right\} traces a discrete, jumping (often oscillating) path starting at the vector v and ending at T_\infty(v). In our considerations, we may choose to include the endpoint T_\infty(v) or not, depending on whether we choose to include the matrix M^\infty or not. This arises from the notion that M^\infty is in the closure of the collection \mathbb{M} = \left\{ M^1, M^2, \ldots, M^k, \ldots \right\} . Naturally and by extension, T_\infty(v) is in the closure of the collection \mathbb{T}.


For example, take
 \begin{array}{ccc} M & = & \frac{1}{5} \left[\begin{array}{ccc} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 1 \end{array} \right] \\ M^2 & = & \frac{1}{25} \left[\begin{array}{ccc} 9 & 8 & 8 \\ 8 & 9 & 8 \\ 8 & 8 & 9 \end{array} \right] \\ M^3 & = & \frac{1}{125} \left[\begin{array}{ccc} 41 & 42 & 42 \\ 42 & 41 & 42 \\ 42 & 42 & 41 \end{array} \right] \\ & \vdots & \\ M^\infty & = & \frac{1}{3} \left[\begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{array} \right] \end{array}

and v = \left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right] so that

 \begin{array}{ccc} v = \left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right] & \iff & f_0(x) = 1 \\ T_1(v) = \frac{1}{5} \left[ \begin{array}{ccc} 1 & 2 & \overline{2} \end{array} \right] & \iff & f_1(x) = \frac{1}{5} + \frac{4}{5} x + \frac{6}{5} x^2 \\ T_2(v) = \frac{1}{25} \left[ \begin{array}{ccc} 9 & 8 & \overline{8} \end{array} \right] & \iff & f_2(x) = \frac{9}{25} + \frac{16}{25} x + \frac{24}{25} x^2 \\ T_3(v) = \frac{1}{125} \left[ \begin{array}{ccc} 41 & 42 & \overline{42} \end{array} \right] & \iff & f_3(x) = \frac{41}{125} + \frac{84}{125} x + \frac{126}{125} x^2 \\ & \vdots & \\ \\ T_\infty (v) = \frac{1}{3} \left[ \begin{array}{ccc} 1 & 1 & \overline{1} \end{array} \right] & \iff & f_\infty(x) = \frac{1}{3} + \frac{2}{3}x + x^2 \end{array}
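The table above can be reproduced with exact rational arithmetic (a Python sketch; matmul is a helper written here for illustration):

```python
from fractions import Fraction

f = Fraction
M = [[f(1, 5), f(2, 5), f(2, 5)],
     [f(2, 5), f(1, 5), f(2, 5)],
     [f(2, 5), f(2, 5), f(1, 5)]]

def matmul(A, B):
    """Exact 3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# exact powers reproduce the table
M2 = matmul(M, M)
M3 = matmul(M2, M)
assert M2[0] == [f(9, 25), f(8, 25), f(8, 25)]
assert M3[0] == [f(41, 125), f(42, 125), f(42, 125)]

# T_n(v) = v . M^n marches toward the limit row [1/3, 1/3, 1/3]
P = M
for _ in range(40):
    P = matmul(P, M)
Tn = P[0]  # v = [1, 0, 0], so v . M^n is just the first row of M^n
assert all(abs(x - f(1, 3)) < f(1, 10**9) for x in Tn)
```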

Example [Open Path Core Transform]
(July 7, 2017)
Again, in the core parabolic subset, an open path going from  \left[ \begin{array}{ccc} 0 & 0 & \overline{1} \end{array} \right] \cdot D(x) to  \left[ \begin{array}{ccc} 1 & 0 & \overline{0} \end{array} \right] \cdot D(x) can be constructed using the parameter \theta:

 \left[ \begin{array}{ccc} \theta & 0 & \overline{1 - \theta} \end{array} \right] \cdot D(x) \iff f_\theta(x) = \theta + 3 \cdot \left(1 - \theta \right)x^2

with  \theta \in \left[ 0,1\right) or  \theta \in \left[ 0,1\right] .

Example [Circular Path Core Transform]
(July 7, 2017)
The largest circular path, in the core parabolic subset, is:

 \left[ \begin{array}{ccc} \left(1 - \frac{1}{\sqrt{2}} \right) \cdot \left( \cos(\theta) + 1 \right) & \left(1 - \frac{1}{\sqrt{2}} \right) \cdot \left( \sin(\theta) + 1 \right) & 1 - \left(1 - \frac{1}{\sqrt{2}} \right) \cdot \left( \cos(\theta) + \sin(\theta) + 2 \right) \end{array} \right] \cdot D(x)

for \theta \in \left[0, 2\pi \right) or \theta \in \left[ 0, 2\pi \right].
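A quick check (a Python sketch; the grid of sample angles is my construction) that the circular path stays inside the core parabolic triangle and actually touches its boundary, which is what makes it the largest:

```python
import math

r = 1 - 1 / math.sqrt(2)  # inradius of the triangle a >= 0, b >= 0, a + b <= 1

min_c = 1.0
for k in range(10_000):
    t = 2 * math.pi * k / 10_000
    a = r * (math.cos(t) + 1)
    b = r * (math.sin(t) + 1)
    c = 1 - a - b  # the constrained entry
    # every point of the path stays in the core (triangle) set
    assert a >= -1e-12 and b >= -1e-12 and c >= -1e-12
    min_c = min(min_c, c)

# at theta = pi/4 the path touches the hypotenuse b = 1 - a, so the
# circle is inscribed: any larger radius would leave the triangle
assert abs(min_c) < 1e-9
```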

On Shifted Legendre Polynomial Coefficients

March 27th, 2017

So here it is: the shifted Legendre polynomial coefficients are:

 a(k-1,n) = a_{k-1}^n = (-1)^k k \left( \prod_{m \neq 1} \frac{m-1}{m-n-1} \prod_{m \neq k} \frac{m-n-1}{m-k} \prod_{m \neq n} \frac{m-n-k}{m-n-1} \right)

with k \leq n and m = 1 \ldots n, so that

\tilde{P}_n(x) = \sum_{i = 0}^n a_i \cdot x^i

I'd love to hear your ideas for a proof.  Mine is a very particular, thorny, meticulous one. I may be off on the sign, but the absolute value is right. Let me know!
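Taking m to run over 1..n as stated, the absolute values of the product formula can be cross-checked against the standard closed form for shifted Legendre coefficients, a_k^n = (-1)^{n+k} \binom{n}{k} \binom{n+k}{k} (whose absolute value, shifted down one index, is the binomial product \binom{n}{k-1}\binom{n+k-1}{k-1} appearing in the post above). A Python sketch, not a proof:

```python
from fractions import Fraction
from math import comb, prod

def a_abs(n, k):
    """|a_{k-1}^n| from the product formula, with m running over 1..n."""
    p1 = prod((Fraction(m - 1, m - n - 1) for m in range(1, n + 1) if m != 1),
              start=Fraction(1))
    p2 = prod((Fraction(m - n - 1, m - k) for m in range(1, n + 1) if m != k),
              start=Fraction(1))
    p3 = prod((Fraction(m - n - k, m - n - 1) for m in range(1, n + 1) if m != n),
              start=Fraction(1))
    return abs(k * p1 * p2 * p3)

# compare absolute values with the standard closed form,
# |a_{k-1}^n| = C(n, k-1) * C(n+k-1, k-1)
for n in range(1, 8):
    for k in range(1, n + 1):
        assert a_abs(n, k) == comb(n, k - 1) * comb(n + k - 1, k - 1)
```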

On Convergent Sequences

November 13th, 2016

So here's a question that has preoccupied me for the last couple of days.  Suppose I have a sequence with convergent sum, say the geometric sequence  1/2, 1/4, 1/8, ..., whose sum is 1.  If one were to multiply each element of the sequence by its index, would the result still have a convergent sum? In this case, I'm concerned with 1/2 * 1, 1/4 * 2, 1/8 * 3, ... and the convergence of its sum.  I wish to explore whether there are particular circumstances in which this is indeed the case.

I'll leave the question open-ended for a bit.  I have solved this via derivative considerations, I think (I do not mean it as a pun... I think I have solved this by considering derivatives of functions).
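For the specific geometric example, the derivative approach hinted at above works out explicitly: differentiating \sum_{n \geq 0} x^n = \frac{1}{1-x} gives \sum_{n \geq 1} n x^{n-1} = \frac{1}{(1-x)^2}, so \sum_{n \geq 1} \frac{n}{2^n} = \frac{1}{2} \cdot \frac{1}{(1 - 1/2)^2} = 2. A quick numerical check (Python, my choice):

```python
# a_n = 1/2^n sums to 1; multiplying by the index gives sum n / 2^n.
# Differentiating sum_{n>=0} x^n = 1/(1-x) gives sum n x^(n-1) = 1/(1-x)^2,
# so sum_{n>=1} n x^n = x / (1-x)^2, which at x = 1/2 equals 2.
s = sum(n / 2 ** n for n in range(1, 200))
assert abs(s - 2.0) < 1e-9
```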