
On Patch Stationariness (RWLA,MCT,GT,AM Part VI)

January 16th, 2011

In my previous posts, I have been discussing how we can extend functional analysis a little by "inventing" continuous matrices (surfaces) that contain all the information we may want on how to transform, in a special case, one probability distribution into another, and I have tried, by analogy, to extend Markov theory as well.  In this special case, I have been talking about how a surface (a "continuous collection of distributions") can reach steady-state: by self-combining these surfaces over and over.  I even showed how to obtain a couple of steady-states empirically, by calculating patch powers explicitly and then attempting to infer the time evolution, quite successfully in one case.

The usual Markov treatment suggests another way to obtain the steady-state (the limiting transition probability matrix): find a stationary distribution  \mathbf{\widehat p} , a row vector which, multiplied on the left of the transition probability matrix  P , gives back  \mathbf{\widehat p} .  Within the discrete transition probability matrix context, a vector  \mathbf{\widehat p}  with this property is also a (left) eigenvector of  P  with eigenvalue 1; see for example the Schaum's outline Probability, Random Variables, and Random Processes, p. 169, as well as chapter 11 on Markov chains in Laurie Snell's online probability book.  An important theorem says that the limiting transition probability matrix  \lim_{n \rightarrow \infty} P^n = \mathbf{\widehat P}  is a matrix whose rows are identical and equal to the stationary distribution  \mathbf{\widehat p} .  To calculate the stationary distribution (and hence the limiting transition probability matrix), one would usually solve a system of equations. For example, if:

 P = \left[ \begin{array}{cc} \frac{3}{4} & \frac{1}{4} \\ \frac{1}{2} & \frac{1}{2} \end{array} \right]

the stationary distribution

 \mathbf{\widehat p} P = \mathbf{\widehat p}

looks explicitly like:

 \left[ \begin{array}{cc} p_1 & p_2 \end{array} \right] \left[ \begin{array}{cc} \frac{3}{4} & \frac{1}{4} \\ \frac{1}{2} & \frac{1}{2} \end{array} \right] = \left[ \begin{array}{cc} p_1 & p_2 \end{array} \right]

in other words, the system:

 \frac{3}{4} p_1 + \frac{1}{2} p_2 = p_1

 \frac{1}{4} p_1 + \frac{1}{2} p_2 = p_2

each of which gives  p_1 = 2 p_2 and is solvable if we notice that  p_1 + p_2 = 1 , yielding  \mathbf{\widehat p} = \left[ \begin{array}{cc} \frac{2}{3} & \frac{1}{3} \end{array} \right] , and

 \mathbf{\widehat P} = \left[ \begin{array}{c} \mathbf{\widehat p} \\ \mathbf{\widehat p} \end{array} \right]
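Before moving on to patches, here is a minimal numerical sketch of the discrete calculation above, written in Python with numpy (the language and library are my choice for illustration, not anything used in these posts).  It recovers  \mathbf{\widehat p}  both by solving the linear system and by raising  P  to a high power:

    import numpy as np

    # Transition matrix from the worked example above.
    P = np.array([[3/4, 1/4],
                  [1/2, 1/2]])

    # Stationary distribution: solve p(P - I) = 0 together with p1 + p2 = 1,
    # set up here as a least-squares problem.
    A = np.vstack([(P - np.eye(2)).T, np.ones((1, 2))])
    b = np.array([0.0, 0.0, 1.0])
    p_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(p_hat)                          # -> [0.6667 0.3333], i.e. [2/3, 1/3]

    # The limiting matrix: for large n, every row of P^n approaches p_hat.
    print(np.linalg.matrix_power(P, 50))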

In this post, I want to set up an algorithm to calculate the stationary surface (steady-state) of patches as I've defined them, following the above argument by analogy.  To do so, I revisit both of my previous examples, now calculating the steady state from this vantage point.  The fact that we can define such an algorithm in the first place has ginormous implications, in the sense that we can define stationary function distributions that would therefore seem to be eigen(patch(ix))vectors (corresponding to eigen(patch(ix))values) of surface distributions, and we can seemingly also solve a continuously infinite system of independent equations, however strange that actually sounds.

Example 1.  Calculating the stationary patch  p_{\infty}(x,y)  when  p_1(x,y) = 2 x - \frac{2 x y^3}{3} + x^2 y^3 .

I have already shown that  p_1(x,y) is indeed a patch because  \int_0^1 p_1(x,y) dx = 1 , for any choice of  y .

Suppose there exists a distribution  a(x) , defined as always on  x \in [0,1] , such that  a(x) \star p_1(x,y) = a(x) .  Explicitly,

 \int_0^1 a(1 - y) \cdot \left( 2 x - \frac{2 x y^3}{3} + x^2 y^3 \right) dy = a(x)

We can break up the integral as:

 2 x \int_0^1 a(1-y) dy - \frac{2 x}{3} \int_0^1 y^3 a(1-y) dy + x^2 \int_0^1 y^3 a(1-y) dy = a(x)

The first integral, as we've seen many times, adds up to one because  a(x)  is a probability distribution, so let's rewrite the whole thing as:

 2 x - \left( \frac{2 x}{3} - x^2 \right) \int_0^1 y^3 a(1 - y) dy = a(x)

The integral is in reality just a constant, so we have that a(x) looks something like:

 2 x - \left( \frac{2 x}{3} - x^2 \right) B = a(x) if we let

 B = \int_0^1 y^3 a(1 - y) dy

Now this integral in  y , though it is a constant, seems impossible to evaluate without more knowledge of  a(1-y) ; but in truth we have everything we need, because we have a specification of  a(x) .  The crucial thing to notice is that the derivatives of  a(x)  do not go on "eternally": since  a(x)  is a polynomial of degree at most 2, they eventually vanish.  Thus we can attempt integration by parts and see where this takes us.  The tabular method gives us an organized way to write this out:

 \begin{array}{c|c} \text{Derivatives} & \text{Integrals} \\ \hline a(1-y) & y^3 \\ -a'(1-y) & \frac{y^4}{4} \\ a''(1-y) & \frac{y^5}{20} \\ 0 & \frac{y^6}{120} \\ \vdots & \vdots \end{array}

and, remembering the alternating sign when we multiply, we get the series:

 \left. \frac{a(1-y) y^4}{4} + \frac{a'(1-y) y^5}{20} + \frac{a''(1-y) y^6}{120} + 0 + \ldots \right\rvert_0^1

Substituting the lower limit  y = 0  gives all zeroes, but substituting the upper limit  y = 1  gives us the interesting "pasquali series":

 \frac{a(0) }{4} + \frac{a'(0)}{20} + \frac{a''(0)}{120} + 0 + \ldots

which asks us to evaluate  a(x)  and its derivatives (up until they vanish) at zero:

 \begin{array}{ccc} a(x) & = & 2 x - \left( \frac{2 x}{3} - x^2 \right) B \\ a'(x) & = & 2 - \frac{2 B}{3} + 2 B x \\ a''(x) & = & 2 B \end{array}

 \begin{array}{ccc} a(0) & = & 0 \\ a'(0) & = & 2 - \frac{2 B}{3} = \frac{6 - 2 B}{3}\\ a''(0) & = & 2 B \end{array}

All that's left now is to substitute back into the series:

 B = \frac{\frac{6 - 2 B}{3}}{20} + \frac{2 B}{120} = \frac{1}{10} - \frac{B}{60} , which solves to  B = \frac{6}{61} , as desired.  (I tested this with Wolfram Alpha, using the query "integrate [[2(1-y) - (.0983607)(((2(1-y))/3) - (1-y)^2)]*[2x - (2 x y^3)/3 + x^2 y^3]] dy from y = 0 to 1", and obtained the same numeric decimal value in the output.)
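The same check can be done symbolically; here is a minimal sketch in Python with sympy (again my own tooling choice, not anything from these posts), solving the fixed point for  B  and confirming that the "pasquali series" reproduces the integral exactly:

    import sympy as sp

    x, y, B = sp.symbols('x y B')

    # Candidate stationary distribution, with the constant B left symbolic.
    a = 2*x - (sp.Rational(2, 3)*x - x**2)*B

    # B was defined as the integral of y^3 * a(1 - y) over [0, 1]; imposing
    # the fixed point B = (that integral) pins down its value.
    integral = sp.integrate(y**3 * a.subs(x, 1 - y), (y, 0, 1))
    print(sp.solve(sp.Eq(B, integral), B))    # -> [6/61]

    # The "pasquali series" a(0)/4 + a'(0)/20 + a''(0)/120 reproduces the
    # integral exactly, since a is a quadratic polynomial.
    series = (a.subs(x, 0)/4 + sp.diff(a, x).subs(x, 0)/20
              + sp.diff(a, x, 2).subs(x, 0)/120)
    print(sp.simplify(integral - series))     # -> 0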

We have therefore that  a(x) = 2 x - \left( \frac{2 x}{3} - x^2 \right) \frac{6}{61}  is a stationary distribution, and the steady-state patch would seem to be  p_\infty(x,y) = a(x) .  I personally think this is very cool, because it validates several propositions.  First, we can find steady-state patches analytically: even when we may think we have a (continuously) infinite system to solve, it reduces essentially to a (countable!) series, estimable provided the "pasquali series" converges, and this is a means other than computing patch powers, spotting a pattern, proving it perhaps by induction, and taking the limit as the powers go to infinity, as I did in my previous post.  Second, it validates the "crazy" idea that (certain?) special surfaces like patches have eigen(patch(ix))vectors, as the existence of  a(x)  suggests, which in the discrete matrix setting we would obtain by solving a finite system of equations (and which we obtained here, again, by solving the "pasquali series").
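As a further numerical sanity check that  a(x)  is stationary under the star operation, the following sketch (Python with scipy.integrate.quad; again my choice of tooling) evaluates  \int_0^1 a(1-y) \, p_1(x,y) \, dy  at a few values of  x  and compares against  a(x) :

    from scipy.integrate import quad

    B = 6/61

    def a(x):
        # Candidate stationary distribution found above.
        return 2*x - (2*x/3 - x**2)*B

    def p1(x, y):
        # The patch p_1(x, y) from Example 1.
        return 2*x - 2*x*y**3/3 + x**2*y**3

    # The star operation: integrating a(1 - y) * p1(x, y) over y in [0, 1]
    # should return a(x) at every x if a is stationary.
    for xv in [0.1, 0.25, 0.5, 0.9]:
        lhs, _ = quad(lambda y: a(1 - y) * p1(xv, y), 0, 1)
        print(xv, lhs, a(xv))    # lhs matches a(xv) to quadrature accuracy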

Example 2.  Take the patch  p(x,y) = 1 - \cos(2 \pi x) \cos( 2 \pi y) .  Again, this is a patch because  \int_0^1 p(x,y) dx = 1  for any value of  y .  To establish the steady-state surface  p_\infty(x,y) , we proceed as before and write

 \int_0^1 a(1-y) \left( 1 - \cos(2 \pi x) \cos(2 \pi y) \right) dy = a(x) , or, explicitly,

 \int_0^1 a(1-y) dy - \cos(2 \pi x) \int_0^1 a(1-y) \cos(2 \pi y) dy = a(x)

The first integral adds up to 1 by hypothesis, while the second one turns out to be zero after integrating by parts:

 \begin{array}{c|c} \text{Derivatives} & \text{Integrals} \\ \hline a(1-y) & \cos(2 \pi y) \\ -a'(1-y) & \frac{\sin(2 \pi y)}{2 \pi} \\ a''(1-y) & \frac{-\cos(2 \pi y)}{4 \pi^2} \\ -a'''(1-y) & \frac{-\sin(2 \pi y)}{8 \pi^3} \\ \vdots & \vdots \end{array}

so we have:

 \left. \frac{a(1-y) \sin(2 \pi y)}{2 \pi} - \frac{a'(1-y) \cos(2 \pi y)}{4 \pi^2} - \frac{a''(1-y) \sin (2 \pi y)}{8 \pi^3} + \ldots \right\rvert_0^1  and the awesome-slash-interesting "pasquali series"

 -\frac{a'(0)}{4 \pi^2} + \frac{a'''(0)}{16 \pi^4} - \frac{a^{(5)}(0)}{64 \pi^6} + \ldots  from which, by the Fundamental Theorem of Calculus, we must subtract

 -\frac{a'(1)}{4 \pi^2} + \frac{a'''(1)}{16 \pi^4} - \frac{a^{(5)}(1)}{64 \pi^6} + \ldots

So, writing  B = \int_0^1 a(1-y) \cos(2 \pi y) dy  for the constant second integral, we are left with  B = \frac{a'(1)}{4 \pi^2} - \frac{a'(0)}{4 \pi^2} + \frac{a'''(0)}{16 \pi^4} - \frac{a'''(1)}{16 \pi^4} + \frac{a^{(5)}(1)}{64 \pi^6} - \frac{a^{(5)}(0)}{64 \pi^6} + \ldots  and also with

 \begin{array}{ccc} a(x) & = & 1 - \cos(2 \pi x) B \\ a'(x) & = & 2 \pi \sin(2 \pi x) B \\ a''(x) & = & 4 \pi^2 \cos(2 \pi x) B \\ a'''(x) & = & -8 \pi^3 \sin(2 \pi x) B \\ \vdots & \vdots & \vdots \end{array}

To show this thoroughly, we should prove by induction that every odd derivative of  a(x)  contains a  \sin(2 \pi x)  factor (or we can argue from the periodicity of the derivatives, as we do here), so that evaluating each such term at 0 and at 1 makes it vanish, leaving us with  B = 0  and  a(x) = 1 .  Therefore, as before,  p_\infty(x, y) = a(x) = 1 , which is consistent with my derivation in the previous post, too.
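As with Example 1, the conclusion is easy to confirm numerically; a minimal sketch (Python with scipy, once more my own choice of tooling) checks both that the constant  B  vanishes when  a(x) = 1  and that the star operation then returns 1 at any  x :

    import numpy as np
    from scipy.integrate import quad

    def p(x, y):
        # The patch from Example 2.
        return 1 - np.cos(2*np.pi*x) * np.cos(2*np.pi*y)

    # With a(x) = 1, the constant B = integral of a(1-y) cos(2 pi y) dy
    # over [0, 1] vanishes, as the argument above concludes.
    B, _ = quad(lambda y: np.cos(2*np.pi*y), 0, 1)
    print(B)    # ~ 0 (up to quadrature error)

    # Stationarity: the star operation applied to a(x) = 1 returns 1 at any x.
    for xv in [0.0, 0.3, 0.7]:
        lhs, _ = quad(lambda y: p(xv, y), 0, 1)
        print(xv, lhs)    # ~ 1.0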