## On Patch Stationariness (RWLA,MCT,GT,AM Part VI)

In my previous posts, I have been discussing how we can extend functional analysis a little by "inventing" continuous matrices (surfaces) that contain all the information we may want on how to transform, in a special case, probability distributions into one another, and I have tried, by analogy, to extend Markov theory as well. In this special case, I have been talking about how a surface (a "continuous collection of distributions") can reach steady-state: by self-combining these surfaces over and over; I even showed how to obtain a couple of steady-states empirically, by calculating patch powers specifically and then attempting to infer the time evolution, quite successfully in one case. The usual Markov treatment suggests another way to obtain the steady-state (the limiting transition probability matrix): finding a stationary distribution $\pi$ so that left-multiplying the transition probability matrix $P$ by the vector gives us $\pi P = \pi$. Within the discrete transition probability matrix context, a vector with this property is also a (left) eigenvector of $P$ with eigenvalue 1. See for example Schaum's series Probability, Random Variables, and Random Processes, p. 169, as well as Laurie Snell's chapter 11 on Markov Chains in his online Probability book. An important theorem says that the limiting transition probability matrix $\bar{P}$ is a matrix whose rows are identical and equal to the stationary distribution $\pi$. To calculate the stationary distribution (and the limiting transition probability matrix) one would usually solve a system of equations. For example, if:

$$P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}$$

the stationary distribution

$$\pi = (\pi_1, \pi_2)$$

looks explicitly like:

$$(\pi_1, \pi_2) \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix} = (\pi_1, \pi_2)$$

in other words, the system:

$$\pi_1 (1-a) + \pi_2 b = \pi_1 \qquad \qquad \pi_1 a + \pi_2 (1-b) = \pi_2$$

each of which gives $\pi_1 a = \pi_2 b$ and is solvable if we notice that $\pi_1 + \pi_2 = 1$, yielding $\pi_1 = \frac{b}{a+b}$, and $\pi_2 = \frac{a}{a+b}$.
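As a quick numerical sketch of the procedure just described, take a generic two-state chain with placeholder entries $a$ and $b$ (these particular numbers are illustrative only), and solve $\pi P = \pi$ together with the normalization $\pi_1 + \pi_2 = 1$:

```python
import numpy as np

# Generic 2-state transition matrix (placeholder values). Rows sum to 1.
a, b = 0.3, 0.1
P = np.array([[1 - a, a],
              [b, 1 - b]])

# pi P = pi rearranges to pi (P - I) = 0; append the normalization
# sum(pi) = 1 as an extra equation and solve by least squares.
A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
rhs = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Closed form for the 2-state chain: pi = (b/(a+b), a/(a+b))
print(pi)  # [0.25, 0.75] for these placeholder values
```

The least-squares call is just a convenient way to handle the overdetermined (but consistent) system; any linear solver works.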

In this post, I want to set up an algorithm to calculate the stationary surface (steady-state) of patches as I've defined them, following the above argument by analogy. To do so, I revisit both of my previous examples, now calculating the steady state from this vantage point. The fact that we can define such an algorithm in the first place has ginormous implications: it means we can define stationary function distributions, which would therefore seem to be eigen(patch(ix))vectors (corresponding to eigen(patch(ix))values) of surface distributions, and that we can seemingly solve a continuously infinite quantity of independent equations, however strange that sounds.

Example 1, calculating the stationary patch when $p(x,y) = 2x - \frac{2x}{3}(1-y)^3 + x^2(1-y)^3$.

I have already shown that $p(x,y)$ is indeed a patch because $\int_0^1 p(x,y)\,dx = 1$, for any choice of $y$.

Suppose there exists a distribution $a(x)$, defined as always on $[0,1]$, with $\int_0^1 a(x)\,dx = 1$, so that

$$\int_0^1 a(y)\,p(x,y)\,dy = a(x).$$

Explicitly,

$$\int_0^1 a(y)\left(2x - \frac{2x}{3}(1-y)^3 + x^2(1-y)^3\right) dy = a(x).$$

We can break up the integral as:

$$2x \int_0^1 a(y)\,dy + \left(x^2 - \frac{2x}{3}\right)\int_0^1 a(y)(1-y)^3\,dy = a(x)$$

The first part, we've seen many times, adds up to one because $a(y)$ is a probability distribution, so let's rewrite the whole thing as:

$$a(x) = 2x + \left(x^2 - \frac{2x}{3}\right)\int_0^1 a(y)(1-y)^3\,dy$$

The integral is in reality just a constant, so we have that $a(x)$ looks something like:

$$a(x) = 2x + C\left(x^2 - \frac{2x}{3}\right)$$

if we let

$$C = \int_0^1 a(y)(1-y)^3\,dy$$

Now this integral in $C$, though it is a constant, is seemingly impossible to solve without more knowledge of $a(y)$; but the truth of the matter is we have everything we need, because we have a specification of $a(y)$. The crucial thing to notice is that derivatives of $a(y)$ do not exist "eternally," because $a(y)$ is a polynomial of maximal degree 2: its third derivative vanishes identically. Thus we can attempt integration by parts and try to see where this takes us. The tabular method gives us an organized way to write this out, differentiating $a(y)$ on the left and integrating $(1-y)^3$ on the right:

| derivatives of $a(y)$ | integrals of $(1-y)^3$ |
| --- | --- |
| $a(y)$ | $(1-y)^3$ |
| $a'(y)$ | $-\frac{(1-y)^4}{4}$ |
| $a''(y)$ | $\frac{(1-y)^5}{20}$ |
| $0$ | $-\frac{(1-y)^6}{120}$ |

and, remembering the alternating sign when we multiply, we get the series:

$$C = \left[-a(y)\frac{(1-y)^4}{4} - a'(y)\frac{(1-y)^5}{20} - a''(y)\frac{(1-y)^6}{120}\right]_0^1$$

The substitution of the upper limit of the integral gives us all zeroes, since every term carries a power of $(1-y)$, but the zero-substitution at the lower limit gives us the interesting "pasquali series":

$$C = \frac{a(0)}{4} + \frac{a'(0)}{20} + \frac{a''(0)}{120}$$

which asks of us to evaluate $a(y)$ and its derivatives (until just before the derivative vanishes) at zero:

$$a(0) = 0, \qquad a'(0) = 2 - \frac{2C}{3}, \qquad a''(0) = 2C$$

All that's left now is to substitute back into the series:

$$C = \frac{0}{4} + \frac{2 - \frac{2C}{3}}{20} + \frac{2C}{120} = \frac{1}{10} - \frac{C}{30} + \frac{C}{60} = \frac{1}{10} - \frac{C}{60}$$
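The fixed-point arithmetic can be checked symbolically; here is a minimal sketch, assuming the reconstructed forms $a(y) = 2y + C(y^2 - 2y/3)$ and weight $(1-y)^3$:

```python
import sympy as sp

y, c = sp.symbols('y c')

# Candidate stationary distribution with unknown constant c:
# a(y) = 2y + c*(y^2 - 2y/3)  (reconstructed form, an assumption)
a = 2*y + c*(y**2 - sp.Rational(2, 3)*y)

# Self-consistency: c must equal the integral it was defined as,
# c = integral of a(y)*(1-y)^3 over [0, 1]
integral = sp.integrate(a * (1 - y)**3, (y, 0, 1))  # equals 1/10 - c/60
solution = sp.solve(sp.Eq(c, integral), c)[0]
print(solution)  # 6/61
```

This reproduces the series result without integration by parts, which makes it a handy sanity check on the tabular computation.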

which solves to $C = \frac{6}{61} \approx 0.0983607$, which is what we want (I tested the following code with Wolfram Alpha: "integrate [[2(1-y) - (.0983607)(((2(1-y))/3) - (1-y)^2)]*[2x - (2 x y^3)/3 + x^2 y^3]] dy from y = 0 to 1", which is the same integral after the substitution $y \mapsto 1-y$, and obtained the same numeric decimal value at the output).
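The same check can be run in Python; a sketch under the reconstructed forms (the patch and the constant $6/61$ as derived above):

```python
from scipy.integrate import quad

C = 6/61  # the constant solved for above, approximately 0.0983607

def p(x, y):
    # patch as reconstructed: 2x + (x^2 - 2x/3)(1-y)^3
    return 2*x + (x**2 - 2*x/3) * (1 - y)**3

def a(x):
    # candidate stationary distribution
    return 2*x + C * (x**2 - 2*x/3)

# Stationarity: the integral of a(y) p(x, y) dy over [0,1]
# should give back a(x) at every x.
for x in (0.1, 0.5, 0.9):
    val, _ = quad(lambda y: a(y) * p(x, y), 0, 1)
    print(x, val, a(x))  # val matches a(x)
```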

We have therefore that $a(x) = 2x + \frac{6}{61}\left(x^2 - \frac{2x}{3}\right)$ is a stationary distribution, and the steady-state patch would seem to be $P^\infty(x,y) = a(x)$, identical across $y$. I personally think this is very cool, because it validates several propositions: that we can find steady-state patches analytically (even when we may think we have a (continuously) infinite system to solve, it reduces essentially to a (countable!) series, estimable provided the "pasquali series" converges), by a means other than finding the patch powers, attempting to see a pattern, proving it perhaps by induction, and then taking the limit as the patch powers go to infinity, much as I did in my previous post. It also validates the "crazy" idea that (certain?) special surfaces like patches have eigen(patch(ix))vectors, as arguing by analogy would suggest, and which, in the discrete matrix setting, we would obtain by solving a finite system of equations (as we did here, again, by solving the "pasquali series").
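As a cross-check of the other route mentioned here (self-combining the patch over and over), here is a hedged numerical sketch, again assuming the reconstructed patch $p(x,y) = 2x + (x^2 - 2x/3)(1-y)^3$: discretize it on a grid, iterate the patch combination $(P \circ P)(x,z) = \int_0^1 p(x,y)\,p(y,z)\,dy$ as a matrix product, and compare every column of the result against the claimed stationary distribution.

```python
import numpy as np

# Reconstructed patch and stationary candidate (assumptions)
def p(x, y):
    return 2*x + (x**2 - 2*x/3) * (1 - y)**3

def a(x):
    return 2*x + (6/61) * (x**2 - 2*x/3)

# Midpoint grid on [0,1]; the combination integral becomes a
# matrix product times the grid spacing dy.
n = 400
dy = 1.0 / n
grid = (np.arange(n) + 0.5) * dy
P = p(grid[:, None], grid[None, :])  # P[i, j] approximates p(x_i, y_j)

Pk = P.copy()
for _ in range(20):  # self-combine repeatedly
    Pk = Pk @ P * dy

# Every column of the limit should equal the stationary distribution a(x).
err = np.abs(Pk - a(grid)[:, None]).max()
print(err)  # small: the patch powers flatten onto a(x)
```

The midpoint rule keeps the discretized patch very nearly column-stochastic, so the iteration mirrors taking patch powers and watching them converge.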

Example 2. In my second example, take the patch from my previous post. Again we are looking at a patch because $\int_0^1 p(x,y)\,dx = 1$ for any value of $y$. To establish the steady-state surface, or stationary distribution $a(x)$, we proceed as before and write

$$\int_0^1 a(y)\,p(x,y)\,dy = a(x)$$

The first integral adds up to 1 by hypothesis, while the second one is zero after integrating by parts: the tabular method again yields an awesome-slash-interesting "pasquali series" of endpoint evaluations, from which we must subtract the lower-limit evaluations by the Fundamental Theorem of Calculus. We are left with the stationary distribution and, from it, the steady-state patch.

To show this thoroughly, we should prove by induction that every odd derivative of the periodic factor contains a term that vanishes at the endpoints (or we can attempt an argument by periodicity of the derivative, as we do), so that evaluating it at 0 and at 1 literally causes the term to vanish, leaving us with the two evaluations we need. Therefore, as before, the steady-state patch has identical cross-sections equal to the stationary distribution, and this is consistent with my derivation in the previous post, too.
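The endpoint-vanishing argument can be illustrated symbolically. As an assumption, take a $\cos(2\pi y)$ factor as a stand-in (the post's exact term may differ): every odd derivative of $\cos(2\pi y)$ is a constant multiple of $\sin(2\pi y)$, which vanishes at both $y = 0$ and $y = 1$ by periodicity.

```python
import sympy as sp

y = sp.symbols('y')

# Stand-in periodic factor: every odd derivative of cos(2*pi*y)
# is a constant multiple of sin(2*pi*y), and sin(2*pi*y) vanishes
# at both endpoints of [0, 1].
f = sp.cos(2*sp.pi*y)
for k in (1, 3, 5, 7):
    d = sp.diff(f, y, k)
    assert d.subs(y, 0) == 0 and d.subs(y, 1) == 0
print("odd derivatives vanish at both endpoints")
```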