Archive

Posts Tagged ‘Functions’

On Infinite Term Functions with Converging Integrals, Part II

May 5th, 2014 No comments

This post is a continuation of a previous one. I have developed several preparatory claims or theorems:  The first and second claims show that a particular collection of finite polynomial functions have area 1 in the interval [0,1], and are hence also Pasquali patches.  The third and fourth show that finite sums of such functions also have converging integrals in the same interval.  The corollaries show that this is not the case if the sum is infinite.  A fun way to summarize this information is developed soon after, and these observations, though simple, lead us to classify all Pasquali patches which are functions of x alone, and therefore all stationary/limiting/stable surfaces, eigenfunctions or wavevectors (from the quantum mechanics point of view).

Claim 1. Take f_i(x) = (i+1) x^i with i = 0 \ldots n.  Then \int_0^1 f_i(x) \, dx = 1, \forall i.

Proof by induction, using the definition of integration.  We show that \int_0^1 f_0(x) \, dx = 1. The expression equals

\int_0^1 x^0 \, dx = \int_0^1 1 \, dx = x \left. \right\vert_0^1 = 1

 We assume the kth case, \int_0^1 f_k(x) \, dx = 1, although we readily know by the definition of integration that it is true, since

\int_0^1 (k+1) x^k \, dx = x^{k+1} \left. \right\vert_0^1 = 1^{k+1} = 1

The exact same definition argument applies to the (k+1)th element, and

\int_0^1 (k+2) x^{k+1} \, dx = x^{k+2} \left. \right\vert_0^1 = 1^{k+2} = 1

Claim 2. The functions f_i(x) = (i+1) x^i with i = 0 \ldots n are Pasquali patches.

Proof.  A Pasquali patch is a function p(x,y) so that \int_0^1 p(x,y) dx = 1.  Let p(x,y) = f_i(x).  Since by Claim 1  \int_0^1 f_i(x) \, dx = 1, \forall i = 0 \ldots n, applying the definition means  f_i(x) = (i+1) x^i are Pasquali patches \forall i= 0 \ldots n .

Claim 3.  The finite polynomial g(x) = \sum_{i=0}^n (i+1) x^{i} has converging area n+1 on the interval [0,1].

Proof.  We are looking for

\int_0^1 \sum_{i=0}^n (i+1) x^i \, dx

The sum is finite so it converges, and there is no issue exchanging the order of the sum and integral. Thus:

\sum_{i=0}^n \int_0^1 (1+i) x^i \, dx =\sum_{i=0}^n \left( x^{i+1} \left. \right\vert_0^1 \right) = \sum_{i=0}^n 1^{i+1} =\sum_{i=0}^n 1 = n+1
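(Claim 3 is easy to check symbolically; below is a minimal sketch, assuming a Python environment with sympy available, with all names my own.)

```python
import sympy as sp

x = sp.symbols('x')

# Area of g(x) = sum_{i=0}^{n} (i+1) x^i on [0,1] should equal n+1 (Claim 3)
for n in [0, 1, 4, 9]:
    g = sum((i + 1) * x**i for i in range(n + 1))
    print(n, sp.integrate(g, (x, 0, 1)))  # prints n alongside n+1
```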

Claim 4. Pick n functions from the pool f_i(x) = (i+1) x^i; for example, pick f_3(x), f_5(x), and f_7(x).  Create the function h(x) = \sum_i f_i(x), summing over the picked indices.  Then \int_0^1 h(x) \, dx = n.

Proof by induction.  Since by Claim 2 all f_i(x) are Pasquali patches, their integrals over the interval equal 1 (Claim 1).  Picking 1 function from the pool thus gives an integral of 1 over the interval.  Suppose that picking k functions contributes k to the integral over the interval. Now pick k+1 functions.  The first k functions contribute k to the integral, and the 1 additional function contributes 1.  Thus k+1 functions contribute k+1 to the integral over the interval.

Corollary 1. The infinite polynomial a(x) = \sum_{i=0}^\infty (i+1) x^i diverges in area in the interval [0,1].

Proof.  Take

\int_0^1 \left( \lim_{n \to \infty} \sum_{i=0}^n (1+i) x^i \right) \, dx =\lim_{n \to \infty} \int_0^1\sum_{i=0}^n (1+i) x^i \, dx

Here exchanging the order of limit and integral is justified by monotone convergence: the partial sums are nonnegative and increasing on [0,1].  By Claim 3 the integral of the nth partial sum is n+1, so the area is

\lim_{n \to \infty} (n+1) = \infty

Corollary 2.  The infinite polynomial a(x) - h(x) diverges in area in the interval [0,1].

Proof.  Take the limit

 \lim_{n \to \infty} \left[ a(x) - h(x) \right]

Taking n to infinity applies to a(x) only, which we know diverges by Corollary 1.  The same limit has no effect on h(x), since the sum it is composed of is finite and adds up to an integer constant, say m.  We conclude that any infinite collection of terms of f_i(x) diverges in area, even when a finite number of them are absent from the sum.

And now sushi.

Corollary 3.  The infinite polynomial  a(x) - b(x) diverges in area in the interval [0,1], where a(x), b(x) are infinite polynomials constructed by sums of functions picked from the pool f_i(x) = (i+1) x^i with no repetitions. (Note that the difference of these two infinite polynomials must also be infinite.)

Proof. Since a(x) - b(x) is an infinite polynomial, its integral is an infinite sum of ones: the functions it contains are f_i(x), these are Pasquali patches (Claim 2), and there are no repetitions.  Such an infinite sum of ones clearly diverges.

Remark 1.  We can view what we have learned in the claims from a slightly different vantage point.  Create the infinite identity matrix

 I = \left[ \begin{array}{cccc} 1 & 0 & 0 & \ldots \\ 0 & 1 & 0 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right]

Next create the following polynomial differential vector

 D =\left[ \begin{array}{c} 1 \\ 2x \\ 3x^2 \\ \vdots \end{array} \right]

It is clear that

 \int_0^1 I_i \cdot D \, dx =1

for all rows  i of  I .  We can omit the little  i because this definition applies to all rows, and:

 \int_0^1 I \cdot D \, dx = \int_0^1 D \, dx= \left[ \begin{array}{c} 1 \\ 1 \\ \vdots \end{array} \right] = \bf{1}

This of course summarizes Claims 1 and 2.  Next, define the matrix J consisting of rows which are finite sums of rows of I (so that each row of J consists of a finite number of ones at any position, namely n such coming from n picked rows of I).  Claims 3 and 4 are summarized in the statement  

 \int_0^1 J\cdot D \, dx = S

where S is the vector consisting of the sum of the rows of J, which, since it is made up of a finite number of ones at each row, adds up to a constant integer at each row:

S = \left[ \begin{array}{c} n_1 \\ n_2 \\ \vdots \end{array} \right]

 Finally, the corollaries can be summarized by creating a matrix  K whose rows contain either a finite number of zeroes (and an infinite number of ones) or an infinite number of zeroes along with an infinite number of ones.  It is clear then that

 \int_0^1 K\cdot D \, dx = \infty
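(The matrix picture above can be played with numerically by truncating everything to N terms; a hedged sketch in Python with sympy, where the truncation order and all names are my own choices.)

```python
import sympy as sp

x = sp.symbols('x')
N = 8  # truncation order, an arbitrary choice for illustration

# Truncated differential vector D = [1, 2x, 3x^2, ...]
D = [(j + 1) * x**j for j in range(N)]

def row_area(row):
    """Integrate (row . D) over [0,1] for a coefficient row."""
    return sp.integrate(sum(c * d for c, d in zip(row, D)), (x, 0, 1))

print(row_area([1 if j == 3 else 0 for j in range(N)]))          # a row of I: 1
print(row_area([1 if j in (1, 4, 6) else 0 for j in range(N)]))  # a row of J: 3
print(row_area([1] * N))  # a truncated row of K: N, growing without bound
```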

Remark 2. The cool thing about this notation is that it gives us the power to conclude several interesting things.  For example, scaling the matrices  I and  J by a constant  t shows convergence of the integral on the interval  \left[ 0,1 \right] of every one of the scaled sums represented by the rows of such matrices.  Thus:

Corollary 4. Let  I^* = t \cdot I and J^* = t \cdot J , where  t is a scaling factor.  Then the areas on the interval from 0 to 1 of each of the infinitely many polynomials represented by I^* \cdot D and J^* \cdot D converge.

Proof.  On the one hand, we have 

 \int_0^1 I^* \cdot D \, dx = \int_0^1 t \cdot I \cdot D \, dx =t \left( \int_0^1 I \cdot D \, dx \right) = \bf{t}

 On the other hand,

 \int_0^1 J^* \cdot D \, dx =\int_0^1 t \cdot J\cdot D \, dx = t \left( \int_0^1 J\cdot D \, dx \right) = t \cdot S =\left[ \begin{array}{c} t \cdot n_1 \\ t \cdot n_2 \\ \vdots \end{array} \right]

Remark 3. Next consider the infinite matrix each of whose rows is a sequence convergent at the sum,

A = \left[ \begin{array}{cccc} \vdots & \vdots & \vdots & \vdots \\ 1 & \frac{1}{2^2} & \frac{1}{3^2} & \ldots \\ \vdots & \vdots & \vdots & \vdots \end{array} \right]

Depicted are the reciprocals of the squares, which we know converge at the sum (the Basel problem), simply for illustration; but every convergent sequence appears as some row of A.  We have

 \int_0^1 A_i\cdot D \, dx = \sum_j a_{i,j}

is convergent by definition.  The cool thing is that we can easily prove in one swoop that all scaled sequences also converge at the sum (and that the infinite polynomials given by the entries of A \cdot D have converging area in the interval from 0 to 1).

Corollary 5. Let  A^* = t \cdot A , where  t is a scaling factor.  Then the area of each of the infinitely many polynomials given by the entries of A^* \cdot D in the interval from 0 to 1 converges.

Proof.  We have, for all i,

 \int_0^1 A^*_i\cdot D \, dx = \int_0^1 t \cdot A_i\cdot D \, dx = t \left(\int_0^1 A_i\cdot D \, dx \right) = t \cdot\sum_j a_{i,j}

which converges since \sum_j a_{i,j} does.

All of these small and obvious observations lead to this:

Claim 5. The Grand Classification Theorem of Limiting Surfaces (A General and Absolutely Complete Classification of Pasquali patches which are functions of x alone).  All Pasquali patches which are functions of x alone (and therefore possible limiting surfaces) take the form

 p(x) = \frac{A_i \cdot D}{\sum_j a_{i,j}}

Proof. We have that, since such  p(x) is a Pasquali patch, it must conform to the definition (the row sum \sum_j a_{i,j} being nonzero, so that p(x) is defined).  Thus

 \int_0^1 p(x) \, dx = \int_0^1 \frac{A_i \cdot D}{\sum_j a_{i,j}} \, dx = \frac{\int_0^1 A_i \cdot D \, dx}{\sum_j a_{i,j}} = \frac{\sum_j a_{i,j}}{\sum_j a_{i,j}} = 1

shows this is indeed the case.  To show that "all" Pasquali patches that are functions of x alone are of the form of p(x), we argue by contradiction.  Suppose that there is a Pasquali patch that is a function of x alone which does not take the form of p(x).  It cannot be a finite polynomial, since  A_i was defined to be that matrix formed by all sequences convergent at the sum at each row, scaled any which way we like, and this includes sequences with a finite number of nonzero coefficients.  But it cannot be any infinite polynomial either, by the same definition of  A_i , which includes all infinite sequences so that \sum_j a_{i,j} is convergent.  Thus it must be a polynomial formed by dotting sequences divergent at the sum, but all such have been happily excluded from the definition of A.
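(For a concrete instance of the theorem, one can build a Pasquali patch from the geometric sequence; a sketch assuming Python with sympy, with truncation and names my own.)

```python
import sympy as sp

x = sp.symbols('x')
N = 40  # truncation order for the illustration

# A row of A: the geometric sequence 1/2, 1/4, 1/8, ..., convergent at the sum
a = [sp.Rational(1, 2)**(k + 1) for k in range(N)]
S = sum(a)  # plays the role of sum_j a_{i,j}

# p(x) = (A_i . D) / sum_j a_{i,j}, with D_k = (k+1) x^k
p = sum(c * (k + 1) * x**k for k, c in enumerate(a)) / S
print(sp.integrate(p, (x, 0, 1)))  # 1, so p(x) is a Pasquali patch
```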

Remark 4.  Thus, EVERY convergent series has an associated Pasquali patch (which is solely a function of  x), and vice versa, covering the totality of the Pasquali patch functions of x universe and the convergent series universe bijectively.

Remark 5.  Notice how the definition takes into account Taylor polynomial coefficients (thus all analytic functions are included) and those that are not (even those that are as yet unclassified), and all sequences which may be scaled by a factor as well.

Claim 6. Let f(x) be Maclaurin-expandable, so that

 f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0) x^n}{n!}

Then

 \sum_{n=0}^\infty\frac{f^{(n)}(0)}{(n+1)!} = \int_0^1 f(x) \, dx

Proof.  

\int_0^1 f(x) \, dx = \int_0^1 A_i \cdot D \, dx

for some row i of A.  Such a row would have to be of the form

 A_i = \left[ \begin{array}{cccc} f(0) & \ldots & \frac{f^{(n)}(0)}{n! (n+1)} & \ldots \end{array} \right]

 Then the integral

\int_0^1 A_i \cdot D \, dx = \sum_j a_{i,j} =\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n! (n+1)} = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{(n+1)!}

Remark 6. Notice that all Maclaurin-expandable functions (whose expansions converge on the interval) converge in area (have stable area) in the interval from 0 to 1, a remarkable fact.

Example 1.  Take

f(x) = e^x = \sum_{n=0}^\infty \frac{x^n}{n!}

 By applying Claim 6, it follows that

 \sum_{n=0}^\infty \frac{1}{(n+1)!} = \int_0^1 e^x \, dx = e - 1
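(A quick numeric check of this identity, assuming plain Python; the truncation at 20 terms is mine.)

```python
import math

# Partial sum of sum_{n>=0} 1/(n+1)! versus the claimed value e - 1
partial = sum(1 / math.factorial(n + 1) for n in range(20))
print(partial, math.e - 1)  # both ~1.718281828459045
```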

Remark 7. Now we have a happy way to construct (any and all) Pasquali patches which are functions of x alone, merely by taking a sequence which is convergent at the sum.

Remark 8. Quantum mechanically, we now know all possible shapes that a stationary (limiting) eigen wavevector can take.

Remark 9. This gives us extraordinary power to calculate convergent sums via integration, as the next examples show.  It also gives us extraordinary power to express any number as an infinite sum, for example.

 

On Infinite-Term Functions with Converging Integrals

August 9th, 2013 No comments

Claim.  Take the function  f \colon [0,1] \to \mathbb{R} with rule

 f(x) =\sum_{i=0}^\infty \frac{x^i}{i+1} = 1 + \frac{x}{2} + \frac{x^2}{3} + \ldots

 Then

\int_0^1 f(x) \, dx = \frac{\pi^2}{6}

Proof. The direct way to see this is by plugging in: 

\int_0^1 \sum_{i=0}^\infty \frac{x^i}{i+1}\, dx = \sum_{i=0}^\infty\left. \left( \frac{x^{i+1}}{(i+1)^2} \right\vert_0^1 \right) = \sum_{i=0}^\infty \frac{1}{(i+1)^2} = \sum_{i=1}^\infty \frac{1}{i^2}

which is a series converging to \frac{\pi^2}{6} (the Basel problem). We justify taking the integral inside the sum precisely by the convergence of the series: we can prove by induction that the integrals of the partial sums of the function equal the partial sums of the series \sum \frac{1}{n^2}, which is known to converge.
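(A numeric sanity check of the term-wise integration, assuming plain Python; the truncation is mine.)

```python
import math

# Term-wise integration gives sum 1/(i+1)^2; compare with pi^2/6
N = 100_000
partial = sum(1 / (i + 1) ** 2 for i in range(N))
print(partial, math.pi ** 2 / 6)  # ~1.64492 versus ~1.64493
```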

The surprising fact is not the simplicity of the proof, but that an infinite-term function can have a stable area over a definite interval. If you think about it, this may not be so novel, in that Taylor representations converge to a specific (usually elementary) function which may have calculable area (in the interval 0 to 1)... but there are two things to keep in mind: first, some infinite-term functions with definite area on 0 to 1 may not be Taylor representations of elementary functions, and, second, a stable area is definitely not a general feature of infinite-term functions.  For example:

Claim. The function  g \colon [0,1] \to \mathbb{R} with rule g(x) = 1 + x + x^2 + \ldots = \sum_{i=0}^\infty x^i does not have a converging integral in the interval 0 to 1.

Quick Proof.  By taking the integral of the function partial sums, we get the sequence

 \begin{aligned} &1 \\ &1 + 1/2 \\ &1 + 1/2 + 1/3 \\ &\ldots \end{aligned}

which we can show (through induction) to be the partial sums of the divergent harmonic series.

The point that I'm trying to make here is that we can define infinite-term functions which converge in area over an interval and may or may not be Taylor representations of other elementary functions.  This observation comes from considerations of function eigenvalues as I've defined them in Compendium.

Here are other infinite-term functions convergent in area on the interval 0 to 1:

1 - x + x^2 - x^3 + \ldots

with alternating coefficients, gives the convergent alternating series 1 - 1/2 + 1/3 - \ldots = \ln(2)

1 - \frac{2 x}{3} + \frac{3 x^2}{5} - \frac{4 x^3}{7} + \ldots

also with alternating coefficients and odd denominators gives the convergent Leibniz alternating series 1 - 1/3 + 1/5 - 1/7 + \ldots = \frac{\pi}{4}

1 +\frac{2 x}{1} + \frac{3 x^2}{2} + \frac{4 x^3}{3} + \frac{5 x^4}{5} +\ldots

whose denominators are the Fibonacci numbers, yields at the integral the convergent reciprocal-Fibonacci series 1 + 1 + 1/2 + 1/3 + 1/5 + \ldots = \psi

And in fact, we can construct infinite-term functions converging at the integral on the interval 0 to 1 simply by letting the coefficient of each term be a general index for the term (a counting number) times the corresponding convergent-sequence term.  Thus, recall from my previous post that the following infinite sum converges:

 \sum_{i=1}^\infty \frac{1}{(2 i)!} = \frac{(e - 1)^2}{2 e}

which implies we can create the infinite term function

 h(x) = \sum_{i=0}^\infty \frac{(i + 1) x^i}{(2 (i+1))!} = \frac{1}{2} + \frac{2 x}{4!} + \frac{3 x^2}{6!} + \ldots

which of course converges to  \frac{(e - 1)^2}{2 e} in area in the interval  [0, 1] .
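(Again a quick check, assuming plain Python; truncation mine.)

```python
import math

# Area of h(x) on [0,1] is sum_{i>=0} 1/(2(i+1))! = sum_{i>=1} 1/(2i)!
area = sum(1 / math.factorial(2 * (i + 1)) for i in range(20))
print(area, (math.e - 1) ** 2 / (2 * math.e))  # both ~0.5430806
```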

The function eigenvalue idea can be extended to any interval of interest (even infinite ones), but this is a subject of further investigation.

An interesting notion arises when we think of a number as the area under the curve of an infinite term function.  The manner by which we approach convergently that number describes the shape of the curve of the infinite term function in that interval.  I shall put pretty pictures forthwith to illustrate the concept.

On Patchixes and Patches - or Pasqualian Matrixes - (RWLA,MCT,GT,AM Part II)

October 10th, 2010 3 comments

For the last few months I have been thinking about several very curious properties of patchixes and patches (mentioned here); in particular, having studied patch behavior in a "continuous Markov chain" context and, while eating a bowl of cereal, observing the interesting movement of the remaining milk, it hit me: a patch could certainly describe milk movement at particular time steps.  It is my hope to elucidate this concept a little better here today.  In particular, I think I have discovered a new way to describe waves and oscillations, or rather "cumulative movement where the amount of liquid is constant" in general; in my honest belief, this new way and the old way converge in the limit (this based on my studies, here and here, of discrete Markov chains at the limit of tiny time steps, so that time is continuous), although it is a little bit unclear to me how at the moment.  It is my hope that this new way not only paves the way for a new and rich field of research, but I foresee it clarifying studies in, for example, turbulence, and, maybe one day, Navier-Stokes related concepts.  This last part may sound a little lofty and ambitious, but an approach in which, for example, vector fields of force or velocity need to be described for every particle and position of space, with overcomplicated second- and third-order partial derivatives, is itself somewhat ambitious and lofty, and often prohibitive for finding exact solutions; perhaps studying particle accumulations through a method of approximation, rather than individual particles, is the answer.

I want to attempt to describe the roadmap that led me to the concept of a patchix (pasqualian matrix) in the first place; it was in the context of discrete Markov chains.  Specifically, I thought that, as we study in linear algebra, a function or transformation  T(\textbf{v}) , where  \textbf{v} is an n-vector with  n (finite) entries, can be described succinctly by an  n \times n matrix.  Such a matrix then converts  \textbf{v} into another n-dimensional vector, say  \textbf{w} .  This field is very well studied, of course: in particular, invertible transformations are very useful, and many matrixes can be used to describe symmetries, so that they underlie Group Theory:

 \textbf{v} \underrightarrow{T} \textbf{w}

Another useful transformation concept resides in  l_2 , the space of sequences whose sums of squares (dot product with itself) converge, which was used, for example, by Heisenberg in quantum mechanics, as I understand it.  For example, the sequence  (x_1, x_2, \ldots) can be transported to another  (y_1, y_2, \ldots) via  T , as  T(x_1, x_2, \ldots) = (y_1, y_2, \ldots) .  Key here is the fact that  x_1^2 + x_2^2 + \ldots converges, so that the norm  \sqrt{x_1^2 + x_2^2 + \ldots} is defined.  The dot product  x_1 y_1 + x_2 y_2 + \ldots also converges (by the Cauchy-Schwarz inequality).  Importantly, however, this information points in the direction that a transformation matrix with an infinite number of entries could be created for  T to facilitate computation, so that indeed a sequence is taken into another in this space in a manner that is easy and convenient.  I think this concept was used by Kolmogorov in extending Markov matrices as well, but I freely admit I am not very versed in mathematical history.  Help in this regard is muchly appreciated.

In a function space such as  C^{\infty}[0,1] , the inner product of, say,  f(x) with  g(x) is also defined, as  \langle f(x), g(x) \rangle = \int_0^{1} f(x) \cdot g(x) dx : point-wise multiplication of the functions, summed convergently via the integral.  Then the norm of  f(x) is  \sqrt{\langle f(x), f(x) \rangle} = \sqrt{\int_0^{1} f(x)^2 dx} .  The problem is that there is no convenient "continuous matrix" that effects the transform  T(f(x)) = g(x) , although transforms of a kind can be achieved through a discrete matrix, if its coefficients represent, say, the coefficients of a (finite) polynomial.  Thus we can transform polynomials into other polynomials, but this is limiting in scope in many ways.

The idea is that we transform a function to another by point-wise reassignment: continuously.  Thus the concept of a patchix (pasqualian matrix) emerges; we need only mimic the mechanical motions we go through when conveniently calculating any other matrix product.  Take a function  f(x) defined continuously on  [0,1] , and send  x \rightsquigarrow 1-y so that  f(1-y) is now aligned with the y-axis. From another viewpoint, consider  f(1-y) as  f(1-y,t) , so that at any value of  t the cross-section looks like  f .  Define a patchix  p(x,y) on  [0,1] \times [0,1] .  Now "multiply" the function (actually a patchix itself, from the different viewpoint) with the patchix as  \int_{0}^{1} f(1-y) \cdot p(x,y) dy = g(x) to obtain  g(x) .  The patchix has transformed  f(x) \rightsquigarrow g(x) as we wanted.  I think there are profound implications of this simple observation; one may now consider, for example, inverse patchixes (or how to get from  g(x) back to  f(x) ), identity patchixes, and, along with these, what it may mean, as crazy as it sounds, to solve an infinite (dense) system of equations; powers of patchixes and what they represent; eigenpatchixvalues and eigenfunctionvectors; group-theoretical concepts such as the symmetry groups patchixes may give rise to; etc.

As much as that is extremely interesting to me, and I plan on continuing with my own investigations, my previous post and informal paper considered the implications of multiplying functions by functions, functions by patchixes, and patchixes by patchixes.  Actually I considered special kinds of patchixes  p(x,y) , those having the property that for any specific value  y_c \in [0,1] , then  \int_0^1 p(x,y_c) dx = 1 .  Such special patchixes I dubbed patches (pasqualian special matrixes), and I went on to attempt an extension of a Markov matrix and its concept into a Continuous Markov Patch, along with the logical extension of the Chapman-Kolmogorov equation by first defining patch (discrete) powers (this basically means "patchix multiplying" a patch with itself).  The post can be found here.

So today what I want to do is continue the characterization of patches that I started.  First of all, emulating some properties of the Markov treatment, I want to show how we can multiply a probability distribution (function) "vector" by a patch to obtain another probability distribution function vector. Now this probability distribution is special, in the sense that it doesn't live in all of  \mathbb{R} but in  [0,1] .  A beta distribution, such as  B(2,2) = 6(x)(1-x) , is the type that I'm specifically thinking about. So suppose we have a function  b(x) , which we must convert first to  b(1-y) in preparation to multiply by the patch.  Suppose then the patch is  p(x,y) with the property that, for any specific  y_c , then  \int_0^1 p(x,y_c) dx = 1 .  Now, the "patchix multiplication" is done by

 \int_0^1 b(1-y) \cdot p(x,y) dy

and is a function of  x .  We can show that this is indeed a probability distribution function vector by taking the integral for every infinitesimal change in  x and seeing if it adds up to one, like this:

 \int_0^1 \int_0^1 b(1-y) \cdot p(x,y) dy dx

If there is no issue with absolute convergence of the integrals, there is no issue with the order of integration by the Fubini theorem, so we have:

 \int_0^1 \int_0^1 b(1-y) \cdot p(x,y) dx dy = \int_0^1 b(1-y) \int_0^1 p(x,y) dx dy

Now for the inner integral:  p(x,y) adds up to 1 for any choice of  y , so the inner integral is in effect a uniform distribution on  [0,1] with value 1 (i.e., for any choice of  y \in [0,1] , the value of the integral is 1).  Thus we have, in effect,

 \int_0^1 b(1-y) \int_0^1 p(x,y) dx dy = \int_0^1 b(1-y) \cdot u(y) dy = \int_0^1 b(1-y) (1) dy

and that last integral we know is 1 by hypothesis, since  b is a probability distribution on  [0,1] .

Here's a specific example:  Let's declare  b(x) = 6(x)(1-x) and  p(x,y) = x + \frac{1}{2} .  Of course, as required,  \int_0^1 p(x,y) dx = \int_0^1 x + \frac{1}{2} dx = (\frac{x^2}{2} + \frac{x}{2}) \vert^1_0 = 1 .  So then  b(1-y) = 6(1-y)(y) , and by "patchix multiplication"

 \int_0^1 b(1-y) \cdot p(x,y) dy = \int_0^1 6(1-y)(y) \cdot \left(x + \frac{1}{2} \right) dy = x + \frac{1}{2}
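(This computation is easy to confirm symbolically; a minimal sketch assuming Python with sympy.)

```python
import sympy as sp

x, y = sp.symbols('x y')

b = 6 * (1 - y) * y        # b(1-y) for the beta density b(x) = 6x(1-x)
p = x + sp.Rational(1, 2)  # the patch p(x,y) = x + 1/2

print(sp.integrate(b * p, (y, 0, 1)))  # x + 1/2
```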

Thus, via this particular patch, the function  b(x) = 6(x)(1-x) \rightsquigarrow c(x) = x + \frac{1}{2} , point by point.  Which brings me to my next point.

If  p(x,y) is really solely a function of  x , then it follows that  b(x) \rightsquigarrow p(x) : any initial probability distribution becomes the patch function distribution (viewed as a function of a single dimension rather than two).  Here's why:

 \int_0^1 b(1-y) \cdot p(x,y) dy = \int_0^1 b(1-y) \cdot p(x) dy = p(x) \int_0^1 b(1-y) dy = p(x)

I think, of course, patches that are in fact functions of both  x and  y are a lot more interesting.  A problem arises in constructing them.  For example, let's assume that we can split  p(x,y) = f(x) + g(y) .  Forcing our requirement that  \int_0^1 p(x,y) dx = 1 for any  y \in [0,1] means:

 \int_0^1 p(x,y) dx = \int_0^1 f(x) dx + g(y) \int_0^1 dx = \int_0^1 f(x) dx + g(y) = 1

which implies certainly that   g(y) = 1 - \int_0^1 f(x) dx is a constant since the integral is a constant.  Thus it follows that  p(x,y) = p(x) is a function of  x alone.  Then we may try  p(x,y) = f(x) \cdot g(y) .  Forcing our requirement again,

 \int_0^1 p(x,y) dx = \int_0^1 f(x) \cdot g(y) dx = g(y) \int_0^1 f(x) dx = 1

means that  g(y) = \frac{1}{\int_0^1 f(x) dx} , again a constant, and  p(x,y) = p(x) once more.  Clearly the function interactions should be more complex; let's say something like  p(x,y) = f_1(x) \cdot g_1(y) + f_2(x) \cdot g_2(y) .

 \int_0^1 p(x,y) dx = g_1(y) \int_0^1 f_1(x) dx + g_2(y) \int_0^1 f_2(x) dx = 1

so that, determining three of the functions determines the last one, say

 g_2(y) = \frac{1-g_1(y) \int_0^1 f_1(x) dx}{\int_0^1 f_2(x) dx} , which is, in fact, a function of  y .

Let's construct a patch in this manner and see its effect on a  B(2,2) .  Let  f_1(x) = x^2 , and  g_1(y) = y^3 , and  f_2(x) = x , so that

 g_2(y) = \frac{1 - g_1(y) \int_0^1 f_1(x) dx}{\int_0^1 f_2(x) dx} = \frac{1 - y^3 \int_0^1 x^2 dx}{\int_0^1 x dx} = \frac{1 - \frac{y^3}{3}}{\frac{1}{2}} = 2 - \frac{2y^3}{3}

and  p(x,y) = x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) .

So now the "patchix product" is

 \int_0^1 6(1-y)(y) \cdot \left(x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) \right) dy = \frac{x^2}{5} + \frac{28x}{15}

which is a probability distribution on the interval  [0,1] ; as a check, we can integrate with respect to  x to obtain 1.  Thus the probability distribution function  6(x)(1-x) is carried, point by point, as  6(x)(1-x) \rightsquigarrow \frac{x^2}{5} + \frac{28x}{15} , which, quite frankly, is very amusing to me!
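(The same symbolic check applies here, under the same Python-with-sympy assumption as before.)

```python
import sympy as sp

x, y = sp.symbols('x y')

p = x**2 * y**3 + x * (2 - sp.Rational(2, 3) * y**3)  # the constructed patch
b = 6 * (1 - y) * y                                   # B(2,2), pre-flipped

c = sp.expand(sp.integrate(b * p, (y, 0, 1)))
print(c)                           # x**2/5 + 28*x/15
print(sp.integrate(c, (x, 0, 1)))  # 1, so c(x) is again a density on [0,1]
```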

From an analytical point of view, it may be interesting or useful to see what happens to the uniform distribution on  [0,1] when it's "patchix multiplied" by the patch above.  We would have:

 \int_0^1 u(y) \cdot \left(x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) \right) dy = \int_0^1 (1) \cdot \left(x^2 y^3 + x \left(2 - \frac{2y^3}{3} \right) \right) dy = \frac{x^2}{4} + \frac{11x}{6}

so that  u(x) \rightsquigarrow \frac{x^2}{4} + \frac{11x}{6} .
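(And the check for the uniform case, same assumptions; note the result integrates to 1 in x, as it must.)

```python
import sympy as sp

x, y = sp.symbols('x y')

p = x**2 * y**3 + x * (2 - sp.Rational(2, 3) * y**3)
c = sp.expand(sp.integrate(p, (y, 0, 1)))  # u(y) = 1, so the integrand is just p
print(c)                           # x**2/4 + 11*x/6
print(sp.integrate(c, (x, 0, 1)))  # 1
```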

In my next post, I want to talk in more detail about "patchix multiplication" not of a probability distribution on [0,1] by a patch, but of a patch by a patch, which is the basis of (self) patch powers: with this I want to begin a discussion of how we can map oscillations and movement in a different way, so that perhaps we can trace my cereal milk's movement in time.

1.2 Exercise 6

January 26th, 2009 No comments

This was a really easy problem, that ends the section on functions in Munkres's text.  The next fifteen problems deal with relations, and I am finding them immensely interesting.

"Let  f : \mathbb R \rightarrow \mathbb R be the function  f(x) = x^3 - x .  By restricting the domain and range of  f appropriately, obtain from  f a bijective function  g .  Draw the graphs of  g and  g^{-1} . (There are several possible choices for  g .) "

(Taken from Topology by James R. Munkres, Second Edition, Prentice Hall, NJ, 2000. Page 21.)

----------

SOLUTION  

As an example,  g: (-\infty, -1) \rightarrow \mathbb{R}^{-} with rule  g(x) = x^3 -x is bijective.
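(The monotonicity behind this choice of restriction can be sanity-checked; a sketch assuming Python with sympy.)

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - x

# f' = 3x^2 - 1 > 0 whenever |x| > 1/sqrt(3), so f is strictly increasing on (-oo, -1)
print(sp.solve_univariate_inequality(sp.diff(f, x) > 0, x))
# endpoint behavior pins down the range: f -> -oo as x -> -oo, and f(-1) = 0
print(sp.limit(f, x, -sp.oo), f.subs(x, -1))
```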

1.2 Exercise 5

January 23rd, 2009 No comments

After working on these problems all week, I'm not sure I can keep up 1 problem a day.  I'll try to post as many as possible in the span of a week, however.  I'm definitely taking a break over the weekend!

I really like this problem because it was an eye-opener when I first encountered it on the subtleties of inverse mappings.  I've seen it in several places; it is a classic exercise on functions.  Here's the problem as taken from Munkres's text:

"In general, let us denote the identity function for a set  C by  i_C .  That is, define  i_C : C \rightarrow C to be the function given by the rule  i_C(x) = x for all  x \in C .  Given  f : A \rightarrow B , we say that a function  g : B \rightarrow A is a left inverse for  f if  g \circ f = i_A ; and we say that  h : B \rightarrow A is a right inverse for  f if  f \circ h = i_B .  

(a) Show that if  f has a left inverse,  f is injective; and if  f has a right inverse,  f is surjective. 

(b) Give an example of a function that has a left inverse but no right inverse. 

(c) Give an example of a function that has a right inverse but no left inverse. 

(d) Can a function have more than one left inverse? More than one right inverse? 

(e) Show that if  f has both a left inverse  g and a right inverse  h , then  f is bijective and  g=h=f^{-1} ."

(Taken from Topology by James R. Munkres, Second Edition, Prentice Hall, NJ, 2000. Page 21.)

-----------

SOLUTION  

(a)  

 f not injective means that  f(a) = f(a') for some  a \neq a' .  The left inverse  g exists by hypothesis, so applying it to  f(a) gives, by definition,  (g \circ f)(a) = a , and on  f(a') it gives  (g \circ f)(a') = a' .  But, being a function, the left inverse must map  f(a) = f(a') = b \in B to a single element in  A , which would mean  a = a' . We've reached a contradiction, so  f is injective.

Suppose  f is not surjective.  Then there exists an element  b \in B that is not in the image of  f .  But  f(h(b)) = b by hypothesis, exhibiting  b as an image point of  f : a contradiction.  Thus  f must be surjective.

(b)  

Many functions are injective but not surjective, say  f: \mathbb{R}^{+} \rightarrow \mathbb{R} with rule  \{(x,x) \vert x \in \mathbb{R}^{+}\} .

(c)  

Likewise we can find many functions that are surjective but not injective, say  f : [0, \pi] \rightarrow [0,1] with rule  \{ (x, \sin(x)) \vert x \in [0, \pi]\} .

(d)

More than one left inverse: yes.  Here's an example: let  f : \{0,1\} \rightarrow \{0, 1, 2\} with rule  \{ (0,1) ; (1,2) \} .  Define  g : \{0,1, 2\} \rightarrow \{0, 1 \} by  \{ (0,0) ; (1,0) ; (2,1) \} .  Another could be  g': \{0,1, 2\} \rightarrow \{0, 1 \} by  \{(0,1) ; (1,0) ; (2,1)\} .  The important thing to notice is that every element of the domain of  f is returned to itself after applying  f and then  g or  g' .

More than one right inverse: yes.  Consider  f : \{0,1\} \rightarrow \{0\} with rule  \{ (0,0) ; (1,0) \} , and define  h: \{0\} \rightarrow \{0,1\} with rule  \{ (0,0) \} or  h': \{0\} \rightarrow \{0,1\} with rule  \{ (0,1) \} .  Notice that starting from  B (the domain of  h and  h' ), applying  h or  h' to zero and then applying  f returns us to zero: either composition leaves the element fixed on its path to  A and back.
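(Parts (b) through (d) can be made concrete with dicts as finite functions; a sketch in plain Python, all names mine.)

```python
def compose(g, f):
    """Return the composition g o f as a dict on the domain of f."""
    return {a: g[f[a]] for a in f}

f  = {0: 1, 1: 2}        # f : {0,1} -> {0,1,2}, injective but not surjective
g  = {0: 0, 1: 0, 2: 1}  # one left inverse of f
g2 = {0: 1, 1: 0, 2: 1}  # another left inverse of f
print(compose(g, f), compose(g2, f))  # {0: 0, 1: 1} twice: both give the identity

F  = {0: 0, 1: 0}        # F : {0,1} -> {0}, surjective but not injective
h  = {0: 0}              # one right inverse of F
h2 = {0: 1}              # another right inverse of F
print(compose(F, h), compose(F, h2))  # {0: 0} twice: both give the identity
```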

(e)  

Since  f has both a left inverse and a right inverse, it is both injective and surjective, and hence bijective.

To show that  g = h , pick any element  b \in B and apply  h ; note  h is defined on all of  B , which by surjectivity of  f is exactly the image of  f .  Since  f is injective, there is exactly one element  a \in A with  f(a) = b , and  h(b) must be that element, because  f(h(b)) = b ; so  h(b) = a .  Next pick the same element  b \in B and apply  g .  Since  (g \circ f)(a) = a and  f(a) = b , we get  g(b) = a .  Since  h and  g map every  b \in B to the same element of  A ,  g = h .  Call this the inverse of  f , and denote it by  f^{-1} .  (Note this is different from the preimage of  f , which uses the same symbol.)

Alternatively, by the identity property of equality,  g \circ f \circ h = g \circ f \circ h .  By associativity of composition of functions,  (g \circ f) \circ h = g \circ (f \circ h) .  This in turn reduces to  (i_A) \circ h = g \circ (i_B) , and so  h = g .