The Isoperimetric Inequality

This will be a cool application of the concepts introduced in the last post. We call a curve rectifiable if there exists a finite M such that for any partition a=t_0<t_1<\cdots<t_N=b, \sum^N_{j=1}|z(t_j)-z(t_{j-1})|\le M, i.e. the curve has some notion of “finite length,” where the length is the supremum of this quantity over all partitions, or equivalently the infimum over all such upper bounds M. It follows by definition that if the x– and y-parametrizing functions of the curve are of bounded variation, the curve is rectifiable. It is not, however, true that the classical arc length formula holds for all curves of bounded variation: just consider the curve whose x and y coordinate functions are both the Cantor-Lebesgue function. We get a straight line from the origin to (1,1), of length \sqrt{2}, but the derivative of the Cantor-Lebesgue function is zero almost everywhere, so the arc length integral vanishes.
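To see the pathology concretely, here is a quick numerical sketch (in Python; the `cantor` helper is a standard finite-depth approximation of the Cantor-Lebesgue function, not anything from the post): every polygonal approximation to the curve t \to (C(t), C(t)) has length \sqrt{2}, even though the integrand of the classical arc length formula is zero almost everywhere.

```python
import math

def cantor(t, depth=40):
    """Cantor-Lebesgue function via ternary digits (stops at the first digit 1)."""
    if t >= 1.0:
        return 1.0
    result, scale = 0.0, 0.5
    for _ in range(depth):
        t *= 3
        digit = int(t)
        t -= digit
        if digit == 1:
            return result + scale
        result += scale * (digit // 2)  # digit 2 contributes a binary digit 1
        scale /= 2
    return result

# Polygonal length of the curve t -> (C(t), C(t)) over a uniform partition.
N = 1000
pts = [(cantor(j / N), cantor(j / N)) for j in range(N + 1)]
length = sum(math.hypot(x1 - x0, y1 - y0)
             for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
print(length, math.sqrt(2))  # the two agree: the curve really has length sqrt(2)
```

Since C is increasing, each chord has length \sqrt{2}\,(C(t_j)-C(t_{j-1})) and the sum telescopes to \sqrt{2} for every partition, which is why the printed value does not depend on N.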

Result 4: The arc length formula does work if we assume absolute continuity of x(t) and y(t).

Proof: We will prove that the total variation of an absolutely continuous complex-valued function F over [a,b] is \int^b_a|F'(t)|dt; we can then substitute F(t) = x(t)+iy(t). As usual, we will prove inequality in both directions. Recall that for absolutely continuous functions the fundamental theorem of calculus holds, so for any partition a = t_0<t_1<\cdots<t_N=b we have \sum^N_{j=1}|F(t_j)-F(t_{j-1})| = \sum^N_{j=1}\left|\int^{t_j}_{t_{j-1}}F'(t)dt\right|\le\int^b_a|F'(t)|dt, and taking the supremum over partitions gives T_F(a,b)\le\int^b_a|F'(t)|dt. In the other direction, recall that step functions are dense in L^1, so we can find a step function g approximating F' so that h = F'-g has L^1 norm less than \epsilon. Setting G(x) = \int^x_a g(t)dt and H(x) = \int^x_ah(t)dt, the triangle inequality gives T_F(a,b)\ge T_G(a,b)-T_H(a,b)>T_G(a,b)-\epsilon. We can bound T_G(a,b) from below by taking the partition whose intervals are exactly the intervals on which the step function g is constant, so that T_G(a,b)\ge \sum^N_{j=1}\left|\int^{t_j}_{t_{j-1}}g(t)dt\right|=\int^b_a|g(t)|dt. But we picked g to be extremely close to F', so \int^b_a|g(t)|dt\ge\int^b_a|F'(t)|dt-\epsilon, and therefore T_F(a,b)\ge\int^b_a|F'(t)|dt - 2\epsilon, which gives the inequality in the other direction.
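As a sanity check of Result 4, here is a rough numerical sketch (the choice of F and of uniform partitions is mine, not from the post): for F(t)=e^{it} on [0,2\pi], which is absolutely continuous with |F'(t)|=1, the partition sums increase with refinement toward \int^{2\pi}_0|F'(t)|dt = 2\pi.

```python
import cmath
import math

def variation(F, a, b, N):
    """Variation of F over the uniform partition of [a, b] with N subintervals."""
    ts = [a + (b - a) * j / N for j in range(N + 1)]
    return sum(abs(F(t1) - F(t0)) for t0, t1 in zip(ts, ts[1:]))

F = lambda t: cmath.exp(1j * t)  # F'(t) = i e^{it}, so |F'(t)| = 1
for N in (10, 100, 1000):
    print(N, variation(F, 0, 2 * math.pi, N))  # increases toward 2*pi = 6.2831...
```

The sums here are just the perimeters of inscribed regular polygons, 2N\sin(\pi/N), so the convergence to 2\pi is the classical one.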

Now before we proceed to state and prove the isoperimetric inequality, let’s get some vocabulary under our belt. Define the one-dimensional Minkowski content M(K) of a curve K to be \lim_{\delta\to 0}\frac{m(K^{\delta})}{2\delta}, where K^{\delta} denotes the set of points which are at most \delta away from some point of K. Define a simple curve to be a curve that doesn’t double over on itself, a quasi-simple curve to be a curve such that t\to z(t) is injective except at finitely many points, and a closed curve to be one that starts where it ends. As the name suggests, the Minkowski content of a curve turns out to be precisely its length if the curve is rectifiable and quasi-simple.
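As a worked example, take K to be the unit segment, where m(K^\delta) has a closed form: K^\delta is a "stadium" consisting of a 1\times 2\delta rectangle plus two half-disks of radius \delta, so the ratio m(K^\delta)/2\delta = 1+\pi\delta/2 tends to the length 1.

```python
import math

# For K the unit segment, K^delta is a 1 x 2delta rectangle plus two
# half-disks of radius delta, so m(K^delta) = 2*delta + pi*delta**2.
def minkowski_ratio(delta):
    return (2 * delta + math.pi * delta ** 2) / (2 * delta)

for d in (0.5, 0.1, 0.01, 0.001):
    print(d, minkowski_ratio(d))  # tends to 1, the length of the segment
```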



Integrating the Derivative - Filling in the Gaps

This will be a short continuation of the last post. Note that in our discussions in the previous post, we assumed throughout that F was continuous. Let’s drop that assumption and take care of the case of jump discontinuities. Fortunately there aren’t too many: between any two jump discontinuities we can squeeze in a distinct rational number, so a bounded increasing function can have at most countably many discontinuities.

Define F(x^-) and F(x^+) to be the left- and right-hand limits of F at x, so that for F increasing, F(x^-)\le F(x)\le F(x^+). Define the jump function J(x) to be the sum of all jumps \alpha_n = F(x^+_n)-F(x^-_n) at and to the left of the point x, where the x_n are the points of discontinuity. In other words, we are constructing an increasing step function whose discontinuities are at exactly the same points as those of the original function. Note that if F is bounded, the sum of these jumps converges, and thus the series defining J converges absolutely and uniformly.
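Here is a small sketch of the construction for a hypothetical example, F(x) = x + \lfloor x\rfloor on [0,3], which jumps by 1 at x=1 and x=2. Since this F is right-continuous, the jump at x itself is included in full:

```python
import math

# Hypothetical example: F(x) = x + floor(x), an increasing function on [0, 3]
# with jumps of size 1 at x = 1 and x = 2.
F = lambda x: x + math.floor(x)

jumps = [(1.0, 1.0), (2.0, 1.0)]  # (location x_n, jump size alpha_n)

def J(x):
    """Jump function: sum of jumps at and to the left of x
    (F is right-continuous here, so the jump at x counts in full)."""
    return sum(a for xn, a in jumps if xn <= x)

# F - J is continuous and increasing: the jumps have been "dropped down".
for x in (0.5, 1.0, 1.5, 2.0, 2.7):
    print(x, F(x) - J(x))  # F - J equals x here, a continuous function
```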

Why did we construct this? Well, by design, F-J is continuous and increasing. Graphically, the motivation is to “drop down” all the discontinuities so that we get one continuous curve F-J, and then add the jumps back separately in the form of J. We can do this because it turns out:

Result 3: J'(x) exists and equals zero almost everywhere.

Proof: For fixed \epsilon, define E to be the set of x for which the limit superior of the difference quotient \frac{J(x+h)-J(x)}{h} exceeds \epsilon. We want to show that m(E)=0. Because the series of jumps \sum\alpha_n converges, for any \nu we can find N such that the sum of the jumps past the Nth is less than \nu. Then the jump function J_0 corresponding to all jumps past the Nth changes by less than \nu from J_0(a) to J_0(b). But J differs from J_0 by only finitely many summands, so the set E' for which the limit superior of \frac{J_0(x+h)-J_0(x)}{h} exceeds \epsilon differs from E by at most finitely many points. Pick a compact subset of E of measure at least m(E)/2; taking out these finitely many points leaves a compact subset K\subset E' with m(K)\ge m(E)/2. For each x\in K, we have a neighborhood (a_x,b_x) where J_0(b_x)-J_0(a_x)>\epsilon(b_x-a_x). By compactness, we can pick a finite subcover, and then we can apply our old covering argument to get a disjoint subcollection of intervals I_1,\dots,I_n for which \sum^n_{j=1}m(I_j)\ge m(K)/3.

So we have \nu > J_0(b)-J_0(a)\ge \sum_k\left(J_0(b_k)-J_0(a_k)\right)>\epsilon\sum_k(b_k-a_k)\ge \frac{\epsilon m(E)}{6}, and because \nu can be made arbitrarily small, we have proven m(E) to be zero.

Integrating the Derivative

Dual to the problem addressed in the previous post is that of when the result of the fundamental theorem of calculus holds true, namely that F(b)-F(a) = \int^b_a F'(x)dx. It turns out the condition of absolute continuity defined at the end of the last post is sufficient. First, define the variation of a complex-valued function F over a partition a = t_0< \cdots < t_N=b to be \sum^N_{i=1}|F(t_i)-F(t_{i-1})|. It is easy to see that variation increases with the “fineness” of the partition; we say that a function is of bounded variation if the supremum of the variation over all partitions, the total variation T_F(a,b), is finite. This will be the case, roughly speaking, for functions that do not oscillate too widely or too frequently. We can see that real, monotonic, bounded functions, as well as differentiable functions with bounded derivative, are of bounded variation.
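A quick numerical illustration (the function and partitions are my own choices, not from the post): refining a partition can only increase the variation, and for F(t)=\sin t on [0,2\pi] the fine-partition sums approach the total variation T_F(0,2\pi)=\int^{2\pi}_0|\cos t|dt=4.

```python
import math

def variation(F, partition):
    """Variation of F over one fixed partition."""
    return sum(abs(F(t1) - F(t0)) for t0, t1 in zip(partition, partition[1:]))

F = math.sin
coarse = [0, math.pi, 2 * math.pi]                    # sin vanishes at all three points
fine = [2 * math.pi * j / 1000 for j in range(1001)]  # a refinement of `coarse`
print(variation(F, coarse))  # ~0: this partition misses the oscillation entirely
print(variation(F, fine))    # close to 4 = T_F(0, 2*pi)
```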

The first result of this post will show some of the motivation for studying these functions.

Result 1: Bounded variation implies differentiability almost everywhere.

Proof: To prove this, let’s first narrow our focus solely to increasing functions. We can do this because of the following characterization of functions of bounded variation: they are precisely the functions that are differences of two increasing bounded functions. One direction is obvious: the difference of two increasing bounded functions is of bounded variation. To prove the other direction, define the positive and negative variations P_F(a,x) and N_F(a,x) over [a,x] to be \sup\sum_{(+)}F(t_j)-F(t_{j-1}) and \sup\sum_{(-)}-\left(F(t_j)-F(t_{j-1})\right), where the sums are taken over the positive and negative differences, respectively. We then have F(x)-F(a) = P_F(a,x) - N_F(a,x) and T_F(a,x) = P_F(a,x)+N_F(a,x). So to prove the other direction of our claim, simply take the two increasing functions to be P_F(a,x)+F(a) and N_F(a,x).
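The decomposition can be checked numerically over a single fine partition, which approximates the suprema from below (the example function F(t)=\sin t is an arbitrary choice of mine):

```python
import math

def pos_neg_variation(F, partition):
    """Positive and negative variation of F over one partition;
    fine partitions approximate P_F and N_F from below."""
    P = N = 0.0
    for t0, t1 in zip(partition, partition[1:]):
        d = F(t1) - F(t0)
        if d > 0:
            P += d
        else:
            N -= d
    return P, N

F = math.sin
ts = [2 * math.pi * j / 1000 for j in range(1001)]
P, N = pos_neg_variation(F, ts)
print(P - N, F(2 * math.pi) - F(0))  # both ~0: F(b) - F(a) = P - N
print(P + N)                         # ~4: the total variation T_F(0, 2*pi)
```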


Differentiating the Integral

Now let’s put the machinery we’ve built up to use in making precise the familiar notion of differentiation and integration being dual to each other. It is easy to see in one direction why this makes sense: roughly speaking, the derivative of an integral is the limit of the average value of a function over a neighborhood as the measure of that neighborhood approaches zero. For this reason the first problem we will address in this post is called the averaging problem, namely, if f is Lebesgue-integrable, do we have that \lim_{m(B)\to 0, x\in B}\frac{1}{m(B)}\int_Bf(y)dy = f(x) for almost all x?
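In one dimension the question can be probed numerically. Below is a sketch with a continuous f, where the answer is classical; `average` is my own midpoint-rule approximation of the ball average, not anything from the post.

```python
# For f(y) = y*y, the average of f over the ball B = (x - r, x + r)
# should tend to f(x) as r -> 0.
def average(f, x, r, n=10000):
    """Midpoint-rule approximation of (1/m(B)) * integral of f over (x-r, x+r)."""
    h = 2 * r / n
    return sum(f(x - r + (i + 0.5) * h) for i in range(n)) * h / (2 * r)

f = lambda y: y * y
x = 1.0
for r in (1.0, 0.1, 0.01):
    print(r, average(f, x, r))  # tends to f(1) = 1; in fact it equals 1 + r*r/3
```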

Result 1: We can answer the averaging problem in the affirmative.

Well, it’s certainly in the affirmative for continuous functions. One of the keys is to observe that continuous functions of compact support are dense in the space L^1 of Lebesgue-integrable functions: we already know simple functions are dense, so step functions are as well, and we can find arbitrarily close approximations to the most basic step function, the characteristic function of a rectangle, by continuous functions of compact support. So for any f in L^1, approximate by a continuous function g of compact support so that \left|\left|f-g\right|\right| is arbitrarily small. Then we can rewrite \frac{1}{m(B)}\int_Bf(y)dy - f(x) as

\frac{1}{m(B)}\int_B(f(y)-g(y))dy + \left(\frac{1}{m(B)}\int_Bg(y)dy - g(x)\right) +(g(x)-f(x)).

Take the limit superior of both sides over balls B that contain x as m(B)\to 0; because g is continuous, the middle term on the right vanishes. We want to show that for any given \alpha, the set E_{\alpha} on which the limit superior of the left side, i.e. the difference between f and the limit of its average value, exceeds \alpha has measure zero. By Chebyshev’s inequality, |g(x)-f(x)|>\alpha on a set of measure O(\left|\left|f-g\right|\right|), and \limsup\frac{1}{m(B)}\int_B|f(y)-g(y)|dy\le (f-g)^*(x), where (f-g)^*(x)=\sup_{B\ni x}\frac{1}{m(B)}\int_B|f(y)-g(y)|dy is the so-called Hardy-Littlewood maximal function of f-g.
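The maximal function can be approximated numerically. The sketch below uses the centered variant (sup over balls centered at x, rather than merely containing x), which is comparable to the uncentered one up to a constant factor; the test function, the characteristic function of [-1,1], is my own choice.

```python
def maximal(f, x, radii, n=1000):
    """Discretized centered Hardy-Littlewood maximal function:
    sup over a grid of radii of the average of |f| over (x - r, x + r)."""
    best = 0.0
    for r in radii:
        h = 2 * r / n
        avg = sum(abs(f(x - r + (i + 0.5) * h)) for i in range(n)) * h / (2 * r)
        best = max(best, avg)
    return best

f = lambda y: 1.0 if abs(y) <= 1 else 0.0  # characteristic function of [-1, 1]
radii = [0.1 * k for k in range(1, 101)]   # radii from 0.1 up to 10
for x in (0.0, 2.0, 5.0):
    print(x, maximal(f, x, radii))  # ~1, ~1/3, ~1/6
```

Note the slow 1/|x| decay away from the support: the maximal function of an L^1 function is generally not integrable, which is why only the weak-type (Chebyshev-style) bound is available.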
