PDEs and the Frobenius Theorem

I’ve started many blog posts on algebra/algebraic geometry, but they won’t get finished and posted for a little while. I’ve been studying for a test I have to take in a few weeks in differential geometry-esque things. So I’ll do a few posts on things that I think are usually considered pretty easy and obvious to most people, but are just things I never sat down and figured out. Hopefully this set of posts will help others who are as confused as I recently was.

My first topic is about the Frobenius Theorem. I’ve posted about it before. Here’s the general idea of it: If {M} is a smooth manifold and {D} is a smooth distribution on it, then {D} is involutive if and only if it is completely integrable (i.e. there are local flat charts for the distribution).

What does this have to do with being able to solve partial differential equations? I’ve always heard that it does, but other than the symbol {\displaystyle\frac{\partial}{\partial x}} appearing in the definition of a distribution or of the flat chart, I’ve never figured it out.

Let’s go through this with some examples. Are there any non-constant solutions {f\in C^\infty (\mathbb{R}^3)} to the system of equations: {\displaystyle \frac{\partial f}{\partial x}-y\frac{\partial f}{\partial z}=0} and {\displaystyle \frac{\partial f}{\partial y}+x\frac{\partial f}{\partial z}=0}?

Until a few days ago, I would have never thought we could use the Frobenius Theorem to do this. Suppose {f} were such a solution. Define the vector fields {\displaystyle X=\frac{\partial}{\partial x}-y\frac{\partial}{\partial z}} and {\displaystyle Y=\frac{\partial}{\partial y}+x\frac{\partial}{\partial z}} and define the distribution {D_p=\text{span} \{X_p, Y_p\}}.

Choose a regular value of {f}, say {C}, that is actually attained (by Sard’s Theorem almost every value is regular, and since {f} is non-constant its image contains an interval, so such a {C} exists). Then the level set {f=C} is a 2-dimensional submanifold {M\subset \mathbb{R}^3}, and since {f} is a defining function for it, {T_pM=\ker(Df_p)}. But since {f} satisfies, by assumption, {X(f)=0} and {Y(f)=0}, we have {X_p, Y_p\in\ker(Df_p)}; these are linearly independent and the kernel is 2-dimensional, so {T_pM=\text{span} \{X_p, Y_p\}}. I.e. {M} is an integral manifold for the distribution {D}, so {D} must be involutive at the points of {M}.

Just check now: {\displaystyle [X,Y]=2\frac{\partial}{\partial z}}. But {\displaystyle aX_p+bY_p=a\frac{\partial}{\partial x}+b\frac{\partial}{\partial y}+(bx-ay)\frac{\partial}{\partial z}} can only point in the pure {\displaystyle\frac{\partial}{\partial z}} direction when {a=b=0}, so {[X,Y]_p} is never in {D_p}, at any point {p} (in particular at the points of {M}). Hence {D} is not involutive anywhere, and no such {f} exists. This didn’t even use the hard direction of Frobenius, just the easy fact that a distribution with an integral manifold through a point must be involutive there.
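If you don’t trust your by-hand bracket computation (I never do), here is a quick symbolic sanity check. This is just a throwaway sympy sketch, with X and Y written as operators acting on a generic smooth f:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# X = d/dx - y d/dz and Y = d/dy + x d/dz, applied to a function g
X = lambda g: sp.diff(g, x) - y * sp.diff(g, z)
Y = lambda g: sp.diff(g, y) + x * sp.diff(g, z)

# The Lie bracket [X, Y] acting on f is X(Y(f)) - Y(X(f))
bracket = sp.simplify(X(Y(f)) - Y(X(f)))
print(bracket)  # 2*Derivative(f(x, y, z), z), i.e. [X, Y] = 2 d/dz
```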

Now let’s spice up the language and difficulty. Is it possible to find a function {z=f(x,y)}, {C^\infty} in a neighborhood of {(0,0)}, such that {f(0,0)=0} and {\displaystyle df=(ye^{-(x+y)}-f)dx+(xe^{-(x+y)}-f)dy}? Alright, the {d} phrasing is just asking whether there is a local solution to the system {\displaystyle \frac{\partial f}{\partial x}=ye^{-(x+y)}-f} and {\displaystyle \frac{\partial f}{\partial y}=xe^{-(x+y)}-f}. Uh oh. The above method fails us now since the system isn’t homogeneous.

Alright, so let’s extrapolate a little. We have a system of the form {\displaystyle \frac{\partial f}{\partial x}=\alpha(x,y,f)} and {\displaystyle \frac{\partial f}{\partial y}=\beta(x,y,f)}. The claim is that a necessary and sufficient condition for a local solution to this system is {\displaystyle \frac{\partial \alpha}{\partial y}+\beta\frac{\partial \alpha}{\partial z}=\frac{\partial \beta}{\partial x}+\alpha \frac{\partial \beta}{\partial z}} (here {z} is the third coordinate, standing in for the unknown {f}).

I won’t go through the details of the proof, but the main idea is not bad. Define the distribution spanned by {\displaystyle X=\frac{\partial}{\partial x}+\alpha\frac{\partial}{\partial z}} and {\displaystyle Y=\frac{\partial}{\partial y}+\beta\frac{\partial}{\partial z}}.

Then use that assumption to see that {[X,Y]=0} and hence the distribution is involutive and hence there is an integral manifold for the distribution by the Frobenius Theorem. If {g} is a local defining function to that integral manifold, then we can hit that with the Implicit Function Theorem and get that {z=f(x,y)} (the implicit function) is a local solution.

If we go back to that original problem, we can easily check that the condition is met, and hence a local solution exists.
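For the skeptical, here is a quick symbolic verification of that condition for this example (a sympy sketch; the only convention is that the unknown f is written as the third coordinate z, as in the distribution above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The system df/dx = alpha(x, y, f), df/dy = beta(x, y, f) from the example,
# with the unknown f replaced by the third coordinate z
alpha = y * sp.exp(-(x + y)) - z
beta  = x * sp.exp(-(x + y)) - z

lhs = sp.diff(alpha, y) + beta * sp.diff(alpha, z)
rhs = sp.diff(beta, x) + alpha * sp.diff(beta, z)
print(sp.simplify(lhs - rhs))  # 0, so the integrability condition holds
```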

I had one other neat little problem, but it doesn’t really fit in here other than the fact that solutions to PDEs are involved.


Harmonic Growth as Related to Complex Analytic Growth

Let’s change gears a bit. This post will be on something I haven’t talked about in probably a year…that’s right, analysis. Since the last post was short, I’ll do another quick one. The past few days have involved varying efforts to solve a problem of the form: if f is an analytic function and we know that |\text{Re}f(z)|\leq M|z|^k (for large |z|, say), do we actually know something like |f(z)|\leq M|z|^k?

Let’s rephrase this a bit. Essentially we’re talking about growth. It would be sufficient to show something along the lines of: if u is harmonic and grows at some rate, then v, the harmonic conjugate, must also grow at a related rate. But all of this growth talk is vague. What does this even mean?

One measure of growth would be |\nabla u|=\sqrt{\left(\frac{\partial u}{\partial x}\right)^2+\left(\frac{\partial u}{\partial y}\right)^2}. In fact, the gradient points in the direction of greatest change, so this is in some sense an upper bound on the growth. Another is |f'(z)|. Does this help? Well, first off, if this is our notion of growth, then by the Cauchy-Riemann equations we immediately get that the harmonic conjugate grows at exactly the same rate: |\nabla u|=|\nabla v|. Let’s check how useful this is in recovering the growth of f.

Since I haven’t talked about complex analysis much, note that for an analytic function the derivative can be computed by the operator \frac{1}{2}\left(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\right).

Now f'(z)=\frac{1}{2}\left(\frac{\partial(u+iv)}{\partial x}-i\frac{\partial(u+iv)}{\partial y}\right)
= \frac{1}{2}\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+i\left(\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\right)\right)
= \frac{\partial u}{\partial x}-i\frac{\partial u}{\partial y} by Cauchy-Riemann

Thus |f'(z)|=|\nabla u|.
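If you want a sanity check of this identity on a concrete function, here is a small sympy sketch. The particular choice f(z)=z^3+2z is arbitrary; for analytic f we can compute f' as \partial f/\partial x:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# A sample analytic function; u = Re f is harmonic
f = sp.expand(z**3 + 2*z)
u = sp.re(f)

# For analytic f, f'(z) = df/dx, so |f'|^2 = (Re df/dx)^2 + (Im df/dx)^2
fx = sp.expand(sp.diff(f, x))
fprime_sq = sp.re(fx)**2 + sp.im(fx)**2

grad_u_sq = sp.diff(u, x)**2 + sp.diff(u, y)**2

print(sp.simplify(fprime_sq - grad_u_sq))  # 0, so |f'(z)| = |grad u|
```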

Did this solve our original problem? Essentially, yes. A pointwise bound on |u| does not bound the partial derivatives by itself, but u is harmonic, so the standard interior gradient estimate for harmonic functions, |\nabla u(z)|\leq \frac{C}{r}\sup_{B(z,r)}|u|, applied with r=|z| shows that if |u|\leq M(x^2+y^2)^{k/2} for large |z|, then |\nabla u(z)|\leq C_kM|z|^{k-1} for large |z| (with a constant depending only on k).

In particular, |f'(z)|\leq C_kM|z|^{k-1} for large |z|. We wanted to show that f is a polynomial of degree at most k, and we can now use Cauchy estimates on f' to get that.

If any of what I just wrote is true, then there is some really obvious way of doing it that isn’t messy like this at all. I mean, the result is |f'(z)|=|\nabla u|. Is this for real? Am I horribly mistaken? I can’t find this in any book…

Zeros of Analytic Functions

A strange property of analytic functions is that the zeros are isolated. I don’t remember the proof I originally learned of this fact, but today I saw a really interesting topological way to do it. It makes sense now.

More precise formulation: If \Omega\subset\mathbb{C} is a connected open set, then \{z: f(z)=0\} consists of isolated points if f is analytic on \Omega. (Oops, I started writing this up and realized that I need to trivially throw out the case where f\equiv 0.)

Proof: Let U_1=\{a\in\Omega : \exists\delta>0, \ f(z)\neq 0 \ on \ 0<|z-a|<\delta\} and let U_2=\{a\in\Omega : \exists\delta>0, \ f(z)\equiv 0 \ on \ 0<|z-a|<\delta \}. Reformulating the setup, U_1 is the set of points where any zero is isolated: f is nonzero on a punctured disk, so the only possible zero is at the punctured point itself. And U_2 is just the set of points near which f vanishes identically, i.e. the region of non-isolated zeros.

It is straightforward to check that both U_1 and U_2 are open (just choose \delta's sufficiently small to stay inside the declared sets). Also we have that U_1\cap U_2=\emptyset and I now claim \Omega=U_1\cup U_2.

This seems obvious, but should be pinned down in some sort of argument. Let z_0\in\Omega. We claim that there is a punctured disk about z_0 such that either f\equiv 0 on the disk or f\neq 0 anywhere on the disk. By analyticity, we have a power series convergent on some radius r>0 about z_0, i.e. f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n on |z-z_0|<r.

Suppose a_k is the first nonzero coefficient (by f not being identically zero on this disk, such a k exists). Then we can write f(z)=(z-z_0)^k g(z), where g(z)=\sum_{n=0}^\infty a_{n+k}(z-z_0)^n also converges on |z-z_0|<r. Since g is continuous with g(z_0)=a_k, we can choose 0<\delta<r small enough so that |g(z)-a_k|<\frac{|a_k|}{2} on |z-z_0|<\delta. This clearly shows that g(z)\neq 0 there (else we’d have |a_k|<\frac{|a_k|}{2}), and hence f(z)\neq 0 on 0<|z-z_0|<\delta. So either there is a punctured disk on which f is non-zero, or f has no first non-zero coefficient, making it zero everywhere on that first disk |z-z_0|<r, proving the claim.
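Here is a little sympy sketch of that claim on a made-up example (the function f(z)=z^2\cos z, the radius, and the sample points are arbitrary choices): it finds the first nonzero Taylor coefficient, forms g=f/(z-z_0)^k, and spot-checks that |g|>|a_k|/2 near 0, which is a numerical illustration rather than a proof.

```python
import sympy as sp

z = sp.symbols('z')
f = z**2 * sp.cos(z)      # an arbitrary non-identically-zero example, expanded at z_0 = 0

# find the first nonzero Taylor coefficient a_k
taylor = sp.series(f, z, 0, 8).removeO()
k = min(sp.degree(t, z) for t in taylor.as_ordered_terms())
a_k = taylor.coeff(z, k)
print(k, a_k)             # k = 2, a_2 = 1

# g = f / (z - z_0)^k is continuous with g(z_0) = a_k, so on a small enough disk
# |g - a_k| < |a_k|/2, forcing f to be nonzero on the punctured disk
g = sp.cancel(f / z**k)
delta = 0.4
samples = [delta * sp.exp(2 * sp.pi * sp.I * sp.Rational(j, 12)) / 2 for j in range(12)]
print(all(sp.Abs(g.subs(z, s)).evalf() > sp.Abs(a_k) / 2 for s in samples))  # True
```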

The properties U_1\cap U_2=\emptyset and \Omega=U_1\cup U_2 (along with both sets being open) combine with the connectedness of \Omega to give that either U_1=\emptyset or U_2=\emptyset. If U_2 were nonempty it would be all of \Omega, and then f, being zero on punctured disks around every point and hence everywhere by continuity, would be identically zero, which we ruled out. So U_2=\emptyset, \Omega=U_1, and all the zeros of f are isolated.

This goes to show how remarkably different analytic on \mathbb{C} is from continuous on \mathbb{R}, or even from infinitely differentiable on \mathbb{R}. Bump functions play a crucial role in many areas of analysis, and they are smooth functions with compact support, meaning that outside of a bounded set they are zero. An entire class of important functions violates this property that analytic functions are guaranteed to have.

Banach Algebra Homomorphism

I’m in no mood to do something challenging after this last-ditch effort to learn analysis before my prelim, so I’ll do something nice (functional analytic, like I promised) that never ceases to amaze me.

Theorem: If \phi is a complex homomorphism on a Banach algebra A, then the norm of \phi, as a linear functional, is at most 1.

Recall that a Banach algebra is just a Banach space (complete normed linear space) with a multiplication that satisfies \|xy\|\leq\|x\|\|y\|, associativity, distributivity, and (\alpha x)y=x(\alpha y)=\alpha (xy) for any scalar \alpha.

Complex homomorphisms are just linear functionals that preserve multiplication \phi(\alpha x+\beta y)=\alpha\phi(x)+\beta\phi(y) and \phi(xy)=\phi(x)\phi(y).

Assume not, i.e. there exists x_0\in A such that |\phi(x_0)|>\|x_0\|. To simplify notation, let \displaystyle \lambda=\phi(x_0) and let \displaystyle x=\frac{x_0}{\lambda}. Then \displaystyle \|x\|=\frac{\|x_0\|}{|\lambda|}<1 and \displaystyle\phi(x)=\phi\left(\frac{x_0}{\lambda}\right)=1.

Now \|x^n\|\leq\|x\|^n with \|x\|<1, so the partial sums s_n=-x-x^2-\cdots-x^n \in A form a Cauchy sequence (compare with a geometric series). Now A is a Banach space and hence complete, so there exists y\in A such that \|y-s_n\|\to 0. But now factor to see that x+s_n=xs_{n-1}, and take the limit to get x+y=xy. Now apply the homomorphism to both sides, and we have a contradiction: \phi(x)+\phi(y)=\phi(x)\phi(y), which in particular says 1+\phi(y)=\phi(y).
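To see the key identity x+y=xy in action, here is a tiny numerical sketch. I’m using 2\times 2 matrices with the operator norm as a stand-in Banach algebra; the particular matrix is an arbitrary choice, and any element of norm less than 1 behaves the same way.

```python
import numpy as np

# If ||x|| < 1, the partial sums s_n = -x - x^2 - ... - x^n converge to some y,
# and factoring x + s_n = x s_{n-1} gives x + y = xy in the limit.
x = np.array([[0.3, 0.2],
              [0.1, 0.4]])
assert np.linalg.norm(x, 2) < 1          # operator (spectral) norm

y = np.zeros_like(x)
power = np.eye(2)
for _ in range(200):                      # partial sums of -x - x^2 - ...
    power = power @ x
    y -= power

print(np.allclose(x + y, x @ y))          # True: x + y = xy
```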

So some reasons why this may not be all that shocking: we require these to be complex, and complex things tend to work out nicer than real. Also, these are pretty stringent conditions on what constitutes a Banach algebra and what constitutes a homomorphism. We should be able to get some nice structure with all the tools available. It isn’t like we got a lot. Really we’re just saying that these things are bounded and hence continuous, which isn’t all that surprising.

OK. I’ll stop downplaying it. It does surprise me.

Applying Covering Theorems

I’ve searched far and wide to not do one of the standard applications that are in all grad analysis texts (yes I’m referring to the Hardy-Littlewood maximal function being weakly bounded). We are getting into the parts of analysis that I despise (it will all be over in 4 days…I hope *crosses fingers*).

Claim: If f: [0,1]\to\mathbb{R} is an increasing function and we define \displaystyle g(x)=\limsup_{h\to 0}\frac{f(x+h)-f(x)}{h} (so almost a Dini derivative), then the outer measure m^*\{x: g(x)>1\}\leq f(1)-f(0). I know, it is rather similar to the maximal function theorem, but it is hard to find something that utilizes these tools that doesn’t have the same flavor.

Proof: Call S=\{x: g(x)>1\}. Then for any x\in S we can find h>0 as small as we like so that f(x+h)-f(x)>h (since the limsup is >1, there are arbitrarily small h with difference quotient >1). Now the intervals [x, x+h] form a Vitali covering of S, since we’ve already checked that the diameter of the intervals (the h’s) can be made arbitrarily small at each point of S.

Let \epsilon>0. Now by our Theorem we can choose a finite disjoint collection of them, say \{[x_n, x_n+h_n]\}_1^N, such that m(\cup [x_n, x_n+h_n])>m^*(S)-\epsilon (the Theorem gives a countable disjoint subcollection covering S up to measure zero, and finitely many of those intervals already capture all but \epsilon of the outer measure).

Now: f(1)-f(0)\geq\sum_{n=1}^N(f(x_n+h_n)-f(x_n))

>\sum_{n=1}^N h_n

= m(\cup [x_n, x_n+h_n])

> m^*(S) - \epsilon.

Since \epsilon was arbitrary m^*(S)\leq f(1)-f(0).

Some notes: Oftentimes you don’t need the full Vitali Covering Theorem (it may have been overkill here even, but heck, I wanted to use it). Also, the setup for these things is almost always in this standard form. If you see m(\{x: F(x)>blah\})\leq blahblah, then you are bound to have to use one of these Lemmas or Theorems.

I never really understood the huge fuss over maximal functions, but here is the definition: Given f\in L^1, we define \displaystyle Mf(x)=\sup_{0<r<\infty}\frac{1}{m(B(x,r))}\int_{B(x,r)}|f(y)|dm(y). We get the inequality that m\{x: Mf(x)>\lambda\}\leq \lambda^{-1}3^k\|f\|_1, where we are working in \mathbb{R}^k. So this basically says that M: L^1\to \text{weak-}L^1. Also note that when you see things like 3^k or 5^k we probably won’t need the full theorem, as those constants appear in the Lemmas. While we’re at it, we only get weak L^1, but if we are working in L^p for p>1, then M: L^p\to L^p (we use a much different technique from the Fourier Transform, though).
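Just to see the weak-type inequality doing something, here is a rough numerical sketch in one dimension (so the constant is 3^1=3). The grid, the radii, and the test function are arbitrary choices, and the discretized maximal function is only an approximation:

```python
import numpy as np

# A crude 1D check of m{ Mf > lam } <= 3 ||f||_1 / lam.
xs = np.linspace(-10, 10, 801)
dx = xs[1] - xs[0]
f = np.where(np.abs(xs) <= 0.5, 1.0, 0.0)   # f = indicator of [-1/2, 1/2]
norm1 = np.sum(np.abs(f)) * dx

radii = np.linspace(dx, 6.0, 60)
Mf = np.zeros_like(xs)
for i, x in enumerate(xs):
    # approximate Mf(x) by maximizing averages over the fixed family of radii
    averages = [np.sum(np.abs(f[np.abs(xs - x) < r])) * dx / (2 * r) for r in radii]
    Mf[i] = max(averages)

for lam in [0.1, 0.2, 0.5]:
    measure = np.sum(Mf > lam) * dx
    print(lam, measure, 3 * norm1 / lam, measure <= 3 * norm1 / lam)  # bound holds
```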

Ack! Moving on next time. Maybe functional analysis type stuff?

Edit: Does anyone else have issues LaTeXing square brackets?! They seem to work when they are in the middle of stuff, but never parse when I just want something like [0,1].

Covering Theorem (we use past Lemmas)

A brief break occurred while I moved 2700 miles away. The important thing is I’m back, and we’re going to prove a big one today. First let’s define a Vitali covering. A set is Vitali covered by the collection of sets \mathcal{V} if for any \epsilon>0 and any x in the set, there exists a set V\in\mathcal{V} such that x\in V and diam(V)<\epsilon. So note that this is sort of stringent in that at every point we have to be able to find covering sets of arbitrarily small diameter containing that point.

Vitali’s Covering Theorem (easy version): If \mathcal{I} is a sequence of intervals that Vitali covers an interval E\subset \mathbb{R}, then there is a countable disjoint subcollection of \mathcal{I} that covers E except for a set of Lebesgue measure 0. (Note I call this the easy version because it can be extended to finite dimensional space using balls and such, but this is easier to prove).

Proof: Suppose the hypotheses, with the same notation. The claim is that we can find disjoint I_n\in\mathcal{I} such that m\left(E\setminus\cup I_n\right)=0. All interval types (open, closed, half-open) have the same measure, so WLOG assume the intervals in \mathcal{I} are closed. Define \mathcal{I}^* to be the collection of finite unions of disjoint intervals from \mathcal{I}.

Claim: If A\in\mathcal{I}^* and m(E\setminus A)>0 then there is a B\in\mathcal{I}^* such that A\cap B=\emptyset and m(E\setminus(A\cup B))<\frac{3}{4}m(E\setminus A).

Proof of claim: Since \mathcal{I} is a Vitali covering, we can choose intervals of small enough diameter \{J_i\}_1^n\subset\mathcal{I} so that each J_i\subset E\setminus A (since E\setminus A has positive measure there will be at least one of these). Since we don’t care about overlap right now, we can do this at enough points so that m(E\setminus(A\cup J_1\cup\cdots\cup J_n))<\frac{1}{12}m(E\setminus A). Now by the Vitali Covering Lemma of last time we can find a disjoint subset \{I_j\}_1^k\subset \{J_i\}_1^n so that m(\bigcup I_j)\geq \frac{1}{3}m(\bigcup J_i).

Then m(E\setminus (A\cup I_1\cup\cdots\cup I_k))

<\frac{2}{3}m(J_1\cup\cdots\cup J_n)+\frac{1}{12}m(E\setminus A)

\leq \frac{2}{3}m(E\setminus A)+\frac{1}{12}m(E\setminus A)

=\frac{3}{4}m(E\setminus A).

Thus B=I_1\cup\cdots\cup I_k\in\mathcal{I}^* works.

Now simply apply this claim inductively to get disjoint sets A_k\in\mathcal{I}^* with m\left(E\setminus (A_1\cup\cdots \cup A_k)\right)\leq \left(\frac{3}{4}\right)^km(E), and let k\to\infty to get \displaystyle m\left(E\setminus \bigcup_{k=1}^\infty A_k\right)=0. We are done.

The generalization is exactly the same, except where you use the Vitali Covering Lemma you replace 3 with 3^n (notice that this relies on n being finite). It is not true in infinite dimensional spaces. Also, you can reformulate the statement to use Hausdorff measure.

Covering Lemma 2

Today I’ll do probably the best known Vitali Covering Lemma. I’ll take the approach of Rudin.

Statement (finite version): If W is the union of a finite collection of balls B(x_i, r_i), i=1,\ldots,N, then there is a subcollection S\subset \{1,\ldots , N\} so that

a) the balls B(x_i, r_i) with i\in S are disjoint.

b) W\subset \bigcup_{i\in S} B(x_i, 3r_i), and

c) m(W)\leq 3^k\sum_{i\in S} m(B(x_i, r_i)). Hmm…I guess I should say W\subset\mathbb{R}^k from the looks of it.

Proof: Quite simple in this case. Just order the radii in decreasing order (finite, so we can list them all), r_1\geq r_2\geq \cdots \geq r_N. Take i_1=1, then go down the line until you get to the first ball that doesn’t intersect B_{i_1}; call it B_{i_2}. Now choose B_{i_3} as the next one that doesn’t intersect either of the ones before it, and continue this process to completion. (a) is done since we’ve picked a disjoint subcollection of the original (this is a trivial condition on its own, though, since we could ignore (b) and (c) and just choose a single element).

Now for (b), look at any of the skipped balls B(x', r'); we claim it is a subset of B(x_i, 3r_i) for some i that we picked. This is clear once we note the ordering. If we skipped B(x', r'), then there was an earlier ball B(x, r) we kept, so r'\leq r, and since we skipped B(x', r'), it intersects B(x, r). By the triangle inequality, any point of B(x', r') is within r'+(r+r')\leq 3r of x. So for any skipped ball we have B(x', r')\subset B(x, 3r), giving us (b).

For (c), we just use the standard scaling property of Lebesgue measure, m(B(x, 3r))=3^km(B(x,r)): by (b), m(W)\leq \sum_{i\in S}m(B(x_i, 3r_i))=3^k\sum_{i\in S}m(B(x_i, r_i)), and we are done. Hmm…I guess I should say W\subset\mathbb{R}^k from the looks of it.
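The greedy selection is easy to code up. Here is a small sketch with random balls in \mathbb{R}^2 that picks the subcollection exactly as in the proof and then checks condition (b); all the names and parameters are just for this illustration.

```python
import numpy as np

# Keep balls in order of decreasing radius, skipping any that meet a kept ball;
# then every ball B(x', r') sits inside B(x, 3r) for some kept B(x, r).
rng = np.random.default_rng(0)
centers = rng.uniform(0, 10, size=(40, 2))
radii = rng.uniform(0.2, 2.0, size=40)

order = np.argsort(-radii)                 # decreasing radius
kept = []
for j in order:
    if all(np.linalg.norm(centers[j] - centers[i]) > radii[j] + radii[i] for i in kept):
        kept.append(j)

# (a): kept balls are pairwise disjoint by construction.  Check (b):
for j in range(40):
    # find a kept ball of radius >= r_j meeting B_j, and check containment in its triple
    ok = any(radii[i] >= radii[j]
             and np.linalg.norm(centers[j] - centers[i]) <= radii[i] + radii[j]
             and np.linalg.norm(centers[j] - centers[i]) + radii[j] <= 3 * radii[i]
             for i in kept)
    assert ok
print("every ball is inside the triple of some kept ball")
```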

Infinite Case: Let \{B_j : j\in J\} be any collection of balls in \mathbb{R}^k such that \sup_j diam(B_j)<\infty. Then there exists a disjoint subcollection (J'\subset J) such that \bigcup_{j\in J}B(x_j, r_j) \subset \bigcup_{j\in J'}B(x_j, 5r_j).

Proof: Let R be the sup of the radii of the balls (which we’ve forced to be finite). Now we define subcollections: let Z_i be the subcollection of balls with radius in \left(\frac{R}{2^{i+1}}, \frac{R}{2^i} \right]. Take a maximal disjoint subcollection Z_0' of Z_0, then a maximal subcollection Z_i' of Z_i whose balls are disjoint from each other and from everything already chosen in Z_0',\ldots,Z_{i-1}', and so on. The collection J'=\bigcup_i Z_i' now satisfies the requirements: any ball B(x_j, r_j) with radius in the i-th range must, by maximality, intersect some chosen ball B(x, r) with r>\frac{R}{2^{i+1}}\geq \frac{r_j}{2}, and then B(x_j, r_j)\subset B(x, r+2r_j)\subset B(x, 5r).

Next time I’ll do Vitali’s Covering Theorem. I’m debating whether to prove it or not. Applications of it might be more interesting.

Covering Lemma 1

Internet was a bit weird lately, and I didn’t want to lose a post half-way through, so I decided to wait in writing this. I seem to have a weakness when it comes to figuring out how metric properties and measure properties interplay. It is almost inevitable that you will need to call upon some “covering lemma” to get the job done. These are used extensively in differentiation, but also, less famously, in defining measures that interact nicely with a metric your space already has.

Lebesgue Number: Given a compact metric space (X, d) and an open cover of it, there exists \delta>0 such that any set of diameter less than \delta is contained in one of the members of the cover.

Proof: Really you just do all the standard tricks and then tie them together. So let \{U_i\}_{i=1}^n be a finite subcover of our cover. (We ignore the trivial case where X itself is a member of the cover.) Now define C_i=X\setminus U_i and a function f: X\to \mathbb{R} that basically averages the distances to each C_i. This gives us f(x)=\frac{1}{n}\sum_{i=1}^nd(x,C_i). Now for any x, we know that x\in U_i for some i, and this set is open, so there is an \epsilon>0 small enough so that B_\epsilon (x)\subset U_i, thus d(x, C_i)\geq\epsilon, yielding f(x)\geq \frac{\epsilon}{n}.

Since f is continuous and X compact, f achieves its minimum; call it \delta. Now if E is a set with diameter less than \delta, pick any x_0\in E and note B_\delta (x_0)\supset E. Also \delta\leq f(x_0)\leq d(x_0, C_k), where C_k is chosen to maximize the distance d(x_0, C_i). So the ball B_\delta(x_0) misses C_k entirely, which means E\subset B_\delta(x_0)\subset X\setminus C_k=U_k. Thus any set of diameter less than \delta is contained in a member of the covering.
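Here is a toy numerical version of the proof for X=[0,1] covered by two intervals (the cover is an arbitrary choice for illustration); it computes the averaged-distance function f and its minimum \delta.

```python
import numpy as np

# X = [0, 1] with the two-set cover U_1 = [0, 0.6), U_2 = (0.4, 1]; C_i = X \ U_i.
xs = np.linspace(0, 1, 2001)

def dist_to(x, in_C):
    pts = xs[in_C]
    return np.min(np.abs(pts - x)) if pts.size else np.inf

C1 = xs >= 0.6          # complement of U_1 in X
C2 = xs <= 0.4          # complement of U_2 in X

f = np.array([(dist_to(x, C1) + dist_to(x, C2)) / 2 for x in xs])
delta = f.min()
print(delta)  # about 0.1: any subset of [0,1] with diameter < 0.1 lies in U_1 or U_2
```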

There are some interesting other ways of proving this. This is, I think, considered the “standard” one since Munkres does it this way.

OK. So I guess I’ll do a series of these. I was going to do two in this post, but I don’t feel like doing the other now that I just typed that out. Also, if I do a series I can do the three that I feel are most important instead of just the two easier ones.

Not Compact Unit Ball

Here is a beautiful little theorem. The unit ball in an infinite dimensional Hilbert space is not compact. The proof is quite simple. So the unit ball B=\{v\in \mathcal{H} : \|v\|\leq 1\}. Recall that since this is a Hilbert space, we have an inner product defining this norm \langle v, v \rangle=\|v\|^2.

Since our space is infinite dimensional, we can inductively choose \{v_1, v_2, \ldots \} to be linearly independent. A basic application of Gram-Schmidt makes this set orthonormal; in particular, each one has norm 1 and so is in the unit ball.

Now look at the distance between any two \|v_i - v_j\|^2=\langle v_i - v_j, v_i - v_j \rangle

= \langle v_i, v_i \rangle -2\langle v_j, v_i \rangle +\langle v_j, v_j\rangle

=1-2(0)+1

=2.

Thus any two are \sqrt{2} apart, and hence the sequence (v_i) has no convergent subsequence. Since sequentially compact and compact are the same here, we are done. (Note the inner product is not symmetric, but I knew the middle terms would be zero, so I went ahead and abused that instead of writing two zeros.)
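A finite-dimensional cartoon of this computation, with numpy’s QR factorization standing in for Gram-Schmidt (the dimensions are arbitrary):

```python
import numpy as np

# Take random (almost surely linearly independent) vectors, orthonormalize them,
# and check that the resulting unit vectors are pairwise sqrt(2) apart.
rng = np.random.default_rng(1)
V = rng.normal(size=(50, 20))        # 20 vectors in R^50
Q, _ = np.linalg.qr(V)               # columns of Q are orthonormal

dists = np.linalg.norm(Q[:, :, None] - Q[:, None, :], axis=0)   # pairwise distances
off_diag = dists[~np.eye(20, dtype=bool)]
print(np.allclose(off_diag, np.sqrt(2)))  # True
```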

I know the result to be true in a Banach space as well, but I don’t see a quick fix without the inner product…

A lot more of these things will start popping up with my analysis prelim two weeks from today.

Measure Decomposition Theorems

Well, I’ve been mostly posting comments around on other people’s blogs and not really getting around to my own. I’m giving up on NCG for now. It seems that the stuff I already know I’m reading, and I’m skipping the stuff that will take effort to sort through. This seems pointless, especially with Analysis prelims coming up. That’s why things may take a turn in that direction over the next couple of weeks. I do still have at least one major ethical issue I want to sort out, though.

So. What is the most confusing part of measure theory? To me it is the fact that there are tons of ways to decompose your measure. In fact, I usually can’t remember which one is named what, and when to use which one. This post is an attempt to sort out which one is which, and what to look out for when you want to use them.

Jordan Decomposition: Any real measure \mu on a \sigma-algebra can be expressed in terms of two positive measures, called the positive and negative variations (\mu^+ and \mu^-) by \mu=\mu^+-\mu^-. This allows us to examine the total variation more easily, since |\mu|=\mu^+ + \mu^-. Also, it is quite simple to prove the existence and uniqueness since we can write \mu^+=\frac{1}{2}(|\mu| + \mu) and \mu^-=\frac{1}{2}(|\mu|-\mu).

Jordan decomposition seems to be used when you can prove something for positive measures and need to extend it to all measures. Since J decomp gets you any measure in terms of positive measures, this eases the process. The other main use is when you invoke the uniqueness along with the next decomp theorem.

Hahn Decomposition: This is different from all the rest. It is not a decomposition of the measure, but of the measure space. It says: Let \mu be a real measure on a \sigma-algebra \mathfrak{M} in a set X. Then there exist sets A and B in \mathfrak{M} such that X=A\cup B, A\cap B=\emptyset and such that the positive and negative variations \mu^+, \mu^- satisfy \mu^+(E)=\mu(A\cap E) and \mu^-(E)=-\mu(B\cap E), for any E\in\mathfrak{M}.

Things to note. This is not unique! Also, you get as a quick corollary that since the positive and negative variations are concentrated on disjoint sets, they are mutually singular. The Hahn decomp is usually invoked in conjunction with the J decomp, as in, “Let ____ be the J decomposition and A, B be the respective Hahn decomp.” These two together get you that the J decomp is minimal. In other words, if \mu=\lambda_1 - \lambda_2, where \lambda_1 and \lambda_2 are positive measures, then \lambda_1\geq \mu^+ and \lambda_2\geq\mu^-.
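On a finite set a real measure is just a vector of signed masses, and the Hahn and Jordan decompositions become completely concrete. A small sketch (the particular masses are arbitrary, and the point with mass 0 shows the non-uniqueness of the Hahn decomposition, since it could go on either side):

```python
import numpy as np

# A toy signed measure on the five points {0, ..., 4}
mu = np.array([1.5, -2.0, 0.0, 3.0, -0.5])

A = mu >= 0                      # a Hahn decomposition: mu >= 0 on A, <= 0 on B
B = ~A
mu_plus = np.where(A, mu, 0.0)   # positive variation: mu^+(E) = mu(A & E)
mu_minus = np.where(B, -mu, 0.0) # negative variation: mu^-(E) = -mu(B & E)

print(np.allclose(mu, mu_plus - mu_minus))            # Jordan: mu = mu^+ - mu^-
print(np.allclose(np.abs(mu), mu_plus + mu_minus))    # |mu| = mu^+ + mu^-
```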

Lebesgue Decomposition: Let \mu be a positive \sigma-finite measure and let \lambda be a complex measure on the same \sigma-algebra. Then there is a unique pair of complex measures \lambda_a and \lambda_s such that \lambda=\lambda_a + \lambda_s, \lambda_a \ll \mu, and \lambda_s \perp \mu. Also, if \lambda is positive and finite, then so are the two parts of the decomposition. Caution: the measure \mu MUST be \sigma-finite. This theorem says that given any complex measure and any \sigma-finite measure, you can decompose the complex one into two unique parts that are absolutely continuous with respect to, and mutually singular with, the \sigma-finite measure respectively.

The major use of this is when you want to invoke the Radon-Nikodym theorem to get an integral representation of your measure. The Radon-Nikodym theorem only works if your measure is absolutely continuous with respect to the other. Luckily, with the Lebesgue decomposition you can always apply Radon-Nikodym to at least a part of the measure.

Polar Decomposition: Let \mu be a complex measure on a \sigma-algebra. Then there is a measurable function h such that |h(x)|=1 for all x and such that d\mu=hd|\mu|. Note that the name “polar” is in reference to the polar form of writing a complex number as the product of its absolute value and a number of absolute value 1. I’m not entirely sure I’ve ever used this. I guess the main place that it seems useful is when working with the integral representation of the measure. If you need to manipulate the total variation, then this gives you how to put it into the integral representation.

Those seem to be the big ones. This is quite possibly the most useful math post I’ve made. I didn’t go into too much depth, but hopefully if someone is struggling with the differences between these, or trying to get a vague idea of when to use them, this post will help. I suppose I could have elaborated a little by proving the simple claims and showing counterexamples for the “cautions.” This would have given a feel for using them. Oh well.