A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Naturality of Flows

This is something I always forget exists and has a name, so I end up reproving it. Since this sequence of posts is a hodge-podge of things to help me take a differential geometry test, hopefully this will lodge the result in my brain and save me time if it comes up.

I’m not sure whether to call it a lemma or not, but the setup is that you have a smooth map {F:M\rightarrow N}, a vector field {X} on {M}, and a vector field {Y} on {N} such that {X} and {Y} are {F}-related. Define {M_t} and {N_t} to be the sets of points whose flows are defined up to time {t} (i.e. the domains of the time-{t} flows), and let {\theta} and {\eta} be the flows of {X} and {Y} respectively. Then the lemma says that for all {t} we have {F(M_t)\subset N_t} and {\eta_t\circ F=F\circ \theta_t} on {M_t}.

This is a “naturality” condition because all it really says is that the following diagram commutes:

{\begin{matrix} M_t & \stackrel{F}{\longrightarrow} & N_t \\ \theta_t \downarrow & & \downarrow \eta_t \\ M_{-t} & \stackrel{F}{\longrightarrow} & N_{-t} \end{matrix}}

Proof: Let {p\in M}. Then {F\circ \theta^p} is a curve in {N} (defined on the domain of {\theta^p}) that satisfies \displaystyle {\frac{d}{dt}\Big|_{t=t_0}(F\circ \theta^p)(t)=DF_{\theta^p(t_0)}\left(\frac{d}{dt}\theta^p (t)\Big|_{t=t_0}\right)=DF_{\theta^p(t_0)}(X_{\theta^p(t_0)})=Y_{F(\theta^p(t_0))}}. Since {F\circ \theta^p(0)=F(p)} and integral curves are unique, we get that {F\circ\theta^p(t)=\eta^{F(p)}(t)}, at least on the domain of {\theta^p}.

Thus if {p\in M_t} then {F(p)\in N_t}, or equivalently {F(M_t)\subset N_t}. But we just showed that {F(\theta^p(t))=\eta^{F(p)}(t)} where defined, which is just another way of writing {\eta_t\circ F=F\circ \theta_t} on {M_t}.

We get a nice corollary out of this. If our map {F:M\rightarrow N} is actually a diffeomorphism, then take {Y=F_*X}, the pushforward, and we get that the flow of the pushforward is {\eta_t=F\circ \theta_t\circ F^{-1}}, and the flow domain is actually equal: {N_t=F(M_t)}.
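If you like sanity-checking these things numerically, here is a minimal sketch (a toy example of my own, not from the post): take {M=\mathbb{R}}, {F(x)=e^x} mapping diffeomorphically onto {N=(0,\infty)}, {X=d/dx}, and {Y=F_*X}, which works out to {Y_y=y\, d/dy}. The corollary predicts {\eta_t(F(x))=F(\theta_t(x))}.

```
# Numerical check of eta_t o F = F o theta_t for a toy example:
# M = R, N = (0, inf), F(x) = e^x, X = d/dx, Y = F_*X = y d/dy.
import numpy as np
from scipy.integrate import solve_ivp

F = np.exp                          # the diffeomorphism onto its image
X = lambda t, x: np.ones_like(x)    # X = d/dx, so theta_t(x) = x + t
Y = lambda t, y: y                  # pushforward F_*X, so eta_t(y) = y e^t

x0, t = 0.3, 1.7

# flow x0 by X for time t, then map over by F ...
theta_t_x0 = solve_ivp(X, (0, t), [x0], rtol=1e-10).y[0, -1]
lhs = F(theta_t_x0)

# ... versus mapping over by F first and then flowing by Y = F_*X
rhs = solve_ivp(Y, (0, t), [F(x0)], rtol=1e-10).y[0, -1]

print(lhs, rhs)    # both should be close to e^(x0 + t)
```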

In algebraic geometry we care a lot about families of things. In the differentiable world, the nicest case of this would be a smooth submersion {F: M\rightarrow N}, where {M} is compact and both are connected. Since every value is regular, {F^{-1}(n_0)} is a smooth embedded submanifold. If {N} were, say, {\mathbb{R}} (of course, {M} couldn’t be compact in this case), then we would have a nice 1-dimensional family of manifolds parametrized in a nice way.

It turns out to be quite easy to prove that in the above circumstance all fibers are diffeomorphic. In AG we often call such a thing an “isotrivial” family, although I’m not sure that is the best analogy. The proof basically comes down to the naturality of flows. Given any vector field {Y} on {N}, we can lift it to a vector field {X} on {M} that is {F}-related to {Y}. I won’t do all the details, but it can be done locally in submersion coordinates {(x^1, \ldots, x^n)\mapsto (x^1, \ldots, x^{n-k})} and then the local lifts patch together with a partition of unity.
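For what it’s worth, here is roughly what that lift looks like (just a sketch of the standard construction, in the submersion coordinates above where {F} is projection onto the first {n-k} coordinates): if {Y=\sum_{i\le n-k} a^i\frac{\partial}{\partial x^i}} locally, take {X_\alpha=\sum_{i\le n-k} (a^i\circ F)\frac{\partial}{\partial x^i}} on each chart {U_\alpha}; each {X_\alpha} is {F}-related to {Y}, and if {\{\psi_\alpha\}} is a partition of unity subordinate to the cover, then

\displaystyle {DF_p\Big(\sum_\alpha \psi_\alpha(p)\,(X_\alpha)_p\Big)=\sum_\alpha \psi_\alpha(p)\, Y_{F(p)}=Y_{F(p)},}

so the patched field {X=\sum_\alpha \psi_\alpha X_\alpha} is still {F}-related to {Y}.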

Let {M_x} be the notation for {F^{-1}(x)}. Fix an {x\in N}. Since {M} is compact, the lifted field {X} is complete, and by the above naturality lemma {\theta_t\Big|_{M_x} : M_x\rightarrow M_{\eta_t(x)}} is well-defined and hence a diffeomorphism, since it has smooth inverse {\theta_{-t}}. Now let {y\in N}. As long as there is a vector field on {N} (or a finite chain of them) whose flows take {x} to {y}, we’ve shown that {M_x\simeq M_y}, so since {x} and {y} were arbitrary, all fibers are diffeomorphic. And such vector fields exist, since {N} is connected.



Handlebodies II

Let’s think back to our example to model our \lambda-handle (where \lambda is not a max or min). Well, it was a “saddle point,” so it consisted of both a downward arc and an upward arc. If you got close enough, it would probably look like D^1\times D^1.

Well, generally this will fit with our scheme. An n-handle looked like D^n … or better yet D^n\times 0, and a 0-handle looked like 0\times D^n, so maybe it is the case that a \lambda-handle looks like D^\lambda\times D^{n-\lambda}. Let’s call D^\lambda\times 0 the core of the handle, and 0\times D^{n-\lambda} the co-core.

By doing the same trick of writing out what our function looks like near a critical point of index \lambda in some small enough neighborhood using the Morse lemma, we could actually prove this, but we’re more interested now in figuring out what happens to M_t as t crosses this critical value.

By that I mean, it is time to figure out what exactly it is to “attach a \lambda-handle” to the manifold.

Suppose, as in the last post, that c_i is a critical value whose critical point p_i has index \lambda. Then I propose that M_{c_i+\varepsilon} is diffeomorphic to M_{c_i-\varepsilon}\cup D^\lambda\times D^{m-\lambda} (sorry again: recall my manifold is actually m-dimensional with n critical values).

I wish I had a good way of making pictures to get some of the intuition behind this across. I’ll try in words. A 1-handle for a 3-manifold will be D^1\times D^2, i.e. a solid cylinder. So we can think of this as literally a handle: we bend the cylinder and attach its two ends to the existing manifold. This illustration is quite useful in bringing up a concern we should have. Attaching in this manner is going to create “corners,” and we want a smooth manifold, so we need to make sure to smooth them out. But we won’t worry about that now, and we’ll just call the smoothed-out M_{c_i-\varepsilon}\cup D^\lambda\times D^{m-\lambda}, say, M'.

Let’s use our gradient-like vector field again. Choose \varepsilon small enough so that we are in a coordinate chart centered at p_i in which f=c_i-x_1^2-\cdots - x_\lambda^2 + x_{\lambda +1}^2+\cdots + x_m^2 is in standard Morse lemma form.

Let’s see what happens on the core D^\lambda\times 0. At the center it takes the critical value c_i, and it decreases everywhere from there (as we move away from 0, only the first \lambda coordinates change). This decrease goes all the way to the boundary, where the value is c_i-\varepsilon. Thus the core is an upside-down bowl (of dimension \lambda). Likewise, on the co-core the function starts at the critical value and increases (a right-side-up bowl) out to the boundary of an (m-\lambda)-disk, where the value is c_i+\delta (with 0<\delta<\varepsilon).
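Concretely, in the Morse coordinates above (and with the obvious choice of handle region, which I'm not being careful about), write x=(x_1,\ldots,x_\lambda) and y=(x_{\lambda+1},\ldots,x_m). Then

\displaystyle {f(x,0)=c_i-|x|^2 \quad \text{on the core}, \qquad f(0,y)=c_i+|y|^2 \quad \text{on the co-core},}

so f runs from c_i down to c_i-\varepsilon along the core and up from c_i to c_i+\delta along the co-core.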

Let's carefully figure out the attaching procedure now. If we think of our 3-manifold for intuition, we want to attach D^\lambda\times D^{m-\lambda} to M_{c_i-\varepsilon} by pasting \partial D^\lambda\times D^{m-\lambda} along \partial M_{c_i-\varepsilon}.

So I haven't talked about attaching procedures on this blog, but basically we want a map \phi: \partial D^\lambda\times D^{m-\lambda}\to \partial M_{c_i-\varepsilon}, and then we form the quotient space of the disjoint union under the relation identifying p\in \partial D^\lambda\times D^{m-\lambda} with \phi (p). Sometimes this is called an adjunction space.

So really \phi is a smooth embedding of a thickened sphere S^{\lambda - 1}, since \partial D^\lambda=S^{\lambda-1}, and the thickening has dimension m-\lambda. Think about the "handle" in the 3-dimensional 1-handle case: we gave the two endpoints of the line segment (two points = S^0) a 2-dimensional thickening by a disk.

Now it is the same old trick to get the diffeomorphism. The gradient-like vector field, X, flows from \partial M' to \partial M_{c_i+\varepsilon}, so just multiply X by a smooth function that makes M' match up with M_{c_i+\varepsilon} after flowing for some time. This is our diffeomorphism and we are done.



Fundamental Theorem on Lie Algebra Actions

All we’re going to do now is try to take what we did in the last couple of posts and generalize. Instead of working on Lie groups, where we have nice left invariance and all our flows are complete, we’re going to work with arbitrary vector fields on a manifold, where the flows are not necessarily complete.

First off, I’m going to want to think of right actions instead of left ones now. This is because in the last post I showed that flowing is the same thing as right multiplication by exp. From now on, I’m assuming we have a Lie group acting smoothly on a manifold on the right: \theta(p,g)=p\cdot g. We want a global flow action of \mathbb{R} on our manifold M, so let X\in Lie(G). Define the action \mathbb{R}\times M \to M by t\cdot p=p\cdot exp(tX) (note: the two dots denote two different actions; the one on the right comes from the Lie group).

We have an infinitesimal generator for this flow, say \widehat{X}\in\frak{X}(M), i.e. \widehat{X}_p=\frac{d}{dt}\big|_{t=0}p\cdot exp(tX). Thus we have a map \widehat{\theta}:Lie(G)\to\frak{X}(M) given by \widehat{\theta}(X)=\widehat{X}.

Let’s break down what this map really is. For any p\in M, examine \theta^p:G\to M given by \theta^p(g)=p\cdot g. This is smooth, since we can identify G\cong \{p\}\times G and then it is just inclusion followed by the smooth action. This is the orbit map of the action: you get everything in the orbit of p under the action of G. Thus \widehat{X}_p=d(\theta^p)_e(X_e).

Let’s go one step further and show that X and \widehat{X} are \theta^p-related (note now that X, p, and the action are completely arbitrary, so this is really a very general statement).

By the group law p\cdot gh=(p\cdot g)\cdot h, we actually have \theta^p\circ L_g(h)=\theta^{p\cdot g}(h). Now we just check:

\widehat{X}_{p\cdot g}=d(\theta^{p\cdot g})_e(X_e)
= d(\theta^p)_g\circ d(L_g)_e(X_e)
= d(\theta^p)_g(X_g).

This shows that X and \widehat{X} are \theta^p-related.

Now we can easily get that \widehat{\theta}: Lie(G)\to \frak{X}(M) is a Lie algebra homomorphism. Using linearity of \widehat{\theta}, the fact that \theta^p-related fields have \theta^p-related brackets, and the previous statement, we get that [\widehat{X}, \widehat{Y}]_p=d(\theta^p)_e([X, Y]_e)=\widehat{[X, Y]}_p. Thus [\widehat{\theta}(X), \widehat{\theta}(Y)]=\widehat{\theta}([X, Y]).
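Here is a quick numerical sanity check of the formula \widehat{X}_p=d(\theta^p)_e(X_e), using a toy example of my own (not from the post): G=SO(2) acting on M=\mathbb{R}^2 on the right by p\cdot g=pg with p a row vector, so \widehat{X}_p should just be pX.

```
# Finite-difference check that the infinitesimal generator of the right
# action p . g = p g of SO(2) on R^2 is hat{X}_p = p X.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # the standard generator of so(2)
p = np.array([2.0, 0.5])           # a point of M = R^2, as a row vector

h = 1e-6
finite_diff = (p @ expm(h * X) - p @ expm(-h * X)) / (2 * h)
print(finite_diff)                 # approximately p @ X
print(p @ X)                       # the claimed value of hat{X}_p
```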

Now we are ready to state the “Fundamental Theorem on Lie Algebra Actions.” A quick definition: we say a \frak{g}-action \widehat{\theta} is complete if \widehat{\theta}(X) is a complete vector field for every X.

The FT on LAA says that if G is simply connected, then given any complete Lie(G)-action \widehat{\theta}:Lie(G)\to\frak{X}(M), there is a unique smooth right G-action on M whose infinitesimal generator is \widehat{\theta}.

I won’t prove this, but it was nice to state and a good ending place for the day.



More Exponential Properties

I’ll just do a quick finish up on proving basic properties about the exponential map for today, since I’ll be moving on to a different topic after this.

The exponential map actually restricts to a diffeomorphism from some neighborhood of 0 in \frak{g} to some neighborhood of e\in G. Well, if you’ve mastered your basic theorems, then this is no surprise. Last time we showed dexp_0(X)=X. The differential is the identity map and hence non-singular, so by the Inverse Function Theorem there exist neighborhoods of 0 and e on which exp has a smooth inverse, and hence exp restricted to these is a diffeomorphism.

Probably the most interesting and non-obvious of these facts is the following one. Given another Lie group H (with Lie algebra \frak{h}) and a Lie group homomorphism F:G\to H, we get a commutative diagram (which I’ve yet to figure out how to make on WordPress). Anyway, it just says that exp(F_*X)=F(exp(X)). So pushing forward a vector field and then exponentiating it is the same thing as exponentiating first and then applying the group homomorphism.

Let’s do it by showing that exp(tF_*X)=F(exp(tX)) for all t\in\mathbb{R}. Recall that the LHS is just the one-parameter subgroup generated by F_*X. Thus by uniqueness, we just need to show that \gamma(t)=F(exp(tX)) is a homomorphism satisfying \gamma'(0)=(F_*X)_e. It is a composition of homomorphisms, so it is itself one. Now we compute the derivative: \gamma'(0)=\frac{d}{dt}\big|_{t=0} F(exp(tX))
=dF_e\left(\frac{d}{dt}\big|_{t=0}exp(tX)\right)
=dF_e(X_e)
=(F_*X)_e.
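One concrete instance of this that is easy to test numerically (my example, not one from the post): take F=\det: GL_n(\mathbb{R})\to\mathbb{R}^*, a Lie group homomorphism whose induced Lie algebra map F_* is the trace, so the property reads \det(exp(A))=e^{\mathrm{tr}(A)}.

```
# Numerical check of exp(F_* A) = F(exp(A)) for F = det, F_* = trace.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))        # an arbitrary element of gl(4, R)

print(np.linalg.det(expm(A)))      # F(exp(A))
print(np.exp(np.trace(A)))         # exp(F_* A); the two should agree
```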

The last property for today is used quite a bit as well. It says that flowing by a left-invariant vector field X is the same thing as right multiplication by exp(tX). In other words, (\theta_X)_t=R_{exp(tX)}.

First note that for any g\in G, the map \sigma(t)=L_g(exp(tX)) satisfies \sigma '(0)=d(L_g)_e (\frac{d}{dt}\big|_{t=0} exp(tX))=d(L_g)_e (X_e)=X_g by left invariance. Also, left multiplication takes integral curves of X to integral curves of X, so this is actually the integral curve of X starting at g. Thus L_g(exp(tX))=\theta^g_X(t).

Now we check what we wanted to show. R_{exp(tX)}(g)=g\cdot exp(tX)=L_g(exp(tX))=\theta^g_X(t)=(\theta_X)_t(g). And we’re done!
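For matrix groups this is also easy to see numerically. A minimal sketch (my own example, taking G to be 2\times 2 invertible matrices): the left-invariant field is X_g=gX, so its integral curve through g solves \gamma'=\gamma X, and the claim is that \gamma(t)=g\,exp(tX).

```
# Check that the flow of the left-invariant field X_g = g X is
# right multiplication by exp(tX): integrate gamma' = gamma X from g.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

X = np.array([[0.1, -1.0],
              [0.8,  0.3]])
g = np.array([[1.0, 2.0],
              [0.0, 1.0]])
t = 1.5

def left_invariant_field(_, vec):
    gamma = vec.reshape(2, 2)
    return (gamma @ X).ravel()      # gamma' = gamma X

sol = solve_ivp(left_invariant_field, (0, t), g.ravel(), rtol=1e-10)
print(sol.y[:, -1].reshape(2, 2))   # the flow (theta_X)_t(g)
print(g @ expm(t * X))              # right multiplication R_{exp(tX)}(g)
```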

Next I’ll talk about infinitesimal generators of Lie group actions, so that I can pull in some stuff I did from the Frobenius theorem.

Update: There is some sort of weird bug where when I hit preview it tells me to save the draft. Then when I hit save draft, it posts. So, sorry if you’ve been getting unedited versions of my posts.



The Exponential Map

Notice from last time that the matrix exponential maps from the Lie algebra to the Lie group. It took an arbitrary matrix (i.e. a tangent vector at the identity) and mapped it to a non-singular matrix (i.e. a matrix in GL_n(\mathbb{R})). Now we want a map exp: \frak{g}\to G that captures the same basic idea. Let’s unravel what that idea was.

It was sort of hidden, but we said that the one-parameter subgroup of GL_n(\mathbb{R}) generated by A is F(t)=e^{tA}. So it took a line through the origin, t\mapsto tA, and sent it to the one-parameter subgroup F(t).

Let’s sort of work backwards. We have a one-parameter subgroup generated by X, say F(t) from the last post (the integral curve starting at the identity). From our matrix example, we also saw that if we flowed along this for 1 time unit, F(1)=e^X. Thus we’ll define exp(X)=F(1), where F is the one-parameter subgroup generated by X. Thus exp will be mapping from the Lie algebra to the Lie group.

Now we need to check the line-through-the-origin property, i.e. is s\mapsto exp(sX) the one-parameter subgroup generated by X, so that exp(sX)=F(s)? Well, it is really a simple matter of rescaling. Let \overline{F}(t)=F(st). Then \overline{F} is the integral curve of sX starting at e (see the computation below), so exp(sX)=\overline{F}(1)=F(s).
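Spelled out, the rescaling is just the chain rule plus the fact that F is the integral curve of X:

\displaystyle {\overline{F}'(t)=s\,F'(st)=s\,X_{F(st)}=(sX)_{\overline{F}(t)}, \qquad \overline{F}(0)=e.}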

Now there are lots of properties we should check. I’ll go through a few this time and the rest tomorrow before moving on. First off, we hope that exp is a smooth map. In other words, in terms of the flow, we need \theta_X^e(1) to depend smoothly on X.

Define a vector field on G\times\frak{g} by Y_{(g, X)}=(X_g, 0). Note we made the natural identification T_{(g, X)}(G\times \frak{g})\cong T_gG\oplus T_X\frak{g}. Let X_1, \ldots, X_k be a basis for \frak{g} and (x^i) the corresponding coordinates on \frak{g}, so that X=x^iX_i. Also, let (w^i) be smooth coordinates on G.

Let f\in C^{\infty}(G\times \frak{g}). Then in coordinates Yf(w^i, x^i)=\sum_j x^jX_jf(w^i, x^i). Each X_j differentiates f only in the w-directions and does so with smooth coefficients, so the expression depends smoothly on (w^i, x^i). Hence Y is a smooth vector field.

Now the flow of Y, say \Theta, is \Theta_t(g, X)=(\theta_X(t, g), X), and the fact that \Theta is the flow of a smooth vector field shows that it is smooth. Let \pi_G:G\times \frak{g}\to G be the projection. Then exp(X)=\pi_G(\Theta_1(e, X)), which is a composition of smooth maps and so is itself smooth.

That was sort of exhausting, so I’ll just do one other quick property before quitting: exp((s+t)X)=exp(sX)exp(tX). This is just because t\mapsto exp(tX) is a one-parameter subgroup and hence a homomorphism. The group structure on \mathbb{R} is additive and the Lie group operation is multiplicative: exp((s+t)X)=F(s+t)=F(s)F(t)=exp(sX)exp(tX).
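For matrix groups this is trivial to check with the matrix exponential (a toy check of my own; note that exp(A+B)=exp(A)exp(B) fails in general when A and B do not commute, but here sX and tX commute):

```
# Check exp((s+t)X) = exp(sX) exp(tX) for a matrix Lie algebra element.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0,  2.0, -1.0],
              [0.5,  0.0,  0.3],
              [1.0, -0.2,  0.0]])
s, t = 0.7, 1.3

print(np.max(np.abs(expm((s + t) * X) - expm(s * X) @ expm(t * X))))
# should be ~1e-15
```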

Alright, one more quick one, since that one shouldn’t count. This one is a big one, in that it corresponds to a very characteristic property of the complex-valued exponential. If we identify T_0\frak{g} and T_eG with \frak{g}, then d\exp_0:T_0\frak{g}\to T_eG is the identity map!

Let \gamma(t)=tX. Then \gamma'(0)=X. Thus dexp_0(X)=dexp_0(\gamma'(0))=(exp\circ\gamma)'(0)=\frac{d}{dt}\big|_{t=0}exp(tX)=X (the last equality comes from the fact that t\mapsto exp(tX) is the integral curve of X through e).
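And a quick finite-difference illustration of this last fact for a matrix group (again a toy example of my own): (exp(hX)-I)/h should approximate X for small h.

```
# Finite-difference check that d(exp)_0 is the identity.
import numpy as np
from scipy.linalg import expm

X = np.array([[ 0.0, 1.0],
              [-2.0, 0.5]])
h = 1e-6
print((expm(h * X) - np.eye(2)) / h)   # approximately X
print(X)
```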

I guess this ran a bit long. Oh well, time is running out.
