The Functor of Points Revisited

Mike Hopkins is giving the Milliman Lectures this week at the University of Washington, and the first talk involved an idea that I’m extremely familiar with, but that I’m always surprised to find most mathematicians are not. I’ve made almost this exact post several times before, but it bears repeating. As I basked in the amazingness of this idea during the talk, I couldn’t help but notice how annoyed some people seemed to be at the level of abstractness and generality this notion forces on you.

Every branch of math has some crowning achievements and insights into how to actually think about something so that it works. The idea I’ll present in this post is a truly remarkable insight into geometry and topology. It is incredibly simple (despite the daunting language), which is what makes it so fascinating. Here is the idea. Suppose you care about some type of space (metric spaces, topological spaces, manifolds, varieties, …).

Let {X} be one of your spaces. In order to figure out what {X} is you could probe it by other spaces. What does this mean? It just means you look at maps {Y\rightarrow X}. If {X} is a topological space, then you can recover the points of {X} by considering all the maps from a singleton (i.e. point) {\{x\} \rightarrow X}. If you want to understand more about the topology, then you probe by some other spaces. Simple.

Even analysts use this idea all the time. A distribution {\phi} (on {\mathbb{R}}) is not an honest function, so you can’t tell whether two distributions are the same by comparing values at points. Instead you probe it by test functions, looking at the numbers {\int \phi f \, dx}. If these probes give you the same thing for all test functions, then the distributions are the same. This is all we are doing with our spaces above, and this is all the Yoneda lemma is saying. It says that if the maps (test functions) to {X} and the maps to {Y} are the same, then {X} and {Y} are the same.
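To put the analyst's version in symbols: two distributions {\phi} and {\psi} on {\mathbb{R}} are equal exactly when {\int \phi f \, dx = \int \psi f \, dx} for every test function {f}. The rest of this post is the same statement for spaces, with “integrate against {f}” replaced by “map in from {Y}.”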

We can fancy up the language now. Considering maps to {X} is a functor {Hom(-,X): Spaces^{op} \rightarrow Set}. Such a functor is called a presheaf on the category of Spaces (recall that for your particular situation this might be the category of smooth manifolds or metric spaces or algebraic varieties or …). Don’t be scared. This is literally the definition of a presheaf, so if you were following up to now, introducing this term requires no new definitions.

The Yoneda lemma is saying something very simple in this fancy language. It says that there is a (fully faithful) embedding of Spaces into Pre(Spaces), the category of presheaves on Spaces. If we now work with this new category of functors, we have simply enlarged what we consider to be a space, and this is of fundamental importance for many reasons. If {X} is one of our old spaces, then we can naturally identify it with the presheaf {Hom(-,X)}. The reason Mike Hopkins is giving for why this is important is very different from the one I’ll give, which just goes to show how incredibly useful this idea is.
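For the record, here is the precise statement behind all of this (writing {h_X} for the presheaf {Hom(-,X)}, notation I'm introducing just for this paragraph). The embedding being fully faithful means {Hom_{Pre(Spaces)}(h_X, h_Y) \cong Hom_{Spaces}(X,Y)}, and the Yoneda lemma itself is the more general statement that {Hom_{Pre(Spaces)}(h_X, F) \cong F(X)} for any presheaf {F}. In particular, a space {X} and its functor of points {h_X} carry exactly the same information.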

In every single branch of math people care about some sort of classification problem. Classify all elliptic curves. What are the vector bundles on my manifold? If I fix a vector bundle, what are the connections on my vector bundle? What are the Borel measures on my metric space? The list goes on forever.

In general, classification is an impossibly large task to grapple with. We know a ton of stuff about smooth manifolds, but how can we leverage that to make the seemingly unrelated problem of classifying vector bundles more manageable? Here our insight comes to the rescue, because there is a way to write down a functor that outputs vector bundles. There is subtlety in writing it down properly (and we should now land in Grpds instead of Set so that we can keep track of which bundles are isomorphic), but once we do this we get a presheaf. In other words, we make a (generalized) space whose points are the objects we are classifying.
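As a rough sketch of the kind of functor I mean (sweeping the subtleties under the rug; the name {Bun_n} is just for this post), one could try the assignment {T \mapsto \{\text{rank-}n \text{ vector bundles on } T \text{ and isomorphisms between them}\}}, viewed as a functor {Bun_n: Spaces^{op} \rightarrow Grpds}, where a map {T' \rightarrow T} acts by pulling bundles back from {T} to {T'}. Making this precise is exactly the subtlety mentioned above.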

In many situations you then go on to prove that this moduli space of vector bundles is actually one of the original types of spaces (or not too far from one) that we know a lot about. Now the seemingly impossible task of understanding the vector bundles on my manifold is reduced to the already-studied problem of understanding the geometry of a manifold itself!

Here is my challenge to any analyst who knows about measures. Warning: this could be totally ridiculous nonsense, because it is based on reading Wikipedia for 5 minutes. Construct a presheaf of real-valued Radon measures on {\mathbb{R}}. Analyze this “space”. If it is done right, you should somehow recover that the space is the dual of the locally convex space {C_c(\mathbb{R})} of compactly supported real-valued functions on {\mathbb{R}}. Congratulations, you’ve just started a new branch of math in which you classify measures on a space by analyzing the topology/geometry of the associated presheaf.


Abelianization of the Fundamental Group

I guess I have no reason to offer an explanation for the lack of posting, but in general this has been one of the best weeks ever and at the same time one of the worst. The worst because I’ve been fairly ill and can’t seem to fully conquer it. It has been the best week for reasons I won’t mention, since I try to keep personal stuff out of this blog as much as possible (but if you know of my other blog, which is purely personal stuff, you can read about it to your heart’s content; I refuse to give any hints at all as to how to find it). Both of these factors have led to a fairly unproductive week.

I may take a more algebraic-topology approach for a while. This is mainly because I’m doing a reading course on Hatcher (with two other students), and before I go present stuff to them and the prof I want to clarify my ideas.

Tomorrow I’m presenting the proof that H_1(X)\cong \pi_1(X)^{ab} for path-connected spaces. This is a pretty wonderful result if you think about it: it tells us exactly how first homology and the fundamental group relate. In fact, the first thing you’d think to do (granted, this might take a little while) is the thing that works.

We can naturally think of paths and singular 1-simplices as the same thing, since they are both just continuous maps to the space out of a closed interval. So after rescaling, a loop f:[0,1]\to X is actually also a 1-cycle since \partial f=f(1)-f(0)=0.
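Spelled out: identify the standard 1-simplex \Delta^1=[v_0,v_1] with [0,1] via t\mapsto (1-t)v_0+tv_1. Then a loop f based at x_0 is literally a singular 1-simplex, and \partial f=f(v_1)-f(v_0)=f(1)-f(0)=x_0-x_0=0, so f is a 1-cycle.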

The overall idea of this proof is then to show that h: \pi_1(X, x_0)\to H_1(X) is a well-defined homomorphism with image all of H_1(X) and kernel the commutator subgroup. Almost all of these facts are fairly straightforward.

First, we’ll need a few ways in which our different modes of thinking about loops versus 1-cycles correlate. Throughout, \simeq denotes path homotopy (the equivalence relation underlying \pi_1(X)) and \sim denotes homologous.

1. If f\equiv c is a constant path, then f\sim 0, i.e. the cycle is homologous to 0.

2. If two paths are path homotopic, f\simeq g, then they are homologous, f\sim g.

3. Concatenation of paths (and hence the operation in the fundamental group) corresponds to addition of cycles (the operation in first homology): f\cdot g\sim f+g.

4. Traversing a path backwards negates the cycle: \overline{f}\sim -f.

So we’ll use these four facts without proof, since they are fairly standard and the proof is long enough as it is.

Recall the definition h([f])=f: the homotopy class of a loop f goes to the homology class of the corresponding 1-cycle f. The second fact gives us that h is well-defined, since any other representative of the equivalence class will be homotopic to the original, and hence the outputs will be homologous.

The third fact gives us that h is a homomorphism of groups.
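Explicitly, if f and g are loops at x_0, then by the third fact h([f][g])=h([f\cdot g])=f\cdot g\sim f+g, and f+g represents h([f])+h([g]) in H_1(X).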

Our first bit of effort comes from showing that h is surjective. Here we will use the path-connected hypothesis (everything else so far is true without it). Let \sum n_i\sigma_i be any 1-cycle. We must construct a loop that maps to it.

Since the n_i are integers, we can assume each is \pm 1 by just repeating the \sigma_i as many times as needed. Any \sigma_i with a -1 in front can then be replaced by \overline{\sigma_i}, since -\sigma_i\sim \overline{\sigma_i} by the fourth property. This converts all the n_i to 1. Thus \sum n_i\sigma_i\sim \sum \sigma_k.

But \partial(\sum \sigma_k)=0, so all the endpoints must cancel. So for any \sigma_k that is not a loop, in order to cancel the point \sigma_k(1) there must be some \sigma_j with \sigma_j(0)=\sigma_k(1), i.e. some \sigma_j that we can concatenate with to form \sigma_k\cdot \sigma_j. Likewise, in order to cancel \sigma_k(0), some other \sigma_j must exist with endpoint \sigma_j(1)=\sigma_k(0).
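To see why the endpoints have to pair up, just compute the boundary in the group of 0-chains: \partial(\sum_k \sigma_k)=\sum_k (\sigma_k(1)-\sigma_k(0))=0, so every point \sigma_k(1) appearing with a plus sign must be cancelled by some \sigma_j(0) appearing with a minus sign.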

So we can concatenate, rescale, and group all of these paths into a collection of loops by the third property. The only remaining thing to do is to combine them into a single loop. But X is path-connected, so pick some basepoint x_0\in X. For each of these possibly disjoint loops floating around, pick a point on it and connect it to x_0 by a path \gamma_i from x_0 to that point on the i-th loop. By the third and fourth properties \gamma_i\cdot \sigma_i\cdot \overline{\gamma_i}\sim \sigma_i. So now all loops start and end at x_0, and we can concatenate them into a single loop \sigma. Thus h([\sigma])=\sum n_i\sigma_i in H_1(X), and h is surjective.

Now comes the hard part. We want \ker h=\pi_1(X, x_0)'. The real work is showing that whenever h([f])=0, the class [f] is trivial in the abelianization, i.e. f lies in the commutator subgroup. The other containment is easy: since H_1(X) is abelian, the universal property of the commutator subgroup gives \pi_1(X)'\subset \ker h.
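Spelled out, the easy containment is a one-line computation: for any loops a and b based at x_0, h([a][b][a]^{-1}[b]^{-1})=h([a])+h([b])-h([a])-h([b])=0 since H_1(X) is abelian, so every commutator lies in \ker h.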

Suppose [f]\in\pi_1(X) with h([f])=0. Then the cycle f is a boundary: there is some 2-chain \sum n_i\sigma_i such that \partial (\sum n_i\sigma_i)=f. As before, we can assume each n_i=\pm 1. The goal now is to associate to \sum n_i\sigma_i a 2-dimensional \Delta-complex K, built by taking a 2-simplex \Delta_i^2 for each \sigma_i and identifying pairs of edges.

So before writing this process down, we should examine what it will be geometrically. It turns out that K will be a compact orientable surface with boundary, since we are just fitting together a finite collection of disjoint 2-simplices along pairs of edges (this is not meant to be obvious). The component containing the boundary is a closed orientable surface with an open disk removed. Since a closed orientable surface of genus g (a connected sum of g tori) can be expressed as a 4g-gon with pairs of edges identified in the pattern a_1b_1a_1^{-1}b_1^{-1}a_2b_2a_2^{-1}b_2^{-1}\cdots, we see that f is homotopic to a product of commutators.
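To make this concrete (assuming, as described above, that the component of K containing the boundary is a genus g surface with one boundary circle): reading the edge identifications off the 4g-gon, the boundary loop f becomes f\simeq a_1b_1a_1^{-1}b_1^{-1}\cdots a_gb_ga_g^{-1}b_g^{-1}=\prod_{i=1}^g [a_i,b_i], a product of commutators, which is exactly what we need for [f] to die in the abelianization.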

Writing this in detail algebraically is much trickier. Given any \sigma_i, we have \partial \sigma_i=\tau_{i0}-\tau_{i1}+\tau_{i2}, where the \tau_{ij} are singular 1-simplices. Thus f=\partial(\sum_i n_i\sigma_i)=\sum_{i,j} (-1)^j n_i\tau_{ij}.

Keep the picture of a triangle in your head. When we fit the triangles together, we identify pairs of edges whose terms appear with opposite signs in this sum and hence cancel. The \tau_{ij} that remain after all this cancelling add up to a copy of f. This forms our \Delta-complex K.

Now form \sigma : K\to X by fitting together the \sigma_i maps. We deform \sigma, relative to the edges that correspond to f, so that each vertex maps to x_0. This gives a homotopy defined on the union of the 0-skeleton of K with the edges corresponding to f, so by the homotopy extension property we get a homotopy on all of K.

Now restrict the deformed \sigma to the simplices \Delta_i^2 to get a new chain \sum n_i\sigma_i (reusing the old names) with boundary f, in which each \tau_{ij} is now a loop at x_0.

Now we check that the class is trivial in the abelianization: [f]=\sum_{i,j} (-1)^j n_i[\tau_{ij}]=\sum_i n_i [\partial \sigma_i], where [\partial \sigma_i] denotes [\tau_{i0}]-[\tau_{i1}]+[\tau_{i2}], all computed in \pi_1(X, x_0)^{ab}. But \sigma_i gives a nullhomotopy of the loop corresponding to \tau_{i0}-\tau_{i1}+\tau_{i2} (its restriction to the boundary of \Delta_i^2 extends over the whole simplex), so each bracketed term vanishes, [f] is trivial in the abelianization, and we are done.

Thus ker h=\pi_1(X, x_0)' and by the First Iso Theorem we have H_1(X)\cong \pi_1(X, x_0)^{ab}.