The Stack of Pitch Class Sets

I know it’s been a while since I’ve talked about either of these topics, but I’ve always been meaning to point something funny out. I thought I might formally work it out and write it up to submit to a music theory journal, but probably no one would accept it anyway. So I’ll sketch the idea now. Back here I talked about stacks as a useful way to generalize what we mean by a “space.” Back here I talked about the math behind the idea of pitch class sets.

I know Mazzola wrote a whole book on using topos theory in music, but I’ve never dug into it very deeply. I fully admit this is probably just a special case of something from that book. But it’s always useful to work out special cases.

Recall that a pitch set (or chord) is just a collection of notes converted to numbers: 0 is C, 1 is C#, 2 is D, and so on. This notation is useful for expressing a collection of pitches when there isn’t a fixed key we’re working in. For example, a C major chord is (047).

A pitch class set is then what you get by declaring certain of these collections to be the same. For one, our choice of 0 is completely arbitrary. We could have set 0 to be A, and we should get the same theory. This amounts to identifying all pitch sets that agree after translation.

We also want to identify sets that are the same after inversion. In the previous post on this topic, I showed that if we label the vertices of a dodecagon with the pitches, inversion amounts to a reflection symmetry. The reflections together with the translations generate the dihedral group {D_{12}} (of order 24), so we are secretly letting {D_{12}} act on the set of all tuples of numbers 0 to 11 in which each number appears at most once and which, without loss of generality, we can assume are in increasing order.

Thus a pitch class set is just an equivalence class of a chord under this group action. It is not the direction I want this post to go in, but given such a class, there is always a unique representative usually called the “prime form” (basically the most “compact” representative starting with 0).
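To make the prime form idea concrete, here’s a quick sketch in Python (my own toy code, not from any of these posts): generate all 24 images of a chord under {D_{12}}, normalize each to start at 0, and take the lexicographically smallest tuple. This shortcut agrees with the textbook prime form for most sets, though Forte’s official algorithm breaks a few ties differently.

```python
def images(chord):
    """All images of a chord under D12: the 12 transpositions
    x -> x + k and the 12 inverted transpositions x -> -x + k."""
    s = set(chord)
    for k in range(12):
        yield frozenset((x + k) % 12 for x in s)
        yield frozenset((-x + k) % 12 for x in s)

def prime_form(chord):
    """Canonical representative of the D12-orbit: normalize every
    image to start at 0 and return the lexicographically smallest."""
    candidates = []
    for img in images(chord):
        for start in img:
            candidates.append(tuple(sorted((x - start) % 12 for x in img)))
    return min(candidates)

# the C major and A minor triads land in the same class, with prime form (037)
assert prime_form((0, 4, 7)) == prime_form((9, 0, 4)) == (0, 3, 7)
```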

Here’s where we get to the part I never really worked out. The set of all “chords” should have some sort of useful topology on it. For example, (0123) should be related to (0124), because they are the same chord except for one note. I don’t think doing something obvious like defining a distance based on the coordinates works. If you try to construct the lattice of open sets by hand based on your intuition, a definition might become more obvious. Call this space of chords {X}.

Now we have a space with a group action on it. One might want to merely form the quotient space {X \rightarrow X/G}. This will be 24 to 1 at most points, but it will also forget which chords were fixed by elements of the group. Part of the “theory” in music theory is to remember that information. This is why I propose making the quotient stack {[X/G]}. It seems like an overly complicated thing to do, but here’s what you gain.

You now have a “space” whose points are the pitch class sets. If a class contains 24 distinct chords, then the point is an “honest” point with no extra information. The fiber of the quotient map contains the 24 chords, and you get from one to another by acting by the elements of {D_{12}} (i.e. the fiber is a torsor under {D_{12}}). Now consider something like the pitch class set [0,2,4,6,8,10]. The fiber of the quotient map contains only {2} elements: (02468T) and (13579E). The stack will tag this point with {D_6} (of order 12), the subgroup of symmetries which send this chord to itself.
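If you want to watch the orbit–stabilizer arithmetic happen, here’s a small sketch (mine; the whole-tone scale is the example from the text, the major triad is mine):

```python
def act(g, chord):
    k, s = g                      # g acts by x -> s*x + k (mod 12), s in {+1, -1}
    return frozenset((s * x + k) % 12 for x in chord)

D12 = [(k, s) for k in range(12) for s in (1, -1)]    # all 24 elements

def orbit_and_stabilizer(chord):
    c = frozenset(chord)
    orbit = {act(g, c) for g in D12}
    stab = [g for g in D12 if act(g, c) == c]
    return orbit, stab

# the whole-tone scale: orbit of size 2, stabilizer of order 12
orbit, stab = orbit_and_stabilizer({0, 2, 4, 6, 8, 10})
assert (len(orbit), len(stab)) == (2, 12)

# a major triad is asymmetric: orbit of size 24 (12 major + 12 minor chords),
# trivial stabilizer; orbit size times stabilizer order is always 24
orbit2, stab2 = orbit_and_stabilizer({0, 4, 7})
assert (len(orbit2), len(stab2)) == (24, 1)
```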

[Figure: the “pitch stack”]

Now that I’ve drawn this, I can see that many of you will be skeptical of the claimed simplicity. Think of it this way. The bottom thing is the space I’m describing. Each point in the space is tagged with the prime form representative together with the subgroup of symmetries that preserve the class. That’s pretty simple. Yet it remembers all of the complicated music theory of the top thing! If the topology were defined well, then studying this space may even lead to insights into how symmetries of classes are related to each other. Let me know if anyone has seen anything like this before.


An Application of p-adic Volume to Minimal Models

Today I’ll sketch a proof, due to Ito, that birational smooth minimal models have all of their Hodge numbers exactly the same. It uses the {p}-adic integration from last time plus one piece of heavy machinery.

First, the piece of heavy machinery: if {X, Y} are finite type schemes over the ring of integers {\mathcal{O}_K} of a number field whose generic fibers are smooth and proper, and if {|X(\mathcal{O}_K/\mathfrak{p})|=|Y(\mathcal{O}_K/\mathfrak{p})|} for all but finitely many prime ideals {\mathfrak{p}}, then the generic fibers {X_\eta} and {Y_\eta} have the same Hodge numbers.

If you’ve seen these types of hypotheses before, then there’s an obvious set of theorems that will probably be used to prove this (Chebotarev + Hodge-Tate decomposition + Weil conjectures). Let’s first restrict our attention to a single prime. Since we will be able to throw out bad primes, suppose we have {X, Y} smooth, proper varieties over {\mathbb{F}_q} of characteristic {p}.

Proposition: If {|X(\mathbb{F}_{q^r})|=|Y(\mathbb{F}_{q^r})|} for all {r}, then {X} and {Y} have the same {\ell}-adic Betti numbers.

This is a basic exercise in using the Weil conjectures. First, {X} and {Y} clearly have the same Zeta functions, because the Zeta function is defined entirely by the number of points over {\mathbb{F}_{q^r}}. But the Zeta function decomposes

\displaystyle Z(X,t)=\frac{P_1(t)\cdots P_{2n-1}(t)}{P_0(t)\cdots P_{2n}(t)}

where {P_i} is the characteristic polynomial of Frobenius acting on {H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)}. The Weil conjectures tell us we can recover the {P_i(t)} if we know the Zeta function. But now

\displaystyle \dim H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)=\deg P_i(t)=\dim H^i(Y_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)

and hence the Betti numbers are the same. Now let’s go back and notice the magic of {\ell}-adic cohomology. Suppose {X} and {Y} are as before, over the ring of integers of a number field. Our assumption that the number of points over finite fields is the same for all but finitely many primes implies that we can pick a prime of good reduction and get that the {\ell}-adic Betti numbers of the reductions are the same: {b_i(X_p)=b_i(Y_p)}.
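Here’s a toy sanity check (my own; the curve {y^2=x^3+x+1} over {\mathbb{F}_7} is just a convenient choice) of the fact that point counts pin down the {P_i(t)}. We count points over {\mathbb{F}_7} by brute force, read off the trace of Frobenius, and then the Weil conjectures predict the count over {\mathbb{F}_{49}}, which we verify directly by modelling {\mathbb{F}_{49}} as {\mathbb{F}_7[i]} with {i^2=-1} (legitimate because {-1} is not a square mod {7}).

```python
p = 7   # toy example: E : y^2 = x^3 + x + 1, smooth over F_7

# count points over F_p by brute force (the +1 is the point at infinity)
N1 = 1 + sum(1 for x in range(p) for y in range(p)
             if (y * y - (x**3 + x + 1)) % p == 0)
a = p + 1 - N1            # trace of Frobenius: P_1(t) = 1 - a*t + p*t^2

# the Weil conjectures then predict the count over F_{p^2}:
# #E(F_{p^2}) = p^2 + 1 - (alpha^2 + beta^2) = p^2 + 1 - (a^2 - 2p)
N2_predicted = p * p + 1 - (a * a - 2 * p)

# verify directly, modelling F_49 as pairs (c0, c1) = c0 + c1*i with i^2 = -1
def mul(u, v):
    (u0, u1), (v0, v1) = u, v
    return ((u0 * v0 - u1 * v1) % p, (u0 * v1 + u1 * v0) % p)

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

F49 = [(c0, c1) for c0 in range(p) for c1 in range(p)]
N2 = 1   # the point at infinity again
for x in F49:
    rhs = add(add(mul(mul(x, x), x), x), (1, 0))   # x^3 + x + 1
    N2 += sum(1 for y in F49 if mul(y, y) == rhs)

assert (N1, N2) == (5, N2_predicted)   # 5 points over F_7, 55 over F_49
```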

One of the main purposes of {\ell}-adic cohomology is that it is “topological.” By smooth, proper base change we get that the {\ell}-adic Betti numbers of the geometric generic fibers are the same

\displaystyle b_i(X_{\overline{\eta}})=b_i(X_p)=b_i(Y_p)=b_i(Y_{\overline{\eta}}).

By the standard characteristic {0} comparison theorem we then get that the singular cohomology is the same when base changing to {\mathbb{C}}, i.e.

\displaystyle \dim H^i(X_\eta\otimes \mathbb{C}, \mathbb{Q})=\dim H^i(Y_\eta \otimes \mathbb{C}, \mathbb{Q}).

Now we use the Chebotarev density theorem. The Galois representations on each cohomology have the same traces of Frobenius for all but finitely many primes by assumption and hence the semisimplifications of these Galois representations are the same everywhere! Lastly, these Galois representations are coming from smooth, proper varieties and hence the representations are Hodge-Tate. You can now read the Hodge numbers off of the Hodge-Tate decomposition of the semisimplification and hence the two generic fibers have the same Hodge numbers.

Alright, in some sense that was the “uninteresting” part, because it just uses a bunch of machinery and is a known fact (there’s also a lot of stuff to fill in to the above sketch to finish the argument). Here’s the application of {p}-adic integration.

Suppose {X} and {Y} are smooth birational minimal models over {\mathbb{C}} (for simplicity we’ll assume they are Calabi-Yau; Ito shows how to get around not necessarily having a non-vanishing top form). I’ll just sketch this part as well, since there are some subtleties with making sure you don’t mess up too much in the process. We can “spread out” our varieties to get the setup from the beginning. Namely, there are proper models over some {\mathcal{O}_K} (of course they aren’t smooth anymore) whose generic fibers base change to our original varieties.

By standard birational geometry arguments, there is some big open locus (the complement has codimension at least {2}) where these are isomorphic, and this descends to our model as well. Now we are almost there. We have an étale isomorphism {U\rightarrow V} over all but finitely many primes. If we choose nowhere vanishing top forms on the models, then the restrictions to the fibers are {p}-adic volume forms.

But our standard trick works again here. The isomorphism {U\rightarrow V} pulls back the volume form on {Y} to a volume form on {X} over all but finitely many primes, and hence they differ by a function with {p}-adic absolute value {1} everywhere. Thus the two models have the same volume over all but finitely many primes, and as was pointed out last time, the two must have the same number of {\mathbb{F}_{q^r}}-valued points over these primes, since we can read this off from knowing the volume.

The machinery says that we can now conclude the two smooth birational minimal models have the same Hodge numbers. I thought that was a pretty cool and unexpected application of this idea of {p}-adic volume. It is the only one I know of. I’d be interested to hear if anyone knows of any others.

Volumes of p-adic Schemes

I came across this idea a long time ago, but I needed the result that uses it in its proof again, so I was curious about figuring out what in the world is going on. It turns out that you can make “{p}-adic measures” to integrate against on algebraic varieties. This is a pretty cool idea that I never would have guessed possible. I mean, maybe complex varieties or something, but over {p}-adic fields?

Let’s start with a pretty standard setup in {p}-adic geometry. Let {K/\mathbb{Q}_p} be a finite extension and {R} the ring of integers of {K}. Let {\mathbb{F}_q=R/\mathfrak{m}} be the residue field. If this scares you, then just take {K=\mathbb{Q}_p} and {R=\mathbb{Z}_p}.

Now let {X\rightarrow Spec(R)} be a smooth scheme of relative dimension {n}. The picture to have in mind here is some smooth {n}-dimensional variety over a finite field {X_0} as the closed fiber and a smooth characteristic {0} version of this variety, {X_\eta}, as the generic fiber. This scheme is just interpolating between the two.

Now suppose we have an {n}-form {\omega\in H^0(X, \Omega_{X/R}^n)}. We want to say what it means to integrate against this form. Let {|\cdot |_p} be the normalized {p}-adic absolute value on {K}. We want to consider the {p}-adic topology on the set of {R}-valued points {X(R)}. This can be a little weird if you haven’t done it before. It is a totally disconnected, compact space.

The idea for the definition is the exact naive way of converting the definition from a manifold to this setting. Consider some point {s\in X(R)}. Locally in the {p}-adic topology we can find a “disk” containing {s}. This means there is some open {U} about {s} together with a {p}-adic analytic isomorphism {U\rightarrow V\subset R^n} to some open.

In the usual way, we now have a choice of local coordinates {x=(x_i)}. This means we can write {\omega|_U=fdx_1\wedge\cdots \wedge dx_n} where {f} is a {p}-adic analytic function on {V}. Now we just define

\displaystyle \int_U \omega= \int_V |f(x)|_p dx_1 \cdots dx_n.

Now maybe it looks like we’ve converted this into another weird {p}-adic integration problem that we don’t know how to do, but the right hand side makes sense because {R^n} is a compact topological group, so we integrate with respect to the normalized Haar measure. Now we’re done, because modulo standard arguments that everything patches together, we can define {\int_X \omega} in terms of these local patches (the reason for being able to patch without bump functions will be clear in a moment, but roughly, on overlaps the forms differ by a function of absolute value {1}).

This allows us to define a “volume form” for smooth {p}-adic schemes. We will call an {n}-form a volume form if it is nowhere vanishing (i.e. it trivializes {\Omega^n}). You might be scared that the volume you get by integrating isn’t well-defined. After all, on a real manifold you can just scale a non-vanishing {n}-form to get another one, but the integral will be scaled by that constant.

We’re in luck here, because if {\omega} and {\omega'} are both volume forms, then there is some non-vanishing function {f} such that {\omega=f\omega'}. Since {f} is never {0}, it is invertible, and hence is a unit. This means {|f(x)|_p=1}, so since we can only get other volume forms by scaling by a function with {p}-adic absolute value {1} everywhere, the volume is a well-defined notion under this definition! (A priori, there could be a bunch of “different” forms, though.)

It turns out to actually be a really useful notion as well. If we want to compute the volume of {X/R}, then there is a natural way to do it with our set-up. Consider the reduction mod {\mathfrak{m}} map {\phi: X(R)\rightarrow X(\mathbb{F}_q)}. The fiber over any point is a {p}-adic open set, and the fibers partition {X(R)} into a disjoint union of {|X(\mathbb{F}_q)|} mutually isomorphic sets (recall the reduction map is surjective here by the relevant variant of Hensel’s lemma). Fix one point {x_0\in X(\mathbb{F}_q)}, and define {U:=\phi^{-1}(x_0)}. Then by the above analysis we get
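Here’s a finite-level sanity check of that fiber structure (my own toy example: the affine curve {y^2=x^3+x+1}, which is smooth over {\mathbb{Z}_7}). The Hensel-style lifting argument predicts that each of the {|X(\mathbb{F}_p)|} fibers of the reduction map contributes exactly {p^n} points when we pass from {\mathbb{Z}/p} to {\mathbb{Z}/p^2}:

```python
p = 7   # toy model: the affine curve y^2 = x^3 + x + 1, smooth over Z_7

def count(m):
    """Number of solutions of y^2 = x^3 + x + 1 in (Z/m)^2."""
    return sum(1 for x in range(m) for y in range(m)
               if (y * y - (x**3 + x + 1)) % m == 0)

n = 1                     # relative dimension of the curve over Z_7
N_p = count(p)            # points of the closed fiber
N_p2 = count(p * p)       # (Z/p^2)-points

# each fiber of the reduction map is a "disk" contributing p^n points,
# so the two counts differ by exactly that factor
assert N_p2 == N_p * p**n
```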

\displaystyle Vol(X)=\int_X \omega=|X(\mathbb{F}_q)|\int_{U}\omega

All we have to do is compute this integral over one open now. By our smoothness hypothesis, we can find a regular system of parameters {x_1, \ldots, x_n\in \mathcal{O}_{X, x_0}}. This is a legitimate choice of coordinates because they define a {p}-adic analytic isomorphism with {\mathfrak{m}^n\subset R^n}.

Now we use the same silly trick as before. Suppose {\omega=fdx_1\wedge \cdots \wedge dx_n}, then since {\omega} is a volume form, {f} can’t vanish and hence {|f(x)|_p=1} on {U}. Thus

\displaystyle \int_{U}\omega=\int_{\mathfrak{m}^n}dx_1\cdots dx_n=\frac{1}{q^n}

This tells us that no matter what {X/R} is, if there is a volume form (which often there isn’t), then the volume

\displaystyle Vol(X)=\frac{|X(\mathbb{F}_q)|}{q^n}

is just the number of {\mathbb{F}_q}-rational points scaled by a factor depending only on the size of the residue field and the dimension of {X}. Next time we’ll talk about the one place I know of where this has been a really useful idea.

Topological Modular Forms

This will be my first and last post on this topic, since it would take us too far from the theme for this year, which is arithmetic geometry. It took a while for me to write this because something felt wrong in the last post and I wanted to correct it before doing this one. Unfortunately, I can only make a guess at what is happening. I’ll explain it when it comes up. Thus, as a warning, everything in the last post and in this post should be taken as approximately true (of course, this is a blog, so this warning should probably always be in place).

Recall briefly that we now have a description of (weight {2}) modular forms just as a global {1}-form on a certain moduli space of elliptic curves with level {N} structure. To get the weight {2k} modular forms we just take tensor powers, so {H^0(X_0(N), \Omega^{\otimes k})\simeq M_{2k}(\Gamma_0(N))}. It is funny to notice that a priori it is completely unclear that the collection of all modular forms of a fixed level should form a graded commutative ring, but with this description it falls right out. We define the graded ring of modular forms of level {N} to be {\displaystyle M(\Gamma_0(N))=\bigoplus_{k=1}^\infty H^0(X_0(N), \Omega^{\otimes k})=\bigoplus_{k=1}^\infty M_{2k}(\Gamma_0(N))}.
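For level {N=1} this graded ring is classically the polynomial ring {\mathbb{C}[E_4, E_6]}, so the dimension of each graded piece is just a count of monomials. Here’s a quick check (my own aside; these are standard facts about level 1, not something from the post):

```python
def dim_Mk(k):
    """Dimension of the space of level-1 modular forms of weight k:
    count monomials E4^a * E6^b with 4a + 6b = k, using the classical
    fact that M(SL2(Z)) = C[E4, E6] is a free polynomial ring."""
    if k < 0 or k % 2:
        return 0
    return sum(1 for a in range(k // 4 + 1) for b in range(k // 6 + 1)
               if 4 * a + 6 * b == k)

# the familiar pattern: weights 0, 4, 6, 8, 10 are 1-dimensional, weight 2
# is empty, and the first 2-dimensional space is weight 12 (E4^3 and E6^2)
assert [dim_Mk(k) for k in (0, 2, 4, 6, 8, 10, 12, 14)] == [1, 0, 1, 1, 1, 1, 2, 1]
```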

Now notice that if we take {N=1} in our moduli problem we are just taking an elliptic curve plus a cyclic subgroup of order {1}, i.e. we are marking the identity. Thus what ought to be the case is that {\overline{\mathcal{M}_{1,1}}=X_0(1)}. This is the part that confuses me. Last time I said that {X_0(N)} was a smooth Riemann surface, but {\overline{\mathcal{M}_{1,1}}} is a DM stack. My guess at what is going on is that since we only defined the moduli functor for {X_0(N)} for elliptic curves over {\mathbb{C}}, we are maybe just taking the {\mathbb{C}}-valued points. Thus maybe {\overline{\mathcal{M}_{1,1}}(\mathbb{C})\simeq X_0(1)}. In any case, there is certainly some relation between the two so it isn’t unreasonable to try to figure out what happens when we replace {X_0(1)} with {\overline{\mathcal{M}_{1,1}}}.

Now we’ll start the crazy generalizations. There is something called a derived DM stack. Since it would take a lot to define, we’ll just say that it is one of these things where “{\infty}-blah” gets thrown around. The important idea here is that we can take {\pi_0} and get back an honest DM stack. The big theorem of Hopkins, Miller, and Lurie is that there exists a derived DM stack {(\mathcal{M}, \mathcal{O})} whose underlying DM stack is {\overline{\mathcal{M}_{1,1}}} such that {\pi_{2k}\mathcal{O}\simeq \omega^k} and {\pi_{2k+1}\mathcal{O}=0}.

Now “tmf” is something called a commutative ring spectrum and it is formed by taking the derived global sections of {\mathcal{O}}. Generalities give us a descent spectral sequence {H^s(\mathcal{M}, \omega^t)\Rightarrow \pi_{2t+s}\mathbf{tmf}}. An open and interesting question is to determine which modular forms give homotopy classes in {\mathbf{tmf}}, since the classical modular forms form the {0}-row of this spectral sequence. I’ll just end by pointing out how mind-boggling this is. Modular forms have had such great success in number theory. Now they are successfully being used to understand homotopy groups of spheres and other extremely topological questions. The reverse has been done as well. Topological methods can transfer information back to modular forms and give number-theoretic theorems such as congruence relations between {p}-adic modular forms. How amazing!

Classical Local Systems

I lied to you a little. I may not get into the arithmetic stuff quite yet. I’m going to talk about some “classical” things in modern language. In the things I’ve been reading lately, these ideas seem to be implicit in everything said. I can’t find this explained thoroughly anywhere. Eventually I want to understand how monodromy relates to bad reduction in the {p}-adic setting. So we’ll start today with the different viewpoints of a local system in the classical sense that are constantly switched between without ever being explained.

You may need to briefly recall the old posts on connections. The goal for the day is to relate the three equivalent notions of a local system, a vector bundle plus flat connection on it, and a representation of the fundamental group. There may be some inaccuracies in this post, because I can’t really find this written anywhere and I don’t fully understand it (that’s why I’m making this post!).

Since I said we’d work in the “classical” setting, let’s just suppose we have a nice smooth variety over the complex numbers, {X}. In this sense, we can actually think about it as a smooth manifold, or complex analytic space. If you want, you can have the picture of a Riemann surface in your head, since the next post will reduce us to that situation.

Suppose we have a vector bundle on {X}, say {E}, together with a connection {\nabla : E\rightarrow E\otimes \Omega^1}. We’ll fix a basepoint {p\in X} that will always secretly be lurking in the background. Let’s try to relate this connection to a representation of the fundamental group. Well, if we look at some old posts, we’ll recall that a choice of connection is exactly the same data as telling you “parallel transport.” So what this means is that if I have some path on {X}, it tells me how a vector in the fiber of the vector bundle moves from the starting point to the ending point.

Remember that we fixed some basepoint {p} already. So if I take some loop based at {p}, say {\sigma}, then a vector {V\in E_p} can be transported around that loop to give me another vector {\sigma(V)\in E_p}. If my vector bundle is rank {n}, then {E_p} is just an {n}-dimensional vector space, and I’ve now told you an action of the loop space based at {p} on this vector space.

[Figure: a vector {V} being transported around a loop {\sigma} on a torus, returning to {p} as {\sigma(V)}]

This doesn’t quite give me a representation of the fundamental group (based at {p}), since we can’t yet pass to the quotient, i.e. the transport of the vector around a loop that is homotopic to {0} might be non-trivial. We are saved if we started with a flat connection. It can be checked that the flatness assumption gives a trivial action around nullhomotopic loops. Thus the parallel transport only depends on homotopy classes of loops, and we get a group homomorphism {\pi_1(X, p)\rightarrow GL(E_p)\simeq GL_n(\mathbb{C})}.
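Here’s a tiny numerical illustration of the loop-to-matrix process (my own sketch, not from the post): take the trivial line bundle on {\mathbb{C}^*} with the flat connection {\nabla = d - \lambda\, dz/z}. Parallel transport around the unit circle multiplies a vector by {e^{2\pi i\lambda}}, which is exactly the representation {\pi_1(\mathbb{C}^*)\simeq\mathbb{Z}\rightarrow GL_1(\mathbb{C})} sending the generator to {e^{2\pi i\lambda}}. With {\lambda = 1/2} the monodromy is {-1}:

```python
import cmath

lam = 0.5   # the flat connection d - lam*dz/z on the trivial line bundle over C*

def transport(v0, steps=20000):
    """Parallel transport v0 around the unit circle z = e^{it}:
    horizontal sections satisfy dv/dt = i*lam*v; integrate with RK4."""
    h = 2 * cmath.pi / steps
    f = lambda v: 1j * lam * v
    v = v0
    for _ in range(steps):
        k1 = f(v)
        k2 = f(v + h / 2 * k1)
        k3 = f(v + h / 2 * k2)
        k4 = f(v + h * k3)
        v = v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

# the monodromy of the generator of pi_1(C*, 1) is exp(2*pi*i*lam) = -1
m = transport(1.0 + 0j)
assert abs(m - (-1)) < 1e-9
```

Going around the loop twice squares the monodromy, which is the group homomorphism property in action.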

Modulo a few details, the above process can essentially be reversed, and hence given a representation you can produce a unique pair {(E,\nabla)}, a vector bundle plus flat connection associated to it. This relates the latter two ideas I started with. The one that gave me the most trouble was how local systems fit into the picture. A local system is just a locally constant sheaf of {n}-dimensional vector spaces. At first it didn’t seem likely that the data of a local system should be equivalent to these other two things, since the sheaf is locally constant. This seems like no data at all to work with rather than an entire vector bundle plus flat connection.

Here is why, algebraically, there is good motivation to believe this. Recall that one can think of a connection as essentially a generalization of a derivative. It is just something that satisfies the Leibniz rule on sections. Recall that we call a section {s} horizontal for the connection if {\nabla (s)=0}. But if the connection is the derivative, this just means that the section should be constant. In this analogy, we see that if we pick a vector bundle plus flat connection, we can form a local system, namely the horizontal sections (which play the role of the locally constant functions). If you want an exercise to see that the analogy is actually a special case, take the vector bundle to be the globally trivial line bundle {\mathcal{O}_X} and the connection to be the honest exterior derivative {d:\mathcal{O}_X\rightarrow \Omega^1}.

The process can be reversed again, and given any locally constant sheaf of vector spaces, you can cook up a vector bundle and flat connection whose horizontal sections are precisely the sections of the sheaf. Thus our three seemingly different notions are actually all equivalent. I should point out that part of my oversight on the local system side was thinking that a locally constant sheaf somehow doesn’t contain much information. Recall that it is still a sheaf, so we can be associating lots of information on large open sets and we still have restriction homomorphisms giving data as well. Next time we’ll talk about some classical theorems in differential equation theory that are most easily proved and stated in this framework.

Gerbes 3: Another Example and Some Caution

This might be my last post on gerbes (explicitly for gerbe’s sake), so as in my last ‘stacks for stack’s sake’ post I’ll try to clarify some things with more examples and then give some cautions. Last time I mentioned the classifying stack {BA}. Let’s first actually construct it better than the quick idea I gave.

Let {B} be a topological space, and {A} a sheaf of abelian groups on {B} (note that I’ll use {A} instead of {\mathcal{A}} to avoid typing the script, but it is a {\mathit{sheaf}} and not just a group, otherwise we’ll just recover the classifying space).

Define a functor {BA: \text{Top}(B)\rightarrow \text{Grpds}}, where {\text{Grpds}} is the category of groupoids, by letting {BA(U)} be the groupoid of {A_U}-torsors over {U}. Such a torsor is a sheaf {T} on {U} with an action {A_U\times T\rightarrow T} such that if {T(V)\neq \emptyset}, then {A_U(V)} acts simply transitively on {T(V)}.

Again, this is just fancy language for something that is probably familiar to you. Since we have a sheaf of groups, just think about one open set at a time. {A_U(V)} is a group; call it {G}. Then {G\times T(V)\rightarrow T(V)} is really just an honest group action, and “acting simply transitively” means that if we pick out some {t\in T(V)}, then we have a way to identify {G} with {T(V)}, namely {G\stackrel{\sim}{\rightarrow} T(V)} (as sets) via the action {g\mapsto g\cdot t}.
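Here’s a toy model of that identification (mine; the numbers are arbitrary): take {G=\mathbb{Z}/6} acting on a rotated copy of the 6th roots of unity. The action is simply transitive, and picking a basepoint {t} identifies {G} with the torsor, even though the torsor has no distinguished identity element:

```python
import cmath

n = 6
G = list(range(n))   # the group Z/6
# a torsor under Z/6: the 6th roots of unity rotated by an arbitrary offset,
# so no element is distinguished as "the identity"
T = [cmath.exp(2j * cmath.pi * (k + 0.25) / n) for k in range(n)]

def act(g, t):
    """g in Z/6 acts by rotation through 2*pi*g/6."""
    return cmath.exp(2j * cmath.pi * g / n) * t

def close(a, b):
    return abs(a - b) < 1e-9

# simply transitive: for every pair (t, t') exactly one g sends t to t'
for t in T:
    for t2 in T:
        assert sum(1 for g in G if close(act(g, t), t2)) == 1

# picking a basepoint t0 identifies G with T via g -> g . t0
t0 = T[0]
assert all(any(close(act(g, t0), t) for t in T) for g in G)
```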

You could also think of this as a “relative” principal bundle. The group that it is a principal bundle of gets to change locally, but if it is a constant sheaf and hence not changing, then we really do just get the classifying space.

I described {BA} as a functor to Grpds and not as a category fibered over {\text{Top}(B)}, which is how stacks were defined, but recall that given such a functor we can convert it to that form by taking our category to have as objects the pairs {\{(s, U)\}} where {s\in BA(U)}, with the maps in the category given by inclusions and restricting to the right thing. If we were doing all the details we’d have to check all of this, then check that it is actually a gerbe and that it is actually an {A}-gerbe, etc., but we’d be stuck here forever, and these are all straightforward enough that they would make a great exercise if you don’t see them right away.

Recall last time that an {A}-gerbe, {G}, is isomorphic to {BA} if and only if it has a global object. Recall that {\text{Vect}^1}, the stack of rank one vector bundles, was a {\mathbb{G}_m}-gerbe, and it has the trivial bundle as a global object, so {B\mathbb{G}_m\simeq \text{Vect}^1}.

Let’s actually prove it now. If {G\simeq BA}, then {A(B) \in BA(B)} and hence {G(B)\neq \emptyset}. For the reverse direction, suppose there is some {s\in G(B)}, then we get a map {G\rightarrow BA} which we’ll denote {t\mapsto \text{Isom}(t,s)}. One can check that this induces the isomorphism. In fact, one can check that whenever you have a map {G_1\rightarrow G_2} in the category of {A}-gerbes, it will be an isomorphism.

This is why I wanted to bring this example up. Here are some of the cautions that jump to my mind. Something might feel fishy to you right now. That’s because I haven’t really told you the proper way to think about these things. When I say “isomorphism,” what does that mean? Well, it really means an equivalence in the 2-category of stacks.

Also, suppose you are an algebraic geometer and you say you have a gerbe on the étale site of {X}. This isn’t precise enough, since funny differences can happen whether or not you’re on the big or small site. I guess because of all that I’ve left out in an attempt to bring the concept out, my main caution is to consult the literature and not any of these blog posts if you want to know if something is true.

Gerbes 2: The Motivation

I’m going to make another definition, but I may as well get to the punchline first or else anyone reading this that doesn’t already know the punchline is going to skip reading it or tune out. If you have an abelian sheaf {\mathcal{A}} on {X}, then there is a notion of {\mathcal{S}} being not only a stack/gerbe over {X}, but an {\mathcal{A}}-gerbe. I’ll define this later.

Here’s the amazing part: Giraud did a lot of work for us and tells us that the isomorphism classes of global objects of the stack {\mathcal{S}}, i.e. the objects lying over all of {X}, are in bijection with elements of {H^1(X, \mathcal{A})}. Take the line bundle example: the stack of line bundles is actually an {\mathcal{O}_X^\times}-gerbe, and hence (iso classes of) line bundles on {X} are in correspondence with {H^1(X, \mathcal{O}_X^\times)}. Wait! We already knew that, since {H^1(X, \mathcal{O}_X^\times)\simeq \text{Pic}(X)}.

We also saw this in our two earlier examples of deformations. We found that infinitesimal extensions by coherent sheaves are classified by {H^1(X, \mathcal{F}\otimes \mathcal{T})} here and here. It turns out this wasn’t a coincidence. Things are classified by {H^1} all over algebraic geometry, and this is the underlying thread connecting them.

But it turns out Giraud didn’t stop there, and we get even more. We actually get an obstruction theory as well. Giraud tells us that the obstruction to constructing a global object lies in {H^2(X, \mathcal{A})}. We may not have gone through this for our deformations with a tedious cocycle argument like the {H^1} exercises, but many books do go through it (see Hartshorne’s recent book on deformation theory). Good thing we didn’t: we just get it from knowing that the deformations form a gerbe.

I’ve tried to search for this, but can’t find it anywhere. This is why this theory is so cool and widespread. Think about that measure theory example from two times ago. If we knew that stack was an {\mathcal{A}}-gerbe for some {\mathcal{A}}, then we could use cohomology to determine whether certain measure theoretic constructions existed. I don’t think anyone has ever done this before. Knowing something is a gerbe is very powerful since it converts existence questions to cohomology computations.

Let’s get right to it now. Fix a sheaf of abelian groups {\mathcal{A}} on {X} (the abelian hypothesis is possibly not necessary). Then an {\mathcal{A}}-gerbe is a gerbe {\mathcal{S}} on {X} such that for any open {U} on {X} we have a functorial isomorphism {\mathcal{A}(U)\stackrel{\sim}{\rightarrow} \text{Aut}(s)} for all {s\in \mathcal{S}(U)}.

Note that since {\mathcal{S}} is a stack, {\text{Aut}(s)} is a sheaf, so by isomorphism we mean an isomorphism as sheaves, and by functorial we mean given another object {t\in \mathcal{S}(U)}, the isomorphism commutes

\displaystyle \begin{matrix} \mathcal{A}(U) & \rightarrow & \text{Aut}(s) \\ Id \downarrow & & \downarrow \\ \mathcal{A}(U) & \rightarrow & \text{Aut}(t) \end{matrix}

In particular, we get that for any two objects {C, D\in \mathcal{S}(U)} the sheaf {Isom(C,D)} is an {\mathcal{A}}-torsor. This gives that if there is some object over {U}, i.e. {\mathcal{S}(U)\neq \emptyset}, then the set of isomorphism classes of objects in {\mathcal{S}(U)} is in natural bijection with {H^1(U, \mathcal{A}_U)}, as was pointed out in the motivation above.
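To see a cohomology computation of this flavor in the simplest possible case (my own toy, not from the post): cover the circle by two arcs {U_0, U_1} whose overlap has two components {A, B}. The Čech complex for the constant sheaf {\mathbb{Z}} is {\mathbb{Z}^2\rightarrow \mathbb{Z}^2}, and {H^1(S^1,\mathbb{Z})\simeq \mathbb{Z}} is detected by the difference of the two overlap values (the winding number):

```python
def d(a0, a1):
    """Cech differential C^0 -> C^1 for a two-arc cover of the circle:
    the overlap U0 & U1 has two components A and B, and a 0-cochain
    (a0, a1) has coboundary (a1 - a0) on each of them."""
    return (a1 - a0, a1 - a0)

def winding(x, y):
    """The class of a 1-cocycle (x, y) in H^1 = Z^2 / im(d) is detected
    by x - y, which is unchanged by adding any coboundary."""
    return x - y

# every coboundary is trivial in H^1, and every integer arises as a class,
# so H^1 of the circle with Z coefficients is Z
for a0 in range(-3, 4):
    for a1 in range(-3, 4):
        assert winding(*d(a0, a1)) == 0
assert {winding(w, 0) for w in range(-3, 4)} == set(range(-3, 4))
```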

One can form the {\mathit{classifying \ stack}} over {X} (the stacky version of a classifying space), {B\mathcal{A}}, by taking {B\mathcal{A}(U)=\mathcal{T}ors(\mathcal{A}(U))}. So above an open set we get the category of {\mathcal{A}(U)}-torsors on {U}. A basic theorem about {\mathcal{A}}-gerbes is that an {\mathcal{A}}-gerbe {\mathcal{S}} is isomorphic to {B\mathcal{A}} if and only if {\mathcal{S}(X)\neq \emptyset}. This says that {\mathcal{S}} is isomorphic to the classifying stack if and only if it has a global object.

Gerbes 1: The Definition

Here’s a nice short definitional post. If you think that defining stacks, and now even more definitions, is just completely absurd abstract solipsism, bear with me for just one more post. In the next post we’ll see what the point of all of this is. It is not just pointless abstraction. Figuring out that something is a gerbe actually gives you an amazingly powerful tool to work with.
A gerbe is just a special type of stack. Let’s go back to thinking topologically, since if I have non-AG readers, this probably feels most comfortable. Consider a stack {\mathcal{S}} over {\text{Top}(X)}. For instance, line bundles on {X}.

So we get some stuff associated to every open set of {X}. Recall, we think of these as lying over these open sets. No part of the definition of stack guarantees that we must have things lying over open sets (i.e. the collection of things over a particular open set could be empty). The first condition for a stack to be a gerbe is that for any open set {U}, there is a covering {\{V_i\subset U\}} such that {\mathcal{S}(V_i)\neq \emptyset}. In other words, we can always shrink our open set to get an object over it.

Let’s think back to our line bundle example. Check: we at least always have the trivial bundle.

The other condition for a stack to be a gerbe is that everything is locally isomorphic in the following sense: whenever we have two objects {\eta, \eta '} over some open set {U}, there is some covering {\{V_i \subset U\}} such that we get {\eta|_{V_i} \stackrel{\sim}{\rightarrow} \eta '|_{V_i}}.

Let’s think back to our line bundle example. Check: by definition we have local trivializations, which are all isomorphic. So {L(X)} is a gerbe. I didn’t do a good job of giving examples of non-stacks, but it might actually be useful to give examples of stacks that are not gerbes. The stack {M_g} that was briefly mentioned last post is not a gerbe (in fact, I haven’t really told you what I mean by {M_g}, and it turns out that if you define the moduli space with respect to the Zariski topology, {M_1} isn’t even a stack).

Again, most importantly for us, the deformation stack (of a smooth scheme, {Z}) from last time is also a gerbe, mostly for the same reason as the bundle example: you have the trivial deformation, and locally every deformation becomes the trivial one.

Stacks 3: Stacks on Sites

Today we’ll end the discussion on stacks for a bit. All we want to do is say what a stack on a general site is. But all of the pieces of this are already in place. We converted our topological space {X} into a site {\text{Top}(X)} as our first step and then only used properties of sites to define everything. It might have been easier to visualize things as actual coverings by open sets and things lying over open sets, but formally we always used the site language.
Let {\mathcal{C}} be a site; that is, a category with a Grothendieck topology. Since it is a category, we know what it means to be fibered in groupoids over it. Let {\mathcal{S}} be a category fibered in groupoids over {\mathcal{C}}. Given an open set {U} (i.e. an object in {\mathcal{C}}) and two objects {\eta} and {\eta '} over {U}, we get a natural contravariant functor to Set, {\text{Isom}_{\eta, \eta'}}. If this functor (i.e. presheaf) is a sheaf, then {\mathcal{S}} is a prestack on {\mathcal{C}}.

A word should be said about “sheaf”. Recall that on a site, a sheaf is just a contravariant functor that also satisfies a particular exactness diagram {\displaystyle F(U)\rightarrow \prod_{i} F(U_i) \stackrel{\rightarrow}{\rightarrow} \prod_{i,j} F(U_i\times_U U_j)}. When it won’t cause confusion, I’ll probably just write an actual restriction or {\eta_{ij}} to mean the pullback since this is what most people will have in their heads anyway.
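To make the exactness diagram concrete, here is a toy verification (an example of my own, not from the post) on a three-point space: for the presheaf of all {\{0,1\}}-valued functions, {F(X)} is exactly the equalizer of the two restriction maps to the double overlaps.

```python
from itertools import product

# Toy check of the sheaf equalizer diagram on a three-point space:
# X = {1,2,3} covered by U1 = {1,2} and U2 = {2,3}, with overlap {2}.
# F(U) = all functions U -> {0,1}; restriction is actual restriction.
def sections(points):
    return [dict(zip(points, vals))
            for vals in product([0, 1], repeat=len(points))]

FX  = sections((1, 2, 3))
FU1 = sections((1, 2))
FU2 = sections((2, 3))

# The equalizer: families (s1, s2) agreeing on the overlap U1 ∩ U2.
equalizer = [(s1, s2) for s1 in FU1 for s2 in FU2 if s1[2] == s2[2]]

# Restriction F(X) -> F(U1) x F(U2) lands in the equalizer...
restrict = [({1: s[1], 2: s[2]}, {2: s[2], 3: s[3]}) for s in FX]
assert all(pair in equalizer for pair in restrict)

# ...and hits it bijectively: every compatible family glues uniquely.
assert len(restrict) == len(set(map(str, restrict))) == len(equalizer)
print(len(equalizer))  # 8 global sections, matching |F(X)| = 2^3
```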

Lastly, a prestack is a stack if every descent datum is effective. Since we have a notion of covering built into our site (the Grothendieck topology specifies the coverings), we can define a descent datum to be a collection of objects over each open set (object) in the covering, along with isomorphisms that satisfy the cocycle condition. The descent datum is effective if there is an object over the open set (object) being covered that satisfies the same compatibility conditions as first defined.
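Concretely, for line bundles a descent datum amounts to transition functions {g_{ij}} on double overlaps, and the cocycle condition reads {g_{ik} = g_{jk}g_{ij}} on triple overlaps. A minimal sketch with made-up constant transition data (the indices are purely formal here, not tied to a specific cover):

```python
import math

def is_cocycle(g, n):
    """Check g_ik = g_jk * g_ij for constant transition data g[(i, j)],
    after filling in inverses g_ji = 1/g_ij and the diagonal g_ii = 1."""
    full = dict(g)
    for (i, j), v in g.items():
        full[(j, i)] = 1 / v
    for i in range(n):
        full[(i, i)] = 1
    return all(math.isclose(full[(i, k)], full[(j, k)] * full[(i, j)])
               for i in range(n) for j in range(n) for k in range(n))

# This datum satisfies the cocycle condition, so it can descend...
print(is_cocycle({(0, 1): -1, (1, 2): -1, (0, 2): 1}, 3))   # True
# ...while this one fails it (g_02 should equal g_12 * g_01 = 1):
print(is_cocycle({(0, 1): -1, (1, 2): -1, (0, 2): -1}, 3))  # False
```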

For most of the time, if we have some scheme, {X}, floating around when we say stack we’ll mean stack on the Zariski site {X_{Zar}} or étale site {X_{et}}.

Now that we have what a stack is, we’ll just throw a bunch of examples out there. If one of them interests you, then you can actually check the details of whether or not it is a stack. The important point here is that they occur all over the place, and not just in algebraic geometry. Recall that one of the points of constructing the notion of stack was to get a “generalized space” in some sense, but since many of these examples are clearly not geometric, we’ll probably want to specify later some more conditions to get it to look more like a geometric space.

A sort of canonical example is to take the site of topological spaces, Top, and consider the category of arrows Cont. So Cont just consists of continuous maps. The functor that sends an arrow to its codomain fibers it in groupoids and one can check that Cont is a stack on Top.

Next, there is a way in which we can consider any sheaf a stack. Given a (separated) presheaf on some site {F:\mathcal{C}^{op}\rightarrow \text{Set}}, we get a category fibered in groupoids, which we’ll just denote {X\rightarrow \mathcal{C}}. Here {X} can in some sense be thought of as the espace étalé of the presheaf as a category. It turns out that the presheaf is a sheaf if and only if the category fibered in groupoids associated to it is a stack. This just amounts to unraveling each of the definitions.

An immediate corollary to the above is that any scheme is a stack via its functor of points and hence stacks really are generalizations of spaces.

The category of quasi-coherent sheaves on a scheme {X} is a stack on {X_{Zar}}.

Most examples of moduli spaces are stacks (for instance {M_g}, the moduli space of curves of genus {g}).

A very important example for us is that the so-called Schlessinger deformation functor is a stack. Suppose we have some fixed scheme {Z} over {A}. Then {\text{Def}_Z(A')} is the set of (cartesian) diagrams that give deformations of {Z} over {Spec(A')}.

To prove my point that stacks come up all over the place, we’ve already talked about how they appear in differential geometry as bundles. A place where they may show up in analysis is to consider the category of (Radon?) measures on {\text{Top}(X)} in the same way as the vector bundle example. It consists of pairs {(U, \mu)} where {\mu} turns {U} as a subspace into a measure space. The morphisms are “isos” after restriction, so {(U, \mu)\stackrel{f}{\rightarrow} (V, \rho)} is a morphism if we have an automorphism {f:V\stackrel{\sim}{\rightarrow} V} such that {f_{\sharp} \rho |_U = \mu}. This category has a natural forgetful functor to {\text{Top}(X)} in the same way that {\text{Vect}^r(X)} did. I was talking to someone who does analysis to see if this really is a stack, and we decided it probably is, but we kept not understanding each other’s language, so we aren’t sure. It would be interesting to see if it really is.

Lastly, since the point of this was to eventually get to groupoids, I won’t talk any more about stacks and all the various ways to think about them and all the extra conditions you can impose to get more rigid spaces. But a few words should be said about some of the major things I’ve left out, and maybe later I’ll come back and talk more about them.

The collection of stacks actually forms a category (or better yet, a 2-category if you know what that is). So we maybe should have specified what the morphisms between them are. There is a beautiful way to think about stacks that involves forming the category of descent data. So the descent data we talked about actually forms a category which some people actually use to define what a stack is.

All the examples listed here, except the deformation example, are proven to be stacks in detail in Vistoli’s article in Fundamental Algebraic Geometry (aka FGA Explained) by Fantechi et al., if you’re curious about seeing the details. The deformation stack is proved to be a stack in the article Beyond Schlessinger: Deformation Stacks by Brian Osserman, available at his website. When it comes up later when talking about gerbes, I might explain it more thoroughly and prove it as well.

Stacks 2: An example

This will hopefully be a short, yet enlightening post in which the concept of a stack starts to make more sense than the abstract nonsense of the last few posts. Recall that we formed a category of line bundles on a manifold {L(X)} and had a natural forgetful functor: {L(X)\rightarrow \text{Top}(X)}.

If one is not writing a blog and wants to be much more careful, one should probably check that {L(X)} is indeed a category and the given functor is actually a functor. Most readers that have made it this far probably aren’t concerned with this part, though.

Is this fibered in groupoids? Well, for checking what sorts of things lie over certain objects {U\in \text{Top}(X)}, the situation has been rigged so that the objects over {U} are precisely the line bundles on {U} as a manifold. The first type of “square” we have to be able to complete is as follows {\begin{matrix} & & (V, L_V) \\ \\ U & \hookrightarrow & V \end{matrix}}. All we need to do is find some {(U, L_U) \rightarrow (V, L_V)}. By definition of our category this would consist of an iso {L_V|_U \rightarrow L_U}. These are line bundles, so we can always restrict to get another one: just take {(U, L_V|_U)} lying over {U} to complete the square.

What about the second diagram of being fibered in groupoids? The base is just {U\hookrightarrow V \hookrightarrow W}. Now suppose we have line bundles {L_U, L_V,} and {L_W} over these, and isomorphisms {L_W|_V\rightarrow L_V} and {L_W|_U\rightarrow L_U}. This certainly tells us that there is an iso {L_V|_U\rightarrow L_U}, and uniqueness is just from the fact that it has to be the one that makes the composition what we said it had to be. You could think of this as coming from the fact that {L_V|_U\rightarrow L_U} is unique up to automorphism of {L_U}, and we know which automorphism it is from the other condition.

Now we check that Isom forms a sheaf. Let {U} be some open set. Let {L} and {S} be two line bundles over {U} (in this case, this literally just means line bundles on {U} as a topological space). Now we want to check that the presheaf (of sets) {\mathcal{F}(V)=\{L_V\stackrel{\sim}{\rightarrow} S_V\}} is actually a sheaf. This is a presheaf just because isomorphisms restrict. It is a sheaf because all the information is local. If you have isomorphisms defined on open subsets that agree on overlaps, then you can glue them to make an isomorphism on the union. These are just two basic properties of line bundles that most people have already seen. So Isom is a sheaf.
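In coordinates the gluing step is almost tautological: in local trivializations an isomorphism of line bundles is a nowhere-zero function, and functions agreeing on the overlap patch together pointwise. A minimal sketch, with intervals and formulas made up purely for illustration:

```python
# In local trivializations, an isomorphism of line bundles over an open
# set is a nowhere-zero function.  Two isomorphisms given over the
# overlapping intervals U = (-1, 0.5) and V = (0, 1), agreeing on
# U ∩ V = (0, 0.5), glue to a single isomorphism over U ∪ V.
phi_U = lambda x: 1 + x * x    # nowhere zero on U
phi_V = lambda x: 1 + x * x    # nowhere zero on V; equals phi_U on U ∩ V

def glued(x):
    # Well-defined: on the overlap both branches give the same value.
    if -1 < x < 0.5:
        return phi_U(x)
    if 0 < x < 1:
        return phi_V(x)
    raise ValueError("x not in U or V")

# Agrees with both local isomorphisms on the overlap, and never zero.
print(all(glued(x) == phi_U(x) == phi_V(x) != 0
          for x in [0.1, 0.2, 0.3, 0.4]))  # True
```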

Lastly we need to check the stack condition. Maybe I should remark on the terminology here. A collection of objects and isos over a covering of an open set that satisfies the cocycle condition is called a descent datum. If the objects glue in the way of the stack condition, then that descent datum is said to be effective, so the stack condition is sometimes stated that every descent datum is effective.

Given a descent datum, the fact that you can glue to get an object over the whole open set is just a standard exercise or proven proposition in basically any text on manifolds. In fact all of the above things are true for any rank {r} vector bundle. So we actually get the stack of rank {r} vector bundles on {X} for any {r}. Since I’m not sure we’ll return to this example, we’ll just temporarily notate it {\text{Vect}^r(X)}, and hence {\text{Vect}^1(X)=L(X)}.
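To see that effectivity is doing real work, note that the glued object need not be trivial. A toy example of my own: the Möbius descent datum on the circle, two arcs with transition function {+1} on one overlap component and {-1} on the other, glues to a bundle with no nonvanishing section, since a nonvanishing continuous section would have a constant sign on each connected arc:

```python
from itertools import product

# Mobius descent datum: two arcs U, V on the circle, overlap components
# A and B, transition function +1 on A and -1 on B.  A nonvanishing
# continuous section would give constant signs s_U, s_V on the arcs,
# with s_V = +s_U forced on A and s_V = -s_U forced on B.
solutions = [(sU, sV) for sU, sV in product([1, -1], repeat=2)
             if sV == sU and sV == -sU]

print(solutions)  # []: no consistent signs exist, so the glued bundle
                  # (the Mobius bundle) has no nonvanishing section
```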

If you’ve been following along, it should be pretty clear how to translate all of this over to a stack on the Zariski site rather than on {\text{Top}(X)}, but we’ll make that more explicit next time and get some more examples.