Gauss’ Law

Since my blog claims to talk about physics sometimes and I just finished teaching multivariable calculus, I thought I’d do a post on one form of Gauss’ law. As a teacher of the course, I found this to be an astonishingly beautiful “application” of the divergence theorem. It turned out to be a touch too difficult for my students (and I vaguely recall being extremely confused about this when I took the class myself).

First, I’ll remind you what some of this stuff is if you haven’t thought about these concepts for a while. Let’s work in {\mathbb{R}^3} for simplicity. Consider some subset {U\subset \mathbb{R}^3}. Let {F: U\rightarrow \mathbb{R}^3} be a vector field. Mathematically this just assigns a vector to each point of {U}. For calculus we usually put some fairly restrictive conditions on {F}, such as requiring all partial derivatives to exist and be continuous.

The above situation is ubiquitous in classical physics. The vector field could be the gravitational field or the electric field, or it could describe the velocity of a flowing fluid, and so on. One key quantity you might want to know is the flux of the field through a given surface {S}. This measures the net flow of the field through the surface. If {S} is just a sphere, then it is easy to visualize the flux as the amount flowing out of the sphere minus the amount flowing in.

Let’s suppose {S} is a smooth surface bounding a solid volume {E} (e.g. the sphere bounding the solid ball). In this case we have a well-defined “outward normal” direction. Define {\mathbf{n}} to be the unit vector field in this direction at all points of {S}. Just by definition the flux of {F} through {S} must be “adding up” the values of {F\cdot \mathbf{n}} over {S}, because this dot product just tells us how much {F} is pointing in the outward direction.

Thus we define the flux (using Stewart’s notation) to be:

\displaystyle \iint_S F\cdot d\mathbf{S} := \iint_S F\cdot \mathbf{n} \,dS

Note the second integral is integrating a scalar valued function with respect to surface area “dS.” Now recall that the divergence theorem says that in our situation (given that {F} extends to a vector field on an open set containing {E}) we can calculate this rather tedious surface integral by converting it to a usual triple integral:

\displaystyle \iint_S F\cdot d\mathbf{S} = \iiint_E div(F) \,dV

If you’re advanced, then of course you could just work this out as a special case of Stokes’ theorem using the musical isomorphisms and so on. Let’s now return to our original problem. Suppose I have a charge {Q} inside some surface {S} and I want to compute the flux of the associated electric field through {S}.

From my given information this would seem absolutely impossible. If {S} can be anything, and {Q} can be located anywhere inside, then of course there are just way too many variables to come up with a reasonably succinct answer. Surprisingly, Gauss’ law tells us that no matter what {S} is and where {Q} is located, the answer is always the same, and it is just a quick application of the divergence theorem to prove it.

First, let’s translate everything so that {Q} is located at the origin. Since flux is translation invariant, this will not change our answer. We first need to know what the electric field is, and this is essentially a direct consequence of Coulomb’s law:

\displaystyle F(x,y,z)=\frac{kQ}{(x^2+y^2+z^2)^{3/2}}\langle x, y, z\rangle

If we care about higher dimensions, then we might want to note that the value only depends on the radial distance from the origin and write it in the more succinct way {\displaystyle F(r)=\frac{kQ}{|r|^3}r}, where {k} is just some constant that depends on the textbook/units you are working in. Let’s first compute the partial of the first coordinate with respect to {x} (ignoring the constant factor for now):

\displaystyle \frac{\partial}{\partial x}\left(\frac{x}{(x^2+y^2+z^2)^{3/2}}\right) = \frac{-2x^2+y^2+z^2}{(x^2+y^2+z^2)^{5/2}}

You get similar things for taking the other derivatives involved in the divergence except the minus sign moves to {-2y^2} and {-2z^2} respectively. When you add all these together you get in the numerator {-2x^2-2y^2-2z^2+2x^2+2y^2+2z^2=0}. Thus the divergence is {0} everywhere and hence by the divergence theorem the flux must be {0} too, right? Wrong! And that’s where I lost most of my students.
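If you want to double-check this computation, a computer algebra system does it instantly. Here is a quick sympy sketch (dropping the constant {kQ} as above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r2 = x**2 + y**2 + z**2

# The inverse-square field with the constant kQ dropped.
F = [x / r2**sp.Rational(3, 2),
     y / r2**sp.Rational(3, 2),
     z / r2**sp.Rational(3, 2)]

# div F = dF1/dx + dF2/dy + dF3/dz
div_F = sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))
print(sp.simplify(div_F))  # prints 0: the divergence vanishes away from the origin
```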

Recall that pesky hypothesis that {F} can be extended to a vector field on an open neighborhood of {E}. Our {F} can’t be defined in any way that extends continuously across the origin. Thus we must do something different. Here’s the idea: we just change our region {E}. Since the interior of {E} contains the origin, we can find a small sphere {S_\varepsilon} of radius {\varepsilon>0} centered at {(0,0,0)} which, together with its interior, is contained in the interior of {E}.

Let {\Omega} be the region between these two surfaces. Effectively this “cuts out” the bad point of {F} and now we are allowed to apply the divergence theorem to {\Omega} where our new boundary is {S} oriented outwards and {S_\varepsilon} oriented inward (negatively). We already calculated that {div F=0}, thus one side of the equation is {0}. This gives us

\displaystyle \iint_S F\cdot d\mathbf{S} = \iint_{S_\varepsilon} F\cdot d\mathbf{S}

This is odd, because it says that no matter how bizarre or gigantic {S} was we can just compute the flux through a small sphere and get the same answer. At this point we’ve converted the problem to something we can do because the unit normal is just {\mathbf{n}=\frac{1}{\sqrt{x^2+y^2+z^2}}\langle x, y, z\rangle}. Direct computation gives us

\displaystyle F\cdot \mathbf{n} = \frac{kQ (x^2+y^2+z^2)}{(x^2+y^2+z^2)^2}=\frac{kQ}{x^2+y^2+z^2}

Plugging this all in we get that the flux through {S} is

\displaystyle \iint_{S_\varepsilon} \frac{kQ}{\varepsilon^2} \,dS = \frac{kQ}{\varepsilon^2}\,Area(S_\varepsilon) = \frac{kQ}{\varepsilon^2}\cdot 4\pi\varepsilon^2 = 4\pi k Q.
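As a sanity check, one can also grind out the surface integral directly with a spherical parametrization. Here is a sympy sketch; note the radius {R} is arbitrary and drops out, matching the fact that the answer doesn’t depend on {\varepsilon}:

```python
import sympy as sp

k, Q, R = sp.symbols('k Q R', positive=True)
theta, phi = sp.symbols('theta phi')

# Sphere of radius R, with phi the polar angle.
r = sp.Matrix([R*sp.sin(phi)*sp.cos(theta),
               R*sp.sin(phi)*sp.sin(theta),
               R*sp.cos(phi)])

# Outward normal times the area element: r_phi x r_theta.
normal = r.diff(phi).cross(r.diff(theta))

# The field F = kQ/|r|^3 * r evaluated on the sphere, where |r| = R.
F = (k*Q/R**3) * r

integrand = sp.simplify(F.dot(normal))  # reduces to k*Q*sin(phi)
flux = sp.integrate(integrand, (phi, 0, sp.pi), (theta, 0, 2*sp.pi))
print(flux)  # 4*pi*k*Q (up to the order sympy prints the factors)
```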

That’s Gauss’ Law. It says that no matter the shape of {S} or the location of the charge inside {S}, you can always compute the flux of the electric field produced by {Q} through {S} as a constant multiple of the amount of charge! In fact, most books use {k=1/(4\pi \varepsilon_0)}, where {\varepsilon_0} is the “permittivity of free space,” so that the flux is exactly {Q/\varepsilon_0}.


An Application of p-adic Volume to Minimal Models

Today I’ll sketch a proof, due to Ito, that birational smooth minimal models have all of their Hodge numbers exactly the same. It uses the {p}-adic integration from last time plus one piece of heavy machinery.

First, the piece of heavy machinery: If {X, Y} are finite type schemes over the ring of integers {\mathcal{O}_K} of a number field whose generic fibers are smooth and proper, and if {|X(\mathcal{O}_K/\mathfrak{p})|=|Y(\mathcal{O}_K/\mathfrak{p})|} for all but finitely many prime ideals {\mathfrak{p}}, then the generic fibers {X_\eta} and {Y_\eta} have the same Hodge numbers.

If you’ve seen these types of hypotheses before, then there’s an obvious set of theorems that will probably be used to prove this (Chebotarev + Hodge-Tate decomposition + Weil conjectures). Let’s first restrict our attention to a single prime. Since we will be able to throw out bad primes, suppose we have {X, Y} smooth, proper varieties over {\mathbb{F}_q} of characteristic {p}.

Proposition: If {|X(\mathbb{F}_{q^r})|=|Y(\mathbb{F}_{q^r})|} for all {r}, then {X} and {Y} have the same {\ell}-adic Betti numbers.

This is a basic exercise in using the Weil conjectures. First, {X} and {Y} clearly have the same Zeta functions, because the Zeta function is defined entirely by the number of points over {\mathbb{F}_{q^r}}. But the Zeta function decomposes

\displaystyle Z(X,t)=\frac{P_1(t)\cdots P_{2n-1}(t)}{P_0(t)\cdots P_{2n}(t)}

where {P_i} is the characteristic polynomial of Frobenius acting on {H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)}. The Weil conjectures tell us we can recover the {P_i(t)} if we know the Zeta function. But now

\displaystyle \dim H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)=\deg P_i(t)=\dim H^i(Y_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)

and hence the Betti numbers are the same. Now let’s go back and notice the magic of {\ell}-adic cohomology. Suppose {X} and {Y} are as before, over the ring of integers of a number field. Our assumption that the number of points over finite fields is the same for all but finitely many primes implies that we can pick a prime of good reduction and get that the {\ell}-adic Betti numbers of the reductions are the same: {b_i(X_p)=b_i(Y_p)}.
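To see the rigidity in the proposition in the smallest possible example, here is a brute-force check (my own toy example, not from Ito’s paper): for an elliptic curve {E} over {\mathbb{F}_p}, the count {|E(\mathbb{F}_p)|} determines the Frobenius eigenvalues via {\alpha+\beta=a}, {\alpha\beta=p}, and then {|E(\mathbb{F}_{p^r})|=p^r+1-(\alpha^r+\beta^r)} is forced for every {r}.

```python
# Verify the r = 2 case by brute force for E: y^2 = x^3 - x over F_7.
p = 7

# Affine points over F_p, plus the single point at infinity [0:1:0].
N1 = 1 + sum(1 for x in range(p) for y in range(p)
             if (y*y - (x**3 - x)) % p == 0)
a = p + 1 - N1                  # trace of Frobenius: alpha + beta

# F_{p^2} = F_p(i) with i^2 = -1 (fine here since 7 = 3 mod 4).
def mul(u, v):
    (a1, b1), (a2, b2) = u, v
    return ((a1*a2 - b1*b2) % p, (a1*b2 + b1*a2) % p)

def curve_rhs(x):               # x^3 - x computed in F_{p^2}
    x3 = mul(mul(x, x), x)
    return ((x3[0] - x[0]) % p, (x3[1] - x[1]) % p)

elems = [(u, v) for u in range(p) for v in range(p)]
N2 = 1 + sum(1 for x in elems for y in elems if mul(y, y) == curve_rhs(x))

# alpha^2 + beta^2 = a^2 - 2p, so the point count over F_49 is predicted:
print(N1, N2, p**2 + 1 - (a*a - 2*p))
```

The printed prediction agrees with the brute-force count, illustrating that the counts over all {\mathbb{F}_{q^r}} carry exactly the information in the zeta function.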

One of the main purposes of {\ell}-adic cohomology is that it is “topological.” By smooth, proper base change we get that the {\ell}-adic Betti numbers of the geometric generic fibers are the same

\displaystyle b_i(X_{\overline{\eta}})=b_i(X_p)=b_i(Y_p)=b_i(Y_{\overline{\eta}}).

By the standard characteristic {0} comparison theorem we then get that the singular cohomology is the same when base changing to {\mathbb{C}}, i.e.

\displaystyle \dim H^i(X_\eta\otimes \mathbb{C}, \mathbb{Q})=\dim H^i(Y_\eta \otimes \mathbb{C}, \mathbb{Q}).

Now we use the Chebotarev density theorem. The Galois representations on each cohomology have the same traces of Frobenius for all but finitely many primes by assumption and hence the semisimplifications of these Galois representations are the same everywhere! Lastly, these Galois representations are coming from smooth, proper varieties and hence the representations are Hodge-Tate. You can now read the Hodge numbers off of the Hodge-Tate decomposition of the semisimplification and hence the two generic fibers have the same Hodge numbers.

Alright, in some sense that was the “uninteresting” part, because it just uses a bunch of machines and is a known fact (there’s also a lot of stuff to fill in to the above sketch to finish the argument). Here’s the application of {p}-adic integration.

Suppose {X} and {Y} are smooth birational minimal models over {\mathbb{C}} (for simplicity we’ll assume they are Calabi-Yau, Ito shows how to get around not necessarily having a non-vanishing top form). I’ll just sketch this part as well, since there are some subtleties with making sure you don’t mess up too much in the process. We can “spread out” our varieties to get our setup in the beginning. Namely, there are proper models over some {\mathcal{O}_K} (of course they aren’t smooth anymore), where the base change of the generic fibers are isomorphic to our original varieties.

By standard birational geometry arguments, there is some big open locus (the complement has codimension greater than {2}) where these are isomorphic and this descends to our model as well. Now we are almost there. We have an etale isomorphism {U\rightarrow V} over all but finitely many primes. If we choose nowhere vanishing top forms on the models, then the restrictions to the fibers are {p}-adic volume forms.

But our standard trick works again here. The isomorphism {U\rightarrow V} pulls back the volume form on {Y} to a volume form on {X} over all but finitely many primes, and hence they differ by a function whose {p}-adic absolute value is {1} everywhere. Thus the two models have the same volume over all but finitely many primes, and as was pointed out last time the two must have the same number of {\mathbb{F}_{q^r}}-valued points over these primes, since we can read this off from knowing the volume.

The machinery says that we can now conclude the two smooth birational minimal models have the same Hodge numbers. I thought that was a pretty cool and unexpected application of this idea of {p}-adic volume. It is the only one I know of. I’d be interested if anyone knows of any other.

Volumes of p-adic Schemes

I came across this idea a long time ago, but I needed the result that uses it in its proof again, so I was curious about figuring out what in the world is going on. It turns out that you can make “{p}-adic measures” to integrate against on algebraic varieties. This is a pretty cool idea that I never would have guessed possible. I mean, maybe complex varieties or something, but over {p}-adic fields?

Let’s start with a pretty standard setup in {p}-adic geometry. Let {K/\mathbb{Q}_p} be a finite extension and {R} the ring of integers of {K}. Let {\mathbb{F}_q=R/\mathfrak{m}} be the residue field. If this scares you, then just take {K=\mathbb{Q}_p} and {R=\mathbb{Z}_p}.

Now let {X\rightarrow Spec(R)} be a smooth scheme of relative dimension {n}. The picture to have in mind here is some smooth {n}-dimensional variety over a finite field {X_0} as the closed fiber and a smooth characteristic {0} version of this variety, {X_\eta}, as the generic fiber. This scheme is just interpolating between the two.

Now suppose we have an {n}-form {\omega\in H^0(X, \Omega_{X/R}^n)}. We want to say what it means to integrate against this form. Let {|\cdot |_p} be the normalized {p}-adic absolute value on {K}. We want to consider the {p}-adic topology on the set of {R}-valued points {X(R)}. This can be a little weird if you haven’t done it before. It is a totally disconnected, compact space.

The idea for the definition is the exact naive way of converting the definition from a manifold to this setting. Consider some point {s\in X(R)}. Locally in the {p}-adic topology we can find a “disk” containing {s}. This means there is some open {U} about {s} together with a {p}-adic analytic isomorphism {U\rightarrow V\subset R^n} to some open.

In the usual way, we now have a choice of local coordinates {x=(x_i)}. This means we can write {\omega|_U=fdx_1\wedge\cdots \wedge dx_n} where {f} is a {p}-adic analytic function on {V}. Now we just define

\displaystyle \int_U \omega= \int_V |f(x)|_p dx_1 \cdots dx_n.

Now maybe it looks like we’ve converted this to another weird {p}-adic integration problem that we don’t know how to do, but the right-hand side makes sense because {R^n} is a compact topological group, so we integrate with respect to the normalized Haar measure. Now we’re done, because modulo standard arguments that everything patches together we can define {\int_X \omega} in terms of these local patches (the reason for being able to patch without bump functions will be clear in a moment, but roughly on overlaps the forms will differ by a unit with {p}-adic absolute value {1}).

This allows us to define a “volume form” for smooth {p}-adic schemes. We will call an {n}-form a volume form if it is nowhere vanishing (i.e. it trivializes {\Omega^n}). You might be scared that the volume you get by integrating isn’t well-defined. After all, on a real manifold you can just scale a non-vanishing {n}-form to get another one, but the integral will be scaled by that constant.

We’re in luck here, because if {\omega} and {\omega'} are both volume forms, then there is some non-vanishing function {f} such that {\omega=f\omega'}. Since {f} is never {0}, it is invertible, and hence is a unit. This means {|f(x)|_p=1}, so since we can only get other volume forms by scaling by a function with {p}-adic absolute value {1} everywhere, the volume is a well-defined notion under this definition! (A priori, there could be a bunch of “different” forms, though.)

It turns out to actually be a really useful notion as well. If we want to compute the volume of {X/R}, then there is a natural way to do it with our set-up. Consider the reduction mod {\mathfrak{m}} map {\phi: X(R)\rightarrow X(\mathbb{F}_q)}. The fiber over any point is a {p}-adic open set, and they partition {X(R)} into a disjoint union of {|X(\mathbb{F}_q)|} mutually isomorphic sets (recall the reduction map is surjective here by the relevant variant on Hensel’s lemma). Fix one point {x_0\in X(\mathbb{F}_q)}, and define {U:=\phi^{-1}(x_0)}. Then by the above analysis we get

\displaystyle Vol(X)=\int_X \omega=|X(\mathbb{F}_q)|\int_{U}\omega

All we have to do is compute this integral over one open now. By our smoothness hypothesis, we can find a regular system of parameters {x_1, \ldots, x_n\in \mathcal{O}_{X, x_0}}. This is a legitimate choice of coordinates because they define a {p}-adic analytic isomorphism with {\mathfrak{m}^n\subset R^n}.

Now we use the same silly trick as before. Suppose {\omega=fdx_1\wedge \cdots \wedge dx_n}; then since {\omega} is a volume form, {f} is a unit in the local ring, and hence {|f(x)|_p=1} on {U}. Thus

\displaystyle \int_{U}\omega=\int_{\mathfrak{m}^n}dx_1\cdots dx_n=\frac{1}{q^n}

This tells us that no matter what {X/R} is, if there is a volume form (which often there isn’t), then the volume

\displaystyle Vol(X)=\frac{|X(\mathbb{F}_q)|}{q^n}

just suitably multiplies the number of {\mathbb{F}_q}-rational points there are by a factor dependent on the size of the residue field and the dimension of {X}. Next time we’ll talk about the one place I know of that this has been a really useful idea.
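To make the formula concrete, here is a toy check (my own illustrative example): the affine curve {y^2=x^3-x} is smooth over {\mathbb{Z}_5} of relative dimension {1}, so the recipe says its volume is {|X(\mathbb{F}_5)|/5}.

```python
from fractions import Fraction

# Toy illustration of Vol(X) = |X(F_q)| / q^n: take the smooth affine curve
# X: y^2 = x^3 - x over Z_5 (relative dimension n = 1), count its F_5-points,
# and read off the volume.
p, n = 5, 1
N = sum(1 for x in range(p) for y in range(p)
        if (y*y - (x**3 - x)) % p == 0)
vol = Fraction(N, p**n)
print(N, vol)  # 7 points, so the volume is 7/5
```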

Classical (Lagrangian) Mechanics

It turns out that because I work with Calabi-Yau varieties I often encounter various ideas and terms from physics. In particular, quantum field theory is something that comes up a lot. I took a lot of physics as an undergrad, and I’ve pieced together a tiny bit about what is meant by “quantum field theory.” In order to record this somewhere before I forget it, I’m going to blog some stuff. This should be a very short series, because I don’t want to get hung up on it.

The main point is to try to express the idea of quantum field theory in a way a mathematician would understand it. Before we can do that I need to spend a post on classical mechanics. This post is going to present what is done over the course of a semester-long undergrad class, so it will go fast. I’ll give you the takeaway up front. In a mathematically rigorous way one can prove that the “Lagrangian formalism” we’ll look at soon is exactly equivalent to Newton’s law {F=ma}.

Suppose we have some particle in space. If it is moving, that motion has something called kinetic energy. For simplicity, we’ll call this a function of time {K(t)=\frac{1}{2}mv(t)^2}. The formula isn’t important here. Usually you also have something called potential energy. For example, a ball on a table has the potential energy of falling to the ground. Technically you can figure out the potential the same way you’d find the potential of any vector field (this took me a while to connect as an undergrad).

Suppose your particle is moving in {\mathbb{R}^n}; then we can describe it by a function {q: \mathbb{R}\rightarrow\mathbb{R}^n}. There could be some ambient force (gravity, an electromagnetic field, etc., it doesn’t matter). This is a vector field {F: \mathbb{R}^n\rightarrow \mathbb{R}^n}. The potential then is just a function {V: \mathbb{R}^n\rightarrow \mathbb{R}} such that {F=-\nabla V} (the minus sign is the physics convention; apart from the sign, this is exactly the potential function we tell our calculus students about). Of course, we must assume our force is conservative for a potential to exist, so we do that.

Now we define {L=K(t)-V(q(t))}. This is called the Lagrangian. We define the action over some path {q:[t_0, t_1]\rightarrow \mathbb{R}^n} to be the integral {S(q)=\int_{t_0}^{t_1}L(t)dt}. Now we get to the point. If we let our paths vary, then we get a bunch of real numbers by evaluating {S(q)}. From standard calculus we could look for a minimum (more precisely, a stationary point). This is the path of least action, and our particle will follow that path if and only if Newton’s law {F=ma} holds in our system.
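To see why stationarity of the action is the same thing as Newton’s law, here is the standard one-line calculus-of-variations computation, using the convention {F=-\nabla V}:

```latex
% Vary the path: q \mapsto q + \epsilon h with h(t_0) = h(t_1) = 0.
\delta S = \frac{d}{d\epsilon}\Big|_{\epsilon=0}
  \int_{t_0}^{t_1} \Big( \tfrac{1}{2} m\,|\dot q + \epsilon \dot h|^2
    - V(q + \epsilon h) \Big)\, dt
  = \int_{t_0}^{t_1} \big( m\,\dot q\cdot \dot h - \nabla V(q)\cdot h \big)\, dt
% Integrate the first term by parts; the boundary terms vanish since h does:
  = \int_{t_0}^{t_1} \big( -m\,\ddot q - \nabla V(q) \big)\cdot h \, dt .
% This vanishes for all h exactly when  m\ddot q = -\nabla V(q) = F(q),
% which is Newton's law F = ma.
```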

We could go off and try to describe physically why one would think of this weird formalism. For example, integrating force over distance is the work needed to move the particle from point {a} to point {b}. We would expect that the particle will naturally follow the path that requires the least work. This has roughly the same flavor, but takes into account some extra stuff. Whatever the physical reason, it shouldn’t really matter to us, because it is exactly equivalent to the law we all know ought to be true.

In a classical mechanics class you’d probably now spend many weeks just being handed various scenarios where you figure out the Lagrangian, and then given some initial starting point figure out the path by taking the variation {\delta S(q)}, setting it to {0}, and solving. Note: for practical purposes this is a little tricky, because the so-called variation of the action involves differentiating with respect to paths. Since we aren’t computing these, we won’t go through this, but the idea is to parametrize your paths in some nice way (think of a smooth {1}-parameter homotopy connecting them for the picture).

Now we must generalize a bit. Suppose we have some physical system (maybe a double pendulum for sufficient complicatedness). There’s more than just one particle, and there are constraints on how things can move in relation to each other. What we do now is consider the space of all configurations. Think of this as the moduli space of all positions the system could ever take. A point in this space {Q} is one particular configuration. Now a path {q:[t_0, t_1]\rightarrow Q} is just a description of how the system evolves over that time period. We assume this configuration space is a smooth manifold.

This means the velocity, which is the time derivative {\dot{q}(t)\in T_{q(t)}Q} is actually a tangent vector now (it was before, but we just made the canonical identification). Let’s pick a starting point and ending point {a, b\in Q}. Then we can formalize what we did last time as follows. Define {\Gamma=\{q:[t_0, t_1]\rightarrow Q : q(t_0)=a, \ q(t_1)=b\}} to be the path space (of smooth paths) with those endpoints. Let {L: TQ\rightarrow \mathbb{R}} be a smooth function called the Lagrangian of the system.

Now the action is {S:\Gamma\rightarrow \mathbb{R}} defined by {S(q)=\int_{t_0}^{t_1}L(q, \dot{q})dt}. The path that our system will take in the configuration space will be a minimum of {S}. Thus to find it we just solve {\delta S=0}.

In order to test whether you follow this, a really quick (if you get it, but painful if you don’t) and wonderful exercise is to figure out the equation of motion of a single free particle in {\mathbb{R}^3}. What does this mean? Well, there is no force in the system at all, so the potential is {0}, and hence {L=K(t)=\frac{1}{2}mv(t)\cdot v(t)}. We already know the answer. No force means no acceleration. Thus from basic calculus the answer is that velocity is a constant {v_0} and the path is {q(t)=a+v_0t} where {a} is the initial starting point. Try to get that using the Lagrangian!
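If you want to see the machine run on this exercise, sympy will produce the Euler–Lagrange equation and solve it. A sketch, in one dimension for brevity:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
q = sp.Function('q')

# Free particle: no potential, so L = K = (1/2) m qdot^2 (one dimension).
L = sp.Rational(1, 2) * m * sp.diff(q(t), t)**2

# delta S = 0  <=>  the Euler-Lagrange equation for L, here m q''(t) = 0.
eq = euler_equations(L, [q(t)], [t])[0]

sol = sp.dsolve(eq, q(t))
print(sol)  # q(t) = C1 + C2*t : constant velocity, straight-line motion
```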

Classical Local Systems

I lied to you a little. I may not get into the arithmetic stuff quite yet. I’m going to talk about some “classical” things in modern language. In the things I’ve been reading lately, these ideas seem to be implicit in everything said. I can’t find this explained thoroughly anywhere. Eventually I want to understand how monodromy relates to bad reduction in the {p}-adic setting. So we’ll start today with the different viewpoints of a local system in the classical sense that are constantly switched between without ever being explained.

You may need to briefly recall the old posts on connections. The goal for the day is to relate the three equivalent notions of a local system, a vector bundle plus flat connection on it, and a representation of the fundamental group. There may be some inaccuracies in this post, because I can’t really find this written anywhere and I don’t fully understand it (that’s why I’m making this post!).

Since I said we’d work in the “classical” setting, let’s just suppose we have a nice smooth variety over the complex numbers, {X}. In this sense, we can actually think about it as a smooth manifold, or complex analytic space. If you want, you can have the picture of a Riemann surface in your head, since the next post will reduce us to that situation.

Suppose we have a vector bundle on {X}, say {E}, together with a connection {\nabla : E\rightarrow E\otimes \Omega^1}. We’ll fix a basepoint {p\in X} that will always secretly be lurking in the background. Let’s try to relate this connection to a representation of the fundamental group. Well, if we look at some old posts we’ll recall that a choice of connection is exactly the same data as telling you “parallel transport”. So what this means is that if I have some path on {X} it tells me how a vector in the fiber of the vector bundle moves from the starting point to the ending point.

Remember, that we fixed some basepoint {p} already. So if I take some loop based at {p} say {\sigma}, then a vector {V\in E_p} can be transported around that loop to give me another vector {\sigma(V)\in E_p}. If my vector bundle is rank {n}, then {E_p} is just an {n}-dimensional vector space and I’ve now told you an action of the loop space based at {p} on this vector space.

[Figure: a vector {V} at {p} being parallel transported around a loop {\sigma} on a torus, returning as {\sigma(V)}.]

This doesn’t quite give me a representation of the fundamental group (based at {p}), since we can’t pass to the quotient, i.e. the transport of the vector around a loop that is homotopic to {0} might be non-trivial. We are saved if we started with a flat connection. It can be checked that the flatness assumption gives a trivial action around nullhomotopic loops. Thus the parallel transport only depends on homotopy classes of loops, and we get a group homomorphism {\pi_1(X, p)\rightarrow GL(E_p)}.
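To see a flat connection with nontrivial monodromy concretely, take the trivial rank-{2} bundle on the punctured plane with {\nabla = d - A\,\frac{dz}{z}} (my own toy example, with {A} diagonal). Parallel transport along the loop {z(t)=e^{2\pi i t}} solves the linear ODE {v'(t)=2\pi i\, A v(t)}, so the monodromy is {\exp(2\pi i A)}. Here is a numerical sketch:

```python
import numpy as np

# Flat connection nabla = d - A dz/z on the trivial rank-2 bundle over C - {0}.
# Transporting once around the unit circle should give the monodromy exp(2*pi*i*A).
A = np.diag([0.5, 0.25])

def transport(v0, steps=2000):
    """Parallel-transport v0 once around the unit circle with RK4."""
    h = 1.0 / steps
    f = lambda v: 2j * np.pi * (A @ v)
    v = v0.astype(complex)
    for _ in range(steps):
        k1 = f(v)
        k2 = f(v + h/2 * k1)
        k3 = f(v + h/2 * k2)
        k4 = f(v + h * k3)
        v = v + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return v

v0 = np.array([1.0, 1.0])
v1 = transport(v0)
# exp(2*pi*i*A) = diag(exp(pi*i), exp(pi*i/2)) = diag(-1, i), so v1 is close to (-1, i):
print(v1)
```

The loop is nullhomotopic in {\mathbb{C}} but not in {\mathbb{C}-\{0\}}, which is exactly why the monodromy can be nontrivial here.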

Modulo a few details, the above process can essentially be reversed, and hence given a representation you can produce a unique pair {(E,\nabla)}, a vector bundle plus flat connection associated to it. This relates the latter two ideas I started with. The one that gave me the most trouble was how local systems fit into the picture. A local system is just a locally constant sheaf of {n}-dimensional vector spaces. At first it didn’t seem likely that the data of a local system should be equivalent to these other two things, since the sheaf is locally constant. This seems like no data at all to work with rather than an entire vector bundle plus flat connection.

Here is why algebraically there is good motivation to believe this. Recall that one can think of a connection as essentially a generalization of a derivative. It is just something that satisfies the Leibniz rule on sections. Recall that we call a section, {s}, horizontal for the connection if {\nabla (s)=0}. But if this is the derivative, this just means that the section should be constant. In this analogy, we see that if we pick a vector bundle plus flat connection, we can form a local system, namely the horizontal sections (which are the locally constant functions). If you want an exercise to see that the analogy is actually a special case, take the vector bundle to be the globally trivial line bundle {\mathcal{O}_X} and the connection to be the honest exterior derivative {d:\mathcal{O}_X\rightarrow \Omega^1}.

The process can be reversed again, and given any locally constant sheaf of vector spaces, you can cook up a vector bundle and flat connection whose horizontal sections are precisely the sections of the sheaf. Thus our three seemingly different notions are actually all equivalent. I should point out that part of my oversight on the local system side was thinking that a locally constant sheaf somehow doesn’t contain much information. Recall that it is still a sheaf, so we can be associating lots of information on large open sets and we still have restriction homomorphisms giving data as well. Next time we’ll talk about some classical theorems in differential equation theory that are most easily proved and stated in this framework.

Mirror Symmetry A-branes

I started writing this post this past weekend, but got stuck really quickly and then kept putting it off. I don’t want to leave anyone following this hanging with no idea what the A-model is. This is harder for me to describe than the B-model for some reason. Mostly I’m running into the problem of either just saying what the A-side is without explanation or getting too bogged down in details. Both seem bad. In conclusion, I think I’ll err on the side of too few details, and then hopefully make sense of what is going on by completely describing mirror symmetry in the easiest case possible: the one-dimensional case, i.e. for an elliptic curve.

I’m going to semi-cheat right off and refer to posts over a year old. Recall what a symplectic form on a smooth manifold is. It is just a closed non-degenerate 2-form. A smooth manifold plus symplectic form is called a symplectic manifold. The cotangent bundle always has a canonical symplectic form on it. An example that may be less well-known is that any smooth complex projective variety is symplectic because the Fubini-Study Kähler form on {\mathbb{P}^n} restricts to a symplectic form.

If we just think about vector spaces for a second, then given a symplectic form, we say that a subspace {S} is isotropic if {S\subset S^\perp} and coisotropic if {S^\perp \subset S}. The subspace is Lagrangian if it is both isotropic and coisotropic. This extends to manifold language easily by saying an embedded submanifold {S\subset M} is Lagrangian if the tangent subspace {T_sS\subset T_sM} is Lagrangian for every point of {S}. If you want to get used to these definitions, a quick exercise would be to check that the zero section of the cotangent bundle is Lagrangian with respect to the canonical symplectic structure.
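For the exercise just mentioned, here is the quick check in local coordinates:

```latex
% Cotangent bundle T^*N with coordinates (q_1,...,q_n, p_1,...,p_n) and
% canonical symplectic form:
\omega = \sum_{i=1}^n dp_i \wedge dq_i .
% The zero section is  Z = \{ p_1 = \cdots = p_n = 0 \},  an n-dimensional
% submanifold of the 2n-dimensional total space.  Its tangent space is spanned
% by the \partial/\partial q_i, and
\omega\Big( \frac{\partial}{\partial q_i}, \frac{\partial}{\partial q_j} \Big) = 0
\quad \text{for all } i, j,
% so T_z Z is isotropic; since  \dim Z = n = \tfrac{1}{2}\dim T^*N,
% it is Lagrangian.
```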

My second semi-cheat is to ask you to recall the definition of an almost complex structure from close to two years ago. The way to think about it is that it is a bundle map {J: TM \rightarrow TM} that behaves similarly to “multiplication by {i}“. The condition is that {J^2=-Id}, and indeed multiplication by {i} when identifying {\mathbb{R}^2\simeq \mathbb{C}} gives an example of an almost complex structure. In fact, since we’ll always work over {\mathbb{C}}, any complex manifold does have multiplication by {i} as a natural almost complex structure.

These structures can all be related as follows. Suppose {(M, \omega)} is a symplectic manifold, {J} an almost complex structure, and {g} a Riemannian metric. These three structures are called compatible if {\omega(J(-), -)=\langle - , -\rangle_g}. I am far out of my depth here, but I’m pretty sure such a manifold is called Kähler if this happens, though maybe some slight extra conditions are needed (e.g. does this automatically imply that {g} is Hermitian? If so, then this is definitely what people call Kähler).

Now for the definition of the A-model. Let {(M, \omega)} be a Kähler (in the sense of the previous paragraph) manifold. We define the Fukaya category {Fuk(M)} to have as objects the Lagrangian submanifolds. The morphisms require a bit of technicality to define, but essentially are a way to intersect the submanifolds. It involves all the structures above and is called Floer cohomology. Recall that we’re merely sketching an idea here! Somehow this should be an {A_\infty} or dg-category if you remember from last time, and this just comes from the fact that the morphisms have to do with cohomology classes of intersections.

If you’ve been following this at all, then you should be in utter amazement. We can state mirror symmetry now as an equivalence of {A_\infty} categories {D^b(X)\rightarrow Fuk(\widehat{X})} where {X} is a Calabi-Yau and {\widehat{X}} is its mirror. Why is this amazing (for those not following along)? Look at the left side of this equivalence. The bounded derived category of coherent sheaves (in the Zariski topology!!) on {X} is something that has to do purely with the algebraic data of {X}. I mean, the Zariski topology is algebraic, the definition of coherent is very algebraic, the construction of the derived category is algebraic, etc.

The right hand side seems to have forgotten all of the algebraic data. You forget that it is a variety and instead think of it as a smooth manifold. You consider a bunch of structure that helps you study the smooth structure. You consider Lagrangian submanifolds. The Fukaya category is almost entirely analytic in nature. But now the conjecture of Kontsevich mirror symmetry is that the two are always equivalent. That’s it for today. There should be one more post in this series in which I try to sketch the conjecture in the case of an elliptic curve.

Stacks 2: An example

This will hopefully be a short, yet enlightening post in which the concept of a stack starts to make more sense than the abstract nonsense of the last few posts. Recall that we formed a category of line bundles on a manifold {L(X)} and had a natural forgetful functor: {L(X)\rightarrow \text{Top}(X)}.

If one is not writing a blog and wants to be much more careful, one should probably check that {L(X)} is indeed a category and that the given functor is actually a functor. Most readers who have made it this far probably aren’t concerned with this part, though.

Is this fibered in groupoids? To check what sorts of things lie over a given object {U\in \text{Top}(X)}, note that the situation has been rigged so that the objects over {U} are precisely the line bundles on {U} as a manifold. The first type of “square” we have to be able to complete is as follows {\begin{matrix} & & (V, L_V) \\ \\ U & \hookrightarrow & V \end{matrix}}. All we need to do is find some {(U, L_U) \rightarrow (V, L_V)}. By definition of our category this would consist of an iso {L_V|_U \rightarrow L_U}. These are line bundles, so we can always restrict to get another one, so just take {(U, L_V|_U)} lying over {U} to complete the square.
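Here is a toy, purely combinatorial model of that lifting step (emphatically not real geometry: “open sets” are finite sets of points, and a “line bundle” on {U} is just a dict assigning each point of {U} some fiber datum, so that restriction is filtering the dict):

```python
def restrict(L, U):
    """Restrict a bundle-as-dict L to the subset U contained in its domain."""
    assert U <= L.keys(), "can only restrict to a subset of the base"
    return {p: L[p] for p in U}

# The square to complete: U -> V along the bottom, (V, L_V) in the corner.
V = {1, 2, 3, 4}
L_V = {p: f"fiber over {p}" for p in V}
U = {1, 2}

# The completion: put (U, L_V|_U) over U; the required morphism is then
# the identity iso L_V|_U -> L_V|_U.
L_U = restrict(L_V, U)
print(L_U)  # the fibers over 1 and 2 only
```

The point the model makes is just that the lift exists canonically: restriction always produces an object over the smaller set, with the identity as the comparison iso.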

What about the second diagram in the definition of fibered in groupoids? The base is just {U\hookrightarrow V \hookrightarrow W}. Now suppose we have line bundles {L_U, L_V,} and {L_W} over these, together with isomorphisms {L_W|_V\rightarrow L_V} and {L_W|_U\rightarrow L_U}. Restricting the first iso to {U} and composing appropriately certainly gives an iso {L_V|_U\rightarrow L_U}, and uniqueness comes from the fact that it has to be the one making the composite {L_W|_U\rightarrow L_V|_U\rightarrow L_U} equal to the given map. You could think of this as coming from the fact that {L_V|_U\rightarrow L_U} is unique up to automorphism of {L_U}, and we know which automorphism it is from the other condition.

Now we check that Isom forms a sheaf. Let {U} be some open set. Let {L} and {S} be two line bundles over {U} (in this case, this literally just means line bundles on {U} as a topological space). Now we want to check that the presheaf (of sets) {\mathcal{F}(V)=\{L_V\stackrel{\sim}{\rightarrow} S_V\}} is actually a sheaf. This is a presheaf just because isomorphisms restrict. It is a sheaf because all the information is local. If you have isomorphisms defined on open subsets that agree on overlaps, then you can glue them to make an isomorphism on the union. These are just two basic properties of line bundles that most people have already seen. So Isom is a sheaf.
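Continuing the toy model from before: over a trivializing set an isomorphism of line bundles is a nowhere-zero function, which we can model as a dict from points to nonzero scalars. Gluing local isos that agree on overlaps is then just merging dicts, with the overlap-agreement check made explicit (the helper name `glue` is made up for this sketch):

```python
def glue(local_isos):
    """Glue isos given on a cover, checking agreement on overlaps."""
    glued = {}
    for phi in local_isos:
        for p, val in phi.items():
            assert val != 0, "an iso of line bundles is nowhere zero"
            if p in glued:
                # the sheaf axiom's hypothesis: agreement on overlaps
                assert glued[p] == val, "local isos disagree on an overlap"
            else:
                glued[p] = val
    return glued

# Two local isos on overlapping "open sets" {1,2,3} and {3,4}; they agree
# at the overlap point 3, so they glue to an iso on the union.
phi_1 = {1: 2.0, 2: -1.0, 3: 5.0}
phi_2 = {3: 5.0, 4: 0.5}
print(glue([phi_1, phi_2]))  # {1: 2.0, 2: -1.0, 3: 5.0, 4: 0.5}
```

This is exactly the “all the information is local” point: the glued iso is determined pointwise by the local ones, and existence only needs agreement on overlaps.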

Lastly we need to check the stack condition. Maybe I should remark on the terminology here. A collection of objects and isos over a covering of an open set that satisfies the cocycle condition is called a descent datum. If the objects glue in the way of the stack condition, then that descent datum is said to be effective, so the stack condition is sometimes stated as: every descent datum is effective.

Given a descent datum, the fact that you can glue to get an object over the whole open set is just a standard exercise or proven proposition in basically any text on manifolds. In fact all of the above things are true for any rank {r} vector bundle. So we actually get the stack of rank {r} vector bundles on {X} for any {r}. Since I’m not sure we’ll return to this example, we’ll just temporarily notate it {\text{Vect}^r(X)}, and hence {\text{Vect}^1(X)=L(X)}.
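In the same toy model, one can at least make the cocycle condition of a descent datum explicit. Below, the cover is a list of finite “open sets” and `g[(i, j)]` is a dict giving the transition function on the overlap; the helper name `is_descent_datum` is invented for this sketch. The example datum is a discrete caricature of the Möbius bundle on a circle covered by two arcs:

```python
def is_descent_datum(opens, g):
    """Check that the transition functions g satisfy the cocycle condition."""
    n = len(opens)
    # identity on the diagonal: g_ii = 1
    for i in range(n):
        if any(g[(i, i)][p] != 1 for p in opens[i]):
            return False
    # cocycle condition g_ij * g_jk = g_ik on triple overlaps
    # (taking k = i recovers g_ij * g_ji = 1)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for p in opens[i] & opens[j] & opens[k]:
                    if g[(i, j)][p] * g[(j, k)][p] != g[(i, k)][p]:
                        return False
    return True

# A four-point "circle" covered by two overlapping "arcs"; the transition
# function is +1 on one overlap component and -1 on the other, as for the
# Mobius bundle.
opens = [{0, 1, 2}, {2, 3, 0}]
g = {(0, 0): {0: 1, 1: 1, 2: 1}, (1, 1): {2: 1, 3: 1, 0: 1},
     (0, 1): {0: 1, 2: -1}, (1, 0): {0: 1, 2: -1}}
print(is_descent_datum(opens, g))  # True
```

When the check passes, actually building the glued bundle out of the pieces is the standard exercise from the manifolds texts mentioned above.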

If you’ve been following along, it should be pretty clear how to translate all of this over to a stack on the Zariski site rather than on {\text{Top}(X)}, but we’ll make that more explicit next time and get some more examples.