Projective Modules over Dedekind Domains

Today we’ll classify finitely generated projective modules over Dedekind domains, which completes a structure theorem very similar to the one over a PID. First, we need an approximation theorem. Fix a Dedekind domain {R}. If we specify orders of vanishing {n_i} at finitely many primes {\frak{p}_i}, then we can find an element {x\in Frac(R)} with the property that {ord_{\frak{p}_i}(x)=n_i} and {ord_{\frak{p}}(x)\geq 0} at every other prime.

We can reduce to the case where all the prescribed orders of vanishing are positive, since we can divide in {Frac(R)} to handle the negative ones. By the Chinese Remainder Theorem the map {R\rightarrow R/\frak{p}_1^{n_1+1}\times \cdots \times R/\frak{p}_m^{n_m+1}} is surjective. Now take {x_i\in \frak{p}_i^{n_i}\setminus \frak{p}_i^{n_i+1}} and let {x} be a preimage of {(x_1, \ldots, x_m)}.
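As a quick sanity check, here is the argument carried out in the simplest Dedekind domain, {\mathbb{Z}}, using SymPy. The choice of primes and orders is mine, purely for illustration:

```python
# Illustration of the approximation theorem in the simplest Dedekind
# domain, Z, where the primes are ordinary prime numbers.
# Goal: find x with ord_2(x) = 3 and ord_3(x) = 2, and non-negative
# order at every other prime.
from sympy import multiplicity
from sympy.ntheory.modular import crt

orders = {2: 3, 3: 2}                               # prescribed ord_{p_i}(x) = n_i
moduli = [p ** (n + 1) for p, n in orders.items()]  # work mod p_i^{n_i + 1}
residues = [p ** n for p, n in orders.items()]      # x_i = p_i^{n_i} lies in p^{n_i} \ p^{n_i+1}
x, _ = crt(moduli, residues)                        # lift along the CRT surjection

# x = 360 = 2^3 * 3^2 * 5: order exactly 3 at 2 and exactly 2 at 3
assert all(multiplicity(p, x) == n for p, n in orders.items())
```

The element {360} picks up an extra factor of {5}, which is harmless: the theorem only promises non-negative order away from the specified primes.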

A quick consequence of this is that for any non-zero fractional ideals {I, J} of {R} there are elements {x,y\in Frac(R)} such that {yI} is coprime to {xJ}. The proof is to specify the orders of vanishing of {x} and {y} so that {ord_{\frak{p}}(yI)\cdot ord_{\frak{p}}(xJ)=0} for every prime {\frak{p}}, i.e. at each prime at least one of the two ideals has order zero. By primary decomposition this shows they are coprime.

Now we need to prove that for any fractional ideals {I_1, \ldots, I_n} we have {I_1\bigoplus\cdots \bigoplus I_n\simeq R^{n-1}\bigoplus I_1\cdots I_n}. By induction it suffices to prove that {I\bigoplus J\simeq R\bigoplus IJ}. If {I} and {J} are coprime, then {I\cap J=IJ} and {I+J=R}, so since the standard exact sequence {0\rightarrow I\cap J\rightarrow I\bigoplus J\rightarrow I+J\rightarrow 0} splits we see that {I\bigoplus J\simeq R\bigoplus IJ}.

The statement follows for arbitrary {I, J} because we can find {x,y\in Frac(R)} such that {xI} and {yJ} are coprime, but {xI\simeq I} and {yJ\simeq J}.
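Here is a concrete instance of this isomorphism (a standard example, not from the argument above). In {R=\mathbb{Z}[\sqrt{-5}]} the prime {\frak{p}=(2, 1+\sqrt{-5})} is not principal, but {\frak{p}^2=(2)} is. The isomorphism then gives

{\displaystyle \frak{p}\bigoplus \frak{p}\simeq R\bigoplus \frak{p}^2\simeq R\bigoplus R,}

so a direct sum of two non-free projective modules can be free; only the class of the product {I_1\cdots I_n} is an invariant of the direct sum.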

Our structure theorem says that if {P} is a finitely generated projective {R}-module of rank {n}, then {P\simeq R^{n-1}\bigoplus I} for some fractional ideal {I}. Recall that rank here is defined to be the dimension of {P\otimes_R Frac(R)} as a {Frac(R)}-vector space, or equivalently, since {P} is locally free from last time, the rank of the free module obtained after localizing. This is NOT the number of generators of {P}.

To prove the statement we’ll show by induction that {P} is a direct sum of {n} fractional ideals, which combined with the rest of the post gives the result. The base case follows from torsion-freeness: {P} injects into {P\otimes_R Frac(R)\simeq Frac(R)}, so {P} is a finitely generated {R}-submodule of {Frac(R)}, i.e. a fractional ideal. Now suppose {P} has rank {n}. Composing {P\rightarrow P\otimes_R Frac(R)\simeq Frac(R)^n} with a coordinate projection gives a map {P\rightarrow Frac(R)} whose image {I} is a fractional ideal. Form {0\rightarrow Q\rightarrow P \rightarrow I\rightarrow 0}, where {Q} has rank {n-1}. Since {I} is rank {1} and torsion free it is projective, so the sequence splits, and by the inductive hypothesis {P\simeq Q\bigoplus I} is a direct sum of fractional ideals.

Now for the uniqueness of this representation we only have to worry about which {I} can appear in {P\simeq R^{n-1}\bigoplus I}. It turns out {I} is unique up to choice of representative of the class {[I]\in Cl(R)}. One direction is easy: if {[I]=[J]}, then the two ideals differ by multiplication by a principal fractional ideal, and hence {I\simeq J} as {R}-modules. Conversely, if {R^{n-1}\bigoplus I\simeq R^{n-1}\bigoplus J}, then taking the determinant of both sides (top exterior power) kills off the {R^{n-1}} as follows:

{\displaystyle \begin{matrix} \bigwedge^n (R^{n-1}\bigoplus I) & \simeq & \displaystyle \bigoplus_{p+q=n} \bigwedge^p(R^{n-1})\otimes \bigwedge^q(I) \\ & \simeq & \bigwedge^{n-1}R^{n-1}\otimes \bigwedge^1(I) \\ & \simeq & R\otimes I \\ & \simeq & I \end{matrix}}

Lastly, let’s consider one special case of the above theory that will be used in our application next time. If {P} is a rank one projective {R}-module, then {P\simeq I} for some fractional ideal {I}. It is well known that in a Dedekind domain every ideal is generated by at most {2} elements. Thus either {P} is generated by {1} element, in which case it is free, or {P} is generated by {2} elements but not by {1}. Thus an invertible module (locally free of rank {1}) over a Dedekind domain is not free if and only if it requires exactly {2} generators. This has a rather bizarre but “intuitively obvious” consequence for elliptic curves.


Modules over Dedekind Domains

We’re going to change topics again, but we’re sticking to the theme of doing things that just barely don’t get covered in a first-year graduate algebra course (maybe some of these things do get covered at some universities) but turn out to be extremely useful in “real life.” Today we’ll start a series of posts on modules over Dedekind domains.

The reason this turns out to be useful is that many examples in algebraic/arithmetic geometry require you to look no further than understanding modules over Dedekind domains. The other extremely useful place this comes up is in algebraic number theory, where the integral closure of {\mathbb{Z}} in a number field {K}, called the ring of integers of {K}, is a Dedekind domain. It turns out that understanding these is an important part of classical algebraic number theory. This matters for more modern trends with modular forms and elliptic curves as well, since orders in imaginary quadratic fields appear as the endomorphism rings of elliptic curves with complex multiplication.

I recently spent a long time trying to work something out that just turned out to be standard known theory of modules over Dedekind domains and this motivated me to do this series next. It seems to me that graduate courses do the structure theorem for finitely generated modules over PIDs, and talk about Dedekind domains, but that is where they leave off which means that is where we’ll start.

Recall a Dedekind domain is an integrally closed, Noetherian domain of dimension {1}. There are lots of other characterizations as well, but we’ll just recall those if they are needed. Our main goal is to prove a structure theorem for finitely generated projective modules. We’ll start with something more basic that should be familiar from the PID case. Fix {R} a Dedekind domain and {M} a finitely generated {R}-module. It turns out that {M} is projective if and only if it is flat if and only if it is torsion free.

Maybe it is cheating to use “first-year algebra” facts to prove this, since my cut-off is quite arbitrary, but torsion free if and only if flat follows immediately by noticing both are local conditions. Since the localization of a Dedekind domain at a maximal ideal is a DVR, which is a PID, the equivalence follows from the fact that over a PID a finitely generated module is torsion free if and only if it is free.

Flat implies projective for finitely presented modules, which covers our situation. Thus the only thing left to check is that torsion free implies projective. Suppose {M} is torsion free. Note that {M} is projective if and only if every sequence {0\rightarrow N'\rightarrow N\rightarrow M\rightarrow 0} splits. If {M} is projective we can use the lifting property with the identity map {id_M:M\rightarrow M} to obtain a section. Conversely, if every such sequence has a section, take a surjection from a finite free module to realize {M} as a direct summand of a free module.

Now suppose we’re given such a sequence. After localizing at any maximal ideal it certainly splits, because again we have these results for PIDs already. The idea of the rest of the proof is to glue these local splittings into an honest section {M\rightarrow N}. Suppose our local maps are {g_{\frak{m}}:M_\frak{m}\rightarrow N_\frak{m}}. Since {M} is finitely generated, we can look at the denominators of the images of the generators under {g_{\frak{m}}} and multiply them together to get an element {c_\frak{m}} with {c_\frak{m}\cdot g_{\frak{m}}(M)\subset N}; note that {c_\frak{m}\notin \frak{m}}, since denominators in the localization at {\frak{m}} lie in {R\setminus \frak{m}}.

Now we make our map. Since no maximal ideal contains all of the {c_\frak{m}}, they generate the unit ideal, so we can choose finitely many {c_i:=c_{\frak{m}_i}} and elements {x_i\in R} such that {\sum x_ic_i=1}. Our section {g: M\rightarrow N} is obtained by “gluing”: {g(a)=\sum x_ic_ig_i(a)}. Checking that this works is routine; the setup was primed to make it work. Thus our three conditions are equivalent.
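Since the check was skipped, here it is (writing {\pi: N\rightarrow M} for the surjection; the letter {\pi} is my notation, not fixed in the post). Each {g_i} is a section after localizing at {\frak{m}_i}, so {\pi(g_i(a))=a} holds in {M_{\frak{m}_i}}; since {M} is torsion free it injects into each localization, so {\pi(c_ig_i(a))=c_ia} holds in {M} itself. Therefore

{\displaystyle \pi(g(a))=\sum x_i\pi(c_ig_i(a))=\sum x_ic_ia=\Big(\sum x_ic_i\Big)a=a,}

and {g} is an honest section of {\pi}.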

Now our module breaks up into a torsion part and a projective part, i.e. {M\simeq T(M)\bigoplus P}, where {P\simeq M/T(M)} is projective since it is torsion free. As in the PID case the torsion part is {T(M)\simeq \bigoplus R/\frak{p}_i^{n_i}} with the appropriate uniqueness statement. Thus we do get a similar structure theorem, but as I said, the part that is really interesting for the future application is the structure of the projective part.

An Application to Isomorphisms of Varieties

I said we’d apply our pure algebra theory from the past few days to a concrete example today. I’m pretty sure this must be quite well known to experts, but I’ve never seen it written down somewhere. I’ve certainly found use of this fact in my own research.

Let {X/k} be a smooth projective variety with {H^0(X, \mathcal{T})=0}. This says the variety has no infinitesimal automorphisms, a condition that holds in many situations of interest: curves of genus {g\geq 2}, K3 surfaces, Calabi-Yau varieties of higher dimension, etc. It is just one cohomological criterion that should be relatively easy to check.

The lemma we’ll prove is that given some other variety {Y/k}, these varieties are isomorphic over {k} if and only if they are isomorphic over {k^{perf}}, where {k^{perf}} is the perfect closure of {k} inside some fixed algebraic closure {\overline{k}} (the smallest perfect field containing {k} inside of {\overline{k}}). What this lemma allows you to do in practice is quickly reduce to working over a perfect field, which can greatly simplify things.

Here is the proof. Consider the Isom scheme whose functor of points is given by {\text{Isom} (A)=\{X\otimes A \stackrel{\sim}{\rightarrow} Y\otimes A\}} (the {A}-valued points are just the isomorphisms over {A}). In our nice situation this is well-known to be representable by a quasi-projective scheme over {k}. We will simply check that the functor is formally unramified; for a finite type scheme over a field this implies étale. To simplify notation, call the Isom scheme {Z}.

Let {A} be a {k}-algebra and {I} a square zero ideal. We must check that the natural map {Z(A)\rightarrow Z(A/I)} given by restricting an isomorphism {X\otimes A\rightarrow Y\otimes A} to {X\otimes (A/I)\rightarrow Y\otimes (A/I)} is injective.

Suppose {\phi_1} and {\phi_2} are isomorphisms over {A} that agree over {A/I}. Then {\phi_2^{-1}\circ \phi_1} is an automorphism of {X\otimes A} restricting to the identity over {A/I}, i.e. an infinitesimal automorphism. But infinitesimal automorphisms are parametrized by {H^0(X, \mathcal{T})\otimes I=0} by assumption. Thus {\phi_2^{-1}\circ \phi_1} is the identity over all of {A} and hence {\phi_1=\phi_2}.

This shows that {Z(A)\rightarrow Z(A/I)} is injective and hence {Z} is étale. We now prove that the canonical map {Z(k)\rightarrow Z(k^{perf})} is a bijection. The fact that {Z} is étale over {k} tells us that {Z=Spec(E)}, where {E} is a finite product of finite separable field extensions of {k}; this {E} is the étale algebra representing the Isom functor.

We must show that {T: Hom_k(E, k)\rightarrow Hom_k(E, k^{perf})}, given by composing with the embedding {i:k\hookrightarrow k^{perf}}, i.e. {(E\rightarrow k)\mapsto (E\rightarrow k\stackrel{i}{\rightarrow} k^{perf})}, is a bijection. Since {T} just composes with an inclusion, it is injective. Since {E} is a product of separable field extensions, the image of any homomorphism {E\rightarrow k^{perf}} is separable over {k}. The extension {k^{perf}/k} is purely inseparable, so the image must land inside {k}. This shows that {T} is surjective. Thus our map {\text{Isom}(k)\rightarrow \text{Isom}(k^{perf})} is a bijection.

What we proved is just a restatement of the lemma. Of course any isomorphism of {X} and {Y} over {k} base changes to one over {k^{perf}}, but using the algebra we’ve developed the past few posts we see that an isomorphism over {k^{perf}} uniquely descends to one over {k}.

Étale algebras

One interesting thing we could do at this point is work out the much more interesting structure theory of separable algebras if we relax some of the standing assumptions such as commutativity. Instead, we’re going to start working in a more general situation, but the idea is to put stronger conditions into our definition to keep the structure close to the same.

Note that since we were working over a field all of our algebras were automatically flat. For this reason, they were all something called étale algebras. Geometrically speaking this means they are smooth and the fibers all have dimension {0}. Another equivalent way to say this is that the structure map is flat and unramified. This condition of an algebra being unramified is what we’ll discuss today.

If you look in Milne’s book Étale Cohomology, you’ll find that there’s a mind-bogglingly large number of equivalent ways to check a map is étale. The way that algebraic geometers (at least those working with functors) tend to check this condition is almost entirely passed over in the book. There is just one quick mention that this method can be done, so today we’ll write down this condition and next time we’ll work out a neat example using it.

The idea of being unramified can maybe be stated more easily in terms of spaces. A variety {X/k} is formally unramified if for any {Y\rightarrow X} and any infinitesimal thickening, say {\overline{Y}} of {Y}, there is at most one way to extend {Y\rightarrow X} to the thickened scheme {\overline{Y}\rightarrow X}. This is intentionally vague, but now if we think about everything being affine, by the equivalence of categories with algebras we could work out what the exact condition should be. One should be careful about the word “formally” existing everywhere, but since we’ll assume our algebras are again unital, associative, commutative, and finite dimensional this formal condition is completely equivalent to being unramified/étale in the usual sense.

An {A}-algebra {B} is called (formally) unramified if for any exact sequence of {A}-algebras {0\rightarrow I\rightarrow C'\rightarrow C\rightarrow 0} with {I} nilpotent, any {A}-map {B\rightarrow C} has at most one lift to {C'}, i.e. there is at most one {A}-map {B\rightarrow C'} whose composition with {C'\rightarrow C} yields the given {B\rightarrow C}. If in addition such a lift always exists, {B} is called an étale {A}-algebra. One might have learned in Hartshorne that this latter existence condition is the infinitesimal lifting criterion and implies the map is smooth. We already talked about this here.
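To see the definition catch something, here is the smallest example that fails it (a standard one, not from the post). Take {A=k} and {B=k[y]/(y^2)}, and consider the square zero extension {0\rightarrow (\epsilon)\rightarrow k[\epsilon]/(\epsilon^2)\rightarrow k\rightarrow 0}. The {k}-map {B\rightarrow k} sending {y\mapsto 0} has two distinct lifts to {k[\epsilon]/(\epsilon^2)}, namely {y\mapsto 0} and {y\mapsto \epsilon} (both are well-defined since {0^2=\epsilon^2=0}). Thus the “fat point” {B} is ramified over {k}, matching the geometric intuition that a tangent vector can be extended in more than one way.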

To wrap up today I’ll point out why this is the condition that comes up most for me. Suppose you are given the points of some scheme {h_X: \text{k-alg} \rightarrow Set}, but you don’t know much else about it. Maybe you want to check that {X} is étale. If you try to use one of the standard methods, you may get stuck, since those other methods tend to assume you know more about {X} already. Merely from knowing the functor of points, you can check that {X} is formally unramified: given a sequence {0\rightarrow I\rightarrow C'\rightarrow C\rightarrow 0} as above, the natural map {h_X(C')\rightarrow h_X(C)} coming from the functor must be injective.

Next time we’ll work out one common place where this happens, and then see that checking this map is injective (which will be easy in the example) implies some incredibly strong things about the variety {X}.

Separable Algebras 3

Fix a field {k} and let {k_s} be a separable closure. Let {G=Gal(k_s/k)}. Today we prove the strongest structure theorem so far: The category of separable {k}-algebras is anti-equivalent to the category of finite {G}-sets (where the action is continuous). Recall that one way to phrase the {G} action being continuous on {X} is to say that {X} is a union of sets on which the action factors through some finite quotient {Gal(L/k)}.

To show the theorem let’s construct our functor {F: Sep_k \rightarrow G}-Set (I made this notation up just now, so it isn’t standard). Define {F(A)=Hom_k (A, k_s)}. Recall from last time that our separable {k}-algebra must have the form {\prod L_i} where {L_i/k} are finite separable field extensions. Thus a map {A\rightarrow k_s} kills all factors except one which lands inside a finite separable extension of {k}.

The {G}-action on {F(A)} is the one given by acting on {k_s}. More specifically, given {\sigma\in G} and {f\in F(A)}, we define a new {k}-algebra map {\sigma\cdot f} by {x\mapsto \sigma(f(x))}. If you try this trick in other situations, be careful. It works here because any element {\sigma\in G} fixes {k} and hence preserves the {k}-algebra structure. If {Im(f)\subset E} for a finite Galois extension {E/k}, then the action on {f} factors through {Gal(E/k)}, and hence the action is continuous.

Now we get the rest of {F} being a contravariant functor for free because we defined it to be {Hom_k(-, k_s)}, so any {\phi: A\rightarrow B} gives us {F(B)\rightarrow F(A)} by composing {(B\stackrel{f}{\rightarrow} k_s)\mapsto (A\stackrel{\phi}{\rightarrow} B\stackrel{f}{\rightarrow} k_s)}. Of course a morphism in {G-set} must respect the {G}-action, but this is true by construction.

We must check that we have a bijection on Hom sets. Suppose we have a {G}-homomorphism {F(B)\rightarrow F(A)}, i.e. {Hom_k(B, k_s)\rightarrow Hom_k(A, k_s)}. We’ll show that {Hom(F(B),k_s)\simeq B\otimes k_s} and {Hom(F(A),k_s)\simeq A\otimes k_s}, where {Hom} here means maps of sets. Thus applying {Hom(-,k_s)} to both sides gives us a map {A\otimes k_s\rightarrow B\otimes k_s}. Keeping track of the action, we can take invariants to get {(A\otimes k_s)^G=A\rightarrow (B\otimes k_s)^G=B}. Thus from knowledge of {F(B)\rightarrow F(A)} we completely recover our map {A\rightarrow B}, which shows the functor is fully faithful.

The above argument requires us to keep careful track of the action to know it works. Let’s check that the isomorphism {\Phi: A\otimes k_s\simeq Hom(F(A), k_s)} is equivariant (I’ll call it {\Phi} since {G} is already taken by the Galois group). The map is given by {a\otimes \lambda\mapsto f} where {f(x)=x(a)\cdot \lambda} (evaluation on the first factor followed by multiplication). The action on the left is {\sigma\cdot(a\otimes\lambda)=a\otimes \sigma(\lambda)} and the action on the right is conjugation {\sigma\cdot f=f^\sigma}. Now {\Phi(\sigma\cdot(a\otimes\lambda))} is the map {x\mapsto x(a)\cdot \sigma(\lambda)}, which is exactly {f^\sigma=\sigma\cdot \Phi(a\otimes \lambda)}. Thus the isomorphism preserves the {G}-action and we see the previous paragraph goes through.

Lastly we need to know the functor is essentially surjective. Let {X} be an arbitrary finite {G}-set. Since {X} is a disjoint union of its orbits, and since {X=F(A)\coprod F(B)} implies {X=F(A\times B)}, we may assume without loss of generality that the action of {G} on {X} is transitive. The action factors through {Gal(L/k)} for some finite Galois extension {L/k}. Let {x\in X}, so that the orbit of {x} is all of {X}. The stabilizer of {x} is some subgroup {H} of {G}, and we can define the fixed field {A=L^H}. Now we’re done, because {F(A)=Hom_k(A, k_s)=Hom_k(A, L)} and the {G}-action on this set is transitive by Galois theory. Thus the map {X\rightarrow F(A)} determined by {x\mapsto (A\hookrightarrow L)} is a {G}-isomorphism.

Our functor {F} is fully faithful and essentially surjective and hence is an anti-equivalence of categories.

Separable Algebras 2

Before continuing with some basics of separable algebras, I want to give some motivation for why we should care. First, recall that an algebra is reduced exactly when it has no non-zero nilpotents. Separable algebras are related to the notion of being geometrically reduced, i.e. reduced after any base change to a field extension. This isn’t great motivation on its own, but I think the next one is.

Étale morphisms of varieties are pervasive throughout algebraic geometry. Since they are smooth maps of relative dimension {0}, you could think of them as the replacement for covering spaces from topology in algebraic geometry. The rings that appear locally for these maps are separable algebras, so if you want to understand the covering space theory and the related notions of fundamental groups, it would be good to know something about separable algebras.

Lastly, if you care about (affine) group schemes, then you’ve probably seen that a finite group scheme {G} has an exact sequence associated to it, called the connected-étale sequence, which breaks the group scheme into its connected component of the identity and the spectrum of the maximal separable subalgebra; over a perfect field the sequence even splits, giving a semi-direct product. Thus separable subalgebras tell us something about the connected components.

There’s even more motivation, but we’ll move on to the post now. That was just meant to show that this isn’t some vacuous definition that algebraists made up to play around with. Recall we are making standard assumptions that when we use the term algebra it means unital, associative, commutative, and finite dimensional.

Let {k} be any field and fix an algebraic closure {\overline{k}} and separable closure {k_s}. Today we’ll check that a {k}-algebra, {A}, is separable if and only if it satisfies one of the following equivalent conditions:

1) {A\otimes_k \overline{k}} is reduced

2) {A\otimes_k \overline{k}\simeq \overline{k}\times \cdots \times \overline{k}}

3) {A} is a finite product of separable field extensions

4) {A\otimes_k k_s\simeq k_s\times \cdots \times k_s}

Note how strong this structure theorem is. In algebraic geometry terms, suppose you find out somehow that the structure map is étale (maybe by checking a condition we’ll discuss next time). Then your variety is just {X=Spec(A)} where {A} is just a finite product of separable field extensions of the base field {k}. Thus {X} is just a finite collection of points with no weird structure going on at all.

To prove these equivalences we need the fact that any finite dimensional {k}-algebra is a finite product of algebras, each of which contains a unique maximal ideal consisting entirely of nilpotents. Since this is a standard first-year algebra fact, we’ll assume it (recall the series is on what “ought” to be taught in first-year algebra). We’ll start by noting that {A} separable is equivalent to 1; one direction is immediate from the definition, and the converse follows from the structure theory below. The previous fact gives the equivalence of 1 and 2, since having no nilpotents forces the maximal ideals of the factors to be {0}, and hence {A\otimes_k \overline{k}} is a product of field extensions of the algebraically closed field {\overline{k}}, each of which must be {\overline{k}} itself.

Tensoring up along {k\subset k_s\subset \overline{k}} gives 3 implies 4 and 4 implies 2, so all we need to show is 2 implies 3. Now 2 implies that the number of {k}-algebra maps {A\rightarrow \overline{k}} equals the dimension of {A}. Also, {\dim A=\sum \dim A_i} where the {A_i} are the factors with nilpotent maximal ideals. Since {A} is reduced, these maximal ideals are {0} and hence the {A_i} are field extensions of {k}. If they are all separable, then our dimension count matches up; but if even one of them were a non-separable extension, the number of maps would be smaller due to a multiple root in a minimal polynomial. Thus they all must be separable extensions of {k}. This proves the equivalence of all the definitions.
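Condition 2 can be seen concretely in a small case. The following SymPy computation (an illustration I’m adding, with {A=\mathbb{Q}[x]/(x^2-2)}) base changes to a field containing {\sqrt{2}}, where the defining polynomial splits into distinct linear factors; the Chinese Remainder Theorem then gives the product decomposition:

```python
from sympy import expand, factor, sqrt, symbols

x = symbols('x')

# A = Q[x]/(x^2 - 2) is a separable Q-algebra (a separable field extension).
# Over a field containing sqrt(2) the defining polynomial factors into
# distinct linear factors, so by CRT
#   A tensor Q(sqrt(2))  =  Q(sqrt(2)) x Q(sqrt(2)).
f = factor(x**2 - 2, extension=sqrt(2))

assert f.is_Mul and len(f.args) == 2   # two distinct linear factors
assert expand(f) == x**2 - 2           # still the same defining polynomial
```

Each linear factor contributes one copy of the base field to the product, matching the dimension count in the proof above.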

Now that we know the general form of any separable {k}-algebra you might think they are uninteresting, but next time we’ll show there is something interesting going on and it is related to covering spaces/deck transformation/fundamental groups. Then we’ll move on to étale maps in more generality than just really nice algebras over fields.

Separable Algebras 1

In order to actually get blogging again I think I’m going to do a series on things you should learn in first year graduate algebra but don’t. It will consist mostly of short posts that contain examples, definitions, or tricks that come up for me.

Today we’ll discuss separable algebras. Let {k} be a field (it is important that we allow this to possibly be imperfect). Our algebras will always satisfy the standard hypotheses made in commutative algebra books such as Matsumura, so they will be unital, associative, and commutative. A {k}-algebra {A} is called separable if for any field extension {L/k} we have {A\otimes_k L} is reduced (contains no nilpotent elements).

By definition {A} itself must be reduced. The purpose of this post is to show that being reduced over {k} is not enough to guarantee being separable. One probably wouldn’t expect it to be, considering you have to stay reduced after every base change, but it turns out that checking reducedness over {k} does suffice when the field is perfect.

We will take as a black box for the purpose of this post that if we choose an algebraic closure {\overline{k}} and define {k^{1/p}=\{x\in \overline{k} : x^p\in k\}} to take all {p}-th roots inside this algebraic closure, then it suffices to check that {A\otimes_k k^{1/p}} is reduced. Thus we really only need to check one base change. Now if you work in characteristic {0} all the time, you might be thinking how could we ever pick up nilpotent elements just by tensoring with a field extension?

This is because in characteristic {0}, your field is automatically perfect and for any perfect field we have {k^{1/p}=k}. Thus we recover the statement that for perfect fields we only have to check that {A} is reduced to see it is separable. Now for our counterexample to the idea that this is all we have to check in general.

Let {k=\mathbb{F}_p(t)}. Define our algebra {\displaystyle A=\frac{k[x]}{(x^p-t)}=k(\eta)} where {\eta^p=t}. Since {A} is just the field extension of {k} obtained by adjoining a {p}-th root of {t}, it is reduced (fields don’t have non-zero nilpotents). But now let’s use our black box to see why the choice of {k^{1/p}} is so important. Note that {\eta\notin k}, but we do have {\eta\in k^{1/p}}.

Now check {\displaystyle A\otimes_k k^{1/p}\simeq \frac{k^{1/p}[x]}{(x^p-t)}=\frac{k^{1/p}[x]}{(x-\eta)^p}}, using that {x^p-t=(x-\eta)^p} in characteristic {p} once {\eta\in k^{1/p}}. The element {(x-\eta)} is certainly not zero in {A\otimes_k k^{1/p}}, but it does have the property that {(x-\eta)^p=0}, i.e. we found a non-zero nilpotent element. Thus there are reduced algebras over non-perfect fields that are not separable.
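The only fact used in that last step is the “freshman’s dream” {(x-\eta)^p=x^p-\eta^p} in characteristic {p}, which is easy to confirm symbolically. Here it is in SymPy, with {p=5} as an arbitrary illustrative choice:

```python
from sympy import expand, symbols

x, a = symbols('x a')
p = 5

# In characteristic p the binomial coefficients C(p, i) are divisible by p
# for 0 < i < p, so (x - a)^p = x^p - a^p. This is exactly why
# x^p - t = (x - eta)^p once a p-th root eta of t is available.
lhs = expand((x - a)**p, modulus=p)
rhs = expand(x**p - a**p, modulus=p)
assert lhs == rhs
```

Over the integers the cross terms survive, so the identity genuinely depends on working modulo {p}.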