Modules over Dedekind Domains

We’re going to change topics again, but we’re sticking to the theme of doing things that just barely don’t get covered in a first-year graduate algebra course (maybe some of these things do get covered at some universities) but turn out to be extremely useful in “real life.” Today we’ll start a series of posts on modules over Dedekind domains.

The reason this turns out to be useful is that many examples in algebraic/arithmetic geometry require you to look no further than understanding modules over Dedekind domains. The other extremely useful place this comes up is in algebraic number theory, where the integral closure of {\mathbb{Z}} in a number field {K}, called the ring of integers of {K}, is a Dedekind domain. Understanding modules over these rings is an important part of classical algebraic number theory. It matters for more modern trends with modular forms and elliptic curves as well, since orders in imaginary quadratic fields appear as endomorphism rings of elliptic curves with complex multiplication.

I recently spent a long time trying to work something out that just turned out to be standard known theory of modules over Dedekind domains, and this motivated me to do this series next. It seems to me that graduate courses prove the structure theorem for finitely generated modules over PIDs and talk about Dedekind domains, but that is where they leave off, which means that is where we'll start.

Recall that a Dedekind domain is an integrally closed, Noetherian domain of Krull dimension {1}. There are lots of other characterizations as well, but we'll just recall those if they are needed. Our main goal is to prove a structure theorem for finitely generated projective modules. We'll start with something more basic that should be familiar from the PID case. Fix {R} a Dedekind domain and {M} a finitely generated {R}-module. It turns out that {M} is projective if and only if it is flat if and only if it is torsion free.
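As a concrete illustration of why this theorem has content (a standard example, not specific to this post): in the Dedekind domain {\mathbb{Z}[\sqrt{-5}]}, any nonzero ideal is torsion free, hence projective by the theorem, but need not be free.

```latex
\[
I=(2,\,1+\sqrt{-5})\;\subset\; R=\mathbb{Z}[\sqrt{-5}].
\]
% I is torsion free (it sits inside the domain R), hence projective.
% But I is not free: free of rank one would mean principal, and a
% generator g would have norm N(g) dividing both N(2)=4 and
% N(1+\sqrt{-5})=6, hence N(g)=1 or 2.  N(g)=1 forces I=R, which
% fails since R/I \cong \mathbb{F}_2, and x^2+5y^2=2 has no integer
% solutions, so N(g)=2 is impossible.  A projective non-free module.
```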

Maybe this is cheating to assume "first year algebra" to prove this, since my cut-off is quite arbitrary, but the equivalence of torsion free and flat follows immediately by noticing that these are local properties. Since the localization of a Dedekind domain at a maximal ideal is a DVR, which is a PID, the equivalence follows from the fact that a finitely generated module over a PID is torsion free if and only if it is free.

Flat always implies projective here, since a finitely presented flat module is projective and finitely generated means finitely presented over a Noetherian ring. Thus the only thing left to check is that torsion free implies projective. Suppose {M} is torsion free. Note that {M} is projective if and only if every sequence {0\rightarrow N'\rightarrow N\rightarrow M\rightarrow 0} splits. If {M} is projective, we can just lift the identity map {id_M:M\rightarrow M} through the surjection {N\rightarrow M} to obtain a section. Conversely, if every such sequence splits, take a surjection from a finite free module onto {M}; the splitting realizes {M} as a direct summand of a free module.

Now suppose we're given such a sequence. After localizing at any maximal ideal it certainly splits, because again we have these results for PIDs already. The idea of the rest of the proof is to glue these local splittings into an honest section {M\rightarrow N}. Suppose our local sections are {g_{\mathfrak{m}}:M_\mathfrak{m}\rightarrow N_\mathfrak{m}}. Since {M} is finitely generated, we can look at the denominators appearing in the images of the generators under {g_{\mathfrak{m}}} and multiply them together to obtain an element {c_\mathfrak{m}\notin\mathfrak{m}} such that {c_\mathfrak{m}\cdot g_{\mathfrak{m}}} carries {M} into {N}.

Now we make our map. Since each {c_\mathfrak{m}} lies outside {\mathfrak{m}}, the {c_\mathfrak{m}} together generate the unit ideal, so we can choose finitely many {c_i:=c_{\mathfrak{m}_i}} and elements {x_i\in R} such that {\sum x_ic_i=1}. Our section is obtained by "gluing": define {g: M\rightarrow N} by {g(a)=\sum x_ic_ig_{\mathfrak{m}_i}(a)}. Now it is just a matter of checking that this works, so we won't do it; the setup was primed to make it work. Thus our three conditions are equivalent.
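The omitted check is short; here is a sketch, writing {f:N\rightarrow M} for the given surjection. It uses only that each {g_{\mathfrak{m}_i}} is a local section and that {\sum x_ic_i=1}.

```latex
% g is R-linear, being an R-linear combination of the maps c_i g_{m_i}.
% It is a section of f: for a in M,
f(g(a)) \;=\; f\Big(\sum_i x_i c_i\, g_{\mathfrak{m}_i}(a)\Big)
        \;=\; \sum_i x_i c_i\, f\big(g_{\mathfrak{m}_i}(a)\big)
        \;=\; \sum_i x_i c_i\, a \;=\; a.
% Here f(c_i g_{m_i}(a)) = c_i a holds in each localization M_{m_i}
% because f \circ g_{m_i} = id there, and this pins it down in M
% itself since M is torsion free, so M injects into each M_{m_i}.
```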

Now our module breaks up into a torsion part and a projective part, i.e. {M\simeq T(M)\oplus P}. As in the PID case, the torsion part decomposes as {T(M)\simeq \bigoplus R/\mathfrak{p}_i^{n_i}} with the appropriate uniqueness statement. Thus we do get a nice similar structure theorem, but as I said, the part that is really interesting for the future application I want to point out is the structure of the projective part.
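For reference, here is the statement we are heading toward for the projective part; this is the standard theorem usually attributed to Steinitz.

```latex
% Steinitz: every finitely generated projective module P of rank n > 0
% over a Dedekind domain R is a direct sum of ideals, and in fact
P \;\simeq\; R^{\,n-1}\oplus I \qquad \text{for some nonzero ideal } I\subset R,
% with  R^{n-1} \oplus I \,\simeq\, R^{m-1} \oplus J  if and only if
% n = m and [I] = [J] in the ideal class group Cl(R).  So P is
% classified by its rank together with a single ideal class.
```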


Derived Functors II

First off, we need to get this pesky result out of the way: derived functors are independent of the choice of resolution. We do this by proving a related result. Suppose \cdots\rightarrow F_i \stackrel{\phi_i}{\rightarrow}F_{i-1}\rightarrow \cdots \rightarrow F_1\stackrel{\phi_1}{\rightarrow} F_0 is a complex of projective A-modules, and \cdots \rightarrow G_1\stackrel{\psi_1}{\rightarrow} G_0 is a complex of A-modules. Let M=\mathrm{coker}\,\phi_1 and N=\mathrm{coker}\,\psi_1, and suppose the homology of G vanishes except for H_0(G)=N. Then every map \beta\in Hom_A(M, N) is induced on H_0 by a map of complexes \alpha : F\to G, and \alpha is determined up to homotopy by \beta.

Before proving this, note that as a corollary we get that any two projective resolutions of a module are homotopy equivalent, and hence the derived functors constructed from different resolutions have a natural isomorphism between them.

Proof: I knew I should never have tried to do homological algebra without a good way to do diagrams on wordpress. This is clearer if you draw it out…but the idea for existence is to inductively lift your maps. Since F_0 is projective and G_0\to N is onto, lift F_0\to M\stackrel{\beta}{\to} N to \alpha_0: F_0\to G_0. Then \alpha_0\phi_1 carries F_1 into \ker(G_0\to N)=\mathrm{im}(G_1\to G_0), so by projectivity of F_1 we can lift this to \alpha_1: F_1\to G_1, and continue this process. This gives the map of complexes.
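For what it's worth, here is the diagram being chased, in plain LaTeX array form (wordpress-unfriendly, but it may help to see it):

```latex
\begin{array}{ccccccc}
\cdots & F_1 & \xrightarrow{\ \phi_1\ } & F_0 & \longrightarrow & M & \rightarrow 0\\
 & \downarrow{\scriptstyle\alpha_1} & & \downarrow{\scriptstyle\alpha_0} & & \downarrow{\scriptstyle\beta} & \\
\cdots & G_1 & \xrightarrow{\ \psi_1\ } & G_0 & \longrightarrow & N & \rightarrow 0
\end{array}
% \alpha_0 exists because F_0 is projective and G_0 -> N is onto;
% \alpha_1 exists because \alpha_0\phi_1 lands in im(G_1 -> G_0)
% and F_1 is projective; and so on up the complex.
```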

We now want uniqueness up to homotopy. Suppose we have two lifts of \beta: M\to N, say \alpha and \gamma. Then \alpha - \gamma lifts the zero map, so we need only show that any lifting of 0 is null-homotopic. Suppose then that \eta is a lifting of zero. We need \eta_i=h_{i-1}\phi_i + \psi_{i+1}h_i for some maps h_i: F_i\to G_{i+1}. Since \eta lifts zero, the induced map \mathrm{coker}\,\phi_1\to \mathrm{coker}\,\psi_1 is zero, i.e. \eta_0 carries F_0 into \mathrm{im}\,\psi_1. So by projectivity we lift to h_0: F_0\to G_1 such that \psi_1 h_0=\eta_0. But now \psi_1(h_0\phi_1-\eta_1)=\eta_0\phi_1-\psi_1\eta_1=0, since \eta is a map of complexes. Thus h_0\phi_1-\eta_1 maps into \ker\psi_1=\mathrm{im}\,\psi_2 (this is where we use H_1(G)=0). But F_1 is projective, so we can lift to h_1: F_1\to G_2. Repeat this process.

I highly recommend just doing the diagram chasing yourself. This is sort of a mess to read, and so should only be used as a sort of guideline if you get stuck somewhere.

Hmm…what else did I say I would do? Oh right. If you've seen homological constructions, then you can probably guess that there is a connecting homomorphism type theorem, i.e. something phrased, "a short exact sequence of complexes induces a long exact sequence in homology." This tends to be useful in actual calculations of your derived functors. I won't go through it, since it is just the standard "Snake Lemma" construction.

When I said there was extra structure, I was thinking about going into putting a product on the whole thing to make it into a graded ring, but I’ve decided that that is getting a little far afield for now. This may be the end of my ramblings on derived things for awhile.

The other thing I thought I should mention was my confusion about what this blog has become. I'm at a sort of turning point. I'm not sure if I should eliminate the non-math/mathematical-physics content and turn it into a blog just on that stuff (it sort of accidentally has shifted to that unofficially), or if I should make a conscious effort to balance things more. There are positives and negatives to both in my mind. Changing to a more focused blog would help draw and keep readers that actually care about the stuff I've been doing recently. On the other hand, it sort of goes against everything I believe in. But the way it's been going now, I've probably alienated all the readers that used to read for philosophy or random art things, so randomly posting on those topics seems sort of pointless if I've lost those people, and it would just serve to confuse and possibly alienate people only interested in the math side.

No immediate decisions will be made, so there is some time.

Wrapping up the Jacobson Radical

We now have the following equivalent definitions of the Jacobson radical. Remember that for now we assume R is commutative with 1.

1) Intersection of all maximal ideals
2) Intersection of the annihilators of all simple left R-modules
3) The set of non-generators of R
4) The set of elements, x, such that 1-rx has a left inverse for all r.

I think I already pointed out that from at least two of these definitions we automatically get that J(R) is a two-sided ideal. Two basic examples: if R is any field, then J(R)=\{0\}; and if K is a field and R=K[[x_1, \ldots, x_n]], then J(R)=\{f\in R : f \ has \ zero \ constant \ term\}. An important generalization is that in any local ring the Jacobson radical is the set of non-units.
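Two more quick computations in this spirit (standard examples, worked out here for concreteness), using the intersection-of-maximal-ideals definition:

```latex
% For Z: the maximal ideals are (p) for p prime, so
J(\mathbb{Z}) \;=\; \bigcap_{p \ prime} (p) \;=\; \{0\}.
% For Z/12: the maximal ideals are (2)/(12) and (3)/(12), so
J(\mathbb{Z}/12\mathbb{Z}) \;=\; (2)/(12)\,\cap\,(3)/(12) \;=\; (6)/(12) \;=\; \{0,\,6\}.
```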

An important result called Nakayama's Lemma says that if M is finitely generated and M=\Phi(M)+N for a submodule N, then M=N. Special case: if M= J(R)M+N, then M=N (using J(R)M\subset\Phi(M) from before). Corollary to that special case: if M=J(R)M, then M=\{0\} (this last form is what is sometimes called Nakayama's Lemma).

Proof: Since M=\Phi(M)+N and M is finitely generated, we can write M=\langle x_1+n_1, x_2+n_2, \ldots, x_m+n_m\rangle, where x_j\in \Phi (M) and n_j\in N for all j. Define S=\{n_1, \ldots, n_m\}.

Then with this setup, we exploit the non-generator definition. Note that
M=\langle x_1, n_1, x_2, n_2, \ldots, x_m, n_m\rangle
= \langle S, x_1, \ldots, x_m\rangle
= \langle S, x_1, \ldots, x_{m-1}\rangle
\cdots
= \langle S\rangle \subset N,
where at each step we may drop one x_j, since each x_j\in\Phi(M) is a non-generator.

And we are done, since M=\langle S\rangle\subset N\subset M forces M=N! It may have seemed a little roundabout to go through the "Frattini submodule" in developing the Jacobson radical, but it certainly pays off to have lots of definitions, as we see here.
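A standard consequence worth recording (not proved in the post, but it is how the special case usually gets used in practice):

```latex
% If M is finitely generated and the images of x_1, ..., x_k generate
% M / J(R)M, then x_1, ..., x_k generate M.
% Proof: let N = <x_1, ..., x_k>.  The hypothesis says exactly that
% M = N + J(R)M, and the special case (M = J(R)M + N implies M = N)
% immediately gives
M \;=\; \langle x_1,\ldots,x_k\rangle.
```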

The last little bit I wanted to say was that we can define the Jacobson radical for a ring without identity. I don't want to go through the details, but a standard trick is to define a new ring (with identity) S=\mathbb{Z}\times R with coordinatewise addition and multiplication (a,b)(c,d)=(ac, ad+cb+bd). It is pretty basic to check that J(S)=\{0\}\times I where I is some ideal in R (using the fact that J(\mathbb{Z})=\{0\}). It is also just algebraic manipulation to check that I is the largest ideal in R such that for every x\in I there is a y\in I with x+y-yx=0. This then is our definition: J(R) is the largest ideal satisfying that property, i.e. the sum of all ideals satisfying it.
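To see that S really does have an identity, a one-line check shows the element is (1,0):

```latex
(1,0)(c,d) \;=\; (1\cdot c,\; 1\cdot d + c\cdot 0 + 0\cdot d) \;=\; (c,d),
\qquad
(a,b)(1,0) \;=\; (a\cdot 1,\; a\cdot 0 + 1\cdot b + b\cdot 0) \;=\; (a,b).
% So (1,0) is a two-sided identity for S, and R sits inside S as the
% (two-sided) ideal {0} x R.
```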

The Jacobson Radical Part II

First recall that we showed J(R)=\Phi (R), and hence that J(R) is a submodule of R as a module over itself, i.e. a left ideal of R. Next recall that we showed J(R)=J(R)R, and hence it is a right ideal. So J(R) is a two-sided ideal.

Let's now work towards the annihilator definition. Define a relation on the set of maximal left ideals of R by I \sim J if there is a simple left R-module M with elements a,b\in M such that I=ann_R(a) and J=ann_R(b). This is an equivalence relation, since I \sim J iff R/I and R/J are isomorphic as R-modules. Indeed, examine the module homomorphisms r\mapsto ra and r\mapsto rb to see that if I \sim J then R/I\cong M \cong R/J. Conversely, if R/I\cong R/J via an isomorphism \varphi: R/I\to R/J, then in the simple module M=R/J the elements \varphi(1+I) and 1+J satisfy I=ann_R(\varphi(1+I)) and J=ann_R(1+J), so I \sim J.

Now let \mathfrak{I} be an equivalence class of maximal left ideals, and let M be a simple left R-module isomorphic to R/I for each I\in\mathfrak{I}. I claim that \cap_{I\in\mathfrak{I}} I=ann_R(M). By definition and the property above, \mathfrak{I}=\{ann_R(a): a\in M, \ a\neq 0\}: if J\in\mathfrak{I} and \varphi: R/J\to M is an isomorphism with \varphi(1+J)=a, then J=ann_R(1+J)=ann_R(a). But since ann_R(M)=\cap_{a\neq 0}ann_R(a), this gives precisely \cap_{I\in \mathfrak{I}}I=ann_R(M).

Now just intersect over all the maximal left ideals. We get \displaystyle J(R)=\cap_{J \ maximal} J=\cap_{\mathfrak{I}}\cap_{I\in \mathfrak{I}} I=\cap_{\mathfrak{I}}ann_R(M_{\mathfrak{I}})=\cap_{M \ simple}ann_R(M), where M_{\mathfrak{I}} is the simple module attached to the class \mathfrak{I} as above; the last equality holds because every simple module arises from some class, namely that of ann_R(a) for any a\neq 0. And voila, we have it. This was a rather terse run-through and assumed a working knowledge of some facts about modules, but I find it to be a rather fascinating take on the development.

Next we’ll exploit some of these definitions to get some properties of the Jacobson radical, and develop it in a method that doesn’t require our ring to have 1.

An Approach to the Jacobson Radical

I’ve decided that in order to better understand the concepts in algebra this quarter, I should probably start posting several times a week on them. The quarter system is fast, and so we only have three weeks left.

I want to develop the Jacobson radical (this is really the next step in all that stuff I was presenting before anyway). So at first we'll assume our ring R has 1 (which was the case before). The typical development defines \displaystyle J(R)=\bigcap_{M \ simple \  R-module}ann_R(M). The Jacobson radical is the intersection of the annihilators of all simple left R-modules (note that ann_R(M)=\{r\in R : rm=0 \ \forall m\in M\}).

There are many, many alternative formulations, but I want to develop this from one of the more obscure angles.

First, let \Phi(M) be the "Frattini submodule," i.e. the set of nongenerators. An element x is a nongenerator if, for any subset S of M with R\langle S, x\rangle= M, we have R \langle S\rangle = M. So a nongenerator means that whenever a set containing it generates your module, that set without the nongenerator still generates.

Step 1: \Phi(M)=\bigcap_{N<M \ maximal}N.

Suppose x\notin \Phi(M). Claim: there is a maximal submodule N such that x\notin N. Since x is not a nongenerator, let S be such that M=R\langle S, x \rangle but M\neq R\langle S\rangle. Then necessarily x\notin R\langle S\rangle, since otherwise R\langle S\rangle=R\langle S,x\rangle=M. Now R\langle S\rangle \in P=\{H : H<M, S\subset H, x\notin H\}. We have a non-empty partially ordered set, and unioning gives an upper bound to any chain (x stays out of the union). Thus we apply Zorn's Lemma to get a maximal element, N. In fact N is a maximal submodule of M: any submodule strictly larger than N contains S, and must contain x by maximality of N in P, hence contains R\langle S, x\rangle=M. Thus \displaystyle \Phi(M)\supset\bigcap_{N<M \ maximal}N.

For the reverse, just note that if x is not a member of the right side, then there is a particular maximal submodule N such that x\notin N. Thus M=R\langle N, x\rangle by maximality of N, but R\langle N\rangle=N\neq M. So x\notin \Phi(M), and we get equality \Phi(M)=\bigcap_{N<M \ maximal}N. So the Frattini submodule equals the intersection of all maximal submodules of M.
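A tiny example to keep in mind (standard, added here for concreteness): take R=\mathbb{Z} and M=\mathbb{Z}/12.

```latex
% The maximal submodules of Z/12 are the prime-index subgroups
% <2> = {0,2,4,6,8,10} and <3> = {0,3,6,9}, so Step 1 gives
\Phi(\mathbb{Z}/12) \;=\; \langle 2\rangle \cap \langle 3\rangle \;=\; \langle 6\rangle \;=\; \{0,\,6\}.
% This matches the nongenerator picture: 6 can be dropped from any
% generating set (e.g. <1,6> = <1> = Z/12), while 2 cannot
% (<3,2> = Z/12 but <3> is proper).
```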

Step 2: Define the Jacobson radical now to be J(R)=\{x\in R: 1-rx \ has \ a \ left \ inverse \ \forall r\in R\}. Claim: \Phi(R)\subset J(R). Note the subtle shift in usage here: we are considering R as an R-module over itself.
Suppose x\notin J(R). Then there is some r\in R such that 1-rx has no left inverse, i.e. 1\notin R(1-rx), so R(1-rx) is a proper left ideal and is contained in some maximal left ideal N. Then 1-rx\in N since 1\in R, but x\notin N, because otherwise 1=(1-rx)+rx\in N, a contradiction. So x\notin \Phi(R).

Step 3: J(R)M\subset \Phi(M).

Suppose x\in J(R) and m\in M. Claim: xm\in \Phi(M). Suppose that M=R\langle S, xm\rangle. Since m\in M, we can write m=\sum r_js_j +rxm for some r_j, r\in R and s_j\in S, i.e. m-rxm=\sum r_js_j\Rightarrow (1-rx)m=\sum r_js_j.

Since x\in J(R), there is b\in R with b(1-rx)=1. Now m=b(1-rx)m=\sum br_js_j\in R\langle S\rangle. Thus xm\in R\langle S\rangle\Rightarrow R\langle S\rangle=R\langle S, xm\rangle = M. And we are done.

Now notice from the above steps that \Phi(R)\subset J(R)=J(R)\cdot \{1\}\subset J(R)\cdot R\subset \Phi(R). And so we have equality all the way through. Namely, \Phi(R)=J(R). So the set of all nongenerators, the intersection of all maximal submodules, and the set of elements such that 1-rx has a left inverse for all r\in R are all the same (namely, the Jacobson radical).

Next time we will use these to show that they are all equivalent to the first-listed and more common definition: the intersection of the annihilators of all simple left R-modules.

(I claim no responsibility for the invention of this approach. This is the way my professor sees the world).