A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Krull Dimension

I didn’t actually want to take that long of a break before this post, but I had to do a final exam and give/grade a final, so that ate up lots of time. The next natural thing to move on to is something called Krull dimension. This is sort of annoying to define, but highly useful. I’ve also decided I’m going to stop “fraking” my \frak{p}‘s, since it is annoying to type, and will just use capital P’s for prime ideals.

First we need to define something I’ll call “height.” A prime chain is a strictly decreasing chain of prime ideals: P_0\supsetneq P_1 \supsetneq \cdots \supsetneq P_n; the length of this chain is n. Now we define the height of a prime ideal P, ht(P), to be the length of the longest prime chain with P=P_0.

Some quick examples: It is easy to check that ht(P)=0 if and only if P is minimal, and hence in an integral domain ht(P)=0 if and only if P=\{0\}. Let R=k[x_1, x_2, \ldots] where k is a field. Then let P_i=(x_i, x_{i+1}, \ldots) be the prime ideal generated by those indeterminates (check that it is prime easily by noting R/P_i\cong k[x_1, \ldots , x_{i-1}], which is clearly an integral domain). Then for any n, we can make a prime chain P_1\supsetneq P_2\supsetneq \cdots \supsetneq P_{n+1} of length n. Thus ht(P_1)=\infty.
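Here is an even smaller example to calibrate against: in \mathbb{Z} the prime ideals are just (0) and (p) for p a prime number, so the longest prime chain starting at (p) is

(p)\supsetneq (0),

and hence ht((p))=1, while ht((0))=0.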

Now for the actual definition we want to work with. I’ll denote the Krull dimension simply by “dim” rather than “Krulldim”. Then we define: dim(R)=\sup\{ht(P) : P\in Spec(R)\}. So our quick example here is that for integral domains, dim(R)=0 if and only if R is a field.
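To see the definition in action on the examples so far: a field k has (0) as its only prime ideal, and in \mathbb{Z} no prime chain is longer than (p)\supsetneq (0), so

dim(k)=0, \qquad dim(\mathbb{Z})=1,

while the example above shows dim(k[x_1, x_2, \ldots])=\infty.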

My goal for the day is to characterize all Noetherian rings of dim 0. The claim is that dim(R)=0 if and only if every finitely generated R-module M has a composition series. First suppose dim(R)=0. Since R is Noetherian, there are only finitely many minimal prime ideals. Since dim(R)=0, every prime ideal is minimal, and hence there are only finitely many primes at all. Let’s call them P_1, \ldots, P_n.

Let’s look at the nilradical: \sqrt{(0)}=\cap P_i. Since R is Noetherian, the nilradical is nilpotent, so there is some m such that (\sqrt{(0)})^m=\{0\}. So we define N=P_1\cdots P_n\subset P_1\cap\cdots\cap P_n=\sqrt{(0)}, and hence N^m=\{0\}.

Let M be a finitely generated R-module. Then we have the chain M\supset P_1 M\supset P_1P_2M\supset\cdots\supset NM. Now note that each factor \frac{P_1\cdots P_{i-1}M}{P_1\cdots P_i M} is an R/P_i-module. Since dim(R)=0, every prime is maximal, so R/P_i is a field and the factor is a vector space over it. But M is finitely generated, so each factor is finite-dimensional, thus we can refine the chain so that all factors are simple.

Now we do this same trick on each of the chains N^jM\supset P_1N^jM\supset\cdots \supset N^{j+1}M for j=1, \ldots, m-1. Since N^mM=\{0\}, splicing all of these refined chains together gives a composition series for M.
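To see the forward direction in a tiny example: take R=k[x]/(x^3), a Noetherian ring with dim(R)=0 whose only prime ideal is P_1=(x), so N=(x) and N^3=\{0\}. For M=R the chains above assemble into

R\supsetneq (x)\supsetneq (x^2)\supsetneq \{0\},

and each factor is a one-dimensional vector space over R/(x)\cong k, so this is a composition series.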

For the converse, suppose every finitely generated R-module has a composition series. Showing dimension zero amounts to showing that R has no pair of prime ideals P\supsetneq Q. Suppose such a pair exists. Pass to the quotient R/Q and reinterpret our hypothesis (any R/Q-module is also an R-module, so the hypothesis survives): we may assume R is an integral domain with a nonzero prime ideal, and that R itself has a composition series R\supsetneq I_1\supsetneq \cdots \supsetneq I_d\supsetneq \{0\}. So I_d is a minimal nonzero ideal. Let x\in I_d be any nonzero element. Then xI_d\subset I_d and xI_d\neq \{0\} (we’re in a domain), so by minimality we have xI_d=I_d. So there is a y\in I_d such that xy=x, i.e. y=1 after cancelling x in the domain, and hence I_d=R. But I_d is simple as an R-module (it is the last factor of the composition series), so R has no ideals other than \{0\} and R. Thus R is a field, which contradicts our having a nonzero prime ideal.

Well, I think that is enough fun for one day. I may post again tomorrow, since my final is Wed.



Lying Over and Going Up Part II

I realized there was one more result I probably should have included last time. Oh well. Here goes:

Let R^*/R be integral, \frak{p} a prime ideal in R and \frak{p}^*, \frak{q}^* prime ideals in R^* lying over \frak{p}. If \frak{p}^*\subset \frak{q}^*, then \frak{p}^*=\frak{q}^*.

Proof: Recall that S^{-1}R^* is integral over S^{-1}R by last time for any multiplicative set S, and also that prime ideals disjoint from S correspond to prime ideals in the ring of fractions. Thus the hypotheses still hold if we localize at \frak{p}, i.e. take S=R\setminus\frak{p}: R^*_\frak{p} is integral over R_\frak{p}, and \frak{p}^*R^*_\frak{p}\subset \frak{q}^*R^*_\frak{p} are prime ideals lying over \frak{p}R_\frak{p}. Thus we can WLOG replace R^* and R by their localizations, so that R is local with maximal ideal \frak{p}. Thus by last time \frak{p}^* is maximal. Since \frak{p}^*\subset \frak{q}^* and \frak{q}^* is a proper ideal, we have \frak{p}^*=\frak{q}^*.

Now we are ready for the two big theorems. Here is the “Lying Over” Theorem. Let R^*/R be an integral extension. If \frak{p} is a prime ideal in R, then there is a prime ideal \frak{p}^* in R^* lying over \frak{p}, i.e. \frak{p}^*\cap R=\frak{p}.

Proof: Let S=R\setminus\frak{p}. First note that R \stackrel{i}{\longrightarrow} R^* \stackrel{h^*}{\longrightarrow} S^{-1}R^* and R \stackrel{h}{\longrightarrow} R_\frak{p} \stackrel{j}{\longrightarrow} S^{-1}R^* form the two sides of a commutative diagram. By last time S^{-1}R^* is integral over R_\frak{p}. Choose a maximal ideal \frak{m}^* in S^{-1}R^*. Then \frak{m}^*\cap R_\frak{p} is maximal in R_\frak{p} (by last time, since \frak{m}^* lies over it). But R_{\frak{p}} is local with unique maximal ideal \frak{p}R_\frak{p}, so \frak{m}^*\cap R_\frak{p}=\frak{p} R_\frak{p}. But the preimage of a prime ideal is prime, so \frak{p}^*=(h^*)^{-1}(\frak{m}^*) is a prime ideal in R^*.
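For reference, here is the commutative square being used, with all maps the canonical ones:

\begin{array}{ccc} R & \stackrel{i}{\longrightarrow} & R^* \\ h\downarrow & & \downarrow h^* \\ R_\frak{p} & \stackrel{j}{\longrightarrow} & S^{-1}R^* \end{array}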

Now we just diagram chase: (h^*i)^{-1}(\frak{m}^*)=i^{-1}(h^*)^{-1}(\frak{m}^*)=i^{-1}(\frak{p}^*)=\frak{p}^*\cap R. And also: (jh)^{-1}(\frak{m}^*)=h^{-1}j^{-1}(\frak{m}^*)=h^{-1}(\frak{m}^*\cap R_\frak{p})=h^{-1}(\frak{p}R_\frak{p})=\frak{p}. Since the diagram commutes, h^*i=jh, so these two computations give the same ideal, i.e. \frak{p}^*\cap R=\frak{p}.

Thus \frak{p}^* lies over \frak{p}.

Our other big theorem is the one about “Going Up”: If R^*/R is an integral extension and \frak{p}\subset \frak{q} are prime in R, and \frak{p}^* lies over \frak{p}, then there is a prime ideal \frak{q}^* lying over \frak{q} with \frak{p}^*\subset \frak{q}^*.

Proof: By last time (R^*/\frak{p}^*)/(R/\frak{p}) is an integral extension, where R/\frak{p} is embedded in R^*/\frak{p}^* as (R+\frak{p}^*)/\frak{p}^*. Now replace R^* and R by these quotients, so that both \frak{p}^* and \frak{p} become \{0\}, and apply the Lying Over Theorem to the prime \frak{q}/\frak{p} to get our result.
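To spell out the last step (writing \pi: R^*\to R^*/\frak{p}^* for the quotient map; \overline{\frak{q}} below is just my label for the prime that Lying Over produces): Lying Over applied to \frak{q}/\frak{p} gives a prime \overline{\frak{q}} of R^*/\frak{p}^* with \overline{\frak{q}}\cap (R+\frak{p}^*)/\frak{p}^*=(\frak{q}+\frak{p}^*)/\frak{p}^*, and then

\frak{q}^*:=\pi^{-1}(\overline{\frak{q}})

is a prime ideal of R^* containing \frak{p}^* and satisfying \frak{q}^*\cap R=\frak{q}.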

So as we see here, integral extensions behave extremely nicely. These theorems guarantee that we always have prime ideals lying over ones in the base ring. This has some important applications to the Krull dimension that we’ll start looking at next time.



Lying Over and Going Up

If you haven’t heard the terms in the title of this post, then you are probably bracing yourself for this to be some weird post on innuendos or something. Let’s first do some motivation (something I’m not often good at…remember that Jacobson radical series of posts? What is that even used for? Maybe at a later date we’ll return to such questions). We can do ring extensions just as we do field extensions, but they tend to be messier for obvious reasons. So we want properties that force the prime ideals of an extension to interact nicely with the prime ideals of the base ring. Two such properties are “lying over” and “going up.”

Let R^*/R be a ring extension. Then we say it satisfies “lying over” if for every prime ideal \mathfrak{p}\subset R in the base, there is a prime ideal \mathfrak{p}^*\subset R^* in the extension such that \mathfrak{p}^*\cap R=\mathfrak{p}. We say that R^*/R satisfies “going up” if whenever \mathfrak{p}\subset\mathfrak{q} are prime ideals in the base ring and \mathfrak{p}^* lies over \mathfrak{p}, there is a prime ideal \mathfrak{q}^*\supset \mathfrak{p}^* which lies over \mathfrak{q}. (Remember that Spec is a contravariant functor.)
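A standard concrete example to keep in mind: \mathbb{Z}[i] is an integral extension of \mathbb{Z} (since i satisfies x^2+1=0), and

(2+i)\cap\mathbb{Z}=(5), \qquad (2-i)\cap\mathbb{Z}=(5), \qquad (1+i)\cap\mathbb{Z}=(2),

so both (2+i) and (2-i) lie over (5), while (1+i) lies over (2).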

Note that if we are lucky a whole bunch of posts of mine will finally be tied together and this was completely unplanned (spec, primality, localization, even *gasp* the Jacobson radical). First, let’s lay down a Lemma we will need:

Let R^* be an integral extension of R. Then
i) If \mathfrak{p} is a prime ideal of R and \mathfrak{p}^* lies over \mathfrak{p}, then R^*/\frak{p}^* is integral over R/\mathfrak{p}.
ii) If S\subset R is a multiplicative set, then S^{-1}R^* is integral over S^{-1}R.

Proof: By the second iso theorem, R/\frak{p}=R/(\frak{p}^*\cap R)\cong (R+\frak{p}^*)/\frak{p}^*\subset R^*/\frak{p}^*, so we can consider R/\frak{p} as a subring of R^*/\frak{p}^*. Take any element a+\frak{p}^*\in R^*/\frak{p}^*. By integrality there is an equation a^n+r_{n-1}a^{n-1}+\cdots + r_0=0 with the r_i\in R. Now just take everything \mod \frak{p}^* to get that a+\frak{p}^* is integral over R/\frak{p}. This yields part (i).
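Written out, reducing the integrality equation mod \frak{p}^* gives

(a+\frak{p}^*)^n+(r_{n-1}+\frak{p}^*)(a+\frak{p}^*)^{n-1}+\cdots+(r_0+\frak{p}^*)=0 \ in \ R^*/\frak{p}^*,

and each coefficient r_i+\frak{p}^* lies in (R+\frak{p}^*)/\frak{p}^*, which we just identified with R/\frak{p}.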

For part (ii), let a^*\in S^{-1}R^*; then a^*=a/b, where a\in R^* and b\in S (viewing S\subset R\subset R^*). By integrality again we have that a^n+r_{n-1}a^{n-1}+\cdots + r_0=0, so we multiply through by 1/b^n in the ring of quotients to get (a/b)^n+(r_{n-1}/b)(a/b)^{n-1}+\cdots +r_0/b^n=0. The coefficients lie in S^{-1}R, thus a/b is integral over S^{-1}R.

I’ll do two quick results from here that will hopefully put us in a place to tackle the two big results of Cohen and Seidenberg next time.

First: If R^*/R is an integral extension of integral domains, then R^* is a field if and only if R is a field. If you want to prove this, there are no new techniques beyond what was done above, but you won’t explicitly use the above result, so I won’t go through it.

Second: If R^*/R is an integral ring extension, \frak{p} is a prime ideal in R, and \frak{p}^* is a prime ideal lying over \frak{p}, then \frak{p} is maximal if and only if \frak{p}^* is maximal.

Proof: By part (i) of above, R^*/\frak{p}^* is integral over R/\frak{p} and so as a corollary to “First” we have one is a field if and only if the other is. This is precisely the statement that \frak{p} is maximal iff \frak{p}^* is maximal.



Wrapping up the Jacobson Radical

We now have the following equivalent definitions of the Jacobson radical. Remember right now we assume R is commutative with 1.

1) Intersection of all maximal ideals
2) Intersection of the annihilators of all simple left R-modules
3) The set of non-generators of R
4) The set of elements, x, such that 1-rx has a left inverse for all r.

I think I already pointed out that from at least two of these definitions we automatically get that J(R) is a two-sided ideal. Two basic examples: if R is any field, then J(R)=\{0\}; and if K is a field and R=K[[x_1, \ldots, x_n]], then J(R)=\{f\in R : f \ has \ 0 \ constant \ term\}. An important generalization is that in any local ring the Jacobson radical is the set of non-units.
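As a quick sanity check of the power series example against definition 4: if f has zero constant term, then for any r\in R the element 1-rf has constant term 1 and is in fact a unit, with inverse the geometric series

\displaystyle (1-rf)^{-1}=\sum_{k\ge 0}(rf)^k,

which makes sense in K[[x_1, \ldots, x_n]] because every term of (rf)^k has degree at least k, so each coefficient of the sum is a finite sum. So every f with zero constant term is in J(R).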

An important result called Nakayama’s Lemma says that if M is a finitely generated R-module and N is a submodule, then M=\Phi(M)+N implies that M=N. Special case: if M=J(R)M+N, then M=N (this follows since J(R)M\subset\Phi(M)). Corollary to that special case: if M=J(R)M, then M=\{0\} (this last form is what is sometimes called Nakayama’s Lemma).

Proof: Since M=\Phi(M)+N and M is finitely generated, we can write M=\langle x_1+n_1, x_2+n_2, \ldots, x_m+n_m\rangle, where x_j\in \Phi (M) and n_j\in N for all j. Define S=\{n_1, \ldots, n_m\}.

Then with this setup, we exploit the non-generator definition. Note that
M=\langle x_1, n_1, x_2, n_2, \ldots, x_m, n_m\rangle
= \langle S, x_1, \ldots, x_m\rangle
= \langle S, x_1, \ldots, x_{m-1}\rangle (dropping x_m, since a non-generator can always be dropped from a generating set)
… etc
= \langle S\rangle \subset N.
Since N\subset M, this gives M=N.

And we are done! It may have seemed a little roundabout to go through the “Frattini submodule” in developing the Jacobson radical, but it certainly pays off to have lots of definitions as we see here.

The last little bit I wanted to say was that we can define the Jacobson radical for a ring without identity. I don’t want to go through the details, but a standard trick is to define a new ring (with identity) S=\mathbb{Z}\times R with the standard addition and multiplication (a,b)(c,d)=(ac, ad+cb+bd). It is pretty basic to check that J(S)=\{0\}\times I where I is some ideal in R (using the fact that J(\mathbb{Z})=\{0\}). It is also just algebraic manipulation to check that I is the largest ideal in R such that for every x\in I there is a y\in I such that x+y-yx=0. This then is our definition: J(R) is the largest ideal with this property, i.e. J(R)=\sum_{I\in\mathfrak{I}} I where \mathfrak{I} is the collection of ideals satisfying that property.
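One quick check that this multiplication behaves as intended: (1,0) is the identity of S, since

(1,0)(c,d)=(1\cdot c, \ 1\cdot d+c\cdot 0+0\cdot d)=(c,d),

and R sits inside S as the ideal \{0\}\times R.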



The Jacobson Radical Part II

First recall that we showed J(R)=\Phi (R), and hence it is a submodule of R as a module over itself. Thus J(R) is a left ideal of R. Next recall that we showed J(R)=J(R)R, and hence it is a right ideal; i.e. J(R) is a two-sided ideal.

Let’s now work towards the annihilator definition. Define an equivalence relation on the set of maximal left ideals of R by I ~ J if there is a simple left R-module M with elements a,b\in M such that I=ann_R(a) and J=ann_R(b). We see that this is an equivalence relation, since I ~ J iff R/I and R/J are isomorphic as R-modules. Examine the module homomorphisms r\mapsto ra and r\mapsto rb to see that if I ~ J then R/I\cong M \cong R/J. Conversely, if R/I\cong R/J by an iso \varphi, take M=R/I: the elements 1+I and \varphi^{-1}(1+J) of M satisfy I=ann_R(1+I) and J=ann_R(\varphi^{-1}(1+J)), so I ~ J.

Now let \mathfrak{I} be an equivalence class of maximal left ideals. I claim that \cap_{I\in\mathfrak{I}} I=ann_R(M), where M is a simple left R-module isomorphic to each R/I for I\in\mathfrak{I}. By definition and the property above we get that \mathfrak{I}=\{ann_R(a): a\in M, \ a\neq 0\}. Thus if J\in\mathfrak{I}, then J=ann_R(1+J), which means that J=ann_R(a) where \varphi: R/J\to M satisfies \varphi(1+J)=a. But now this gives precisely \cap_{I\in \mathfrak{I}}I=\cap_{0\neq a\in M}ann_R(a)=ann_R(M).

Now just intersect over all the maximal left ideals. We get \displaystyle J(R)=\cap_{J \ maximal} J=\cap_{\mathfrak{I}}\cap_{I\in \mathfrak{I}} I=\cap_{\mathfrak{I}}ann_R(R/I)=\cap_{M \ simple}ann_R(M), where in the third expression I is any representative of the class \mathfrak{I}. And voila, we have it. This was a rather terse run-through and assumed a working knowledge of some facts about modules, but I find it to be a rather fascinating take on the development.
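A quick check of this chain of equalities when R=\mathbb{Z}: the simple \mathbb{Z}-modules are the \mathbb{Z}/p for p prime, and ann_{\mathbb{Z}}(\mathbb{Z}/p)=(p), so

\displaystyle \cap_{M \ simple}ann_{\mathbb{Z}}(M)=\cap_{p \ prime}(p)=\{0\}=J(\mathbb{Z}),

agreeing with the intersection-of-maximal-ideals definition.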

Next we’ll exploit some of these definitions to get some properties of the Jacobson radical, and develop it in a method that doesn’t require our ring to have 1.



A closer look at Spec

Let’s think about what is going on in a different way. Now let’s think of the elements f \in R of the ring as functions with domain Spec(R). We define the value of such a function at a point of our space, f(P), to be the residue class of f in R/P. This looks weird at first, since the target space depends on the point at which you are evaluating the function.

Before worrying about that too much, let’s see if we can get this notion to match up with what we did yesterday. We have the nice property that f(P)=0 if and only if f \in P. (Remember that even though we think of f as a function, it is really an element of the ring).
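For example, take R=\mathbb{Z} and f=12. Then

f((2))=0\in\mathbb{Z}/2, \quad f((3))=0\in\mathbb{Z}/3, \quad f((5))=2\in\mathbb{Z}/5, \quad f((0))=12\in\mathbb{Z},

so the “function” 12 vanishes exactly at the points (2) and (3), the primes containing it.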

Define, for any subset S of the ring, the zero set: Z(S)=\{P\in Spec(R): f(P)=0, \forall f \in S\}. Now from what I just noted in the previous paragraph, these are precisely the elements of Spec(R) that contain S, i.e. the closed sets of the Zariski topology. Thus we can take as a basis for the Zariski topology the collection of sets D(f)=Spec(R)\setminus Z(f).

We will also want something that serves as “an inverse” to the zero set. We want the ideal of functions that vanish on a given subset of Spec. So given Y\subset Spec(R), define I(Y)=\{f \in R : f(P)=0, \forall P\in Y\}. Now this isn’t really an inverse, but we get close in the following sense:

If J\subset R is an ideal, then \displaystyle I(Z(J))=\sqrt{J}. Taking the ideal of the zero set gives the radical of the ideal. And the radical has two equivalent descriptions: \displaystyle \sqrt{J}=\cap_{P\in Spec(R), P\supset J} P=\{a\in R : \exists n\in \mathbb{N},  a^n\in J\}.
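A small worked instance of this in \mathbb{Z}: take J=(12). The only primes containing (12) are (2) and (3), so

\displaystyle I(Z((12)))=(2)\cap(3)=(6)=\sqrt{(12)},

which matches the second description since 6^2=36\in(12).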

If we take the ideal and zero set in the other order, we get that Z(I(Y))=\overline{Y}, the closure of Y in the Zariski topology.

We can abstract one step further and put a sheaf on Spec(R), built out of the sets D(f). Note that for any f\in R we have that \{1, f, f^2, \ldots\} is a multiplicative set, so we can localize at it. Since I haven’t talked at all about sheaves, I’m not sure if I want to go any further with this, so maybe I’ll do some more examples next time and possibly start to scratch this surface.



Spec? You mean like glasses?

So I’ve built up localization, and I’ve built up the theory of prime ideals, scattered throughout previous posts. I also just assume the basics of topology in my posts, so we are in the perfect position to talk about a very fascinating construction and incredibly useful tool that combines all these things.

Warning: I have just started learning about this stuff, so it could be riddled with confusion or error. Luckily, I’m just posting the basics which some readers probably know like the back of their hand and will hopefully point out problems.

Of course what I’m referring to is Spec. As usual let’s assume that R is a commutative ring with 1 (I don’t think we need the 1). Then Spec(R)=\{P : P \  prime \ ideal \ of \ R\}. So we have the collection of all (proper) prime ideals of the ring. Other than prime ideals being my favorite type of ideal, this seems to be useless right now.

Let’s put a topology on our set now (the “points” of our space are prime ideals). Let a\subset R be any ideal. Define V(a)=\{\mathfrak{p}\in Spec(R) : a\subset \mathfrak{p}\}. Then we define the closed sets of the topology to be the family of all such sets, i.e. \{V(a) : a\subset R \ an \ ideal\} are the closed sets. This is known as the Zariski topology.

To check that these really satisfy the right axioms (I won’t go through it all), note that V(0)=Spec(R), V(R)=\emptyset, V(\sum a_i)=\cap V(a_i), and V(a \cap b)=V(a)\cup V(b). The last is probably the least trivial, but they all follow in a straightforward way from the definitions.
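The last identity is the only one that uses primality, so here is the check: the containment V(a)\cup V(b)\subset V(a\cap b) is clear, and conversely if P\supset a\cap b\supset ab but P\not\supset a and P\not\supset b, then picking x\in a\setminus P and y\in b\setminus P gives xy\in ab\subset P with x,y\notin P, contradicting primality. Hence

V(a\cap b)=V(a)\cup V(b).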

Examples:

1. If our ring is a field k, then Spec(k)=\{*\}: the spectrum is a single point, since the only prime ideal is (0).

2. Another common example would be Spec(\mathbb{Z})=\{(0), (2), (3), (5), \ldots \}. In other words, the nonzero prime ideals can just be identified with the prime numbers that generate them (and we have (0) as a special circumstance). The nonempty open sets are the complements of finite sets of nonzero primes (in particular, every nonempty open set contains (0)). So we see that the Zariski topology is not Hausdorff (and rarely is). It will, however, always be compact.

3. Possibly the most important examples are the ones dealing with polynomial rings. In the nicest case, when k is an algebraically closed field, we have that Spec(k[x])=\{*\}\cup k: the nonzero prime ideals are exactly the ones generated by linear polynomials, so we have a bijection sending any c \in k to the prime ideal (x-c) (and we still have that pesky “zero” floating around that we’ll talk about later).

Last for today is that Spec is a contravariant functor from rings to topological spaces. We’ve basically done everything we need, since we see how it takes a ring object to a Top object. Also if we have a ring hom f:R \to S, then define Spec(f)=f^* : Spec(S)\to Spec(R) in the obvious way, i.e. \mathfrak{p} \mapsto f^{-1}(\mathfrak{p}) (note that the preimage of a prime ideal under a ring hom is again prime, so this makes sense).
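To check that f^* really is continuous (so that Spec lands in Top), note that for any ideal a\subset R the preimage of a closed set is closed:

\displaystyle (f^*)^{-1}(V(a))=\{\mathfrak{p}\in Spec(S) : a\subset f^{-1}(\mathfrak{p})\}=\{\mathfrak{p}\in Spec(S) : f(a)\subset\mathfrak{p}\}=V(f(a)S),

where f(a)S denotes the ideal of S generated by f(a).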

I promised some localization and we should be able to get to that next time, but there is just so much going on here that it is nearly impossible to exhaust (well, from my perspective as a newbie to the topic).
