A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Krull Dimension

I didn’t actually want to take that long of a break before this post, but I had to do a final exam and give/grade a final, so that ate up lots of time. The next natural thing to move on to is something called Krull dimension. This is sort of annoying to define, but highly useful. I’ve also decided I’m going to stop “fraking” my \frak{p}‘s, since it is annoying to type and just use capital P’s for prime ideals.

First we need to define something I’ll call “height.” A prime chain is a strictly decreasing chain of prime ideals P_0\supsetneq P_1 \supsetneq \cdots \supsetneq P_n; the length of such a chain is n. Now we define the height of a prime ideal P, ht(P), to be the supremum of the lengths of prime chains with P=P_0.
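
A quick example of computing with this definition: in the polynomial ring k[x,y] over a field k we have the prime chain (x,y)\supsetneq (x)\supsetneq \{0\}, so ht((x,y))\geq 2 (in fact it equals 2, though that takes more work to prove).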

Some quick examples: It is easy to check that ht(P)=0 if and only if P is minimal, and hence in an integral domain ht(P)=0 if and only if P=\{0\}. Let R=k[x_1, x_2, \ldots] where k is a field. Then let P_i=(x_i, x_{i+1}, \ldots) be the prime ideal generated by those indeterminates (it is prime since R/P_i\cong k[x_1, \ldots , x_{i-1}], which is clearly an integral domain). Then for any n, we can make a chain P_1\supsetneq P_2\supsetneq \cdots \supsetneq P_{n+1} of length n. Thus ht(P_1)=\infty.

Now for the actual definition we want to work with. I’ll denote the Krull dimension simply by “dim” rather than “Krulldim”. Then we define: dim(R)=\sup\{ht(P) : P\in Spec(R)\}. So our quick example here is that for integral domains, dim(R)=0 if and only if R is a field.
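
Another example to keep in mind: dim(\mathbb{Z})=1. Every nonzero prime ideal (p) is maximal, so the longest prime chains look like (p)\supsetneq \{0\}, giving ht((p))=1. The same reasoning shows that any PID that is not a field has dimension 1.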

My goal for the day is to characterize all Noetherian rings of dim 0. The claim is that dim(R)=0 if and only if every finitely generated R-module M has a composition series. First suppose dim(R)=0. Since R is Noetherian, there are only finitely many minimal prime ideals. Since dim(R)=0, every prime ideal is minimal, and hence there are only finitely many primes altogether. Let’s call them P_1, \ldots, P_n.

Let’s look at the nilradical \sqrt{(0)}=\cap P_i. Since R is Noetherian, the nilradical is nilpotent (it is finitely generated by nilpotent elements), so there is some m such that (\sqrt{(0)})^m=\{0\}. So we define N=P_1\cdots P_n\subset P_1\cap\cdots\cap P_n=\sqrt{(0)}, so N^m=\{0\}.

Let M be a finitely generated R-module. Then we have the chain M\supset P_1 M\supset P_1P_2M\supset\cdots\supset NM. Now note that each factor \frac{P_1\cdots P_{i-1}M}{P_1\cdots P_i M} is an R/P_i-module. But P_i is maximal (every prime is, since dim(R)=0) and so R/P_i is a field, so each factor is a vector space. Since M is finitely generated, these vector spaces are finite-dimensional, thus we can refine the chain so that all factors are simple.

Now we do this same trick on each of the chains N^jM\supset P_1N^jM\supset\cdots \supset N^{j+1}M for j=1, \ldots, m-1. Since N^m=\{0\}, the last of these chains ends at \{0\}, so splicing the refined chains together gives a composition series for M.

For the converse, suppose every finitely generated R-module has a composition series. Dimension zero amounts to showing that R has no pair of prime ideals P\supsetneq Q. Suppose such a pair exists. Let’s pass to the quotient R/Q and reinterpret our hypothesis. Then R is an integral domain that has a nonzero prime ideal and, being finitely generated as a module over itself, a composition series R\supset I_1\supset \cdots \supset I_d\neq \{0\}. So I_d is a minimal nonzero ideal. Let x\in I_d be any nonzero element. Then since xI_d\subset I_d and xI_d\neq \{0\} (we’re in a domain), by minimality we have xI_d=I_d. So there is a y\in I_d such that xy=x, i.e. y=1\in I_d (cancel the nonzero x), and hence I_d=R. But I_d was a minimal nonzero ideal, so R has no nonzero proper ideals at all; that is, R is a field, which contradicts our having a nonzero prime ideal.
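
As a sanity check on the theorem, take R=k[x]/(x^n). The only prime ideal is (x), since any prime must contain x (because x^n=0), so dim(R)=0. And indeed R itself has the composition series R\supset (x)\supset (x^2)\supset\cdots\supset (x^n)=\{0\}, where each factor (x^i)/(x^{i+1}) is isomorphic to k and hence simple.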

Well, I think that is enough fun for one day. I may post again tomorrow, since my final is Wed.



Lying Over and Going Up Part II

I realized there was one more result I probably should have included last time. Oh well. Here goes:

Let R^*/R be integral, \frak{p} a prime ideal in R and \frak{p}^*, \frak{q}^* prime ideals in R^* lying over \frak{p}. If \frak{p}^*\subset \frak{q}^*, then \frak{p}^*=\frak{q}^*.

Proof: Recall from last time that S^{-1}R^* is integral over S^{-1}R for any multiplicative set S\subset R, and also that prime ideals are preserved in rings of fractions. Thus the hypotheses still hold if we localize at \frak{p}. Thus R_\frak{p}^* is integral over R_\frak{p}, and \frak{p}^*R_\frak{p}^*\subset \frak{q}^*R_\frak{p}^* are prime ideals. Thus we can WLOG replace R^* and R by their localizations, so that R is local with maximal ideal \frak{p}. Thus by last time \frak{p}^* is maximal. Since \frak{p}^*\subset \frak{q}^* and \frak{q}^* is a proper ideal, we have \frak{p}^*=\frak{q}^*.

Now we are ready for the two big theorems. Here is the “Lying Over” Theorem. Let R^*/R be an integral extension. If \frak{p} is a prime ideal in R, then there is a prime ideal \frak{p}^* in R^* lying over \frak{p}, i.e. \frak{p}^*\cap R=\frak{p}.

Proof: Let S=R\setminus\frak{p}. First note that R \stackrel{i}{\longrightarrow} R^* \stackrel{h^*}{\longrightarrow} S^{-1}R^* and R \stackrel{h}{\longrightarrow} R_\frak{p} \stackrel{j}{\longrightarrow} S^{-1}R^* form the two sides of a commutative diagram. By last time S^{-1}R^* is integral over R_\frak{p}. Choose a maximal ideal \frak{m}^* in S^{-1}R^*. Then \frak{m}^*\cap R_\frak{p} is maximal in R_\frak{p} (again by last time). But R_{\frak{p}} is local with unique max ideal \frak{p}R_\frak{p}, so \frak{m}^*\cap R_\frak{p}=\frak{p} R_\frak{p}. But the preimage of a prime ideal is prime, so \frak{p}^*=(h^*)^{-1}(\frak{m}^*) is a prime ideal in R^*.

Now we just diagram chase: (h^*i)^{-1}(\frak{m}^*)=i^{-1}(h^*)^{-1}(\frak{m}^*)=i^{-1}(\frak{p}^*)=\frak{p}^*\cap R. And also: (jh)^{-1}(\frak{m}^*)=h^{-1}j^{-1}(\frak{m}^*)=h^{-1}(\frak{m}^*\cap R_\frak{p})=h^{-1}(\frak{p}R_\frak{p})=\frak{p}.

Since the diagram commutes, h^*i=jh, so these two computations agree and \frak{p}^* lies over \frak{p}.
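
A concrete example to see Lying Over in action: \mathbb{Z}[i] is integral over \mathbb{Z} (i satisfies x^2+1=0). Over the prime (2) lies the prime (1+i), since (1+i)\cap\mathbb{Z}=(2) (note (1+i)(1-i)=2), and over (5) lie the two primes (2+i) and (2-i).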

Our other big theorem is the one about “Going Up”: If R^*/R is an integral extension and \frak{p}\subset \frak{q} are prime in R, and \frak{p}^* lies over \frak{p}, then there is a prime ideal \frak{q}^* lying over \frak{q} with \frak{p}^*\subset \frak{q}^*.

Proof: By last time (R^*/\frak{p}^*)/(R/\frak{p}) is an integral extension, where R/\frak{p} is embedded in R^*/\frak{p}^* as (R+\frak{p}^*)/\frak{p}^*. Now apply the Lying Over Theorem to this extension and the prime ideal \frak{q}/\frak{p} of R/\frak{p}: we get a prime ideal of R^*/\frak{p}^* lying over it, and its preimage under R^*\to R^*/\frak{p}^* is a prime ideal \frak{q}^*\supset\frak{p}^* lying over \frak{q}.

So as we see here integral extensions behave extremely nicely. These theorems guarantee that we always have prime ideals lying over ones in the base ring. This has some important applications to the Krull dimension that we’ll start looking at next time.



Lying Over and Going Up

If you haven’t heard the terms in the title of this post, then you are probably bracing yourself for this to be some weird post on innuendos or something. Let’s first do some motivation (something I’m not often good at…remember that Jacobson radical series of posts? What is that even used for? Maybe at a later date we’ll return to such questions). We can do ring extensions just as we do field extensions, but they tend to be messier for obvious reasons. So we want some properties that force the prime ideals of an extension to line up nicely with the prime ideals of the base ring. Two such properties are “lying over” and “going up.”

Let R^*/R be a ring extension. Then we say it satisfies “lying over” if for every prime ideal \mathfrak{p}\subset R in the base, there is a prime ideal \mathfrak{p}^*\subset R^* in the extension such that \mathfrak{p}^*\cap R=\mathfrak{p}. We say that R^*/R satisfies “going up” if whenever \mathfrak{p}\subset\mathfrak{q} are prime ideals in the base ring and \mathfrak{p}^* lies over \mathfrak{p}, there is a prime ideal \mathfrak{q}^*\supset \mathfrak{p}^* which lies over \mathfrak{q}. (Remember that Spec is a contravariant functor.)
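
A quick non-example to see that these properties really can fail: the extension \mathbb{Q}/\mathbb{Z} (which is certainly not integral) does not satisfy lying over, since the only prime ideal of \mathbb{Q} is \{0\}, and \{0\}\cap\mathbb{Z}=\{0\}, so nothing lies over (p) for any prime p.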

Note that if we are lucky a whole bunch of posts of mine will finally be tied together and this was completely unplanned (spec, primality, localization, even *gasp* the Jacobson radical). First, let’s lay down a Lemma we will need:

Let R^* be an integral extension of R. Then
i) If \mathfrak{p} is a prime ideal of R and \mathfrak{p}^* lies over \mathfrak{p}, then R^*/\frak{p}^* is integral over R/\mathfrak{p}.
ii) If S\subset R is a multiplicative set, then S^{-1}R^* is integral over S^{-1}R.

Proof: By the second iso theorem R/\frak{p}=R/(\frak{p}^*\cap R)\cong (R+\frak{p}^*)/\frak{p}^*\subset R^*/\frak{p}^*, so we can consider R/\frak{p} as a subring of R^*/\frak{p}^*. Take any element a+\frak{p}^*\in R^*/\frak{p}^*. By integrality there is an equation a^n+r_{n-1}a^{n-1}+\cdots + r_0=0 with the r_i\in R. Now just take everything \mod \frak{p}^* to get that a+\frak{p}^* is integral over R/\frak{p}. This yields part (i).

For part (ii), let a^*\in S^{-1}R^*; then a^*=a/b, where a\in R^* and b\in S. By integrality again we have that a^n+r_{n-1}a^{n-1}+\cdots + r_0=0, so we multiply through by 1/b^n in the ring of quotients to get (a/b)^n+(r_{n-1}/b)(a/b)^{n-1}+\cdots +r_0/b^n=0, an equation with coefficients in S^{-1}R. Thus a/b is integral over S^{-1}R.

I’ll do two quick results from here that will hopefully put us in a place to tackle the two big results of Cohen and Seidenberg next time.

First: If R^*/R is an integral extension of integral domains, then R^* is a field if and only if R is a field. If you want to prove this, there are no new techniques beyond what was done above, but you won’t explicitly use the above result, so I won’t go through it.

Second: If R^*/R is an integral ring extension, \frak{p} is a prime ideal in R, and \frak{p}^* is a prime ideal lying over \frak{p}, then \frak{p} is maximal if and only if \frak{p}^* is maximal.

Proof: By part (i) above, R^*/\frak{p}^* is integral over R/\frak{p}, and both are integral domains since \frak{p}^* and \frak{p} are prime. So as a corollary to “First” we have one is a field if and only if the other is. This is precisely the statement that \frak{p} is maximal iff \frak{p}^* is maximal.



Wrapping up the Jacobson Radical

We now have the following equivalent definitions of the Jacobson radical. Remember right now we assume R is commutative with 1.

1) Intersection of all maximal ideals
2) Intersection of the annihilators of all simple left R-modules
3) The set of non-generators of R
4) The set of elements, x, such that 1-rx has a left inverse for all r.

I think I already pointed out that from at least two of these definitions we automatically get that J(R) is a two-sided ideal. Two basic examples: if R is any field, then J(R)=\{0\}; and if K is a field and R=K[[x_1, \ldots, x_n]], then J(R)=\{f\in R : f \ has \ 0 \ constant \ term\}. An important generalization is that in any local ring the Jacobson radical is the set of non-units.
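
A couple of concrete computations with definition (1): J(\mathbb{Z})=\cap_p (p)=\{0\}, since no nonzero integer is divisible by every prime. On the other hand, for the local ring \mathbb{Z}_{(p)} (the localization of \mathbb{Z} at a prime p), the unique maximal ideal is p\mathbb{Z}_{(p)}, so J(\mathbb{Z}_{(p)})=p\mathbb{Z}_{(p)}, matching the claim about local rings.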

An important result called Nakayama’s Lemma is that if M is finitely generated and N\subset M is a submodule, then M=\Phi(M)+N implies that M=N. Special case: If M= J(R)M+N, then M=N. Corollary to that special case: If M=J(R)M, then M=\{0\} (this last form is what is sometimes called Nakayama’s Lemma).

Proof: Since M=\Phi(M)+N and M is finitely generated, we can write M=\langle x_1+n_1, x_2+n_2, \ldots, x_m+n_m\rangle, where x_j\in \Phi (M) and n_j\in N for all j. Define S=\{n_1, \ldots, n_m\}.

Then with this setup, we exploit the non-generator definition. Note that
M=\langle x_1, n_1, x_2, n_2, \ldots, x_m, n_m\rangle
= \langle S, x_1, \ldots, x_m\rangle
= \langle S, x_1, \ldots, x_{m-1}\rangle
… etc
= \langle S\rangle \subset N, and hence M=N.

And we are done! It may have seemed a little roundabout to go through the “Frattini submodule” in developing the Jacobson radical, but it certainly pays off to have lots of definitions as we see here.
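
One standard way this gets used, phrased in terms of the special case above: if M is finitely generated and x_1,\ldots,x_n\in M have images that generate M/J(R)M, then taking N=\langle x_1,\ldots,x_n\rangle we get M=J(R)M+N, so M=N. In other words, generators of M can be detected modulo J(R)M.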

The last little bit I wanted to say was that we can define the Jacobson radical for a ring without identity. I don’t want to go through the details, but a standard trick is to define a new ring (with identity) S=\mathbb{Z}\times R with the standard addition, and then (a,b)(c,d)=(ac, ad+cb+bd). It is pretty basic to check that J(S)=\{0\}\times I where I is some ideal in R (by the fact that J(\mathbb{Z})=\{0\}). It is also just algebraic manipulation to check that I is the largest ideal in R such that for every x\in I there is a y\in I such that x+y-yx=0. This then is our definition: J(R) is the largest ideal I\subset R satisfying that property.
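
To see why this funny multiplication is the right one, note that (1,0) really is an identity: (1,0)(c,d)=(1\cdot c, 1\cdot d+c\cdot 0+0\cdot d)=(c,d). Moreover R sits inside S as \{0\}\times R, which is an ideal of S, and the multiplication restricts to the original one on R: (0,b)(0,d)=(0, bd).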



The Jacobson Radical Part II

First recall that we showed J(R)=\Phi (R), and hence is a submodule of R as a module over itself. Thus J(R) is a left ideal of R. Next recall that we showed J(R)=J(R)R, and hence is a right ideal. i.e. J(R) is a two-sided ideal.

Let’s now work towards the annihilator definition. Define an equivalence relation on the set of maximal left ideals of R by I\sim J if there is a simple left R-module M with nonzero elements a,b\in M such that I=ann_R(a) and J=ann_R(b). We see that this is an equivalence relation, since I\sim J iff R/I and R/J are isomorphic as R-modules. Examine the module homomorphisms r\mapsto ra and r\mapsto rb to see that if I\sim J then R/I\cong M \cong R/J. Also, if R/I\cong R/J by an iso \varphi: R/I\to R/J, then taking M=R/I we have 1+I, \varphi^{-1}(1+J)\in M with I=ann_R(1+I) and J=ann_R(\varphi^{-1}(1+J)), so I\sim J.

Now let \mathfrak{I} be an equivalence class of maximal left ideals. I claim that \cap_{I\in\mathfrak{I}} I=ann_R(M), where M is a simple left R-module isomorphic to each R/I, for I\in\mathfrak{I}. By definition and the property above we get that \mathfrak{I}=\{ann_R(a): a\in M, \ a\neq 0\}: if J\in\mathfrak{I}, then J=ann_R(1+J)=ann_R(a), where \varphi: R/J\to M is an isomorphism and a=\varphi(1+J); conversely each ann_R(a) for nonzero a\in M lies in \mathfrak{I}, since R/ann_R(a)\cong Ra=M. Since the annihilator of M is the intersection of the annihilators of its nonzero elements, this gives precisely \cap_{I\in \mathfrak{I}}I=ann_R(M).

Now just intersect over all the maximal left ideals. We get \displaystyle J(R)=\cap_{J \ maximal} J=\cap_{\mathfrak{I}}\cap_{I\in \mathfrak{I}} I=\cap_{\mathfrak{I}}ann_R(R/I)=\cap_{M \ simple}ann_R(M). And voila, we have it. This was a rather terse run-through and assumed a working knowledge of some facts about modules, but I find it to be a rather fascinating take on the development.

Next we’ll exploit some of these definitions to get some properties of the Jacobson radical, and develop it in a method that doesn’t require our ring to have 1.



A closer look at Spec

Let’s think about what is going on in a different way. So now let’s think of elements f \in R of the ring as functions with domain Spec(R). We define the value of the function at a point of our space, f(P), to be the residue class of f in R/P. This looks weird at first, since the target depends on the point at which you are evaluating the function.

Before worrying about that too much, let’s see if we can get this notion to match up with what we did yesterday. We have the nice property that f(P)=0 if and only if f \in P. (Remember that even though we think of f as a function, it is really an element of the ring).
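
A concrete instance: take R=\mathbb{Z} and f=12. Then f((5))=12 \mod 5=2 in \mathbb{Z}/5, while f((2))=0 in \mathbb{Z}/2 and f((3))=0 in \mathbb{Z}/3. So as a “function” on Spec(\mathbb{Z}), 12 vanishes exactly at the points (2) and (3), which are precisely the primes containing it.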

Define for any subset S of the ring the zero set: Z(S)=\{P\in Spec(R): f(P)=0, \forall f \in S\}. Now from what I just noted in the previous paragraph, Z(S) consists precisely of the primes of Spec(R) that contain S, so these zero sets are exactly the closed sets of the Zariski topology. Thus we can define our basis for the Zariski topology to be the collection of D(f)=Spec(R)\setminus Z(f).

We also will want what is “an inverse” to the zero set. We want the ideal that vanishes on a subset of Spec. So given Y\subset Spec(R), define I(Y)=\{f \in R : f(P)=0, \forall P\in Y\}. Now this isn’t really an inverse, but we get close in the following sense:

If J\subset R is an ideal, then \displaystyle I(Z(J))=\sqrt{J}. Taking the ideal of the zero set is the radical of the ideal. And the radical has two equivalent definitions: \displaystyle \sqrt{J}=\cap_{P\in Spec(R), P\supset J} P=\{a\in R : \exists n\in \mathbb{N},  a^n\in J\}.
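
For example, in k[x] take J=(x^2). Then Z(J)=\{(x)\}, since a prime containing x^2 must contain x, and so I(Z(J))=(x)=\sqrt{(x^2)}: taking the zero set forgets the multiplicity, and the radical records exactly that loss.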

If we take the ideal and zero set in the other order we get that Z(I(Y))=\overline{Y} : the closure in the Zariski topology.
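
For instance, in Spec(\mathbb{Z}) take Y=\{(0)\}. Then I(Y)=(0) and Z(I(Y))=Z((0))=Spec(\mathbb{Z}), so the closure of the single point (0) is the whole space; this is the sense in which (0) is a “generic point.” By contrast, a point like (5) is already closed: Z(I(\{(5)\}))=Z((5))=\{(5)\}.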

We can abstract one step further and put a sheaf on D(f). Note that for any f\in R we have that \{1, f, f^2, \ldots\} is a multiplicative set, so we can localize at it. Since I haven’t talked at all about sheaves, I’m not sure if I want to go any further with this, so maybe I’ll do some more examples next time and possibly start to scratch this surface.



Spec? You mean like glasses?

So I’ve built up localization starting there, and I’ve built up the theory of prime ideals scattered throughout, but ending here. I also just assume the basics of topology in my posts, so we are in the perfect position to talk about a very fascinating construction and incredibly useful tool that combines all these things.

Warning: I have just started learning about this stuff, so it could be riddled with confusion or error. Luckily, I’m just posting the basics which some readers probably know like the back of their hand and will hopefully point out problems.

Of course what I’m referring to is Spec. As usual let’s assume that R is a commutative ring with 1 (I don’t think we need the 1). Then Spec(R)=\{P : P \  prime \ ideal \ of \ R\}. So we have the collection of all (proper) prime ideals of the ring. Other than prime ideals being my favorite type of ideal, this seems to be useless right now.

Let’s put a topology on our set now (the “points” of our space are prime ideals). Let a\subset R be any ideal. Define V(a)=\{\mathfrak{p}\in Spec(R) : a \subset \mathfrak{p}\}. Then we define the closed sets of the topology to be the family of all such sets, i.e. \{V(a) : a\subset R \ an \ ideal\} are the closed sets. This is known as the Zariski topology.

To check that these really satisfy the right axioms, (I won’t go through it, but) note that V(0)=Spec(R), V(R)=\emptyset, V(\sum a_i)=\cap V(a_i) and V(a \cap b)=V(a)\cup V(b) (the last is probably the least trivial, but they all follow straightforwardly from the definitions).
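
To give the flavor of these checks, here is the least trivial one, V(a\cap b)=V(a)\cup V(b): if P\supset a or P\supset b then certainly P\supset a\cap b. Conversely, if P\supset a\cap b but P\not\supset a, pick x\in a\setminus P; then for any y\in b we have xy\in a\cap b\subset P, so y\in P by primality, giving P\supset b.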

Examples:

1. If our ring is a field k, then Spec(k)=\{*\}: the spectrum is a single point, since the only prime ideal is \{0\}.

2. Another common example would be Spec(\mathbb{Z})=\{(0), (2), (3), (5), \ldots \}. In other words, the prime ideals can just be identified with the prime numbers that generate them (and we have (0) as a special circumstance). The nonempty open sets are the complements of finite sets of nonzero primes. So we see that the Zariski topology is not Hausdorff (and rarely is). It will, however, always be compact.

3. Possibly the most important examples are the ones dealing with polynomial rings. In the nicest case, when k is an algebraically closed field, we have that Spec(k[x])=\{*\}\cup k: since the nonzero prime ideals are generated by linear polynomials, we have the bijection sending any c \in k to the prime ideal (x-c) (and we still have that pesky “zero” ideal floating around that we’ll talk about later).

Last for today is that Spec is a contravariant functor from rings to topological spaces. We’ve basically done everything we need, since we see how it takes a ring object to a Top object. Also if we have a ring hom f:R \to S, then define Spec(f)=f^* : Spec(S)\to Spec(R) in the obvious way, i.e. \mathfrak{p} \mapsto f^{-1}(\mathfrak{p}).
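
It’s worth checking that f^* is actually continuous, which is what makes this land in Top: for an ideal a\subset R, a prime \mathfrak{p}\in Spec(S) satisfies f^*(\mathfrak{p})=f^{-1}(\mathfrak{p})\supset a exactly when \mathfrak{p}\supset f(a), i.e. when \mathfrak{p} contains the ideal of S generated by f(a). So (f^*)^{-1}(V(a))=V(f(a)S), which is closed.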

I promised some localization and we should be able to get to that next time, but there is just so much going on here that it is nearly impossible to exhaust (well, from my perspective as a newbie to the topic).



Noetherian Rings

I promised this awhile back. It seems as if the Noetherian condition is really the last major thing I need before being able to move on.

A ring is Noetherian if every ascending chain of ideals stabilizes (or “terminates”). So this means that given any chain of ideals I_1\subset I_2\subset I_3 \subset \cdots in R, there exists some N so that I_n=I_N for all n\geq N. This condition seems very strange at first. It is known as the Ascending Chain Condition, or ACC for short, but it turns out that it is equivalent to some more concrete conditions and makes sure our rings are somewhat well-behaved.

Since for the purpose of this collection of posts we only care about commutative rings, the ACC is equivalent to the condition that every ideal is finitely generated.

Proof) Suppose every ideal is finitely generated. Then let I_1\subset I_2\subset\cdots be an ascending chain of ideals. Since the chain is ascending, I=\cup I_n is an ideal, so it is generated by, say, m elements: I=<a_1, \ldots, a_m>. But each one of these elements comes from some specific ideal, so suppose a_1\in I_{n_1}, \ldots, a_m\in I_{n_m}. Then just take N=\max(n_1, \ldots, n_m) and we have that the chain stabilizes after that.

For the reverse we go by contrapositive. Let I\subset R be some ideal that is not finitely generated. Then we can find a_1\in I such that <a_1>\neq I. We can also find a_2\in I\setminus <a_1> such that <a_1, a_2>\neq I. We can continue this process without termination (if it terminated at some step, the ideal would be finitely generated). Thus we have an ascending chain <a_1>\subsetneq <a_1, a_2>\subsetneq \cdots that never stabilizes.

It is easily seen that every PID is Noetherian. Rings tend to stay Noetherian under new constructions. The ring of polynomials (in finitely many indeterminates) and the ring of power series with coefficients in a Noetherian ring are both Noetherian; the former statement is known as the Hilbert Basis Theorem. Both the quotient R/I and the ring of fractions S^{-1}R are Noetherian if R is Noetherian.
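
So, for example, \mathbb{Z} is a PID and hence Noetherian, and then the Hilbert Basis Theorem (applied repeatedly) tells us that \mathbb{Z}[x_1,\ldots,x_n] is Noetherian, as is k[x_1,\ldots,x_n] for any field k. The ring k[x_1,x_2,\ldots] in infinitely many indeterminates, on the other hand, is not Noetherian: the chain (x_1)\subsetneq(x_1,x_2)\subsetneq\cdots never stabilizes.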

But remember we want to figure out how this works with prime ideals. It turns out that prime isn’t quite what we want to get the best results, but in order to not introduce yet another type of ideal, I’ll leave this out since it won’t appear in anything I do later. So it turns out that if I\subset R is an ideal and R is Noetherian, then every prime ideal P\supset I contains a prime ideal P_0\supset I that is minimal over I. This is just a standard one-step application of Zorn’s Lemma.

So I think I’ve beat primality to death. Next time I’ll do a sort of “history of math” type post on Hilbert’s Zahlbericht to put into the blog carnival. This will give me some time to think of where to go from here. I’m thinking the algebraic number theory side…I just don’t want to have to build Galois theory before I do it.



More on Primality

I want to wrap up some loose ends on the greatness of prime ideals before moving on in the localization theme. So. Recall that we formed the ring of quotients just like you would form the field of quotients. Only this time your “denominators” can come from an arbitrary multiplicative set, and this construction only gets us a ring. Moreover, this ring is not necessarily local. If we do the construction on a ring R with the multiplicative set R\setminus P, where P is a prime ideal, then we do get a local ring, and we call this the localization.

Definition. Unique Factorization Domain (UFD): An integral domain in which every non-zero non-unit element can be written as a product of primes. (Note that there are equivalent definitions other than this one).

Quick property: In a UFD, every irreducible element is prime.

Thus, it is instructive to look at some properties of prime ideals. First off, let’s look at the special case of UFD’s. It turns out that if R is a UFD, then for a multiplicative set S not containing 0, S^{-1}R is also a UFD. This mostly has to do with the fact that R\hookrightarrow S^{-1}R is an embedding and anything in S^{-1}R is associate to something in R. This makes a nice little exercise for the reader.

So what’s so special about prime ideals in UFD’s? Well every nonzero prime ideal contains a prime element.

Proof: Suppose P\neq 0 and P is prime. Then there exists a\in P, a\neq 0, such that a=up_1\cdots p_n where u is a unit and the p_i are prime. Since u is a unit, p_1\cdots p_n=u^{-1}a\in P, and since P is prime we have some p_j\in P.

Theorem: If R is not a PID, and P is an ideal which is maximal with respect to the property of not being principal, then P is prime (and such a P will always exist).

Sketch of existence: Zorn’s Lemma. The proof of this contains lots of nitty gritty element-wise computation and a weird trick, so I don’t see it as beneficial. What is beneficial is that we get this great corollary: A UFD is a PID if and only if every nonzero prime ideal is maximal.
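
The standard example to keep in mind for that corollary: k[x,y] is a UFD but not a PID, and sure enough (x) is a nonzero prime ideal that is not maximal, since (x)\subsetneq(x,y).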

I’ve been kind of stingy on the examples, so I’ll leave you with a pretty common example of a ring of fractions. These are usually called dyadic rational numbers. Take your ring to be \mathbb{Z}. Then take your multiplicative set to be S=\{1, 2, 2^2, 2^3, \ldots\}. Now S^{-1}\mathbb{Z} are just the rational numbers with denominator a power of 2.

More generally we can form the p-adic integers (although that term is laden with many meanings, so I hesitate to actually use it). Let R\subset \prod_{i=1}^\infty \mathbb{Z}/p^i be the subring of sequences a=(a_1, a_2, \ldots ) satisfying a_i\equiv a_{i+1} \mod p^i. So elements of the ring are compatible sequences, and this R is what is usually written \mathbb{Z}_p. (Note \mathbb{Z} embeds naturally since n\mapsto (n, n, n, \ldots) satisfies that relation.) This is a ring with no zero divisors, so we can take the set of nonzero elements as our multiplicative set and we get the field of fractions \mathbb{Q}_p. The multiplicative group has a nice breakdown as \mathbb{Q}_p^{\times}\cong p^{\mathbb{Z}}\times \mathbb{Z}_p^{\times}.
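
To see that last decomposition in action in \mathbb{Q}_5: the element 50 factors as 5^2\cdot 2, and 2 is a unit in \mathbb{Z}_5 (its inverse is the compatible sequence (3, 13, 63, \ldots) of inverses of 2 mod 5^i), so 50 corresponds to the pair (5^2, 2)\in 5^{\mathbb{Z}}\times\mathbb{Z}_5^{\times}.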

Next time: Why Noetherian is important. How primality relates to it. And possibly another example.



Localization 2

Let’s figure out what “local” means and see if our construction somehow makes a local ring, i.e. is a “localization.”

Local: A ring is called local if there is a unique maximal ideal. This seems like a rather silly term, but it actually makes sense when you look at how rings arise in algebraic geometry or manifold theory. We won’t go there, though.

Sadly, it turns out that S^{-1}R is not always a local ring. But this is where primality comes into play. If P\subset R is a prime ideal then S=R\setminus P is a multiplicative set. Suppose it weren’t; then there would be two elements x,y\in S such that xy\notin S, i.e. xy\in P. But then primality forces x\in P or y\in P, contradicting x,y\in S. We now define the localization of R at P to be S^{-1}R=(R\setminus P)^{-1}R, which we denote with the shorthand R_P. This does turn out to be local: S^{-1}P=\{r/s : r\in P, \ s\notin P\} is the unique maximal ideal in R_P (it is a proper ideal since, by the property of the embedding listed last time, \phi^{-1}(S^{-1}P)=P).

Proof: Suppose x\in R_P, then x=r/s with r\in R and s\notin P. If r\notin P, then r/s is a unit in R_P. So all nonunits are in S^{-1}P. Now if I is any ideal in R_P that contains an element r/s with r\notin P, then I=R_P. Thus every proper ideal in R_P is contained in S^{-1}P. So R_P is local with unique max ideal S^{-1}P. For notational purposes outside of this blog, people usually write the prime ideal as \mathfrak{p} and the unique maximal ideal of R_\mathfrak{p} as \mathfrak{p}R_{\mathfrak{p}}.

I guess I’ve been rather sparse on the examples. The first one that comes to mind is surely to take R=\mathbb{Z}. Then our nonzero prime ideals are just the principal ideals generated by the prime numbers, so take P=p\mathbb{Z} for some prime p. Then \mathbb{Z}_P=\mathbb{Z}_{(p)}=\{a/b\in\mathbb{Q} : p\nmid b\}, which is local with unique maximal ideal p\mathbb{Z}_{(p)}.

I guess the importance of prime ideals leads us to explore some properties of prime ideals that could be useful.

Property 1: If S\subset R is any multiplicative set (not containing 0) and P is an ideal which is maximal among the ideals contained in R\setminus S, then P is prime. Also, any ideal I\subset R\setminus S is contained in such a P. I’ll omit proving this. The first part is fiddling with things until it works and the second statement uses Zorn’s Lemma.
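
One place this property gets used: if f\in R is not nilpotent, then S=\{1, f, f^2, \ldots\} is a multiplicative set not containing 0, so by Property 1 the ideal \{0\}\subset R\setminus S is contained in a prime P with f\notin P. This is exactly the argument that shows the nilradical equals the intersection of all prime ideals, the description of \sqrt{J} that shows up when studying Spec.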

OK, well I thought I had some other properties, but I can’t seem to find them/think of them now. I’m not sure where I’m going next. I’ll either move on to some related things to get at this better like the nilradical, or I’ll generalize this one more time to modules and do it using the categorical construction. If anyone has suggestions on which of these paths to take, just post. You probably have a few days as I’ll get busy again.
