A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Divided Power Structures 2

Today we’ll do a short post on some P.D. algebra properties and constructions. Let’s start with properties of P.D. ideals. Our first proposition: if {(I, \gamma)} and {(J, \delta)} are two P.D. ideals in {A}, then {IJ} is a sub P.D. ideal of both {I} and {J}. This is very straightforward to check using the criterion from last time, since {IJ} is generated by the set of products {xy} where {x\in I} and {y\in J}. This proposition immediately tells us that powers of a P.D. ideal are sub P.D. ideals, and that there is a natural choice of P.D. structure on them.
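
Here is a quick sketch of that check. A generator of {IJ} has the form {xy} with {x\in I} and {y\in J}, and viewing {y} as a ring element we can use {\gamma_n(\lambda x)=\lambda^n\gamma_n(x)} to get {\gamma_n(xy)=y^n\gamma_n(x)}. For {n\geq 1} we have {y^n\in J} and {\gamma_n(x)\in I}, so {\gamma_n(xy)\in IJ}. The same computation with the roles reversed gives {\delta_n(xy)=x^n\delta_n(y)\in IJ}, so {IJ} is a sub P.D. ideal with respect to both structures.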

Another proposition: given two P.D. ideals as above with the additional property that {I\cap J} is a sub P.D. ideal of both {I} and {J}, and that {\gamma} and {\delta} restrict to the same thing on the intersection, there is a unique P.D. structure on {I+J} such that {I} and {J} are sub P.D. ideals. Proving this would require developing some techniques that would lead us too far astray. We probably won’t use this one anyway; it just gives a sense of the types of constructions that are compatible with P.D. structures.

Another construction that requires no extra effort is the direct limit. If {\{(A_i, I_i, \gamma_i)\}} is a directed system of P.D. algebras, then {(A, I)=\displaystyle \left(\lim_{\rightarrow} A_i, \lim_{\rightarrow} I_i\right)} has a unique P.D. structure {\gamma} such that each natural map {(A_i, I_i, \gamma_i)\rightarrow (A, I, \gamma)} is a P.D. morphism.
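
Roughly, here is what {\gamma} has to be (just a sketch, taking for granted that the transition maps of the system are P.D. morphisms, which is implicit in calling it a directed system of P.D. algebras): any {x\in I} is the image of some {x_i\in I_i}, and we are forced to define {\gamma_n(x)} to be the image of {\gamma_{i,n}(x_i)} in {A}. Compatibility of the transition maps with the divided powers is exactly what makes this independent of the choice of representative, and the axioms can be checked on representatives as well.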

Unfortunately, one common construction that doesn’t work automatically is the tensor product. It does work in the following specific case. If {B} and {C} are {A}-algebras, and {I\subset B} and {J\subset C} are augmentation ideals with P.D. structures {\gamma} and {\delta} respectively, then form the ideal {K=\mathrm{ker}(B\otimes_A C\rightarrow B/I \otimes_A C/J)}. We then get that {K} has a P.D. structure {\epsilon} such that {(B, I, \gamma)\rightarrow (B\otimes_A C, K, \epsilon)} and {(C, J, \delta)\rightarrow (B\otimes_A C, K, \epsilon)} are P.D. morphisms.
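
As a side remark (just unpacking the definition of {K}): since {B/I\otimes_A C/J\simeq (B\otimes_A C)/\left(I\cdot(B\otimes_A C)+J\cdot(B\otimes_A C)\right)}, the ideal {K} is simply the ideal of {B\otimes_A C} generated by the images of {I} and {J}. The subtle part of the statement is producing the divided powers {\epsilon} on it, not describing the ideal itself.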

Next time we’ll start thinking about how to construct compatible P.D. structures over thickenings. Since we’ll be thinking a lot about {W_m(k)}, I’ll just end this post by pointing out that {(p)\subset W_m(k)} actually has many choices of P.D. structure. But last time we said that {(p)\subset W(k)} has a unique one, so our convention is going to be to choose the “canonical” P.D. structure on {(p)\subset W_m(k)}, namely the one induced from the unique P.D. structure on {(p)\subset W(k)}.



Divided Power Structures 1

At some point in the distant future we may want to work with Divided Power structures if I ever get around to crystalline cohomology, so why not start writing about it now? Basically this is how we are going to be able to talk about things that require division when working in positive characteristic. Today we’ll just quickly give the definition and then a bunch of easy examples.

Suppose {A} is a commutative ring and {I} an ideal. A divided power structure on {I} is a collection of maps {\gamma_i: I\rightarrow A}, one for each integer {i\geq 0}, satisfying the following five properties:

1. For all {x\in I}, {\gamma_0(x)=1}, {\gamma_1(x)=x} and {\gamma_i(x)\in I} for {i\geq 1}
2. For {x, y\in I} we have {\gamma_k(x+y)=\sum_{i+j=k}\gamma_i(x)\gamma_j(y)}
3. For {\lambda\in A}, {x\in I}, we have {\gamma_k(\lambda x)=\lambda^k\gamma_k(x)}
4. For {x\in I} we have {\gamma_i(x)\gamma_j(x)=((i,j))\gamma_{i+j}(x)}, where {((i,j))=\frac{(i+j)!}{(i!)(j!)}}
5. {\gamma_p(\gamma_q(x))=C_{p,q}\gamma_{pq}(x)} where {C_{p,q}=\frac{(pq)!}{p!(q!)^p}}, the number of partitions of a set with {pq} elements into {p} subsets of {q} elements each.

You may have noticed that these conditions seem to just be a formal encoding of the power map {\gamma_n(x)=x^n} from characteristic {0}. You’d be wrong, but basically correct. A close examination of the fourth condition actually gives that the map must be {\gamma_n(x)=\frac{x^n}{n!}} when dividing makes sense (i.e. {A} is a {\mathbb{Q}}-algebra).
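
Here is a quick sketch of that examination. The first condition gives {\gamma_1(x)=x}, and taking {i=1}, {j=n-1} in the fourth condition gives {x\cdot\gamma_{n-1}(x)=\gamma_1(x)\gamma_{n-1}(x)=n\gamma_n(x)}. Inducting on {n}, if {x^{n-1}=(n-1)!\,\gamma_{n-1}(x)}, then {x^n=(n-1)!\,x\gamma_{n-1}(x)=n!\,\gamma_n(x)}. So in any P.D. ring {n!\,\gamma_n(x)=x^n}, and when {A} is a {\mathbb{Q}}-algebra this forces {\gamma_n(x)=\frac{x^n}{n!}}.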

We say {(I, \gamma)} is a P.D. ideal, {(A, I, \gamma)} is a P.D. ring and {\gamma} is a P.D. structure on {I}. Of course, P.D. stands for “divided power”. OK, not really; it stands for “puissances divisées”, which is just French for divided powers. We may want to form these into a category, so we’ll say a P.D. morphism {f:(A, I, \gamma)\rightarrow (B, J, \delta)} is a ring map {f:A\rightarrow B} with the property that {f(I)\subset J} and that it commutes with the divided powers: {f(\gamma_n(x))=\delta_n(f(x))} for all {x\in I} and all {n\geq 0}.

We already gave the {\mathbb{Q}}-algebra example. In fact, in that case every ideal has that as its unique P.D. structure. For any ring, {\{0\}} together with {\gamma_0(0)=1} and {\gamma_i(0)=0} for {i\geq 1} is a P.D. structure. The first interesting example is to let {V} be a DVR of mixed characteristic {(0,p)} with uniformizing parameter {\pi}. Recall that if {p=u\pi^e} with {u} a unit, then {e} is the absolute ramification index of {V}. We get that {\frak{m}} has a P.D. structure if and only if {e\leq p-1}. In particular, if {k} is perfect, then since {W(k)} is absolutely unramified (i.e. {e=1}), the ideal {pW(k)} has a unique P.D. structure on it.
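
Here is a rough idea of where the bound {e\leq p-1} comes from (just the easy direction, sketched). Since {V} is torsion-free, any P.D. structure on {\frak{m}} must be {\gamma_n(x)=\frac{x^n}{n!}}, so the only question is whether these elements land back in {\frak{m}}. Using the formula {v_p(n!)=\frac{n-s_p(n)}{p-1}\leq\frac{n-1}{p-1}}, where {s_p(n)} is the sum of the base-{p} digits of {n}, we get for {x\in\frak{m}} that {v_\pi\left(\frac{x^n}{n!}\right)\geq n-e\cdot\frac{n-1}{p-1}\geq 1} whenever {e\leq p-1}. For {W(k)} this just says {\frac{p^n}{n!}\in pW(k)} for all {n\geq 1}.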

We can also define subobjects in the obvious way. If {(I, \gamma)} is a P.D. ideal, then another ideal {J\subset I} is a sub P.D. ideal if {\gamma_i(x)\in J} for all {x\in J} and all {i\geq 1}. In other words, the P.D. structure restricts to a P.D. structure on the smaller ideal.

Let’s end with a nice little lemma for finding sub P.D. ideals. Suppose {(A, I, \gamma)} is a P.D. algebra, {S\subset I} a subset and {J} the ideal generated by {S}. Then {J} is a sub P.D. ideal if and only if {\gamma_n(s)\in J} for all {s\in S} and all {n\geq 1}.

Here is the proof. By definition, if {J} is a sub P.D. ideal, then {\gamma_n(s)\in J}. That direction is done. Now suppose {\gamma_n(s)\in J} for all {s\in S}. Let {J'} be the subset of {J} consisting of the {x} for which {\gamma_n(x)\in J} for all {n\geq 1}. By assumption and construction {S\subset J' \subset J}. Since {J} is generated by {S}, if {J'} is an ideal, then {J'=J} and we are done. Choose {x,y \in J'} and fix {n\geq 1}. Now {\gamma_n(x+y)=\sum_{i+j=n}\gamma_i(x)\gamma_j(y)}, and in each term at least one of {i, j} is {\geq 1}, so at least one of {\gamma_i(x), \gamma_j(y)} lies in {J}; since {J} is an ideal, every term and hence the sum lies in {J}. Thus {x+y\in J'}. Lastly, suppose {\lambda\in A}; then using that {J} is an ideal again, {\gamma_n(\lambda x)=\lambda^n\gamma_n(x)\in J}, so {\lambda x\in J'}. Thus {J'} is an ideal, which proves the lemma.



Music 2011 Halfway

It’s already time for a halfway-through-the-year music ranking. This hasn’t been a great year so far. On the other hand, I’ve had way more things I’ve been looking forward to than in recent years. Lots of my favorite artists have been releasing things constantly, and several more are still on my list of things to get. I’ve also spent a lot of time with things I missed in the past. Lightspeed Champion came out with something last year that I missed which was really, really great. I’ve also been listening to Iceburn, a great band from the mid ’90s. I’ve been listening to Arvo Pärt, who might be the best composer of our time. I’ve also been listening to Nik Bärtsch’s Ronin, an amazing band combining jazz and a sort of Philip Glass-style minimalism. Overall, I’ve been listening to and fascinated with lots of things from the past and not sticking with a lot of stuff from this year. I’ll list this year’s stuff in order of how much I like it, with a brief description of each.

Excellent:

1) James Blake: James Blake – At first I didn’t know whether this album was made by horrible musicians or by ones so good that my puny brain couldn’t comprehend the complexity. It turns out that after many, many listens the album revealed itself to be an incredibly complex and beautifully subtle work of art. Unlike everyone else on this list, James Blake is actually a composer rather than a songwriter. This probably confuses people. I even saw people commenting on YouTube that he needs a new drummer. No. The drumming is amazing. It is just polyrhythm more complicated than the standard 2 against 3, and often long hemiolas. The average listener will probably be discouraged by this album.

2) The Dodos: No Color – This is a really great album. It is an exciting, moving, and technically impressive album. I’m continually amazed at how good these guys are as musicians. They write creative songs that involve what sounds almost like a modernized Bach fugue or something. They have fascinating arrangements of acoustic vs. electric sounds. The excitement never lets down on this album. The melodies are beautiful. The lyrics are often moving. I can’t get enough of it.

Very Good:

3) Death Cab for Cutie: Codes and Keys – I swore off DCFC after their last album, but as with other bands I’ve sworn off, for some reason this year I broke my promise. I’m glad I did. This is actually really good. I won’t say it is a return to anything, since it is fairly different from any of their previous albums. There are a few songs that don’t live up to the rest of the album, but throwing those out, I absolutely love this album. To me it is what they tried to do on their last album, but they did it properly this time.

4) Son Lux: We Are Rising – This is a very good album with many of the same qualities of my number one album by James Blake. It has lots of cool tempo changes and mixed meters and polyrhythms. I think the fact that it was completely written and recorded in a month shows. Possibly if more time had been spent on it this would be my number one album so far. It is still well worth getting.

5) Bright Eyes: The People’s Key – I said that I’d never ever get a Bright Eyes album again after Conor Oberst disappointed me repeatedly over the past several albums. The problem is that one of my favorite albums of my entire life is a Bright Eyes album (I’m Wide Awake, It’s Morning). So I just can’t resist when one comes out. My first Bright Eyes album was Fevers and Mirrors. This reminds me of that a lot. The first thing that happens is just speaking. It is ultra-pretentious nonsense that really turned me off of the album at first (just like F&M). Overall I’ve come to really enjoy this album once I got over the horrible speaking parts. Really it is quite fantastic. It is a return to classic Bright Eyes.

6) Elbow: Build a Rocket Boys! – There are absolutely beautiful songs that I adore on this album, but overall it is hard to motivate myself to actually listen to it because of the number of songs I don’t really care for.

Good:

7) Adele: 21 – She really excels at power-pop with a gospel twist. Those songs are really fun and enjoyable to listen to. Unfortunately, it seems that when she tries to get “emotional” it really falls flat. Sometimes when someone sings a song or acts in a movie or play, it is possible for the person to do it passionately without really feeling it. The end result is this awkward moment. That is how I feel when she isn’t doing her awesome poppy stuff. It is a sort of passionate yell-singing that works really well with her great voice on the power pop, but falls flat on the subtler songs.

8) The Decemberists: The King Is Dead – It is nice. That’s about all I can say. Not boring. Not exciting. Very forgettable, but at least it isn’t bad. If I want non-distracting background music while typing up some math or something, I’ll put this on. I have to say that it is really fun for the most part. This is sort of classic Decemberists, harkening back to Castaways and Cutouts or Picaresque.

9) Fleet Foxes: Helplessness Blues – Yet another one where the songs I like I very much love, but I can’t get over how bored I am by the songs I don’t like. Chop the first half of this album off and you have an absolutely excellent EP. Seriously, I’d rather people release a short album of great stuff than fill it out with boring stuff. The Shrine/An Argument is certainly the best song this band has ever produced.

Not So Good:

10) Iron and Wine: Kiss Each Other Clean – I really like the direction Iron and Wine took on this album, but overall it just doesn’t do it for me. I’m bored by most of it. Next time keep going in this direction, just a little further, and I’ll be happy. I have to say that Your Fake Name Is Good Enough for Me is one of my favorite songs of the year. I could listen to it forever.

11) The Antlers: Burst Apart – The Antlers went from my number 1 album a few years ago to one of my least favorite of this year. Just like the previous one, this one is sort of tedious and boring to listen to at first. Unlike the last one after many, many repeated listens in an attempt to get it to reveal itself as a beautiful moving album it just never did. This is a huge disappointment.

12) Wye Oak: Civilian – I was told this would be like Neko Case. It wasn’t. I don’t really like it. There are a few songs I like, but not much.

13) Radiohead: The King of Limbs – Um. I think Radiohead really messed up here. It is very blah. Nothing very interesting. I’ve given it a ton of listens hoping to change my mind, but I guess this goes down in history as the only Radiohead album I don’t like.

14) Cold War Kids: Mine Is Yours – I think this is a horrible album. It is just ordinary boring rock. Nothing makes it stand out. These guys used to be amazing back when they were doing original stuff like Hang Me Up To Dry with their awesome dirty loose sound and passionate vocals. They dared to do a guitar solo on that song that consisted of playing tritones over and over. Bring that back please. Admittedly there were a few good songs on here, but not enough to make it worth it.



Nondegenerate Hodge de Rham

Let’s construct the example today as a quick post. First, we’ll need a theorem that will be used to show that HdR doesn’t degenerate. Let {Z} be a smooth proper variety over an algebraically closed field of positive characteristic with the property that {\mathrm{Pic}^\tau(Z)\simeq \alpha_p} as a group scheme. Then HdR does not degenerate for {Z}. We’ll use a lot of things we haven’t talked about or proved, but the purpose of this post is to give the example and a flavor of why it is true. Later we’ll come back and look at the parts that go into it more carefully.

Here is how this is proved. It is well-known that for HdR to degenerate it must be the case that all global {1}-forms are closed, so we assume this is the case; otherwise we are done. Now Oda has a theorem that says that in this situation {H^1_{dR}(Z/k)\simeq D({}_p\mathrm{Pic}^\tau(Z))}, where {D(-)} is the Dieudonné module and {{}_p\mathrm{Pic}^\tau(Z)} is the {p}-torsion subgroup scheme. Since {\mathrm{Pic}^\tau(Z)\simeq \alpha_p} is killed by {p} and {D(\alpha_p)} is one-dimensional, we get that the first de Rham Betti number is {h^1_{dR}=1}.

Since {H^1(Z, \mathcal{O}_Z)} is isomorphic to the tangent space of the Picard scheme, we get {\dim H^1(Z, \mathcal{O}_Z)=1}; but also {\alpha_p} gives a global {1}-form on the Picard scheme, so {\dim H^0(Z, \Omega^1)\geq 1}. Thus we have a contradiction if HdR degenerates, and hence it does not.
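
Just to make the dimension count explicit: if the HdR spectral sequence degenerated at {E_1}, we would have {h^1_{dR}=\dim H^0(Z, \Omega^1)+\dim H^1(Z, \mathcal{O}_Z)\geq 1+1=2}, contradicting the {h^1_{dR}=1} we got from Oda’s theorem.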

This gives our example if we can come up with something that satisfies those properties. Let {R} be a ramified extension of {W(k)} of ramification index {2}. Let {S=\textrm{Spec}(R)} and let {G} be a finite flat group scheme over {S} whose special fiber satisfies {G_0\simeq \alpha_p}. It is a theorem of Tate and Oort that such a {G} exists whenever there are elements {a} and {c} in {\frak{m}_R} with {ac=p}, which certainly holds here since {e=2}.

Now it is a theorem of Raynaud that there is a projective space {P} over {S} with a linear action of {G} which contains a relative complete intersection surface {X} that is stabilized by {G}, such that {G} acts freely on {X} and the quotient {Y=X/G} is smooth over {S}. It follows (using that {\alpha_p} is its own Cartier dual) that {\textrm{Pic}^\tau(Y_0)\simeq G_0^D\simeq \alpha_p}. Thus in any positive characteristic we have an example of a smooth variety that lifts to characteristic {0} over a very small ramified extension of {W(k)}, but whose HdR spectral sequence does not degenerate! We’ll try to unpack this better next time.



Postscript to Multiplication is Repeated Addition

I swear I’ll go back to math after this post, but I honestly wanted to better understand what the fuss over multiplication being repeated addition was all about. I looked up the research that Devlin had quoted when he was talking about how multiplication is not repeated addition. I read chapter 7 of Nunes and Bryant’s Children Doing Mathematics.

Honestly, I can breathe a sigh of relief. I now understand what all the fuss is about, and no one actually seems to believe that multiplication is not repeated addition (which is good, since it is!). When people say that, they are actually referring to a few other things, which I’ll elaborate on shortly. The misunderstanding comes from people not being precise with their language and saying something they don’t actually mean.

Let me clear up some of the things I’m saying versus some of the things I’m not saying. Recall that all of this came about from questioning whether we should teach children that multiplication is repeated addition or whether we should use some other concept like “scaling”. First off, multiplication is repeated addition (and I think most people actually agree with this regardless of what they say), so under no circumstances should we remove that from our education of children. What I’m not saying is that this is the only concept we should teach. Obviously when teaching anything you should present as many different viewpoints as possible to try to get the best understanding. The idea of scaling is one such viewpoint (which, as I’ve pointed out, is by definition repeated addition).

After very carefully reading what Nunes and Bryant have to say, I think they are perfectly fine with multiplication being repeated addition. There seem to be two different ways they use the phrase “multiplication is not repeated addition.” First, they use it to mean that certain real-world situations for which multiplication is the quickest tool often have more complicated structure than situations where addition is quick and easy. They continually use the term “multiplicative situation”. It seems to me that they aren’t at all concerned with what multiplication is, but with how to get children to convert real-world situations into symbols on which they can then do the multiplication.

If you have a group of children and a pile of candy that you distribute evenly among the children, then complicated ideas appear, like the fact that increasing the amount of candy increases the amount each child will get, while increasing the number of children decreases the amount each gets. But again, saying that this shows that multiplication is not repeated addition is just misusing that phrase. It doesn’t show that at all (because it’s not true). The complexity of a real-world situation has no bearing on the actual operation of multiplication. It seems to me that this is the easy-to-detect misuse of the phrase.

The slightly more subtle way it is misused is when talking about things like “scaling”. I quote from the book, “In multiplicative situations where the relationship between two variables is concerned, a new number meaning emerges, a factor, function, or an intensive quantity connecting the two variables.” Can I just emphasize something here? Notice the phrase new number meaning. Here it is folks. I seem to have pinpointed the whole misunderstanding. The problem isn’t at all with multiplication being confusing. The confusing thing is that numbers are absurdly abstract entities (even for adults).

Before this point in their education children haven’t had to deal with numbers as abstract entities. It was always 3 apples or 5 pieces of candy. A scaling factor is unitless. The hard conceptual part about this stage of their education is that these abstract entities can take on many different meanings all in the same problem. This has nothing to do with multiplication (because as an operation it is just repeated addition). It has to do with the concept of what a number is. It only looks like multiplication is causing the confusion because this is the first place the concept of a number as something other than a concrete number of objects appears. If our curriculum was slightly different and some other math problem introduced a new number meaning before multiplication it would look like that was the cause as well when it is really the concept of number itself that is the cause.

Now that we’ve pinned this down it seems that when people say “multiplication is not repeated addition” what they really mean has nothing to do with the true meaning of that phrase. What they mean is that when they give children a word problem associated to the operation of multiplication it involves more complicated ideas than the previous word problems dealing only with addition. The other thing they seem to mean is that confusion arises when children have to deal with a more abstract notion of number. All of this makes me feel better that it seems that no one is making the false statement that “multiplication is not repeated addition” and intending for the actual meaning of that statement to be true. The thing that worries me is that this strange stock phrase is being used when much better more precise phrases could be used that convey the actual problem. This phrase seems to make people believe the problem is with multiplication itself rather than with the true sources of difficulty.



Multiplication is Repeated Addition

I’ll return to the planned series after this one side post. Keith Devlin at Devlin’s Angle has written many times about how multiplication is not repeated addition and how he thinks it is a detriment to teach children this falsehood. I strongly suggest going and reading all of his articles before reading this. He makes the analogy that he can take a car or a bike to work. The end result happens to be the same, but we shouldn’t pretend like the process is the same. I agree. Just because two processes happen to produce the same results does not mean we can conclude that they are the same process. It is a false analogy as I will show. Repeated addition and multiplication don’t just happen to come out with the same answer. They come out with the same answer because they are the same process.

Before we begin I’ll say a few words about the research done on this topic. I haven’t read it yet, so I can’t comment on whether or not I think the results are valid (it is amazing how much bias … yes, even among professional research scientists … happens in experiments involving observing and quantifying how well children understand things). If teaching children in a certain way increases understanding, then by all means go ahead. I’m no expert on the subject. The whole idea of how to teach a concept seems to have nothing to do with whether or not multiplication is repeated addition and that is what I’m addressing.

First, I agree with Devlin on one point. We can abstractly define multiplication to be a basic operation that is separate from addition. He never really makes this precise, so let’s do that first so we see exactly why they are different operations and why he is saying we confuse the two.

Let’s say we have a set that we’ll call S=\{\ldots , a_{-2}, a_{-1}, a_0, a_1, a_2, \ldots \}. The sole purpose of this is that our usual notation is causing the problem; we should just think of a_j as the number j. We can define an addition and a multiplication as abstract, arbitrary rules that satisfy certain axioms. Note that the multiplication a priori has absolutely nothing to do with the addition (and even a posteriori in most cases). We’ll say the addition is a_i+a_j and the multiplication is a_i\times a_j. Yes, we’re going to use that large clumpy symbol to try to avoid future confusion.

Despite the multiplication being completely arbitrary, notice that we can still define something that we’ll call “additive multiplication”. We’ll denote this with a dot. In other words, 3\cdot a_j=a_j+a_j+a_j. No matter what integer we pick we can define this additive multiplication. There is no reason to stop there. We can define \frac{1}{3}\cdot a_j to be the element a_k of the set with the property that a_k+a_k+a_k=a_j (assuming such an element exists and is unique). We can do this one piece at a time: \frac{5}{2}\cdot a_j just means \frac{1}{2}\cdot (5\cdot a_j), or maybe even better, it is adding a_j two and a half times: a_j+a_j+a_j/2. We can extend this to the negatives. I won’t worry about the interpretation of this because Devlin admits negatives pose problems in any interpretation. In algebra we’d call this giving our set with its addition a \mathbb{Q}-module structure, since we’ve defined a multiplication \mathbb{Q}\times S\to S. This structure never even mentions the multiplication \times, so indeed it is completely separate.
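
As a concrete illustration (sticking with the artificial notation, where a_j just means the number j): to compute \frac{3}{2}\cdot a_4 we only ever use addition. First 3\cdot a_4=a_4+a_4+a_4=a_{12}, and then \frac{1}{2}\cdot a_{12} is the element a_6, since a_6+a_6=a_{12}. So \frac{3}{2}\cdot a_4=a_6, and the \times operation never showed up.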

Let’s cautiously reintroduce the old notation. We have these two arbitrary operations on the integers: addition 2+3=5 and multiplication 2\times 3=6. As was pointed out already, we also have this other thing 2\cdot 3=6, and it is very good of Devlin to point out that this other thing need not agree with the multiplication rule we’ve defined. The reason we don’t notice that they are two different things is that when we do the dot and think of repeated addition, we specify with the symbol 3 that we want to repeat addition three times. When we do the multiplication 3\times 2 we’ve reused the same symbol in a different way: that 3 is just an element of the set on which we defined multiplication, which we could notate as, say, a_3 to keep things separate. I applaud Devlin at this point for pointing out the conflation of these two separate things (although he never told us this was the real reason for the confusion).

Let’s start pointing out some issues with what Devlin says. First off, it is mentioned several times in every single one of his articles that the concept of multiplication as repeated addition only works for positive whole numbers. Hold on just a second. Now that we’ve figured out what is really going on, we see this is not true at all. Our set could be anything; say S is the real numbers. We still have this \mathbb{Q}-module structure on it. So it makes perfect sense in the repeated-addition paradigm to do, say, \frac{3}{2}\cdot(-\sqrt{2}). This is a far cry from only positive integers. We just used rational numbers, irrational numbers, and negatives and didn’t run into any problems. In fact, even the alternative proposed by Devlin has issues with real numbers, and the way it is overcome there is exactly the same way it can be overcome in this situation, but we won’t get into that here.

The second major point is what Devlin wants to replace the paradigm with. He thinks it should be replaced with the idea of “scaling”. Let’s look at the example that he thinks proves his point. I’ll just quote verbatim the relevant section:

Take a piece of elastic, and tie two knots in it, one near each end. Ask the child to measure the distance between the knots. Suppose it turns out to be 5 inches. Now, as the child watches, slowly stretch the elastic until the distance between the two knots is 10 inches. Get the child to measure it again. Now ask the child to write down a mathematical description of how the new length depends on (is related to) the original length. I would hope the child writes down

10 = 2 x 5
In more general terms, what you did was double the length, or, as an equation:
new length = 2 x old length
I would be very surprised if the child wrote down
10 = 5 + 5
corresponding to
new length = old length + old length
and if he or she did, you would have your work cut out trying to put them right before they have serious trouble in the math class. Sure, these addition equations are numerically correct. But so what? What you have just shown the child when you stretched the elastic has absolutely nothing to do with addition and everything to do with multiplication.

What horrifies me here is that this little example actually shows exactly the opposite of what he claims. He thinks that if the child says the new length is the old length plus the old length, they are in serious trouble. I’ll rephrase the example to show that if they think in ANY other terms they will be in big trouble, and that this way is actually the correct way of thinking.

What exactly does Devlin mean when he uses the word “double”? Really. Seriously. I want to know. Here is what I mean when I use it. The number 5 is causing problems again because it gives us something tangible to apply our arbitrary multiplication rule 2\times 5 to. Take a piece of elastic. Take a stick of unknown length. Tie knots where the ends of the stick are. Stretch the elastic. When you use the stick to see how far the elastic has been stretched, you find that you use the stick exactly twice. In other words, the new length is the old length plus the old length.

This isn’t some strange rhetorical trick. Now that we can’t accidentally apply our multiplication rule, we see that when we use the word “double” the only possible meaning it can have is 2\cdot(length). The dot operation is what we mean. Now we’ve come to the amazing part of all of this. Devlin wants us to define the \times multiplication to mean “scaling”, which is a priori completely separate from the “additive multiplication” of the dot. But when we go to define what we even mean by scaling, we find that we are taking repeated addition as the definition. This is why the two agree! They are actually the same! If the child tries to tell you that the elastic is 2\times(length), where the \times means some arbitrary multiplication rule, then obviously the child is going to be in way more trouble, because he has completely missed the point.

Devlin seems to have worked out that in abstract math land the “additive multiplication” and the \times multiplication are very often different. From this he seems to conclude that they are also different for the number systems we teach children. This is not correct. In principle they could be different, but in reality the dot defines a perfectly valid multiplication and we take that to be the definition (even abstractly). They don’t just happen to give the same answer. They are the same. Even if we take multiplication to mean “scaling”, when we go to figure out what we mean by that word we come back to repeated addition.



When does Hodge-de Rham Degenerate?

I wrote up half of the follow up to the last post almost immediately, but then got really stuck. I wasn’t sure how to avoid the technical details and still have it be a worthwhile post. I’ve tried twice to do it, and now I’ve decided it just isn’t worth going down that road right now. Instead I’ve thought of something else to do which I find quite amazing and shocking. It also fits in really well with the past series of posts. In this post I’ll overview the example and why it is shocking. Next time I’ll give the construction. Then we’ll end with a closer examination of what is going on and why it shouldn’t be shocking that it exists.

Let’s recall some facts and theorems about the Hodge-de Rham spectral sequence. We’ve thought quite a bit about this over the past six months or so. Let {X} be a smooth projective variety over a field of characteristic {0}. Deligne and Illusie came up with a pretty crazy way to prove that HdR degenerates at {E_1}. First off, degeneration can fail in positive characteristic, but they decided to reduce mod {p} and prove that if {Y} is a smooth proper scheme over a perfect field {k} of characteristic bigger than {\dim Y} and {Y} admits a lift to {W_2(k)}, then the HdR SS of {Y} degenerates at {E_1}. Since the mod {p} reductions of varieties that came from characteristic {0} admit such lifts, degeneration holds for them, and to finish it off we can extrapolate back to characteristic {0}.

The moral of this story is that we have degeneration in characteristic {0}, and we have degeneration in positive characteristic if we have even just a single step in trying to lift to characteristic {0}. Note that there are varieties that lift to {W_2(k)} but do not lift to {W_3(k)}, and there are varieties that admit a formal lift, lifting each stage from {W_n(k)} to {W_{n+1}(k)} all the way up, yet do not algebraize to an actual lift to characteristic {0}. Both of these types of varieties have HdR degenerating at {E_1}! They don’t have to lift to get the degeneration. In some sense they merely need to exhibit some sort of “evidence” that a lift is possible.

Lastly, we have a theorem that relates degeneration of HdR on the generic and closed fibers of a flat family over a DVR. It says that if {\frak{X}\rightarrow S=\mathrm{Spec}(R)} is a smooth proper flat family over a DVR whose residue field {R/\frak{m}\simeq k} has characteristic {p} and whose fraction field {\mathrm{Frac}(R)=K} has characteristic {0}, and if {\dim H^q(X_0, \Omega^p)=\dim H^q(X_\eta, \Omega^p)} for all {p, q}, then HdR degenerates at {E_1} for {X_0}. In other words, if we have a lift of {X_0} to characteristic {0} (over ANY ring, not necessarily {W(k)}), and the Hodge numbers of {X_0} match the lifted Hodge numbers, this guarantees degeneration. The argument is quite simple and just involves counting dimensions and upper semicontinuity, so we didn’t actually need characteristic {0}: the statement generalizes to say that if HdR degenerates on the generic fiber (of whatever characteristic) and the Hodge numbers of the two fibers agree, then it degenerates on the special fiber.
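
Here is roughly how that dimension count goes (a sketch, in the case where the generic fiber has characteristic {0}). Degeneration in characteristic {0} gives {\dim H^n_{dR}(X_\eta)=\sum_{p+q=n}\dim H^q(X_\eta, \Omega^p)}, which by the hypothesis on Hodge numbers equals {\sum_{p+q=n}\dim H^q(X_0, \Omega^p)}. The spectral sequence always gives the inequality {\dim H^n_{dR}(X_0)\leq \sum_{p+q=n}\dim H^q(X_0, \Omega^p)}, while semicontinuity applied to the relative de Rham cohomology gives {\dim H^n_{dR}(X_0)\geq \dim H^n_{dR}(X_\eta)}. Stringing these together forces equality everywhere, and {\dim H^n_{dR}(X_0)=\sum_{p+q=n}\dim H^q(X_0, \Omega^p)} is exactly degeneration at {E_1} for {X_0}.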

Let’s just recap here. HdR degenerates for characteristic {0} things. Things that merely exhibit evidence of a lift to characteristic {0} by lifting to {W_2(k)} have degenerate HdR (this includes things that provably can’t be lifted all the way to characteristic {0}). We can relate degeneration of HdR on the generic fiber to degeneration on the special fiber of a flat family. One might guess that if you have an actual honest lift to characteristic {0} (rather than just “evidence”), then HdR must degenerate. You may even think you have a proof: just lift it to {\frak{X}\rightarrow S}, and since the generic fiber degenerates, the special fiber must also.

You would be wrong. William Lang came up with an example of a smooth variety (in any positive characteristic you want) that lifts to characteristic {0} but has a non-degenerate HdR spectral sequence. This should be sort of shocking in light of the above theorems and “proof” outline. I’m going to partially give away the punchline now. There are two things going on here. First, we needed the Hodge numbers to match up after lifting in order to say that degeneration of one fiber implied degeneration of the other; this example will not have that property. The other thing going on is that degeneration of HdR seems to really be about liftability to {W_2(k)} and not merely about characteristic {0}. This example will be a lift over a ramified extension of {W(k)} (of degree {2}, so as close to {W(k)} as possible).
