Divided Power Structures 2

Today we’ll do a short post on some P.D. algebra properties and constructions. Let’s start with properties of P.D. ideals. Our first proposition is that given {(I, \gamma)} and {(J, \delta)}, two P.D. ideals in {A}, the product {IJ} is a sub P.D. ideal of both {I} and {J}. This is straightforward to check using the criterion from last time, since {IJ} is generated by the products {xy} with {x\in I} and {y\in J}, and for such a product {\gamma_n(xy)=y^n\gamma_n(x)\in IJ} for all {n\geq 1}, because {\gamma_n(x)\in I} and {y^n\in J}. This proposition immediately gives us that powers of a P.D. ideal are sub P.D. ideals with a natural choice of P.D. structure on them.

Another proposition is that given two P.D. ideals as above with the additional property that {I\cap J} is a sub P.D. ideal of both {I} and {J}, and that {\gamma} and {\delta} restrict to the same P.D. structure on the intersection, there is a unique P.D. structure on {I+J} such that {I} and {J} are sub P.D. ideals. Proving this would require developing some techniques that would lead us too far astray, and we probably won’t use this one anyway. It just gives a sense of the types of constructions that are compatible with P.D. structures.

Another construction that requires no extra effort is the direct limit. If {\{(A_i, I_i, \gamma_i)\}} is a directed system of P.D. algebras, then {\displaystyle (A, I)=\left(\lim_{\rightarrow} A_i, \lim_{\rightarrow} I_i\right)} has a unique P.D. structure {\gamma} such that each natural map {(A_i, I_i, \gamma_i)\rightarrow (A, I, \gamma)} is a P.D. morphism.

Unfortunately, one common construction that doesn’t work automatically is the tensor product. It does work in the following specific case. If {B} and {C} are {A}-algebras, and {I\subset B} and {J\subset C} are augmentation ideals with P.D. structures {\gamma} and {\delta} respectively, then form the ideal {K=\mathrm{ker}(B\otimes_A C\rightarrow B/I \otimes_A C/J)}. We then get that {K} has a P.D. structure {\epsilon} such that {(B, I, \gamma)\rightarrow (B\otimes_A C, K, \epsilon)} and {(C, J, \delta)\rightarrow (B\otimes_A C, K, \epsilon)} are P.D. morphisms.

Next time we’ll start thinking about how to construct compatible P.D. structures over thickenings. Since we’ll be thinking a lot about {W_m(k)}, I’ll just end this post by pointing out that {(p)\subset W_m(k)} actually has many choices of P.D. structure. But last time we saw that {(p)\subset W(k)} has a unique one, so our convention is going to be to choose the “canonical” P.D. structure on {(p)\subset W_m(k)}, namely the one induced from the unique P.D. structure on {(p)\subset W(k)}.


Other forms of Witt vectors

Today we’ll discuss two other flavors of the ring of Witt vectors, which have some pretty neat applications to computing Cartier duals of group schemes. The ring we’ve constructed, {W(A)}, is sometimes called the ring of generalized Witt vectors. You can construct a similar ring associated to a prime, {p}.

Recall that {W} was the unique functor {\mathrm{Ring}\rightarrow\mathrm{Ring}} that satisfies the following: {W(A)=\{(a_1, a_2, \ldots): a_j\in A\}} as a set, for {\phi:A\rightarrow B} a ring map we get {W(\phi)(a_1, a_2, \ldots )=(\phi(a_1), \phi(a_2), \ldots)}, and the previously defined {w_n: W(A)\rightarrow A} are functorial ring homomorphisms.
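Concretely, the {w_n} are often called the ghost components, given by {w_n(a)=\sum_{d\mid n} d\, a_d^{n/d}}. Just to have something to play with, here is a small Python sketch computing them symbolically with sympy (the function and variable names are my own):

```python
from sympy import divisors, symbols

def ghost(n, a):
    """n-th ghost component of a generalized Witt vector.

    `a` is a dict {index: value} with 1-based indices;
    w_n(a) = sum over d | n of d * a_d^(n/d).
    """
    return sum(d * a[d] ** (n // d) for d in divisors(n))

# With formal variables we recover the familiar polynomials:
a1, a2, a4 = symbols("a1 a2 a4")
a = {1: a1, 2: a2, 4: a4}
print(ghost(1, a))  # a1
print(ghost(2, a))  # equals a1**2 + 2*a2
print(ghost(4, a))  # equals a1**4 + 2*a2**2 + 4*a4
```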

We can similarly define the Witt vectors over {A} associated to a prime {p} as follows. Define {W_{p^\infty}} to be the unique functor {\mathrm{Ring}\rightarrow \mathrm{Ring}} satisfying the following properties: {W_{p^\infty}(A)=\{(a_0, a_1, \ldots): a_j\in A\}} and {W_{p^\infty}(\phi)(a_0, a_1, \ldots )=(\phi(a_0), \phi(a_1), \ldots)} for any ring map {\phi: A\rightarrow B}. Now let {w_{p^n}(a_0, a_1, \ldots)=a_0^{p^n}+pa_1^{p^{n-1}}+\cdots + p^na_n}; then {W_{p^\infty}} also has to satisfy the property that {w_{p^n}: W_{p^\infty}(A)\rightarrow A} is a functorial ring homomorphism.

Basically we can think of {W_{p^\infty}} as the generalized Witt vectors where we’ve relabelled so that our indexing is actually {(a_{p^0}, a_{p^1}, a_{p^2}, \ldots)}, in which case the {w_{p^n}} play the role of the {w_n}. There is a much more precise way to relate these using the Artin-Hasse map and the natural transformation {\epsilon_p: W(-)\rightarrow W_{p^\infty}(-)} given by {\epsilon_p: (a_1, a_2, \ldots)\mapsto (a_{p^0}, a_{p^1}, a_{p^2}, \ldots)}.
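We can at least verify the relabelling claim symbolically: since the divisors of {p^n} are exactly the powers {p^i} with {i\leq n}, the generalized {w_{p^n}} only sees the {p}-power-indexed entries, and it agrees with the {p}-typical ghost component after relabelling. A quick self-contained check (function names my own):

```python
from sympy import divisors, symbols

def ghost_big(n, a):
    """Generalized ghost component: w_n(a) = sum over d|n of d * a_d^(n/d)."""
    return sum(d * a[d] ** (n // d) for d in divisors(n))

def ghost_p(n, p, b):
    """p-typical ghost component: w_{p^n}(b) = sum_i p^i * b_i^(p^(n-i))."""
    return sum(p**i * b[i] ** (p ** (n - i)) for i in range(n + 1))

p = 2
# A generalized Witt vector with formal entries at indices 1..8.
x = symbols("x1:9")
a = {i + 1: x[i] for i in range(8)}
# epsilon_p keeps only the p-power-indexed entries: (a_1, a_p, a_{p^2}, ...)
b = {i: a[p**i] for i in range(4)}

for n in range(4):
    assert ghost_big(p**n, a) - ghost_p(n, p, b) == 0
```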

Notice that when we defined {W(A)} using those formulas (and hence also {W_{p^\infty}(A)}), the operations of addition, multiplication, and additive inversion were defined on the first {t} components using only polynomials in the first {t} components.

Define {W_t(A)} to be the set of length {t} “vectors” with entries in {A}, i.e. the set {\{(a_1, a_2, \ldots, a_t): a_j\in A\}}. The same formulas for adding and multiplying generalized Witt vectors are still well-defined on these and turn this set into a ring, for exactly the same reason. We also get for free that the truncation map {W(A)\rightarrow W_t(A)} given by {(a_1, a_2, \ldots)\mapsto (a_1, a_2, \ldots, a_t)} is a ring homomorphism.

For instance, we just get that {W_1(A)\simeq A}. These form an obvious inverse system {W_n(A)\rightarrow W_m(A)} by projection whenever {m\leq n}, and we get that {W(A)\simeq \lim_{\leftarrow} W_t(A)} and, truncating the {p}-typical vectors in the same way, that {W_{p^\infty}(A)\simeq \lim_{\leftarrow} W_{p^t}(A)}.
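To see a truncated ring in action, here is a sketch of the {p}-typical length-{2} case. Forcing the ghost components {a_0} and {a_0^p+pa_1} to be additive gives the addition polynomials {S_0=a_0+b_0} and {S_1=a_1+b_1+\frac{a_0^p+b_0^p-(a_0+b_0)^p}{p}} (the fraction is an integer polynomial since the middle binomial coefficients of {(a_0+b_0)^p} are divisible by {p}). Over {\mathbb{F}_2} this exhibits {W_2(\mathbb{F}_2)\simeq \mathbb{Z}/4}:

```python
def witt2_add(a, b, p):
    """Add two length-2 p-typical Witt vectors with entries mod p.

    S_0 = a0 + b0
    S_1 = a1 + b1 + (a0^p + b0^p - (a0 + b0)^p) / p
    The division by p is exact over the integers, so we compute
    there and only reduce mod p at the end.
    """
    a0, a1 = a
    b0, b1 = b
    carry = (a0**p + b0**p - (a0 + b0) ** p) // p  # exact integer division
    return ((a0 + b0) % p, (a1 + b1 + carry) % p)

p = 2
x = (1, 0)
acc = (0, 0)
orbit = []
for _ in range(4):
    acc = witt2_add(acc, x, p)
    orbit.append(acc)
print(orbit)  # [(1, 0), (0, 1), (1, 1), (0, 0)]
```

Since {(1,0)} has additive order {4}, the ring {W_2(\mathbb{F}_2)} is {\mathbb{Z}/4} rather than {\mathbb{F}_2\times \mathbb{F}_2}.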

Today we’ll end with a sketch of a proof that {W_{p^\infty}(\mathbb{F}_p)\simeq \mathbb{Z}_p}. Most of these steps are quite non-trivial, but after next time when we talk about valuations, we’ll be able to prove much better results and this will fall out as a consequence of one of them.

Consider the one-dimensional formal group law over {\mathbb{Z}_{(p)}} defined by {F(x,y)=f^{-1}(f(x)+f(y))} where {f(x)=x+p^{-1}x^p+p^{-2}x^{p^2}+\cdots}. Inside {\mathcal{C}(F; \mathbb{Z}_p)} (the honest group of power series with no constant term, with addition given by the group law considered over {\mathbb{Z}_p}), there is a special subcollection {\mathcal{C}_p(F; \mathbb{Z}_p)} called the {p}-typical curves, which just means the curves {\gamma(t)} with {\mathbf{f}_q\gamma(t)=0} for all primes {q\neq p}, where {\mathbf{f}_q} is the Frobenius operator.
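One can get a feel for this formal group law by computing it to low order. Here is a rough sympy sketch for {p=2} (the truncation degree and all helper names are my own choices): truncate {f}, invert it as a power series by fixed-point iteration, and compose.

```python
from sympy import symbols, expand, Poly, Rational

x, y, u = symbols("x y u")
p, N = 2, 3  # prime and truncation degree, chosen for the demo

def trunc(expr, gens, deg):
    """Drop all monomials of total degree > deg."""
    out = 0
    for monom, coeff in Poly(expand(expr), *gens).terms():
        if sum(monom) <= deg:
            term = coeff
            for g, e in zip(gens, monom):
                term *= g**e
            out += term
    return out

def f(t, gens):
    """f(t) = t + t^p/p + t^(p^2)/p^2 + ..., truncated at degree N."""
    s = sum(t ** (p**i) * Rational(1, p**i) for i in range(3))
    return trunc(s, gens, N)

# Compositional inverse g of f: solve f(g(u)) = u modulo degree N+1
# by the fixed-point iteration g <- u - (f(g) - g).
g = u
for _ in range(N):
    g = trunc(u - (f(g, [u]) - g), [u], N)

# The formal group law F(x, y) = f^{-1}(f(x) + f(y)), truncated.
F = trunc(g.subs(u, f(x, [x, y]) + f(y, [x, y])), [x, y], N)
print(expand(F))  # equals x + y - x*y + x**2*y + x*y**2
```

To this order every coefficient is an integer, consistent with the fact that {F} has {p}-integral coefficients even though {f} itself has denominators.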

Now one can define a bijection {E:\mathbb{Z}_p^{\mathbb{N}\cup \{0\}}\rightarrow \mathcal{C}_p(F;\mathbb{Z}_p)}. This can be written explicitly by {(a_0, a_1, \ldots)\mapsto \sum^F a_it^{p^i}}, where the sum is taken using the formal group law {F}. Moreover, we get {w_{p^n}^FE=w_{p^n}}, where {w_{p^n}^F(\gamma(t))} is {p^n} times the coefficient of {t^{p^n}} in {f(\gamma(t))}. One then puts a commutative ring structure on {\mathcal{C}_p(F;\mathbb{Z}_p)} compatible with the already existing group structure and having unit element {\gamma_0(t)=t}.

There is a ring map {\Delta: \mathbb{Z}_p\rightarrow \mathcal{C}_p(F; \mathbb{Z}_p)} defined by {\Delta(a)=f^{-1}(af(t))}. Also, the canonical projection {\mathbb{Z}_p\rightarrow \mathbb{F}_p} induces a map {\rho: \mathcal{C}_p(F;\mathbb{Z}_p)\rightarrow \mathcal{C}_p(F; \mathbb{F}_p)}. It turns out you can check that the composition {\rho\circ \Delta} is an isomorphism, which (identifying {\mathcal{C}_p(F; \mathbb{F}_p)} with {W_{p^\infty}(\mathbb{F}_p)}) gives the isomorphism {\mathbb{Z}_p\stackrel{\sim}{\rightarrow} W_{p^\infty}(\mathbb{F}_p)}.

Likewise, we can also show that {W_{p^\infty}(\mathbb{F}_{p^n})} is the unique unramified degree {n} extension of {\mathbb{Z}_p}.

Formal Witt Vectors

Last time we checked that our explicit construction of the Witt vectors actually yields a ring, and in the proof we noted that {W} is in fact a functor {\mathrm{Ring}\rightarrow\mathrm{Ring}}. Since it exists and is the unique functor that has the three properties we listed, we could just as well have defined the ring of Witt vectors over {A} to be {W(A)}.

We also said that {W} was representable, and this is just because {W(A)=\mathrm{Hom}_{\mathrm{Ring}}(\mathbb{Z}[x_1, x_2, \ldots ], A)}. We can use our {\Sigma_i} to define a (co)commutative Hopf algebra structure on {\mathbb{Z}[x_1, x_2, \ldots]}.

For instance, define the comultiplication {\mathbb{Z}[x_1, x_2, \ldots ]\rightarrow \mathbb{Z}[x_1,x_2,\ldots]\otimes \mathbb{Z}[x_1,x_2,\ldots]} by {x_i\mapsto \Sigma_i(x_1\otimes 1, \ldots , x_i\otimes 1, 1\otimes x_1, \ldots, 1\otimes x_i)}.
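We can actually solve for the first few {\Sigma_i} directly from the ghost conditions {w_n(\Sigma)=w_n(X)+w_n(Y)}, recovering for example {\Sigma_1 = x_1+y_1} and {\Sigma_2 = x_2+y_2-x_1y_1}. A sympy sketch of the recursion over divisors (variable names my own):

```python
from sympy import divisors, symbols, expand, Rational

N = 4
xs = {d: symbols(f"x{d}") for d in range(1, N + 1)}
ys = {d: symbols(f"y{d}") for d in range(1, N + 1)}

def ghost(n, a):
    """w_n(a) = sum over d|n of d * a_d^(n/d)."""
    return sum(d * a[d] ** (n // d) for d in divisors(n))

# Solve w_n(Sigma) = w_n(X) + w_n(Y) recursively:
#   n * Sigma_n = w_n(X) + w_n(Y) - sum over proper divisors d of n
#                 of d * Sigma_d^(n/d)
Sigma = {}
for n in range(1, N + 1):
    rhs = ghost(n, xs) + ghost(n, ys)
    rhs -= sum(d * Sigma[d] ** (n // d) for d in divisors(n) if d != n)
    Sigma[n] = expand(rhs * Rational(1, n))

print(Sigma[1])  # x1 + y1
print(Sigma[2])  # equals x2 + y2 - x1*y1
```

Running this, {\Sigma_3} and {\Sigma_4} also come out with integer coefficients, which is exactly the nontrivial content of the construction: the division by {n} is always exact.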

Since this is a Hopf algebra we get that {W=\mathrm{Spec}(\mathbb{Z}[x_1,x_2,\ldots])} is an affine group scheme. The {A}-valued points on this group scheme are by construction the elements of {W(A)}. In some sense we have this “universal” group scheme keeping track of all of the rings of Witt vectors.

Another thing we could notice is that {\Sigma_1(X,Y)}, {\Sigma_2(X,Y)}, {\ldots} are polynomials and hence power series. If we go through the tedious (yet straightforward, since it is just Witt addition) details of checking, we will find that they satisfy all the axioms of an infinite-dimensional formal group law. We will write this formal group law as {\widehat{W}(X,Y)} and {\widehat{W}} for the associated formal group.

Next time we’ll start thinking about the length {n} formal group law of Witt vectors (truncated Witt vectors).

Witt Vectors Form a Ring

Today we’ll check that the ring of Witt vectors is actually a ring. Let {A} be a ring; then {W(A)} as a set is the collection of infinite sequences of elements of {A}. Recall that our construction involves lots of various polynomials and a strange definition of addition and multiplication. I won’t rewrite those, since they were the entirety of the last post.

Now there is a nice trick to prove that {W(A)} is a ring when {A} is a {\mathbb{Q}}-algebra. Just define {\psi: W(A)\rightarrow A^\mathbb{N}} by {(a_1, a_2, \ldots) \mapsto (w_1(a), w_2(a), \ldots)}. This is a bijection, and it takes the Witt addition and multiplication to component-wise addition and multiplication, so since the latter is the standard product ring structure we know {W(A)} is a ring. Also, {\psi(0,0,\ldots)=(0,0,\ldots)}, so {(0,0,\ldots)} is the additive identity; {\psi(1,0,0,\ldots)=(1,1,1,\ldots)}, which shows {(1,0,0,\ldots)} is the multiplicative identity; and {\psi(\iota_1(a), \iota_2(a), \ldots)=(-w_1(a), -w_2(a), \ldots)}, so we see {(\iota_1(a), \iota_2(a), \ldots)} is the additive inverse of {(a_1, a_2, \ldots)}.
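To see the trick in action with actual numbers: using the first two addition polynomials {\Sigma_1 = x_1+y_1} and {\Sigma_2 = x_2+y_2-x_1y_1} (which one derives by forcing the {w_n} to be additive), the map {\psi} really does turn Witt addition into component-wise addition. A toy check on the first two components over {\mathbb{Q}}:

```python
from fractions import Fraction

def ghost2(a):
    """First two ghost components: w_1(a) = a_1, w_2(a) = a_1^2 + 2*a_2."""
    a1, a2 = a
    return (a1, a1**2 + 2 * a2)

def witt_add2(a, b):
    """Witt addition on the first two components:
    Sigma_1 = a1 + b1, Sigma_2 = a2 + b2 - a1*b1."""
    a1, a2 = a
    b1, b2 = b
    return (a1 + b1, a2 + b2 - a1 * b1)

a = (Fraction(1, 2), Fraction(3))
b = (Fraction(2), Fraction(-1, 4))

lhs = ghost2(witt_add2(a, b))
rhs = tuple(s + t for s, t in zip(ghost2(a), ghost2(b)))
print(lhs == rhs)  # True
```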

We can actually get this idea to work for any torsion-free ring (one with no additive torsion, so that {A\rightarrow A\otimes\mathbb{Q}} is injective) by considering this embedding. We have an induced injective map {W(A)\rightarrow W(A\otimes\mathbb{Q})}. The addition and multiplication are defined by polynomials over {\mathbb{Z}}, so these operations are preserved by this map. We just proved above that {W(A\otimes\mathbb{Q})} is a ring, so since {(0,0,\ldots)\mapsto (0,0,\ldots)} and {(1,0,0,\ldots)\mapsto (1,0,0,\ldots)} and the map preserves the inverse formulas, the image of the embedding {W(A)\rightarrow W(A\otimes \mathbb{Q})} is a subring and hence {W(A)} is a ring.

Lastly, we need to prove this for all remaining rings. Given any ring {A}, choose a torsion-free ring that surjects onto it, say {B\rightarrow A} (a polynomial ring over {\mathbb{Z}} on enough generators works). Then since the induced map again preserves everything and {W(B)\rightarrow W(A)} is surjective, the image is a ring and hence {W(A)} is a ring.

So where does all this formal group stuff we started with come into play? Well, notice that what we were really implicitly using is that {W:\mathbf{Ring}\rightarrow\mathbf{Ring}} is a functor. It takes a ring {A} and gives a new ring {W(A)}. If {\phi: A\rightarrow B} is a ring map, then {W(\phi): W(A)\rightarrow W(B)} given by {(a_1, a_2, \ldots)\mapsto (\phi(a_1), \phi(a_2), \ldots)} is still a ring map. We also have that the maps {w_n:W(A)\rightarrow A}, {a\mapsto w_n(a)}, are ring homomorphisms for all {n}.

Some people think it is cleaner to define the ring of Witt vectors as the unique functor {W} that satisfies these three properties. From a functorial point of view it turns out that {W} is representable. The representing ring via the ring axioms gives a Hopf algebra structure, and hence we get an affine group scheme out of it. Then as in the formal group discussion, we can complete this to get a formal group. This will be the discussion of next time.

Witt Rings 1

Here is just a quick post on why one might want to know what the ring of Witt vectors is. I won’t tell you what they are, but here are some interesting ways in which they are used. It is hard to find any resources on their construction, so we’ll try to get some information out there.

Given a ring, you can construct its ring of Witt vectors. For a perfect field {k} of positive characteristic, this ring {W(k)} has some very nice properties: it is a complete DVR with residue field {k} and fraction field of characteristic {0}. I’m very interested in a class of problems in algebraic geometry known as “lifting problems”. One wants to know if a particular variety defined over a positive characteristic field has a lift to characteristic {0}.

What this means is that you have a deformation of the variety where the special fiber is the variety itself, but another fiber is of characteristic {0}. This probably hurts your brain if you are used to thinking of deformations as “continuously” changing a variety, but recall all it really means is that you have a flat family.

Here is where the Witt vectors shine. Suppose you are trying to lift a variety {Y} to characteristic {0}. Then you might try to find an {X} and a flat map {X\rightarrow \text{Spec}(W(k))} with the property that the fiber over the closed point is {Y}. Then you’ve lifted it, since the generic fiber is a deformation of {Y} and is defined over {\mathrm{Frac}(W(k))}, which is of characteristic {0}. Note that finding such an {X} is usually very difficult and often requires constructing a formal scheme one step at a time and proving that it is algebraizable, but now we’re getting ahead of ourselves.

Now I’ll just list some other applications that we won’t focus on, but hopefully something catches your interest so that you’ll want to find out what they are. The last few posts were about de Rham cohomology in arbitrary characteristic, and we have our eye towards crystalline cohomology. Number theorists care a lot about crystalline cohomology since it is central in all of this Langlands stuff going on. The tie-in with Witt vectors is the de Rham-Witt complex.

Witt rings show up in K-theory in the form of {K_0} of the category of endomorphisms of projective modules over a commutative ring. Lastly, any unipotent abelian connected algebraic group is isogenous to a product of truncated Witt group schemes.

I’m sure there are lots of other examples where these things come up. “If they’re so important, why has no one really heard of them?” you may be asking. I have no idea. I wish there were more out there on them so that it was easier to learn what they are. I think it has to do with the fact that, for the most part, you have to write down a really gross formula for the multiplication.