The main purpose of my math blogging this summer was to solidify the topics I felt a little shaky on that could appear on my qualifying exam in September. So far I’ve limited this mostly to algebra, and we’ve gotten far enough that it’s drifting out of the scope of the test. In fact, I’m feeling fairly comfortable with the core material of both algebra and complex analysis. So that leaves manifolds to blog about.

The problem with manifolds is that developing things properly takes quite a bit of tedious detail, and I want to get all the way to the Frobenius theorem starting from what tensors are. So I have to start somewhere reasonable and skip lots of things. All of a sudden it seems almost pointless to blog about this if I’m just going to skip the very mechanics I need to make sure I understand.

So I’m going to try my best to develop things rigorously and carefully, but I’ll probably only fill in details for the standard arguments that recur often and hence are likely to show up on the exam.

My first assumption is a strong one. I’m working off the assumption that people who want to read this series either know what the tensor product of finite-dimensional vector spaces is or can quickly look it up and figure it out.

Let $V$ be a finite-dimensional vector space. Then a covariant $k$-tensor is an element of $\underbrace{V^* \otimes \cdots \otimes V^*}_{k}$ (where $*$ denotes the dual space). For our purposes, we really only care about covariant tensors (I won’t go into connections or anything), so I won’t keep saying “covariant,” and we denote the vector space of $k$-tensors by $T^k(V)$.

We have lots of examples of these already. $T^1(V) = V^*$, since linear functionals are in particular multilinear. Thus 1-tensors are just covectors. In the language of linear algebra, 2-tensors are just bilinear forms, so inner products are 2-tensors. Lastly, the determinant, viewed as a function of the $n$ columns of a matrix, is an $n$-tensor on $\mathbb{R}^n$.
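As a quick sanity check on the multilinearity claim (just an illustration, not part of the development), here is a small Python sketch verifying that the $2 \times 2$ determinant is linear in its first column slot:

```python
# The 2x2 determinant, viewed as a function of its two column vectors in R^2.
def det2(v, w):
    return v[0] * w[1] - v[1] * w[0]

v, w, u = (1.0, 2.0), (3.0, 5.0), (-1.0, 4.0)
a, b = 2.0, -3.0

# Linearity in the first slot: det2(a*v + b*w, u) = a*det2(v, u) + b*det2(w, u)
lhs = det2((a * v[0] + b * w[0], a * v[1] + b * w[1]), u)
rhs = a * det2(v, u) + b * det2(w, u)
print(lhs == rhs)  # True
```

Linearity in the second slot (and alternation, which we get to below) can be checked the same way.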

Given a choice of basis for $V$, we have a natural basis for these vector spaces. Let $e_1, \ldots, e_n$ be a basis for $V$, then take the dual basis $e^1, \ldots, e^n$ for $V^*$ (the set such that $e^i(e_j) = \delta^i_j$). Then $\{ e^{i_1} \otimes \cdots \otimes e^{i_k} : 1 \leq i_1, \ldots, i_k \leq n \}$ is a basis for $T^k(V)$. In particular, we get that $\dim T^k(V) = n^k$.
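To make the counting concrete, here is a Python sketch (the helpers `dual` and `tensor_product` are just illustrative names I made up) that builds each basis tensor as an honest multilinear function and confirms there are $n^k$ of them:

```python
import itertools

n, k = 3, 2  # dim V = 3, looking at 2-tensors (an arbitrary illustrative choice)

def dual(i):
    """The dual basis covector e^i: picks out the i-th coordinate of a vector."""
    return lambda v: v[i]

def tensor_product(*fs):
    """(f_1 (x) ... (x) f_k)(v_1, ..., v_k) = f_1(v_1) * ... * f_k(v_k)."""
    def T(*vs):
        out = 1
        for f, v in zip(fs, vs):
            out *= f(v)
        return out
    return T

# One basis tensor e^{i_1} (x) ... (x) e^{i_k} for each k-tuple of indices:
basis = [tensor_product(*(dual(i) for i in idx))
         for idx in itertools.product(range(n), repeat=k)]

print(len(basis))  # 9, i.e. n**k
```

Evaluating, say, `basis[0]` (which represents $e^1 \otimes e^1$) on a pair of vectors just multiplies their first coordinates, exactly as the definition says.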

The types of tensors we really care about are called alternating tensors. These are the ones that change sign when two of the arguments are interchanged. More precisely, suppose $T$ is a (covariant, last time I promise) $k$-tensor on $V$; then $T$ is alternating if for any collection $v_1, \ldots, v_k \in V$ and any $i < j$ we have $T(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_k) = -T(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_k)$.

The set of all alternating $k$-tensors is a subspace of the $k$-tensors that we will denote $\Lambda^k(V)$, and we will call this subspace the space of $k$-covectors. Note that since every permutation is a product of transpositions, the alternating property actually tells us how $T$ behaves under any permutation $\sigma$ of the arguments: $T(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) = (\operatorname{sgn} \sigma)\, T(v_1, \ldots, v_k)$.
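This permutation behavior can be checked numerically. Here is a Python sketch using the $3 \times 3$ determinant (hard-coded by row expansion) as the alternating tensor, verifying the sign rule over all six permutations of its arguments:

```python
import itertools

def det3(u, v, w):
    """Determinant of the 3x3 matrix whose rows are u, v, w (first-row expansion)."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def sign(perm):
    """Sign of a permutation: (-1) raised to the number of inversions."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

vs = [(1, 2, 0), (0, 1, 3), (2, 0, 1)]
base = det3(*vs)

# T(v_{s(1)}, ..., v_{s(k)}) = sgn(s) * T(v_1, ..., v_k) for every permutation s:
ok = all(det3(*(vs[p] for p in perm)) == sign(perm) * base
         for perm in itertools.permutations(range(3)))
print(ok)  # True
```

Of course this is only a spot check on one tensor; the general statement follows by writing $\sigma$ as a product of transpositions, as noted above.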

Now that that is out of the way, we will place these structures on manifolds next time in the form of tensor fields.

August 9, 2009 at 8:33 am

When I first learned about Stokes’ theorem on manifolds, in Spivak’s _Calculus on Manifolds,_ it used alternating tensors to define differential forms. Yet as far as I could understand, the real reason for choosing this seemingly odd notation was to get the correct transformation law with respect to the change of variables formula.

So to define differential forms, wouldn’t it be enough to take sections of the $k$-th exterior power of the cotangent bundle? If this works (maybe I’ve actually seen it done this way, but then I’ve forgotten), it allows one to think in terms of quotient spaces rather than subspaces, because the exterior power’s universal property is more of a quotient space property than a subspace property.
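If it helps, the quotient construction I have in mind is the standard one (sketching from memory):

```latex
\Lambda^k V \;=\; V^{\otimes k} \,\big/\, \big\langle\, v_1 \otimes \cdots \otimes v_k \;:\; v_i = v_j \text{ for some } i \neq j \,\big\rangle ,
```

and the universal property then says that every alternating multilinear map out of $V^k$ factors uniquely through $\Lambda^k V$.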

August 9, 2009 at 9:41 am

Ha. This was exactly how I was going to proceed (taking sections of the exterior power of the cotangent bundle).

As for the quotient space definition, I swear this is how I learned it. But when I looked it up in the book I learned from, it was done in this subspace way. So I was sort of shocked when I saw that. My guess is that my professor did it through the universal property and ignored the book.

August 10, 2009 at 9:56 am

Interesting. I was looking at a book on complex manifolds after I posted the previous comment, and it defined differential forms that way (and it did not cover integration).

It does seem like a much more natural approach, since otherwise one has to define the wedge product in a rather ugly manner using factorials, which seem to obscure and complicate the subject.
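(For concreteness, the factorial-laden definition I mean is the one in Spivak’s convention: for $\omega \in \Lambda^k$ and $\eta \in \Lambda^l$,

```latex
\omega \wedge \eta \;=\; \frac{(k+l)!}{k!\,l!}\,\operatorname{Alt}(\omega \otimes \eta),
\qquad
\operatorname{Alt}(T)(v_1, \ldots, v_k) \;=\; \frac{1}{k!} \sum_{\sigma \in S_k} (\operatorname{sgn} \sigma)\, T(v_{\sigma(1)}, \ldots, v_{\sigma(k)}).
```

The quotient definition lets you avoid these constants entirely.)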