One-Parameter Subgroups

It is really down to the wire with less than 2 weeks until prelim exams. I’m feeling sort of weak in a few areas, so I’ll try to clear them up here. Lie theory is sort of a downfall for me. I like mathematical structures that are simple. Lie things just have too much going on for me.

G will always denote a Lie group from here on out. Alright, so the first thing is that we have left-invariant vector fields on G. These are the elements of the Lie algebra of G, which will be denoted \frak{g}=Lie(G). The flow of a left-invariant vector field satisfies a nice group law (\theta_t\circ\theta_s)(p)=\theta_{t+s}(p). We want to figure out whether this relates at all to the group operation on the Lie group itself (and it should, right?).

We call a Lie group homomorphism F:\mathbb{R}\to G a one-parameter subgroup (note that it is a morphism…not a subgroup). The relation we want to establish is that the one-parameter subgroups are the integral curves of left-invariant vector fields starting at the identity (there is a bijective correspondence between the set of one-parameter subgroups and \frak{g}). Proof:

Let X\in \frak{g}. Then since the vector field is left-invariant, it is L_g-related to itself for any left translation. Thus, by naturality, for any integral curve \gamma, L_g\circ \gamma is also an integral curve. Let’s pick a particularly nice integral curve to try this fact on: the integral curve starting at the identity, \theta^{(e)}. Let’s translate by \theta^{(e)}(s) for some fixed s\in \mathbb{R}.

Then L_{\theta^{(e)}(s)}(\theta^{(e)}(t)) is an integral curve starting at L_{\theta^{(e)}(s)}(e)=\theta^{(e)}(s). But \theta^{(e)}(s+t) is also an integral curve starting at \theta^{(e)}(s). Thus they are equal. Writing out what left translation means, we get \theta^{(e)}(s)\theta^{(e)}(t)=\theta^{(e)}(s+t). Left-invariant vector fields on a Lie group are complete, so \theta^{(e)} : \mathbb{R}\to G is defined on all of \mathbb{R}, and the equality shows it is a homomorphism. Thus it is a one-parameter subgroup.

Now we must show every one-parameter subgroup determines an integral curve through the identity. Suppose F:\mathbb{R}\to G is a one-parameter subgroup. Now d/dt is a left-invariant vector field on \mathbb{R}, so let’s let X=F_*(d/dt)\in \frak{g}. The claim is that F is an integral curve of X.

But F_*(d/dt) is the unique left-invariant vector field that is F-related to d/dt. Thus F'(t_0)=dF_{t_0}\left(\frac{d}{dt}\big|_{t_0}\right)=X_{F(t_0)}. Thus F is an integral curve of X.

Now that we have this correspondence, there is no problem in saying “the” one-parameter subgroup generated by X. Our main example for one-parameter subgroups comes from our favorite Lie group GL_n(\mathbb{R}). If we take a left-invariant vector field A\in \frak{gl}_n(\mathbb{R}), then the one-parameter subgroup it generates is F(t)=e^{tA}, the matrix exponential. I should maybe do the computation to show that it works, but it is sort of annoying.
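We can at least convince ourselves numerically. Here is a quick sketch (the matrix A is an arbitrary choice of mine, and I'm using scipy's expm) checking the homomorphism law F(s+t)=F(s)F(t) and that the velocity at t=0 is A:

```python
# Numerical sanity check (a sketch, not a proof): for a sample A in gl_2(R),
# F(t) = e^{tA} should satisfy F(s+t) = F(s)F(t) and have F'(0) = A.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # generator of rotations, an arbitrary choice

def F(t):
    return expm(t * A)        # the matrix exponential e^{tA}

s, t = 0.7, -1.3
homomorphism_law = np.allclose(F(s + t), F(s) @ F(t))
print(homomorphism_law)       # True

# F'(0) = A, approximated by a difference quotient
h = 1e-6
derivative_at_zero = (F(h) - np.eye(2)) / h
print(np.allclose(derivative_at_zero, A, atol=1e-4))  # True
```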

Next post, we’ll generalize this notion of an exponential map.


The Frobenius Theorem

What a joke to think that I could live without the internet.

Recall that the Frobenius Theorem is actually quite a concise statement given what we’ve defined and done. It just says that every involutive distribution is completely integrable.

Let D be an involutive distribution of dimension k on an n-manifold M, and let p\in M. We need a flat chart about p. Since this is a local property, we may work in a coordinate chart, and hence without loss of generality replace the manifold with an open subset M=U\subset \mathbb{R}^n of Euclidean space. Suppose Y_1, \ldots , Y_k is a smooth local frame for D. We choose our coordinates such that D_p is complementary to the span of \displaystyle\left(\frac{\partial}{\partial x^{k+1}}\big|_p, \ldots , \frac{\partial}{\partial x^n}\big|_p\right).

Now let \pi:\mathbb{R}^n\to\mathbb{R}^k be the projection onto the first k coordinates. Then we get a bundle morphism d\pi:T\mathbb{R}^n\to T\mathbb{R}^k. Explicitly this is just \displaystyle d\pi\left(\sum_{i=1}^n v^i\frac{\partial}{\partial x^i}\big|_q\right)=\sum_{i=1}^k v^i\frac{\partial}{\partial x^i}\big|_{\pi(q)}.

The restriction d\pi\big|_D is smooth, since it is the composition of the inclusion D\hookrightarrow T\mathbb{R}^n followed by d\pi. Thus, by definition, its component functions in any smooth frame are smooth. In particular, we have the smooth frames Y_1, \ldots, Y_k and \frac{\partial}{\partial x^j}\big|_{\pi(q)}, so the matrix entries of d\pi\big|_{D_q} with respect to these frames are smooth functions of q.

We also get that at the point p, the restricted map d\pi\big|_{D_p} : D_p\to T_{\pi(p)}\mathbb{R}^k is bijective. It is surjective by construction, and it is injective because D_p is complementary to the kernel of the unrestricted bundle map by our choice of coordinates. Bijectivity at a point gives bijectivity in an entire neighborhood of p, so we have a smooth inverse (d\pi\big|_{D_q})^{-1} : T_{\pi(q)}\mathbb{R}^k\to D_q on that neighborhood.

Define a frame for D on this neighborhood by X_i\big|_q=(d\pi\big|_{D_q})^{-1}\frac{\partial}{\partial x^i}\big|_{\pi(q)}. Note that if we can show that [X_i, X_j]=0 we will be done. This is because the frame will be a commuting frame, and the “canonical form” for a commuting frame is precisely our definition of a flat chart.

We have that \frac{\partial}{\partial x^i}\big|_{\pi(q)}=\left(d\pi\big|_{D_q}\right)X_i\big|_q=d\pi_q (X_i\big|_q). i.e. we have that X_i and \frac{\partial}{\partial x^i} are \pi-related.

But now we have it by naturality of the Lie bracket, since d\pi_q([X_i, X_j]_q)=[\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}]_{\pi(q)}=0. And the involutivity of D tells us that [X_i, X_j] is in D, and since d\pi on the distribution is injective, [X_i, X_j]_q=0 for every point in the neighborhood.
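To see the frame construction in action, here is a small sympy sketch on an example of my own choosing (not from the text): in \mathbb{R}^3 take the involutive distribution spanned by Y_1=\partial/\partial x+y\,\partial/\partial z and Y_2=\partial/\partial y+x\,\partial/\partial z, with \pi the projection onto the (x,y)-plane. Inverting d\pi\big|_D on \partial/\partial x and \partial/\partial y gives back exactly these fields, and the resulting frame commutes, as the proof promises:

```python
# A worked instance of the proof's construction (my own example):
# the first two components of X1, X2 are prescribed by pi, the third is
# whatever puts the vector into D.  The resulting frame should commute.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def bracket(X, Y):
    """Lie bracket of vector fields given as coefficient tuples:
    [X, Y]^i = sum_j (X^j dY^i/dx^j - Y^j dX^i/dx^j)."""
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(3))
        for i in range(3))

X1 = (1, 0, y)   # d(pi)|_D^{-1}(d/dx)
X2 = (0, 1, x)   # d(pi)|_D^{-1}(d/dy)

print(bracket(X1, X2))   # (0, 0, 0): a commuting frame, so a flat chart exists
```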

Quick Notification

I’m planning on moving tomorrow (the person helping me may defer it to Sunday…but that’s another story). One possibility that I’m contemplating is to not get internet at my new place until after prelim exams to force myself to not be constantly distracted. So you may not hear from me for a while. On the other hand, I’d probably still periodically update from the library or a coffee shop or something.

Distributions and Differential Forms

It’s time to put the development of differential forms to use. To me, vector fields, and moreover distributions, are sort of messy and difficult in the form in which they were presented. It turns out we can define distributions in terms of differential forms.

Let’s intuitively figure out how we can do this. If we have a 1-form \omega, then \ker \omega\Big|_p is the set of vectors X_p in the tangent space at p such that \omega(X_p)=0. If \omega_p\neq 0, this kernel is (n-1)-dimensional, since the kernel of a nonzero linear functional on an n-dimensional space has codimension 1. i.e. a nonvanishing 1-form defines a codimension-1 subspace at each point, and k pointwise linearly independent 1-forms define a codimension-k subspace.

So the way we will define a k-distribution is to choose n-k pointwise linearly independent 1-forms \omega^1, \ldots , \omega^{n-k} and define the k-dimensional distribution by D_p=\ker \omega^1\Big|_p\cap \cdots \cap \ker \omega^{n-k}\Big|_p. These are called the local defining forms for D.

Now we should still be a little careful here, since this has all been wishful thinking so far. What we really need is a statement of the form: a k-distribution D\subset TM is smooth if and only if each point has a neighborhood on which there exist smooth defining forms for D.

This turns out to be the case. The forward direction is just that in any neighborhood we can complete the defining forms to a smooth coframe (\omega^1, \ldots , \omega^n). Then take the dual frame (E_1, \ldots E_n). We see that in this neighborhood D=span\{E_{n-k+1}, \ldots , E_n\}, so by the local frame criterion of last post the distribution is smooth. This argument essentially reverses for the other direction (complete the frame and take the dual coframe).

Now we say that a p-form annihilates the distribution if \omega(X_1, \ldots, X_p)=0 whenever X_1, \ldots, X_p are local sections of D.

It turns out that a p-form \eta annihilates a distribution on some open set iff it is of the form \displaystyle\eta=\sum_{i=1}^{n-k}\omega^i\wedge \beta^i, where the \omega^i are the local defining forms and the \beta^i are smooth (p-1)-forms.

That is just a calculation (maybe not easy), so I will skip its proof. The really interesting stuff is that I can now express the definitions from last post in the language of differential forms. For instance:

A distribution is involutive iff for any smooth 1-form \eta that annihilates it on an open set, we also have that d\eta annihilates it on that same set.

This is fairly straightforward and quick to prove, but seems to me a much nicer way of going about involutivity. Suppose D is involutive and \eta annihilates D on U. Let X, Y be smooth local sections of D. We calculate that d\eta(X, Y)=X(\eta(Y))-Y(\eta(X))-\eta([X,Y]). But every term on the right is zero, by either involutivity or the fact that \eta annihilates D.

Now suppose the hypothesis of the reverse direction. Let X, Y be smooth local sections of D, and let the \omega^i be defining forms for D. Then the same formula from before shows that \omega^i([X,Y])=X(\omega^i(Y))-Y(\omega^i(X))-d\omega^i(X, Y)=0. i.e. [X, Y]\in D, and hence D is involutive by the first definition.

Now granted I used some formula that magically did the work for me, but it is fairly standard and in both texts I have on smooth manifolds.

Using the above, we get an even better check for involutivity. TFAE:
1) D is involutive on U
2) d\omega^1, \ldots , d\omega^{n-k} annihilate D
3) There are smooth 1-forms \alpha_j^i such that d\omega^i=\sum \omega^j\wedge\alpha_j^i.

The proof is just applying the few things from this post that we’ve done.
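In the case of a single defining form \omega in \mathbb{R}^3 (so k=2), one can show condition 3) amounts to \omega\wedge d\omega=0, which a machine can check. Here is a sympy sketch on examples of my own choosing:

```python
# For omega = a dx + b dy + c dz in R^3, the coefficient of dx^dy^dz in
# omega ^ d(omega) is the "dot product" of (a, b, c) with its curl.
# Nonzero means the 2-distribution ker(omega) is not involutive.
import sympy as sp

x, y, z = sp.symbols('x y z')

def wedge_with_d(w):
    """Coefficient of dx^dy^dz in omega ^ d(omega)."""
    a, b, c = w
    curl = (sp.diff(c, y) - sp.diff(b, z),
            sp.diff(a, z) - sp.diff(c, x),
            sp.diff(b, x) - sp.diff(a, y))
    return sp.simplify(a * curl[0] + b * curl[1] + c * curl[2])

omega = (-y, 0, 1)           # dz - y dx, defining form for span{d/dx + y d/dz, d/dy}
print(wedge_with_d(omega))   # 1: nonzero, so that distribution is not involutive

exact = (y, x, 0)            # d(xy) = y dx + x dy: an exact form
print(wedge_with_d(exact))   # 0: its kernel is involutive
```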


Distributions
We need to build up a lot of definitions now to properly state the Frobenius Theorem. The main definition will be a distribution on a manifold. Essentially, the theory of distributions is a way to generalize the notion of a vector field and the flow of a vector field.

A distribution is a choice of k-dimensional subspace D_p\subset T_pM at each point of the manifold. Note that this is just a subbundle of the tangent bundle, so we have a nice notion of smoothness. In particular, just as we could use a local frame to check smoothness of a vector field (i.e. a 1-dimensional distribution), we can check smoothness of a distribution by checking whether each point has a neighborhood on which there are smooth vector fields X_1, \ldots , X_k such that X_1\big|_q, \ldots , X_k\big|_q form a basis for D_q at each point q of the neighborhood.

The analogue of integral curves for vector fields will be what we call an integral manifold. If we think about the natural way to define this, we would see that all we want is an immersed submanifold N\subset M such that T_pN=D_p for all p\in N. Thus in the one-dimensional case, the immersed submanifold is just a curve on the manifold.

Unfortunately, it is not the case that integral manifolds exist for all distributions. Our goal is to figure out when they exist. This leads us to our next two definitions. A distribution is integrable if every point of the manifold is in some integral manifold for the distribution. A distribution is called involutive if for any pair of local sections of the distribution, the Lie bracket is also a local section. Note that a local section is really just a vector field where the vectors are chosen from the distribution rather than the whole tangent bundle.

Every integrable distribution is involutive. If D\subset TM is integrable, then given any p\in M and local sections X, Y of D, there is some integral manifold about p, say N. Since X and Y are tangent to N, so is [X, Y]; hence [X, Y]_p\in T_pN=D_p, which is the definition of involutive.

This gives us an easy way to see that there are non-integrable distributions (recall, this is not going to happen for 1-distributions, i.e. vector fields, since every point has an integral curve). We don’t even need some weird manifold. Just take \mathbb{R}^3, and let the distribution be the span of two vector fields whose Lie bracket is not in the span. Thus something like \displaystyle D=span\{X=\frac{\partial}{\partial x}+y\frac{\partial}{\partial z}, Y=\frac{\partial}{\partial y}\} will work, since [X, Y]_0=-\frac{\partial}{\partial z}\notin D_0.
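We can let sympy do the bracket computation (just a sketch of the same calculation):

```python
# [X, Y]^i = sum_j (X^j dY^i/dx^j - Y^j dX^i/dx^j), with vector fields
# stored as coefficient tuples in the coordinates (x, y, z).
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def bracket(X, Y):
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(3))
        for i in range(3))

X = (1, 0, y)   # d/dx + y d/dz
Y = (0, 1, 0)   # d/dy

print(bracket(X, Y))   # (0, 0, -1), i.e. -d/dz, which is not in D at the origin
```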

I think we need only one more definition to be in a place to move on. A distribution is completely integrable if there exists a flat chart for the distribution in a neighborhood of every point. By this I mean that I can find a coordinate chart such that \displaystyle D=span\{\frac{\partial}{\partial x^1}, \ldots , \frac{\partial}{\partial x^k}\}. This is obviously the strongest condition.

So our definitions at this point satisfy completely integrable distributions are integrable, and integrable distributions are involutive. The utterly remarkable thing that the Frobenius Theorem says, is that all of these implications reverse, and so all of the definitions are actually equivalent! We’ll get there later, though.

The Exterior Derivative

I need to stop putting blogging off until night, because then I just say I’ll do it the next morning, and then I just say I’ll put it off until night, etc. This is the main source of why I haven’t posted anything for a while.

I think we only need one more tool out of our differential forms bag. Define d:\Omega^k(M)\to \Omega^{k+1}(M) to be the unique \mathbb{R}-linear operator satisfying

1) If \omega\in\Omega^k(M) and \eta\in\Omega^l(M), then d(\omega\wedge\eta)=d\omega\wedge\eta + (-1)^k\omega\wedge d\eta.
2) d\circ d\equiv 0
3) If f\in\Omega^0(M)=C^\infty(M) and X is a smooth vector field, then df(X)=Xf.

Note that this last part is sort of the motivation for this operator. We have that 0-forms are just smooth functions, and that this operator gives the differential of the function. So it is a generalization of the differential to all forms. It is non-trivial to show such an operator exists and is unique, but I’ll define an operator that does the trick next, and leave the checking that it actually works as an exercise.

The above definition tells me almost nothing about how to actually compute what the operator does. Let’s look at it in coordinates. Let \omega=\sum_J \omega_Jdx^J, where \omega_J are smooth functions, and the multi-indexes are in some natural order. Then \displaystyle d\left(\sum \omega_Jdx^J\right)=\sum d\omega_J\wedge dx^J, where we already said that the operator on 0-forms just gives the differential.

Thus we get \displaystyle d \left( \sum \omega_Jdx^{j_1}\wedge \cdots \wedge dx^{j_k}\right) =\sum_{J}\sum_{i}\frac{\partial \omega_J}{\partial x^i} dx^i\wedge dx^{j_1}\wedge \cdots \wedge dx^{j_k}.
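This coordinate formula is easy to implement directly. Here is a minimal sketch (my own toy representation, not anything standard: a k-form is a dict from strictly increasing index tuples J to sympy coefficients \omega_J):

```python
# A minimal coordinate implementation of the exterior derivative:
# d(sum omega_J dx^J) = sum_{J,i} (d omega_J / dx^i) dx^i ^ dx^J,
# with dx^i sorted into the multi-index, tracking the sign.
import sympy as sp

coords = sp.symbols('x0 x1 x2')
n = len(coords)

def d(form):
    out = {}
    for J, coeff in form.items():
        for i in range(n):
            if i in J:
                continue                      # dx^i ^ dx^i = 0
            K = tuple(sorted((i,) + J))       # sort indices to increasing order
            sign = (-1) ** K.index(i)         # one flip per index i moves past
            out[K] = out.get(K, 0) + sign * sp.diff(coeff, coords[i])
    return {K: sp.simplify(c) for K, c in out.items() if sp.simplify(c) != 0}

x0, x1, x2 = coords
print(d({(1,): x0 * x1}))          # {(0, 1): x1}: d(x0*x1 dx1) = x1 dx0^dx1
print(d(d({(): x0**2 * x2})))      # {}: d(d f) = 0, as required
```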

There are so many interesting things we can go on to say about this, but none are relevant to near future posts.

I’ll just briefly mention a few. The condition about d\circ d\equiv 0 actually gives us a cochain complex \Omega^0(M)\to \Omega^1(M)\to \Omega^2(M)\to \cdots. If we look at the cohomology of this, we have what is known as de Rham cohomology.

Another interesting bit is that if we take our manifold to be \mathbb{R}^3, then this operator gives us lots of our familiar calc 3 operators (in slightly modified form). For example, the components of the exterior derivative of a 1-form, gives the curl of the components of the 1-form treated as a vector field. Also, the 2-form to 3-form computation similarly gives you the divergence.

One thing that might come up later is that the exterior derivative commutes with pullbacks. By this I mean that if F:M\to N is a smooth map, then F^*(d\omega)=d(F^*\omega). One way to prove this is to show it holds for smooth functions, and then induct using the formula d(\omega\wedge\eta)=d\omega\wedge\eta + (-1)^k\omega\wedge d\eta.
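For a concrete check (the map and form here are arbitrary choices of mine), we can verify F^*(d\omega)=d(F^*\omega) with sympy for a 1-form on \mathbb{R}^2:

```python
# Check F^*(d omega) = d(F^* omega) for F(x, y) = (x^2, x + y) and
# omega = uv du + u dv, a 1-form in the target coordinates (u, v).
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
F = (x**2, x + y)                     # an arbitrary smooth map R^2 -> R^2
a, b = u * v, u                       # omega = a du + b dv

def pullback_1form(a, b):
    """F^* omega: substitute u, v and pull du, dv back through dF."""
    aF = a.subs({u: F[0], v: F[1]})
    bF = b.subs({u: F[0], v: F[1]})
    return (aF * sp.diff(F[0], x) + bF * sp.diff(F[1], x),   # dx coefficient
            aF * sp.diff(F[0], y) + bF * sp.diff(F[1], y))   # dy coefficient

# d of p dx + q dy is (dq/dx - dp/dy) dx^dy, and likewise in u, v
domega = sp.diff(b, u) - sp.diff(a, v)                        # coeff of du^dv
jac = sp.Matrix(F).jacobian([x, y])
pullback_domega = domega.subs({u: F[0], v: F[1]}) * jac.det() # F^*(du^dv) = det * dx^dy

p, q = pullback_1form(a, b)
d_pullback = sp.diff(q, x) - sp.diff(p, y)

print(sp.simplify(d_pullback - pullback_domega))   # 0
```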

The last sort of interesting thing to mention is that there is a coordinate independent form of the exterior derivative, but I’m pretty sure I’ve never actually used it.

Differential Forms

Let’s just continue onward. We are still miles from where we’re headed. Last time we looked at the bundle of k-tensors (for those that may be starting with this post, my tensors are assumed covariant unless otherwise stated): T^k(T^*M). The time before that we looked at what it meant to be an alternating tensor. So if we just look at \Lambda^k(T^*M)=\sqcup \Lambda^k(T_p^*M) the smooth subbundle of alternating k-tensors, then a section of this bundle is called a differential k-form.

Since it is a smooth subbundle, we still have that the space of smooth sections of \Lambda^k(T^*M), which we denote \Omega^k(M), is an \mathbb{R}-vector space and a C^\infty(M)-module.

I may have run into difficulties. The goal is to define a product pointwise (i.e. to do it for tensors on vector spaces). So I’ll try to work them out on the spot here. It could be useful. I skipped defining the wedge product, because I thought I could get away with not defining the alternation map (which gets a little out of hand at times) if I only worked with alternating tensors. I was convinced that if \omega and \eta were already alternating then \omega\wedge\eta=\frac{(k+l)!}{k!l!}\omega\otimes\eta. This would certainly be the case if \omega\otimes\eta were alternating. Unfortunately, this is not the case.

I do have a cheap fix. There is a unique associative, bilinear, and graded anticommutative (“supersymmetric” if you are into physics) map \Lambda^k(V^*)\times\Lambda^l(V^*)\to \Lambda^{k+l}(V^*) (call it the wedge product, denoted \wedge) that satisfies the property that for any basis (\epsilon^1, \ldots, \epsilon^n) of V^* and any multi-index I=(i_1, \ldots, i_k) we have \epsilon^{i_1}\wedge \cdots \wedge \epsilon^{i_k}=\epsilon^I.

So this product will turn the vector space of all forms \Lambda(V^*)=\bigoplus\Lambda^k(V^*) into a graded associative algebra, sometimes called the exterior algebra and sometimes the Grassmann algebra. Note that the above sum terminates at the dimension of V, since \Lambda^k(V^*)=0 for k>\dim(V).
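For the skeptical, here is a numerical sketch of the construction I keep avoiding (entirely my own toy implementation): antisymmetrize the tensor product and rescale, with the normalization chosen so the basis property above holds.

```python
# Realize k-covectors on R^n as fully antisymmetric numpy arrays, and set
# wedge(w, e) = ((k+l)!/(k! l!)) * Alt(w (x) e), where
# Alt(T) = (1/k!) sum_sigma sgn(sigma) T(X_sigma(1), ..., X_sigma(k)).
from itertools import permutations
from math import factorial
import numpy as np

def alt(T):
    """Antisymmetrize a covariant k-tensor stored as a k-dim array."""
    k = T.ndim
    out = np.zeros_like(T, dtype=float)
    for sigma in permutations(range(k)):
        sign = np.linalg.det(np.eye(k)[list(sigma)])   # sgn(sigma) as +/-1
        out += sign * np.transpose(T, sigma)           # permute the arguments
    return out / factorial(k)

def wedge(w, e):
    k, l = w.ndim, e.ndim
    coeff = factorial(k + l) / (factorial(k) * factorial(l))
    return coeff * alt(np.multiply.outer(w, e))

n = 3
eps = np.eye(n)                  # eps[i] is the dual basis covector epsilon^i
w = wedge(eps[0], eps[1])        # epsilon^1 ^ epsilon^2
print(w[0, 1], w[1, 0])          # 1.0 -1.0: the basis convention holds
```

Note that on two 1-forms this reproduces \omega\wedge\eta=\omega\otimes\eta-\eta\otimes\omega, which is where the \frac{(k+l)!}{k!l!} factor comes from.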

So we can put this product on \Omega^*(M)=\bigoplus_{k=0}^n \Omega^k(M), by just doing it pointwise (\omega\wedge\eta)_p=\omega_p\wedge\eta_p, to turn this space into an associative, anticommutative graded algebra.

Now if we translate the pullback from last time to this new setting (which isn’t really new at all), then (F^*\omega)_p(X_1, \ldots , X_k)=\omega_{F(p)}(dF_p(X_1), \ldots , dF_p(X_k)).

At this point, I’ve only asserted this wedge product existed (and even uniquely!), and I know of two ways to show its existence. One would be to just write down a product that works (which I’ve almost done all of the work for despite trying to avoid it), the other would be to define it based on a universal property which is cleaner, but I didn’t define my space of alternating tensors in the proper way to easily do that.

I may do one of these next time, or just leave it like this. My main goal is the Frobenius theorem, so differential forms are just one small tool I’ll be using. I didn’t realize they would be this much of a nuisance to develop on the way.

Tensor Fields on Manifolds

The language of vector bundles gives us a nice way to transport the structure from last time onto a manifold. We first define the bundle of k-tensors on M (a smooth manifold) by T^k(T^*M)=\sqcup T^k(T_p^*M).

Note the thing on the right is just a disjoint union over all points in the manifold of k-tensors on a given cotangent space (linear functionals on the tangent space), so we can refer to the linear algebra we did last time for this. Of course, if we wanted to think in terms of points like this, then we wouldn’t bother with the bundle at all. Note that the cotangent bundle is just k=1.

As usual we can take sections of this bundle. A section of the bundle of k-tensors is called a k-tensor field. If it is a smooth section, then it is a smooth k-tensor field.

Examples: A (smooth) 0-tensor field is just a (smooth) real-valued function. A (smooth) 1-tensor field is just a (smooth) covector field.

Unfortunately, our notation is about to get even more cumbersome, so we’ll alter it a little. The space of smooth sections, \Gamma(T^k(T^*M)), will just be denoted \mathcal{T}^k(M). This is an \mathbb{R}-vector space and a C^\infty(M)-module.

There are a whole set of ways we can check for smoothness. Smoothness of a section is equivalent to smoothness of the component functions \sigma_{i_1,\ldots, i_k}, where \sigma=\sigma_{i_1,\ldots , i_k}dx^{i_1}\otimes\cdots \otimes dx^{i_k} in coordinates (summing over repeated indices). As usual with these things, an equivalent condition is that there is some chart about each point in which the component functions are smooth. Yet another equivalent condition is that for any smooth vector fields X_1,\ldots , X_k, the real-valued function p\mapsto \sigma_p(X_1\big|_p, \ldots , X_k\big|_p) is smooth.

Now covariant k-tensor fields have a very nice property with respect to smooth maps: they can be pulled back by them. Suppose F:M\to N is smooth. Then given some S\in T^k(T^*_{F(p)}N) (a k-tensor on N at F(p)), we can define a tensor on M at p, called the pointwise pullback of S by F at p and denoted dF^*_p(S)\in T^k(T^*_pM), by dF_p^*(S)(X_1, \ldots , X_k)=S(dF_p(X_1), \ldots , dF_p(X_k)), where the X_i are tangent vectors at p. Note that this makes sense, since dF_p(X_i)\in T_{F(p)}N.

Now we pass to the pullback of the entire k-tensor field, denoted F^*\sigma, by doing it pointwise everywhere, i.e. (F^*\sigma)_p=dF_p^*(\sigma_{F(p)}). We have another unfortunate thing to recognize here: the pullback of covariant tensor fields is a contravariant functor.

Let F:M\to N and G: N\to P be smooth, \sigma\in\mathcal{T}^k(N), \tau\in\mathcal{T}^l(N) and f\in C^\infty(N). Then we get the following properties of the pullback, which are fairly easy to check.

1. F^*(f\sigma)=(f\circ F)F^*\sigma.
2. F^*(\sigma\otimes\tau)=F^*\sigma\otimes F^*\tau.
3. F^*(\sigma + \tau)=F^*\sigma+ F^*\tau.
4. F^*\sigma is a smooth tensor field.
5. (G\circ F)^*\sigma=F^*(G^*\sigma).
6. (Id_N)^*\sigma=\sigma.
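Property 5 is easy to check numerically at a single point, since the pointwise pullback only involves the differentials. A sketch (my own, with random matrices standing in for the Jacobians dF_p and dG_{F(p)}, and the chain rule giving the Jacobian of G\circ F):

```python
# Pointwise check of (G o F)^* = F^* o G^* for a covariant 2-tensor.
import numpy as np

def pullback_at(J, S):
    """dF_p^*(S)(X_1,...,X_k) = S(dF_p X_1, ..., dF_p X_k): in components,
    (F^*S)_{i1...ik} = S_{j1...jk} J^{j1}_{i1} ... J^{jk}_{ik},
    where J[j, i] is the Jacobian entry dF^j/dx^i."""
    out = S
    for _ in range(S.ndim):
        out = np.tensordot(out, J, axes=([0], [0]))   # contract each slot with J
    return out

rng = np.random.default_rng(0)
JF = rng.standard_normal((2, 2))     # Jacobian of some F : M -> N at p
JG = rng.standard_normal((2, 2))     # Jacobian of some G : N -> P at F(p)
S = rng.standard_normal((2, 2))      # a 2-tensor on P at G(F(p))

lhs = pullback_at(JG @ JF, S)                 # (G o F)^* S, via the chain rule
rhs = pullback_at(JF, pullback_at(JG, S))     # F^*(G^* S)
print(np.allclose(lhs, rhs))                  # True
```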

This puts us in a great place to start in on differential forms next time. I should also try to prevent getting in trouble and say that I’m shamelessly following along with the excellent text Introduction to Smooth Manifolds by John (Jack) Lee (which I’ve probably read more times than any other math text).

Covariant Tensors

The main purpose of my math blogging this summer was to solidify things that I felt a little uncomfortable with that could appear on my qualifying exam in Sept. Well, I’ve sort of limited this to algebra so far, and we’ve gotten far enough that it is sort of out of the scope of the test. In fact, I’m feeling fairly comfortable with the core material of both algebra and complex analysis. So that leaves me with manifolds to blog about.

The problem with manifolds is that to develop things properly takes quite a bit of tedious detail, so I want to go through the Frobenius theorem starting with what tensors are. So I have to start somewhere reasonable and have to skip lots of things. All of a sudden, it seems almost pointless to do the blog thing on it if I’m just going to skip the mechanics that I have to make sure I understand.

So I’m going to try my best to develop things rigorously and carefully, but I’ll probably only fill in things that seem like standard arguments that might come up several times and hence might come up on the exam.

My first assumption is a strong one. I’m working off the assumption that people who want to read this series either know what the tensor product of finite-dimensional vector spaces is or can quickly look it up and figure it out.

Let V be a finite-dimensional vector space. Then a covariant k-tensor is an element of V^*\otimes\cdots\otimes V^* (* will denote the dual space). For our purposes, we really only care about covariant tensors (I won’t go into connections or anything), so I won’t state that anymore, and we denote the vector space of k-tensors T^k(V^*).

We have lots of examples of these already. T^1(V^*)=V^*, since linear functionals are in particular multi-linear. Thus 1-tensors are just covectors. In the language of linear algebra 2-tensors are just bilinear forms. Thus inner products are 2-tensors. Lastly, the determinant is an n-tensor.

Given a choice of basis for V, we have a natural basis for these vector spaces. Let (E_i) be a basis for V, then take the dual basis (\epsilon^i) for V^* (the set such that \epsilon^i(E_j)=\delta_{ij}). Then \{\epsilon^{i_1}\otimes\cdots \otimes \epsilon^{i_k} : 1\leq i_1, \ldots, i_k\leq n\} is a basis for T^k(V^*). In particular, we get that dim(T^k(V^*))=n^k.

The types of tensors we really care about are ones called alternating tensors. These are the ones that change sign when two of the arguments are interchanged. That is, a (covariant, last time I promise) k-tensor T on V is alternating if for any collection X_1, \ldots , X_k\in V and any pair of indices i<j we have T(X_1, \ldots , X_i, \ldots, X_j, \ldots , X_k)=-T(X_1, \ldots , X_j, \ldots, X_i,\ldots, X_k).

The set of all alternating k-tensors is a subspace of the k-tensors that we will denote \Lambda^k(V^*), and we will call this subspace the space of k-covectors. Note that the alternating property actually gives us a way to calculate any permutation of the arguments: T(X_{\sigma(1)}, \ldots , X_{\sigma(k)})=(sgn\,\sigma)T(X_1, \ldots, X_k).
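The determinant example from before makes this concrete: swapping two arguments (rows) flips the sign, and an even permutation leaves the value alone.

```python
# det as an alternating n-tensor: the arguments are the rows of X.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))          # rows are the arguments X_1, X_2, X_3

swapped = X[[1, 0, 2]]                   # interchange X_1 and X_2
print(np.isclose(np.linalg.det(swapped), -np.linalg.det(X)))   # True

cycled = X[[2, 0, 1]]                    # a 3-cycle, an even permutation
print(np.isclose(np.linalg.det(cycled), np.linalg.det(X)))     # True
```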

Now that that is out of the way, we will place these structures on manifolds next time in the form of tensor fields.

Irreducible Character Basis

I’d just like to expand a little on the topic of the irreducible characters being a basis for the class functions of a group cf(G) from two times ago.

Let’s put an inner product on cf(G). Suppose \alpha, \beta \in cf(G). Then define \displaystyle \langle \alpha, \beta \rangle =\frac{1}{|G|}\sum_{g\in G} \alpha(g)\overline{\beta(g)}.

The proof of the day is that the irreducible characters actually form an orthonormal basis of cf(G) with respect to this inner product.

Let e_i=\sum_{g\in G} a_{ig}g. Then we have that a_{ig}=\frac{n_i\chi_i(g^{-1})}{|G|} (although just a straightforward calculation, it is not all that short, so we’ll skip it for now). Thus e_j=\frac{1}{|G|}\sum n_j\chi_j(g^{-1})g.

So now examine \displaystyle\frac{\chi_i(e_j)}{n_j}=\frac{1}{|G|}\sum \chi_j(g^{-1})\chi_i(g)=\frac{1}{|G|}\sum \chi_i(g)\overline{\chi_j(g)}=\langle \chi_i, \chi_j \rangle.

Here we noted that since \chi_j is a character, \chi_j(g^{-1})=\overline{\chi_j(g)}. On the other hand, e_j acts as the identity on the jth irreducible representation and as zero on the others, so \chi_i(e_j)=\delta_{ij}n_i. Thus we have that \langle \chi_i, \chi_j \rangle = \delta_{ij}.

This fact can be used to get some neat results about the character table of a group, and as consequences of those we get new ways to prove lots of familiar things, like |G|=\sum n_i^2 where the n_i are the degrees of the characters. You also get a new proof of Burnside’s Lemma. I’m not very interested in any of these things, though.
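Still, the orthonormality is worth checking on a small example. Here is the character table of S_3 (a standard example, my own addition), with the inner product grouped by conjugacy class:

```python
# The three irreducible characters of S_3, as rows over the conjugacy
# classes {e}, {transpositions}, {3-cycles}, with class sizes 1, 3, 2.
import numpy as np

sizes = np.array([1, 3, 2])
chars = np.array([
    [1,  1,  1],    # trivial
    [1, -1,  1],    # sign
    [2,  0, -1],    # standard (degree 2)
])

# <a, b> = (1/|G|) sum_g a(g) conj(b(g)), collected over conjugacy classes
gram = (chars * sizes) @ chars.conj().T / sizes.sum()
print(gram)                               # the 3x3 identity: orthonormal

degrees = chars[:, 0]
print((degrees**2).sum())                 # 6 = |S_3|, i.e. |G| = sum n_i^2
```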

I may move on to induced representations and induced characters. I may think of something entirely new to start in on. I haven’t decided yet.