Let's set up some notation first. Recall that if $\rho: G \to GL(V)$ is a representation, then it makes $V$ into a $kG$-module. Let's denote this module by $V^\rho$. Now we want to prove that, given two representations $\rho, \psi: G \to GL(V)$, we have $V^\rho \cong V^\psi$ as $kG$-modules if and only if there is an invertible linear transformation $T: V \to V$ such that $T\rho(g) = \psi(g)T$ for every $g \in G$.

The proof of this is basically unwinding definitions. Let $T: V^\rho \to V^\psi$ be a $kG$-module isomorphism. Then for free we get $T(g \cdot v) = g \cdot T(v)$ for $g \in G$, $v \in V$, and $T$ is a vector space isomorphism. Now note that the scalar multiplication in $V^\rho$ is $g \cdot v = \rho(g)v$, and in $V^\psi$ it is $g \cdot v = \psi(g)v$. So $T(\rho(g)v) = \psi(g)T(v)$ for all $v$, i.e. $T\rho(g) = \psi(g)T$, which is what we needed to show. The converse is even easier: just check that such a $T$ is a $kG$-module isomorphism by checking that it preserves the scalar multiplication.

This should look really familiar (especially if you are picking a basis and thinking in terms of matrices). We'll say that $T$ intertwines $\rho$ and $\psi$. Essentially this is the same notion as similarity of matrices.
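Here is a small numerical sanity check of the intertwining condition. Everything in it (the group $C_4$, the particular rotation, the matrix $T$, and the use of numpy) is my own illustrative choice, not something fixed by the discussion above; it is a sketch, not a general verification.

```python
import numpy as np

# Illustrative example: two equivalent representations of the cyclic group
# C_4 = <g | g^4 = 1> on R^2.  rho(g) is rotation by 90 degrees; psi is
# the "same" representation written in another basis via a change of basis T.
rho_g = np.array([[0.0, -1.0],
                  [1.0,  0.0]])          # rotation by pi/2

T = np.array([[2.0, 1.0],
              [1.0, 1.0]])               # any invertible linear map works
psi_g = T @ rho_g @ np.linalg.inv(T)     # define psi := T rho T^{-1}

# T intertwines rho and psi: T rho(g^k) = psi(g^k) T for every element g^k.
for k in range(4):
    lhs = T @ np.linalg.matrix_power(rho_g, k)
    rhs = np.linalg.matrix_power(psi_g, k) @ T
    assert np.allclose(lhs, rhs)
```

Conjugating by $T$ automatically produces an equivalent representation, which is exactly the "similar matrices" picture.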

Now we will define some more concepts. Let $\rho: G \to GL(V)$ be a representation. A subspace $W \subseteq V$ is "$G$-invariant" if $\rho(g)W \subseteq W$ for all $g \in G$. If the only $G$-invariant subspaces are $0$ and $V$, then we say $\rho$ is irreducible.

Let's look at what happens if $\rho$ is reducible. Let $W$ be a proper non-trivial $G$-invariant subspace. Then we can take a basis for $W$ and extend it to a basis for $V$ such that the matrix of $\rho(g)$ in this basis has the block upper-triangular form

$\rho(g) = \begin{pmatrix} \alpha(g) & \beta(g) \\ 0 & \gamma(g) \end{pmatrix}$

and $\alpha$ and $\gamma$ are matrix representations of $G$ (the degrees being $\dim W$ and $\dim(V/W)$ respectively).
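To make the block form concrete, here is a sketch with a specific choice: the permutation representation of $S_3$ on $\mathbb{R}^3$, where $W = \operatorname{span}\{(1,1,1)\}$ is $G$-invariant. The group, the subspace, and the extending basis vectors are all illustrative assumptions on my part.

```python
import numpy as np

def perm_matrix(p):
    """3x3 matrix sending e_i to e_{p(i)}, for a permutation p of (0,1,2)."""
    m = np.zeros((3, 3))
    for i, j in enumerate(p):
        m[j, i] = 1.0
    return m

# Adapted basis: the first column spans W = span{(1,1,1)}, which every
# permutation matrix fixes; the other two columns extend it to a basis of R^3.
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
B_inv = np.linalg.inv(B)

perms = [(0,1,2), (1,0,2), (0,2,1), (2,1,0), (1,2,0), (2,0,1)]
for p in perms:
    M = B_inv @ perm_matrix(p) @ B       # rho(g) rewritten in the new basis
    # The entries below the block for W vanish, so M has the shape
    # [[alpha(g), beta(g)], [0, gamma(g)]], with alpha of degree dim W = 1.
    assert np.allclose(M[1:, 0], 0.0)
```

The zero block appears precisely because $\rho(g)$ maps $W$ back into $W$.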

In fact, given a representation $\rho$ on $V$ and a representation $\psi$ on $W$, we have a representation $\rho \oplus \psi$ on $V \oplus W$, given in the obvious way: $(\rho \oplus \psi)(g)(v, w) = (\rho(g)v, \psi(g)w)$. The matrix representation, in a basis obtained by concatenating bases of $V$ and $W$, is just the block diagonal matrix $\begin{pmatrix} \rho(g) & 0 \\ 0 & \psi(g) \end{pmatrix}$ (hence $\rho \oplus \psi$ is reducible, since it has both $V \oplus 0$ and $0 \oplus W$ as invariant subspaces).
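A quick numerical sketch of the direct sum, under illustrative assumptions of my own (the group $C_3$, a rotation representation on $\mathbb{R}^2$, and the trivial representation on $\mathbb{R}$):

```python
import numpy as np

# rho: C_3 acts on R^2 by rotation through 2*pi/3; psi: trivial action on R^1.
theta = 2 * np.pi / 3
rho_g = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
psi_g = np.array([[1.0]])

def direct_sum(a, b):
    """Block diagonal matrix diag(a, b): the matrix of (rho + psi)(g)."""
    out = np.zeros((a.shape[0] + b.shape[0], a.shape[1] + b.shape[1]))
    out[:a.shape[0], :a.shape[1]] = a
    out[a.shape[0]:, a.shape[1]:] = b
    return out

g = direct_sum(rho_g, psi_g)             # (rho + psi)(g) acting on V + W

# It is a homomorphism: block diagonal matrices multiply blockwise...
assert np.allclose(direct_sum(rho_g @ rho_g, psi_g @ psi_g), g @ g)
# ...and V + 0 is an invariant subspace (last coordinate stays zero),
# so rho + psi is reducible, as claimed.
v = np.array([1.0, 2.0, 0.0])            # a vector in V + 0
assert np.isclose((g @ v)[2], 0.0)
```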

I’m going to continue with representation theory, but I’ll start titling more appropriately now that the basics have sort of been laid out.



July 8, 2009 at 8:39 am

I'm not sure, but I think there should be something more general here. Let's say we have a bifunctor F from Vect x Vect -> Vect, say covariant in both variables, such as the tensor product or direct sum. Then F should induce a bifunctor from Mod-G x Mod-G into Mod-G, I think. Indeed, an object of Mod-G is just an object of the category Vect together with a group homomorphism G -> Aut(V). So, given two spaces V, W and maps G -> Aut(V), G -> Aut(W), we get a map G -> End(F(V,W)), which must land inside the automorphism group since G itself is a group.
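The commenter's claim can be checked numerically for F = tensor product, where the induced map on $V \otimes W$ is the Kronecker product of matrices. The specific group ($C_2$) and representations below are my own illustrative choices:

```python
import numpy as np

# Two representations of C_2 = {1, g}: the swap representation on R^2
# and the sign representation on R^1.
rho_g = np.array([[0.0, 1.0],
                  [1.0, 0.0]])           # swaps the coordinates
psi_g = np.array([[-1.0]])               # acts by -1

# The induced action on V (x) W is the Kronecker product rho(g) (x) psi(g).
tensor_g = np.kron(rho_g, psi_g)

# g has order 2, so (rho (x) psi)(g)^2 must be the identity...
assert np.allclose(tensor_g @ tensor_g, np.eye(2))
# ...and each (rho (x) psi)(g) is invertible: the map G -> End(V (x) W)
# lands in the automorphisms, as the comment predicts.
assert not np.isclose(np.linalg.det(tensor_g), 0.0)
```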

For functors which are partially or fully contravariant, there should be something similar except you might have to multiply by g^{-1} for the contravariant part, as you do with Hom.

It would be nice if this were a special case of an even more general phenomenon. It shouldn’t work for algebras in general, but perhaps for semigroups (except Aut wouldn’t be valid any more)?
