A Mind for Madness

Musings on art, philosophy, mathematics, and physics


Basics of Characters

I lived in Alabama over a summer, and I’m pretty sure I’ve never been as hot and drained from the heat as I’ve been over the last couple of days. So it has been hard to get motivated to post lately. I think I’ll just get a few basics down today, since I’m sort of starting a new topic.

Suppose you have a group G and a k-representation T: G\to GL(V). Then the character of G afforded by T is the map \chi : G\to k defined by \chi(g)=tr(T_g). So it is the map into the field that sends a group element to the trace of the linear transformation given by the representation. We call the character of an irreducible representation an irreducible character.
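To see the definition in action, here is a quick Python sketch (my own illustration; the post doesn’t use this example): the permutation representation of S_3 on k^3 sends g to its permutation matrix, so \chi(g) is just the number of fixed points of g.

```python
from itertools import permutations

# Character of the permutation representation of S_3 on k^3:
# T_g is the permutation matrix of g, so tr(T_g) counts fixed points.

def perm_matrix(p):
    n = len(p)
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

chars = {p: trace(perm_matrix(p)) for p in permutations(range(3))}
# identity -> 3, transpositions -> 1, 3-cycles -> 0
```

The values 3, 1, 0 on the identity, the transpositions, and the 3-cycles already hint that the character only depends on the conjugacy class.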

For our usual trivial example, the “principal character” of G is \chi (g)=1 for all g\in G. Note that a character is not necessarily a homomorphism.

A nifty fact (one that we’d hope is true) is that equivalent representations afford the same character. This is simply because tr(AB)=tr(BA): if S is equivalent to T, then S=U^{-1}TU for some invertible U, and tr(U^{-1}T_gU)=tr(T_gUU^{-1})=tr(T_g), which shows they afford the same character.

Another nifty fact is that characters are constant on conjugacy classes of G. This holds for essentially the same reason: \chi(hgh^{-1})=tr(T_hT_gT_h^{-1})=tr(T_g)=\chi(g).
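Here is a brute-force check of this in Python (again just an illustration), using the fixed-point character of S_3: \chi(hgh^{-1})=\chi(g) for all g,h.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def chi(p):
    # trace of the permutation matrix of p = number of fixed points
    return sum(1 for i, pi in enumerate(p) if pi == i)

S3 = list(permutations(range(3)))

# characters are constant on conjugacy classes
assert all(chi(compose(h, compose(g, inverse(h)))) == chi(g)
           for g in S3 for h in S3)
```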

To get a few more basics down, a direct sum representation \oplus T_i affords the character \sum \chi_i with the subscripts matching the representation to its character.
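Concretely (a sketch with made-up matrices): the direct sum acts by block-diagonal matrices, and the trace of a block-diagonal matrix is the sum of the traces of the blocks.

```python
def block_diag(A, B):
    # matrix of T_1 ⊕ T_2: A and B down the diagonal, zeros elsewhere
    n, m = len(A), len(B)
    return [row + [0] * m for row in A] + [[0] * n + row for row in B]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[2, 1], [0, 3]]  # trace 5
B = [[4]]             # trace 4
assert trace(block_diag(A, B)) == trace(A) + trace(B)  # chi of the sum = sum of the chi's
```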

I’m not sure how far I’m going to take this. I have a budding idea for a series of posts, but I’m going to do some hunting first to make sure no one else has already done it.


Class sums

Let’s define a new concept that seems to be really important in algebraic number theory, that will help us peek inside some of the things we’ve been seeing.

Let C_j be a conjugacy class in a finite group. Then we call z_j=\sum_{g\in C_j}g a class sum (for pretty obvious reasons, it is the sum of all the elements in a conjugacy class).

Lemma: The number of conjugacy classes in a finite group G equals the dimension of the center of the group ring kG. That is, if we let r denote the number of conjugacy classes, then r=dim_k(Z(kG)).

We prove this by showing that the class sums form a basis. First, given a class sum, we show that z_j\in Z(kG). Well, let h\in G; since conjugation just permutes the elements of a conjugacy class, hz_j h^{-1}=z_j, i.e. hz_j=z_j h, so z_j commutes with every group element and hence with all of kG. The class sums are also linearly independent, since the sets of elements appearing in z_j and z_k are disjoint when j\neq k (conjugacy classes are orbits, which partition the group).

Now all we need is that they span. Let u=\sum a_gg\in Z(kG). Then for any h\in G, we have that huh^{-1}=u, so by comparing coefficients, a_{hgh^{-1}}=a_g for all g\in G. This gives that all the coefficients on elements in the same conjugacy class are the same, i.e. we can factor out that coefficient and have the class sum left over. Thus u is a linear combination of the class sums, and hence they span.
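The centrality half of this can be checked by brute force in a small case. Here is a Python sketch (helpers are my own, just for illustration) that represents elements of kG as dictionaries {group element: coefficient}, builds the class sums of S_3, and verifies that each one commutes with every group element:

```python
from itertools import permutations
from collections import defaultdict

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def conj_class(g):
    return frozenset(compose(h, compose(g, inverse(h))) for h in S3)

classes = {conj_class(g) for g in S3}  # the 3 conjugacy classes of S_3

def ring_mult(u, v):
    # convolution product in the group ring kG
    w = defaultdict(int)
    for g, a in u.items():
        for h, b in v.items():
            w[compose(g, h)] += a * b
    return dict(w)

for C in classes:
    z = {g: 1 for g in C}  # the class sum z_C
    for h in S3:
        assert ring_mult({h: 1}, z) == ring_mult(z, {h: 1})
```

So here dim_k Z(kS_3) is at least 3; the spanning argument above gives equality.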

As a corollary we get that the number of simple components of kG is the same as the number of conjugacy classes of G. This is because Z(M_{n_i}(k)) is the subspace of scalar matrices, which is one-dimensional. So if there are m simple components, our Artin-Wedderburn decomposition gives one dimension of the center for each, and hence r=dim_k(Z(kG))=m.

Another consequence is that the number of irreducible k-representations of a finite group is equal to the number of its conjugacy classes.

The proof is just to note that the number of simple kG-modules is precisely the number of simple components of kG which correspond bijectively with the irreducible k-representations, and now I refer to the paragraph above.

Now we can compute \mathbb{C}S_3 in a different way and confirm our answer from before. We know that it is 6-dimensional, since the dimension is the order of the group. We also know that S_3 has three conjugacy classes, so there are three simple components, and their dimensions n_i^2 must be 1, 1, and 4. Thus \mathbb{C}S_3\cong \mathbb{C}\times\mathbb{C}\times M_2(\mathbb{C}).

For another quick one, let Q_8 be the quaternion group of order 8, and try to figure out why \mathbb{C}Q_8\cong \mathbb{C}^4\times M_2(\mathbb{C}).
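The counting behind these examples is easy to automate. A Python sketch (my own brute force): list the ways to write |G| as a sum of squares, remembering that one n_i can always be taken to be 1 (the trivial representation) and that all n_i=1 would force kG, hence G, to be abelian.

```python
def square_decompositions(n, smallest=1):
    # nondecreasing lists [n_1, ..., n_m] with n_1^2 + ... + n_m^2 = n
    if n == 0:
        return [[]]
    out = []
    k = smallest
    while k * k <= n:
        for rest in square_decompositions(n - k * k, k):
            out.append([k] + rest)
        k += 1
    return out

print(square_decompositions(6))  # [[1, 1, 1, 1, 1, 1], [1, 1, 2]]
print(square_decompositions(8))  # [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 2], [2, 2]]
```

For S_3 (non-abelian, so not all ones) only [1, 1, 2] survives. For Q_8, the all-ones option is out for the same reason, and [2, 2] is out because some n_i must be 1; that leaves [1, 1, 1, 1, 2], i.e. \mathbb{C}^4\times M_2(\mathbb{C}).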

So I think I’m sort of done with Artin-Wedderburn and its consequences for now. Maybe I’ll move on to some character theory as Akhil brought up in the last post…


A-W Consequences

I said I’d do the uniqueness part of Artin-Wedderburn, but I’ve decided not to prove it. Here is the statement: Every left semisimple ring R is a direct product R\cong M_{n_1}(\Delta_1)\times\cdots \times M_{n_m}(\Delta_m) where \Delta_i are division rings (so far the same as before), and the numbers m, n_i, and the division rings \Delta_i are uniquely determined by R.

The statement here is important: if we can figure out one of those pieces of information by some means, then we’ve completely determined the ring. But I think the proof is rather unenlightening, since it is just fiddling with simple components.

Let’s use this to write down the structure of kG where G is finite and k is algebraically closed with characteristic not dividing |G|. This result is due to Molien: kG\cong M_{n_1}(k)\times\cdots \times M_{n_m}(k).

By Maschke we know that kG is semisimple, and by Artin-Wedderburn, we get then that kG\cong\prod M_{n_i}(\Delta_i). In fact, the proof of Artin-Wedderburn even tells us that \Delta_i=End_{kG}(L_i) where L_i is a minimal left ideal of kG. Thus, given some minimal left ideal L, it suffices to show that \Delta=End_{kG}(L)\cong k.

Note that \Delta is a k-subspace of kG, so it is finite dimensional; thus both L and \Delta are finite-dimensional vector spaces over k. Let a\in k; this element acts on L by u\mapsto au, and au=ua, so k\subset Z(\Delta). Choose d\in \Delta and adjoin it to k to form k(d). Since k(d) is commutative and a sub-division-ring of \Delta, it is a field; i.e. k(d)/k is a finite, hence algebraic, field extension, so d is algebraic over k. But we assumed k algebraically closed, so d\in k. Thus \Delta=k, and we are done.

As a corollary we get that under the same hypotheses, |G|=n_1^2+\cdots + n_m^2. This is just counting dimensions under the isomorphism above, since dim_k(kG)=|G| and dim_k(M_{n_i}(k))=n_i^2. Note also that we can always take one of the n_i to be 1, since we always have the trivial representation.

Let’s end today with an example to see how nice this is. Without needing to peek inside or know anything about representations of S_3, we know that \mathbb{C}S_3\cong \mathbb{C}\times\mathbb{C}\times M_2(\mathbb{C}), since the only ways to write 6 as a sum of squares are 1+1+1+1+1+1 and 1+1+4, and the first gives \mathbb{C}^6, which is abelian; this can’t happen since S_3 is non-abelian. Thus it must be the second.


Artin-Wedderburn
I’m back after a brief hiatus in which I worked through a set of problems on Lie derivatives, flows, and vector fields. At the end of the day, I just never seemed to muster the strength to look at algebra. Here goes.

We need a lemma first (I know: if I had carefully planned this, it would have been taken care of already). Lemma: If R is considered as a left module over itself, then R^{op}\cong End_R(R). The natural map to check is \phi: End_R(R)\to R^{op} given by \phi(f)=f(1). It is routine to check that this works. We land in R^{op} since multiplication gets reversed: \phi(f\circ g)=f(g(1))=f(g(1)\cdot 1)=g(1)f(1)=\phi(g)\phi(f).

Artin-Wedderburn: A ring R is semisimple iff R is isomorphic to a direct product of matrix rings over division rings.

We already did the sufficient direction. So assume R is semisimple. Then R=B_1\oplus \cdots \oplus B_m where the B_i are direct sums of isomorphic minimal left ideals (decompose into all minimal left ideals L_{j}, and then group isomorphic ones into the B_j). By our above lemma R^{op}\cong End_R(R). As a consequence of Schur’s Lemma, Hom_R(B_i, B_j)=\{0\} when i\neq j.

Thus we now have R^{op}\cong End_R(R)\cong End_R(B_1)\times \cdots\times End_R(B_m). But the B_j can be decomposed into the isomorphic minimal left ideals and we get End_R(B_i)\cong M_{n_i}(End_R(L_i)).

But by Schur again End_R(L_i) is some division ring \Delta_i and hence R^{op}\cong M_{n_1}(\Delta_1)\times\cdots\times M_{n_m}(\Delta_m). So R\cong \left(M_{n_1}(\Delta_1)\right)^{op}\times\cdots \times \left(M_{n_m}(\Delta_m)\right)^{op}.

Note that \left(M_{n_i}(\Delta_i)\right)^{op}\cong M_{n_i}(\Delta_i^{op}) and that \Delta_i^{op} is also a division ring (now just rename these division rings) to get that R\cong M_{n_1}(\Delta_1)\times\cdots\times M_{n_m}(\Delta_m) where the \Delta_j are division rings.

We immediately get some nice corollaries. One is that a ring is left semisimple iff it is right semisimple, since a ring is left semisimple iff its opposite ring is right semisimple. Another is that a commutative ring is semisimple iff it is a product of fields.

Next time I’ll do the “uniqueness” part and start on some of the group ring and representation theory consequences (of which there are many).


Irreducible iff simple

Let’s try to be explicit about this, since I feel like I may keep beating around the bush. The reason that we can say things about reducibility of a representation of a group by looking at the simplicity of the modules over the group ring is that they are really the same thing. By this I mean that a k-representation is irreducible (completely reducible) if and only if the corresponding kG-module is simple (semisimple).

Proof: Let \sigma:G\to GL(V) be an irreducible k-representation. Suppose that V^\sigma is not simple. Then there is a proper non-trivial submodule W\subset V^\sigma. By virtue of being a submodule, W is stable under the action of \sigma. i.e. as a vector subspace it is \sigma-invariant, and hence the representation was reducible, a contradiction. Thus V^\sigma was simple. The reverse implication works precisely the same way.

Corollary 1: Maschke’s Theorem tells us that if char(k) does not divide |G|, and if V is a vector space over k, then any representation \sigma : G\to GL(V) is completely reducible.

I know this was a sort of silly post, but I had lots of different things floating around in different worlds, and needed to really clarify that I could not only switch between them, but I could do it in a nice way.

Now I’ve set up the motivation I wanted for Artin-Wedderburn, since it will classify how semisimple rings decompose, which in turn will help us look at how to decompose our representations.


Pulling some stuff together

So I’ve thrown out lots of terms and theorems relating to representation theory, but haven’t really said how any of it relates, so let’s try to put a few things together before moving on. I threw out the terms reducible and irreducible, but forgot to mention completely reducible: a representation is completely reducible if it can be decomposed as a direct sum of irreducible subrepresentations. Note that this means an irreducible representation is completely reducible.

Let’s start with finite abelian groups where everything is easier. Then we’ll eventually get to the full theory of general finite groups, where essentially all the same statements will hold, but I’ll need bigger theorems and techniques.

I claim that every complex representation of a finite abelian group is completely reducible, and in a very nice way. By Schur’s Lemma, any irreducible subrepresentation must be one-dimensional: since the group is abelian, each group element’s action commutes with the whole representation, so each element acts by a scalar. Scalars leave lines invariant, so I will be looking to decompose into lines somehow.

How to actually decompose becomes fairly clear when you recall that commuting diagonalizable matrices are simultaneously diagonalizable. So let G be a finite abelian group. Diagonalizing is implemented by the intertwiners we saw before, so any representation of G is isomorphic to one where every group element acts diagonally. Thus our representation V\cong L_1\oplus \cdots \oplus L_m where dim V=m, and the L_i are lines, each preserved by the action of the group.

Now just use the structure theorem for finite abelian groups: G is isomorphic to a direct sum of cyclic subgroups. For a cyclic factor \mathbb{Z}/n, a generator must act on an invariant line by an n-th root of unity, so the n-th roots of unity are exactly our choices for its image in GL(V).
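To make this concrete, here is a small Python sketch (my own illustration, not from the post): the one-dimensional complex representations of \mathbb{Z}/n send a to e^{2\pi i ka/n} for a fixed k, and there are exactly n of them.

```python
import cmath

n = 5  # any modulus works; 5 is just for illustration

def rep(k):
    # the k-th one-dimensional representation of Z/n:
    # a acts by the scalar e^{2 pi i k a / n}
    return lambda a: cmath.exp(2j * cmath.pi * k * a / n)

for k in range(n):
    rho = rep(k)
    for a in range(n):
        for b in range(n):
            # homomorphism property, up to floating-point error
            assert abs(rho(a) * rho(b) - rho((a + b) % n)) < 1e-9
```

These n representations are pairwise inequivalent (they take different values at a generator), matching the count r = number of conjugacy classes = n for the abelian group \mathbb{Z}/n.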

Thus every complex representation of a finite abelian group is completely reducible into 1-dimensional subrepresentations (this will be a special case of something we do later).


On Artistic Pacing

Pacing is a favorite criticism of art critics. This topic is very relevant to me right now. The past two movies I saw, Moon and Departures, have been heavily criticized for being too slowly paced. Two of my favorite albums from this year, Grizzly Bear’s Veckatimest and The Antlers’ Hospice, are very slowly paced. Even I criticized the last book I read on these grounds. It is also Infinite Summer, and certainly Infinite Jest has come under this criticism.

But upon reflection, I don’t think there is any good objective way to evaluate pacing. As I’ve stated in the past, I think there are lots of objective criteria for evaluating and criticizing art (did the person play in tune and in time? how original is the work? did the person make a strong statement? etc), but this definitely seems to be different somehow.

First off, I thought Departures had perfect pacing. When we say something is too slowly paced, what is really being said is that you were bored or there wasn’t enough action for you; things didn’t change quickly enough to keep your interest. Worded this way, we see that this is really a fault of the viewer, not the artist.

It seems that any reasonable artist will be aware of pacing and will have chosen a certain pacing because they feel it fit or is necessary. David Foster Wallace thought the pacing of Infinite Jest should have been even slower than it was. It wasn’t some accidental misstep or flaw of the artist.

My main guess for the prevalence of this criticism is that our current culture is very much influenced by TV, where the pacing is such that you get a full dose of entertainment in one quick sitting. Nothing can ever be fast enough. Things flick by so fast that I often become disoriented when I see a TV (I haven’t watched TV in any real sense in over a year). So it is hard to adjust to the different pacings of works of art, which must be slower in order to achieve some sort of meaning beyond mere entertainment. Just because someone is easily bored shouldn’t mean that there are legitimate grounds for criticism.

Now that being said, the real question I’ve been pondering is just how slow something can be before there is a case to be made. There is a song on Veckatimest that is quite tedious for me to listen to (and in fact was the immediate cause of this post). It just repeats a little too often, and I want to skip to the end. But the end, when it finally changes, just isn’t as good without the tedious build-up. So in some sense the slow pacing of the song must have been intentional. But was there a way around this? A more interesting way to build, maybe?

I’m not sure I explained the issue as thoroughly as I would have liked, and it still remains highly unresolved in my mind. Any thoughts? I feel like any pacing criticism can be converted into a reflection on how patient the viewer is combined with some legitimate criticism in some other area.

