A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Basics of Characters

I lived in Alabama over a summer, and I’m pretty sure I’ve never been as hot and drained from the heat as I’ve been over the last couple of days. So it has been hard to get motivated to post lately. I think I’ll just get a few basics down today, since I’m sort of starting a new topic.

Suppose you have a group G and a k-representation T: G\to GL(V). The character of G afforded by T is the map \chi : G\to k defined by \chi(g)=tr(T(g)). So it is the map into the field that sends a group element to the trace of the linear transformation the representation assigns to it. We call the character of an irreducible representation an irreducible character.

So to do our trivial example that we always have: the trivial representation affords the “principal character” of G, namely \chi (g)=1 for all g. Note that the character is not necessarily a homomorphism.

A nifty fact (one that we’d hope is true) is that equivalent representations afford the same character. This is simply because tr(AB)=tr(BA), so tr(U^{-1}TU)=tr(T), and hence if S is equivalent to T, then S(g)=U^{-1}T(g)U for all g\in G, which shows they afford the same character.

Another nifty fact is that characters are constant on conjugacy classes of G. This is for essentially the same reason: \chi(hgh^{-1})=tr(T(h)T(g)T(h)^{-1})=tr(T(g))=\chi(g).
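To see this in a concrete case, here is a quick numerical sketch (my own illustration, not part of the original development): the character of the permutation representation of S_3 on k^3 counts fixed points, so it is visibly constant on conjugacy classes, and it visibly fails to be multiplicative.

import itertools
import numpy as np

def perm_matrix(p):
    # the matrix sending e_i to e_{p(i)}; its trace counts the fixed points of p
    M = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

S3 = list(itertools.permutations(range(3)))
chi = {p: np.trace(perm_matrix(p)) for p in S3}

# constant on conjugacy classes: every transposition has character 1
print(chi[(1, 0, 2)], chi[(2, 1, 0)], chi[(0, 2, 1)])   # 1.0 1.0 1.0

# not a homomorphism: chi(g)chi(h) = 1, but gh is a 3-cycle, so chi(gh) = 0
g, h = (1, 0, 2), (2, 1, 0)
gh = tuple(g[h[i]] for i in range(3))
print(chi[g] * chi[h], chi[gh])                          # 1.0 0.0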

To get a few more basics down, a direct sum representation \oplus T_i affords the character \sum \chi_i, with the subscripts matching each representation to its character (the trace of a block diagonal matrix is the sum of the traces of the blocks).

I’m not sure how far I’m going to take this. I have a budding idea for a series of posts, but I’m going to do some hunting first to make sure no one else has already done it.



Class sums

Let’s define a new concept, one that seems to be really important in algebraic number theory and that will help us peek inside some of the things we’ve been seeing.

Let C_j be a conjugacy class in a finite group. Then we call z_j=\sum_{g\in C_j}g a class sum (for pretty obvious reasons, it is the sum of all the elements in a conjugacy class).
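For instance, in S_3 the conjugacy classes are the identity, the transpositions, and the 3-cycles, so the class sums are z_1=e, z_2=(1\,2)+(1\,3)+(2\,3), and z_3=(1\,2\,3)+(1\,3\,2).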

Lemma: The number of conjugacy classes in a finite group G is the dimension of the center of the group ring. Or if we let r denote the number of conjugacy classes, then r=dim_k(Z(kG)).

We prove this by showing that the class sums form a basis of Z(kG). First, we show each z_j\in Z(kG). Let h\in G. Since conjugation by h just permutes the elements of the conjugacy class, hz_j h^{-1}=z_j, i.e. hz_j=z_j h; and since the elements of G span kG, this means z_j commutes with everything in kG. The class sums are also linearly independent, since the group elements form a basis of kG and z_j and z_k involve disjoint sets of them when j\neq k (conjugacy classes are orbits, which partition the group).

Now all we need is that they span. Let u=\sum a_gg\in Z(kG). Then for any h\in G, we have that huh^{-1}=u, so by comparing coefficients, a_{hgh^{-1}}=a_g for all g\in G. This gives that all the coefficients on elements in the same conjugacy class are the same, i.e. we can factor out that coefficient and have the class sum left over. Thus u is a linear combination of the class sums, and hence they span.
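If you want a computational sanity check of the centrality claim, here is a small sketch (my own, modeling an element of kG as a dictionary from group elements to coefficients): the class sum of the transpositions in S_3 commutes with every group element, and hence with all of kG.

import itertools
from collections import Counter

S3 = list(itertools.permutations(range(3)))
compose = lambda g, h: tuple(g[h[i]] for i in range(3))

def mult(u, v):
    # multiplication in the group ring: convolve the coefficients
    w = Counter()
    for g, a in u.items():
        for h, b in v.items():
            w[compose(g, h)] += a * b
    return w

# z_2: the class sum of the three transpositions
z = Counter(g for g in S3 if sum(g[i] == i for i in range(3)) == 1)

print(all(mult(Counter([h]), z) == mult(z, Counter([h])) for h in S3))   # True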

As a corollary we get that the number of simple components of kG is the same as the number of conjugacy classes of G (keeping the hypotheses of the Molien decomposition: k algebraically closed with characteristic not dividing |G|). This is because Z(M_{n_i}(k)) is the subspace of scalar matrices, so if there are m simple components, we get one dimension of the center for each of them by our Artin-Wedderburn decomposition, and hence r=dim_k Z(kG)=m.

Another consequence is that the number of irreducible k-representations of a finite group, up to equivalence, is equal to the number of its conjugacy classes (again under these hypotheses).

The proof is just to note that the number of simple kG-modules is precisely the number of simple components of kG, that these correspond bijectively with the irreducible k-representations, and then to refer to the paragraph above.

Now we can compute \mathbb{C}S_3 in a different way and confirm our answer from before. We know that it is 6-dimensional, since the dimension of \mathbb{C}G is the order of the group. We also know that S_3 has three conjugacy classes, so there are three simple components, and the dimensions of these must be 1, 1, and 4. Thus \mathbb{C}S_3\cong \mathbb{C}\times\mathbb{C}\times M_2(\mathbb{C}).

If we want another quick one: let Q_8 be the quaternion group of order 8, and try to figure out why \mathbb{C}Q_8\cong \mathbb{C}^4\times M_2(\mathbb{C}).
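(Hint, in case you want one: Q_8 has five conjugacy classes, namely \{1\}, \{-1\}, \{\pm i\}, \{\pm j\}, \{\pm k\}, so there are five simple components, and the only way to write 8=n_1^2+\cdots+n_5^2 is 1+1+1+1+4.)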

So I think I’m sort of done with Artin-Wedderburn and its consequences for now. Maybe I’ll move on to some character theory as Akhil brought up in the last post…



A-W Consequences

I said I’d do the uniqueness part of Artin-Wedderburn, but I’ve decided not to prove it. Here is the statement: Every left semisimple ring R is a direct product R\cong M_{n_1}(\Delta_1)\times\cdots \times M_{n_m}(\Delta_m) where \Delta_i are division rings (so far the same as before), and the numbers m, n_i, and the division rings \Delta_i are uniquely determined by R.

The statement here is important since if we can figure out those pieces of information by some means, then we’ve completely determined the ring. But I think the proof is rather unenlightening, since it is just fiddling with simple components.

Let’s use this to write down the structure of kG where G is finite and k is algebraically closed with characteristic not dividing |G|. This result is due to Molien: kG\cong M_{n_1}(k)\times\cdots \times M_{n_m}(k).

By Maschke we know that kG is semisimple, and by Artin-Wedderburn, we get then that kG\cong\prod M_{n_i}(\Delta_i). In fact, the proof of Artin-Wedderburn even tells us that \Delta_i=End_{kG}(L_i) where L_i is a minimal left ideal of kG. Thus, given some minimal left ideal L, it suffices to show that \Delta=End_{kG}(L)\cong k.

Note that \Delta\subset End_k(L) is a subspace of a finite dimensional vector space over k (since L, as a left ideal of kG, is itself finite dimensional over k). Thus \Delta is finite dimensional over k. Let a\in k; scalar multiplication u\mapsto au is a kG-map on L, so k embeds in \Delta, and since every element of \Delta is in particular k-linear, it commutes with these scalar multiplications, i.e. k\subset Z(\Delta). Choose d\in \Delta, then adjoin it to k: k(d). Since this is commutative (k is central) and a subdivision ring, it is a field. I.e. k(d)/k as a field extension is finite, and hence algebraic, so d is algebraic over k. But we assumed k algebraically closed, so d\in k. Thus \Delta=k, and we are done.

As a corollary we get that, under the same hypotheses, |G|=n_1^2+\cdots + n_m^2. This is just counting dimensions under the isomorphism above, since dim_k(kG)=|G| and dim_k(M_{n_i}(k))=n_i^2. Note also that we can always take one of the n_i to be 1, since we always have the trivial representation.

Let’s end today with an example to see how nice this is. Without needing to peek inside or know anything about representations of S_3, we know that \mathbb{C}S_3\cong \mathbb{C}\times\mathbb{C}\times M_2(\mathbb{C}): the only ways to write 6 as a sum of squares are 1+1+1+1+1+1 and 1+1+4, and the first gives \mathbb{C}^6, which is commutative and hence impossible since S_3 is non-abelian. Thus it must be the second.
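And if you’d rather let a machine do the arithmetic, here is a toy enumeration (my own sketch) confirming that those really are the only two ways to write 6 as a sum of squares:

from itertools import combinations_with_replacement

# the squares at most 6 come from n in {1, 2}
sols = [c for r in range(1, 7)
        for c in combinations_with_replacement([1, 2], r)
        if sum(n * n for n in c) == 6]
print(sols)   # [(1, 1, 2), (1, 1, 1, 1, 1, 1)]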



Artin-Wedderburn

I’m back after a brief hiatus in which I worked through a set of problems on Lie derivatives, flows, and vector fields. At the end of the day, I just never seemed to muster the strength to look at algebra. Here goes.

We need a lemma first (I know; if I had planned this carefully, it would have been taken care of already). Lemma: If R is considered as a left module over itself, then R^{op}\cong End_R(R). The natural map to check is \phi: End_R(R)\to R^{op} given by \phi(f)=f(1); its inverse sends r to right multiplication by r, x\mapsto xr, which is a map of left R-modules. It is just routine checking that this works. We land in R^{op} since multiplication gets reversed: \phi(fg)=(f\circ g)(1)=f(g(1)\cdot 1)=g(1)f(1)=\phi(g)\phi(f).

Artin-Wedderburn: A ring R is semisimple iff R is isomorphic to a direct product of matrix rings over division rings.

We already did the sufficient direction. So assume R is semisimple. Then R=B_1\oplus \cdots \oplus B_m, where each B_i is a direct sum of mutually isomorphic minimal left ideals (decompose R into minimal left ideals L_{j}, and then group the isomorphic ones into the B_i). By our lemma above, R^{op}\cong End_R(R). And as a consequence of Schur’s Lemma, Hom_R(B_i, B_j)=\{0\} when i\neq j.

Thus we now have R^{op}\cong End_R(R)\cong End_R(B_1)\times \cdots\times End_R(B_m). But each B_i decomposes into, say, n_i copies of the minimal left ideal L_i, and we get End_R(B_i)\cong M_{n_i}(End_R(L_i)).

But by Schur again End_R(L_i) is some division ring \Delta_i and hence R^{op}\cong M_{n_1}(\Delta_1)\times\cdots\times M_{n_m}(\Delta_m). So R\cong \left(M_{n_1}(\Delta_1)\right)^{op}\times\cdots \times \left(M_{n_m}(\Delta_m)\right)^{op}.

Note that \left(M_{n_i}(\Delta_i)\right)^{op}\cong M_{n_i}(\Delta_i^{op}) (via transposing, since (AB)^T=B^TA^T) and that \Delta_i^{op} is also a division ring. Now just rename these division rings to get that R\cong M_{n_1}(\Delta_1)\times\cdots\times M_{n_m}(\Delta_m) where the \Delta_j are division rings.

We immediately get some nice corollaries. One is that a ring is left semisimple iff it is right semisimple, since a ring is left semisimple iff its opposite ring is right semisimple, and the decomposition above is preserved by taking opposites. Another is that a commutative ring is semisimple iff it is a product of fields: commutativity forces each n_i=1, a commutative division ring is a field, and conversely a product of fields is clearly semisimple.

Next time I’ll do the “uniqueness” part and start on some of the group ring and representation theory consequences (of which there are many).



Irreducible iff simple

Let’s try to be explicit about this, since I feel like I may keep beating around the bush. The reason that we can say things about reducibility of a representation of a group by looking at the simplicity of the modules over the group ring is that they are really the same thing. By this I mean that a k-representation is irreducible (completely reducible) if and only if the corresponding kG-module is simple (semisimple).

Proof: Let \sigma:G\to GL(V) be an irreducible k-representation, and suppose that V^\sigma is not simple. Then there is a proper non-trivial submodule W\subset V^\sigma. By virtue of being a submodule, W is stable under the action of G, i.e. as a vector subspace it is \sigma-invariant, and hence the representation was reducible, a contradiction. Thus V^\sigma is simple. The reverse implication works precisely the same way.

Corollary 1: Combined with Maschke’s Theorem, this tells us that if char(k) does not divide |G|, and if V is a vector space over k, then any representation \sigma : G\to GL(V) is completely reducible, since kG is then semisimple and hence V^\sigma is a semisimple module.

I know this was a sort of silly post, but I had lots of different things floating around in different worlds, and needed to really clarify that I could not only switch between them, but I could do it in a nice way.

Now I’ve set up the motivation I wanted for Artin-Wedderburn, since it will classify how semisimple rings decompose, which in turn will help us look at how to decompose our representations.



Pulling some stuff together

So I’ve thrown out lots of terms and theorems relating to representation theory, but haven’t really said how any of it relates, so let’s try to start putting a few things together before moving on. I threw out the terms reducible and irreducible, but forgot to mention completely reducible. A representation is completely reducible if it can be decomposed as a direct sum of irreducible subrepresentations. Note that this means an irreducible representation is completely reducible.

Let’s start with finite abelian groups where everything is easier. Then we’ll eventually get to the full theory of general finite groups, where essentially all the same statements will hold, but I’ll need bigger theorems and techniques.

I claim that every complex representation of a finite abelian group is completely reducible, and in a very nice way. By Schur’s Lemma, any irreducible subrepresentation is going to be one-dimensional: since the group is abelian, each group element commutes with the whole action and hence acts by a scalar, so every line is invariant. Thus I will be looking to decompose into lines somehow.

How to actually decompose becomes fairly clear when you recall that commuting diagonalizable matrices are simultaneously diagonalizable. So let G be a finite abelian group. Each group element acts by a diagonalizable matrix (it satisfies x^{|G|}-1=0, which has distinct roots in \mathbb{C}), and these matrices all commute. Conjugating by the simultaneous diagonalizing matrix is exactly one of the intertwiners we saw before, so any representation of G is isomorphic to one where every group element acts diagonally. So our representation V\cong L_1\oplus \cdots \oplus L_m where dimV=m, and the L_i are lines. Each line is preserved by the action of the group.

Now just use the structure theorem for abelian groups, i.e. that G is isomorphic to a direct sum of cyclic subgroups. On each line, a generator of a \mathbb{Z}/n factor must act by an n-th root of unity, since its n-th power acts as the identity.

Thus every complex representation of a finite abelian group is completely reducible into 1-dimensional subrepresentations (this will be a special case of something we do later).
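To make this concrete, here is a small numerical sketch (my own illustration): the regular representation of \mathbb{Z}/4 sends a generator to the cyclic shift matrix, the discrete Fourier transform matrix simultaneously diagonalizes the whole group, and the diagonal entries are exactly the 4th roots of unity, one for each invariant line.

import numpy as np

n = 4
shift = np.roll(np.eye(n), 1, axis=0)        # a generator of Z/n acting on C^n
F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix
D = F @ shift @ np.linalg.inv(F)

print(np.allclose(D, np.diag(np.diag(D))))   # True: the generator acts diagonally
print(np.round(np.diag(D), 6))               # the 4th roots of unity: 1, -i, -1, i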



On Artistic Pacing

Pacing is a favorite criticism of art critics. This topic is very relevant to me right now. The past two movies I saw, Moon and Departures, have been heavily criticized for being too slowly paced. Two of my favorite albums from this year, Grizzly Bear’s Veckatimest and The Antlers’ Hospice, are very slowly paced. Even I criticized the last book I read on these grounds. It is also Infinite Summer, and Infinite Jest has certainly come under this criticism.

But upon reflection, I don’t think there is any good objective way to evaluate pacing. As I’ve stated in the past, I think there are lots of objective criteria for evaluating and criticizing art (did the person play in tune and in time? how original is the work? did the person make a strong statement? etc), but this definitely seems to be different somehow.

First off, I thought Departures had perfect pacing. When we say something is too slowly paced, really what is being said is that you were bored or there wasn’t enough action for you; things didn’t change quickly enough to keep your interest. Worded this way, we see that this really is a fault of the viewer, not the artist.

It seems that any reasonable artist will be aware of pacing and will have chosen a certain pacing because they feel it fits or is necessary. David Foster Wallace thought the pacing of Infinite Jest should have been even slower than it was. It wasn’t some accidental misstep or flaw of the artist.

My main guess for the prevalence of this criticism is that our current culture is very much influenced by TV, where the pacing is such that you get a full dose of entertainment in one quick sitting. Nothing can ever be fast enough. Things flick by so fast that I often become disoriented when I see a TV (I haven’t watched TV in any real sense in over a year). So it is hard to adjust to the different pacings of works of art, which must be slower in order to achieve some sort of meaning beyond mere entertainment. Just because someone is easily bored doesn’t mean there are legitimate grounds for criticism.

Now that being said, the real question I’ve been pondering is just how slow something can be before there is a case to be made. There is a song on Veckatimest that is quite tedious for me to listen to (and in fact was the immediate cause of this post). It just repeats a little too often, and I want to skip to the end. But the end, when it finally changes, just isn’t as good without the tedious build-up. So in some sense the slow pacing of the song must have been intentional. But was there a way around this? A more interesting way to build, maybe?

I’m not sure I explained the issue as thoroughly as I would have liked, and it still remains highly unresolved in my mind. Any thoughts? I feel like any pacing criticism can be converted into a reflection on how patient the viewer is combined with some legitimate criticism in some other area.



Maschke and Schur

As usual, ordering of presenting this material and level of generality are proving to be difficult decisions. For my purposes, I don’t want to do things as generally as they can be done. But on the other hand, most of the proofs are no harder in the general case, so it seems pointless to avoid generality.

First we prove Maschke’s Theorem. Note that there are lots of related statements and versions of what I’m going to write. This says that if G is a finite group and k is a field whose characteristic does not divide the order of the group, then kG is a left semisimple ring.

Proof: We’ll do this using the “averaging operator.” Let’s use the version of semisimple that every left ideal is a direct summand. Let I be a left ideal of kG. Then since kG can be regarded as a vector space over k, I is a subspace. So there is a subspace, V, such that kG=I\oplus V. We are done if it turns out that V is a left ideal.

Let \pi : kG\to I be the projection (any u\in kG can be written uniquely as u=i+v with i\in I and v\in V; define \pi(u)=i). It suffices to find such a projection that is a kG-map, since then it would be a retraction and hence I would be a direct summand as a kG-module. Unfortunately, \pi itself is only a k-map and need not be a kG-map. So we’ll force it to be one by averaging.

Let D:kG\to kG by D(u)=\frac{1}{|G|}\sum_{x\in G}x\pi(x^{-1}u). By our characteristic condition, |G|\neq 0 in k, so dividing by |G| makes sense.

Claim: Im(D)\subset I. Let u\in kG and x\in G. Then \pi(x^{-1}u)\in I by definition, and I is a left ideal, so x\pi(x^{-1}u)\in I. Summing, D(u)\in I, which shows the claim.

Claim: D(b)=b for all b\in I. This is just computation: x^{-1}b\in I since I is a left ideal, so \pi(x^{-1}b)=x^{-1}b, giving x\pi(x^{-1}b)=xx^{-1}b=b, and hence D(b)=\frac{1}{|G|}(|G|b)=b.

Claim: D is a kG-map. By linearity, it is enough to prove that D(gu)=gD(u) for g\in G and u\in kG. Here the averaging pays off:

\displaystyle gD(u)=\frac{1}{|G|}\sum_{x\in G}gx\pi(x^{-1}u)
\displaystyle = \frac{1}{|G|}\sum_{x\in G} gx\pi(x^{-1}g^{-1}gu)
\displaystyle = \frac{1}{|G|}\sum_{y=gx\in G} y\pi(y^{-1}gu)
= D(gu).

So D is a kG-map from kG to I that restricts to the identity on I, i.e. a retraction, and hence kG=I\oplus \ker D with \ker D a left ideal. Thus we have proved Maschke’s Theorem.
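To see the averaging trick in action numerically, here is a small sketch (my own illustration, using the permutation representation of S_3 on \mathbb{C}^3 rather than kG itself): start with a projection onto the invariant line spanned by (1,1,1) that is not a kG-map, average it over the group exactly as in the proof, and check that the result is a projection commuting with the group action, so its kernel is an invariant complement.

import itertools
import numpy as np

def perm_matrix(p):
    M = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

G = [perm_matrix(p) for p in itertools.permutations(range(3))]

# a projection onto span{(1,1,1)} along span{e_2, e_3}; not a kG-map
P = np.outer([1.0, 1.0, 1.0], [1.0, 0.0, 0.0])
print(all(np.allclose(g @ P, P @ g) for g in G))          # False

# the averaged operator from the proof: D = (1/|G|) sum_g g P g^{-1}
D = sum(g @ P @ np.linalg.inv(g) for g in G) / len(G)
print(np.allclose(D @ D, D))                              # True: still a projection
print(all(np.allclose(g @ D, D @ g) for g in G))          # True: now a kG-map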

The other tool we’ll need next time is Schur’s Lemma: Let M and N be simple left R-modules. Then every non-zero R-map f:M\to N is an isomorphism. Moreover, End_R(M) is a division ring.

Proof: \ker f is a submodule of M, and \ker f\neq M since f is a non-zero map, so by simplicity \ker f=\{0\} and we have an injection. Likewise, im f is a non-zero submodule of N, hence must be all of N, so we have a surjection and hence an isomorphism. The other part of the lemma is just noting that since every non-zero map in End_R(M) is an isomorphism, it has an inverse.

Next time I’ll talk about how some of these things relate to representations.



Semisimplicity

There are many ways I could proceed from here, all of which feel like a radical shift. But my goal was Artin-Wedderburn with some applications to representations and group rings, so probably the most important concept of this sequence of posts hasn’t been mentioned at all. This is the notion of being semisimple.

We’ll work from the definition that an R-module, M, is semisimple if every submodule is a direct summand. There are many equivalent ways of thinking of this.

First, note that a submodule of a semisimple module is semisimple. This just requires justifying that intersecting works nicely, and it does (a pretty straightforward exercise if you want to try it). An often useful equivalent condition for semisimple is that M is a direct sum of simple submodules.

The definitions I really want to get to are about rings (which is why I sort of breezed through that first part). A ring R is semisimple if it is a semisimple module over itself. But note that the submodules of R are just the left ideals, so R is semisimple iff every left ideal is a direct summand.

In fact, we have the following equivalent statements:
1) Every R-module is semisimple.
2) R is a semisimple ring.
3) R is a direct sum of a finite number of minimal left ideals.

Proof: 1\Rightarrow 2 is trivial. For 2\Rightarrow 3, we know that the simple submodules of R are exactly the minimal left ideals of R, so R=\oplus_{i\in I}L_i where each L_i is minimal. We just need this sum to be finite. But we can write 1=x_1+\cdots +x_j, a finite sum with x_i\in L_i (reindexing so that the finitely many ideals involved come first). So for any r\in R, we have r=r1=rx_1+\cdots + rx_j, i.e. R\subset L_1\oplus\cdots \oplus L_j. So R=\oplus_{i=1}^j L_i.

Now we said that a direct sum of minimal left ideals (simple submodules) is an equivalent definition of semisimple, so 3\Rightarrow 2. For 2\Rightarrow 1, let M be an R-module with R semisimple. Any R-module is an epimorphic image of a free module, say F=\oplus Ra_i. But each Ra_i\cong R, so they are semisimple, and thus F is semisimple. Since an epimorphic image of a semisimple module is again semisimple, M is a semisimple module.

With a view towards Artin-Wedderburn, I’ll present what is probably the most important example of a semisimple ring.

Let \Delta be a division ring. Then the claim is that M_n(\Delta) is semisimple. Let L_i=\{(0|\cdots | v | \cdots |0) : v\in \Delta^n\}, i.e. L_i consists of the matrices that are zero outside the i-th column, which can be anything from \Delta^n. Certainly M_n(\Delta)=L_1\oplus\cdots\oplus L_n, and each L_i is a left ideal. But why are they minimal?

Suppose L is a non-zero left ideal properly contained in L_1. Then there is some V=(v | 0 |\cdots |0)\in L_1\setminus L. Take a non-zero matrix A\in L; since \Delta is a division ring, we can easily form a matrix B\in M_n(\Delta) such that BA=V, which contradicts L being a left ideal. Thus the L_i are minimal, and hence M_n(\Delta) is a semisimple ring.
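A tiny numerical illustration of why each L_i is at least a left ideal (my own sketch, over \mathbb{Q} for simplicity): left multiplication by anything preserves the column.

import numpy as np

A = np.array([[3.0, 0.0],
              [5.0, 0.0]])              # an element of L_1: supported in the first column
B = np.random.rand(2, 2)                # an arbitrary ring element
print(np.allclose((B @ A)[:, 1], 0))    # True: BA lands back in L_1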



Representation Theory III

Let’s set up some notation first. Recall that if \phi: G\to GL(V) is a representation, then it makes V into a kG-module. Let’s denote this module by V^\phi. Now we want to prove that given two representations \phi, \sigma into GL(V), we have V^\phi \cong V^\sigma if and only if there is an invertible linear transformation T: V \to V such that T(\phi(x)(v))=\sigma(x)(T(v)) for every x\in G and v\in V.

The proof of this is basically unwinding definitions. Let T: V^\phi \to V^\sigma be a kG-module isomorphism. Then for free, T is a vector space isomorphism with T(xv)=xT(v) for x\in G and v\in V. Now note that the scalar multiplication in V^\phi is xv=\phi(x)(v) and in V^\sigma it is xv=\sigma(x)(v). So T(xv)=xT(v)\Rightarrow T(\phi(x)(v))=\sigma(x)(T(v)), which is what we needed to show. The converse is even easier: just check that such a T is a kG-module isomorphism by checking that it preserves the scalar multiplication.

This should look really familiar (especially if you are picking a basis and thinking in terms of matrices). We’ll say that T intertwines \phi and \sigma. Essentially this is the same notion as similar matrices.

Now we will define some more concepts. Let \phi: G\to GL(V) be a representation. If W\subset V is a subspace, we say it is “\phi-invariant” if \phi(x)(W)\subset W for all x\in G. If the only \phi-invariant subspaces are 0 and V, then we say \phi is irreducible.

Let’s look at what happens if \phi is reducible. Let W be a proper non-trivial \phi-invariant subspace. Then we can take a basis for W and extend it to a basis for V such that the matrix of \phi(x) has the form \left(\begin{matrix} A(x) & C(x) \\ 0 & B(x) \end{matrix}\right), where A(x) and B(x) are matrix representations of G (the degrees being dim W and dim(V/W) respectively).

In fact, given a representation \phi on V and a representation \psi on W, we have a representation \phi \oplus \psi on V\oplus W given in the obvious way: (\phi \oplus \psi)(x) : (v, w)\mapsto (\phi(x)v, \psi(x)w). The matrix representation in the basis \{(v_i, 0)\}\cup \{(0, w_j)\} is just \left(\begin{matrix}\phi(x) & 0 \\ 0 & \psi(x)\end{matrix}\right) (hence it is reducible, since it has both V\oplus 0 and 0\oplus W as invariant subspaces).
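For a concrete instance (my own sketch): the direct sum of the sign representation of S_3 on \mathbb{C} and the permutation representation on \mathbb{C}^3 acts by 4\times 4 block diagonal matrices, and the homomorphism property can be checked directly.

import itertools
import numpy as np

def perm_matrix(p):
    M = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

def sign(p):
    # (-1)^(number of inversions)
    n = len(p)
    inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    return (-1) ** inv

def rep(p):
    # block diagonal: the 1x1 sign block, then the 3x3 permutation block
    M = np.zeros((4, 4))
    M[0, 0] = sign(p)
    M[1:, 1:] = perm_matrix(p)
    return M

S3 = list(itertools.permutations(range(3)))
compose = lambda g, h: tuple(g[h[i]] for i in range(3))
print(all(np.allclose(rep(compose(g, h)), rep(g) @ rep(h)) for g in S3 for h in S3))   # True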

I’m going to continue with representation theory, but I’ll start titling more appropriately now that the basics have sort of been laid out.
