A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Irreducible iff simple

Let’s try to be explicit about this, since I feel like I may keep beating around the bush. The reason we can detect reducibility of a representation of a group by looking at simplicity of modules over the group ring is that they are really the same thing. By this I mean that a k-representation is irreducible (respectively completely reducible) if and only if the corresponding kG-module is simple (respectively semisimple).

Proof: Let \sigma:G\to GL(V) be an irreducible k-representation, and suppose that V^\sigma is not simple. Then there is a proper non-trivial submodule W\subset V^\sigma. By virtue of being a submodule, W is stable under the action of G, i.e. as a vector subspace it is \sigma-invariant, and hence the representation was reducible, a contradiction. Thus V^\sigma is simple. The reverse implication works precisely the same way.

Corollary 1: Maschke’s Theorem tells us that if G is a finite group, char(k) does not divide |G|, and V is a vector space over k, then any representation \sigma : G\to GL(V) is completely reducible.
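To see that the hypothesis on char(k) really matters, here is a standard counterexample (my addition, not from the original post). Take char(k)=2, G=\{1, g\}\cong\mathbb{Z}/2, and \sigma(g)=\left(\begin{matrix} 1 & 1 \\ 0 & 1\end{matrix}\right) acting on k^2 (this is a representation since \sigma(g)^2=I in characteristic 2). The line spanned by (1,0) is \sigma-invariant, so \sigma is reducible, but no complementary line is invariant: \sigma(g)(a,b)=(a+b, b) is a multiple of (a,b) only when b=0. So \sigma is reducible but not completely reducible, and correspondingly kG\cong k[x]/(x-1)^2 is not semisimple.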

I know this was a sort of silly post, but I had lots of different things floating around in different worlds, and needed to really clarify that I could not only switch between them, but I could do it in a nice way.

Now I’ve set up the motivation I wanted for Artin-Wedderburn, since it will classify how semisimple rings decompose, which in turn will help us look at how to decompose our representations.



Semisimplicity

There are many ways I could proceed from here, all of which feel like a radical shift. But my goal was Artin-Wedderburn with some applications to representations and group rings, so probably the most important concept of this sequence of posts hasn’t been mentioned at all. This is the notion of being semisimple.

We’ll work from the definition that an R-module, M, is semisimple if every submodule is a direct summand. There are many equivalent ways of thinking of this.

First, note that a submodule of a semisimple module is semisimple. This just requires justifying that intersecting works nicely, and it does: if K\subset N\subset M and M=K\oplus K', then N=K\oplus (K'\cap N) (a pretty straightforward exercise if you want to check it). An often useful equivalent condition for semisimplicity is that M is a direct sum of simple submodules.

The definitions I really want to get to are about rings (which is why I sort of breezed through that first part). A ring R is semisimple if it is a semisimple module over itself. But note that the submodules of R are just the left ideals, so R is semisimple iff every left ideal is a direct summand.
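A quick example and non-example, just to fix ideas (my addition): any vector space over a field k is a semisimple k-module, since every subspace has a complement. On the other hand, \mathbb{Z} is not a semisimple ring: no nonzero ideal n\mathbb{Z} is a direct summand, because any two nonzero ideals m\mathbb{Z} and n\mathbb{Z} intersect non-trivially (both contain mn).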

In fact, we have the following equivalent statements:
1) Every R-module is semisimple.
2) R is a semisimple ring.
3) R is a direct sum of a finite number of minimal left ideals.

Proof: 1\Rightarrow 2 is trivial. For 2\Rightarrow 3, we know that the simple submodules of R are the minimal left ideals of R, so R=\oplus_{i\in I}L_i where each L_i is minimal. So we just need this sum to be finite. But we know that 1=x_1+\cdots +x_j, a finite sum with x_i\in L_i (reindexing so that the finitely many ideals involved come first). So for any r\in R, we have r=r1=rx_1+\cdots + rx_j, i.e. R\subset L_1\oplus\cdots \oplus L_j. So R=\oplus_{i=1}^j L_i.

Now we said that being a direct sum of minimal left ideals (simple submodules) was an equivalent definition of semisimple, so 3\Rightarrow 2. For 2\Rightarrow 1, let M be an R-module with R semisimple. Any R-module is an epimorphic image of a free module, say F=\oplus Ra_i. But each Ra_i\cong R, so each is semisimple, and thus F is semisimple. Since a quotient of a semisimple module is again semisimple, M is a semisimple module.

With a view towards Artin-Wedderburn, I’ll present what is probably the most important example of a semisimple ring.

Let \Delta be a division ring. Then the claim is that M_n(\Delta) is semisimple. Let L_i=\{(0|\cdots | v | \cdots |0) : v\in \Delta^n\}, i.e. L_i consists of the matrices that are zero outside the i-th column, which can be any vector in \Delta^n. Certainly M_n(\Delta)=L_1\oplus\cdots\oplus L_n, and each L_i is a left ideal. But why are they minimal?

Suppose some nonzero left ideal L is properly contained in L_1, i.e. \{0\}\neq L\subsetneq L_1. Then there is some V=(v | 0 |\cdots |0)\in L_1\setminus L. Take any nonzero matrix A\in L; since \Delta is a division ring, we can easily form a matrix B\in M_n(\Delta) such that BA=V, so V=BA\in L, contradicting V\notin L. Thus the L_i are minimal, and hence M_n(\Delta) is a semisimple ring.
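To make the “easily form” step explicit (this is my spelling-out of the standard construction): since A\in L\subset L_1 is nonzero, its first column has some entry a_{i1}\neq 0. Let B be the matrix whose i-th column is v\,a_{i1}^{-1} and whose other entries are 0. Then (BA)_{k1}=B_{ki}a_{i1}=v_k a_{i1}^{-1}a_{i1}=v_k for every k, and the other columns of BA vanish because the other columns of A do, so BA=V as desired.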



Representation Theory III

Let’s set up some notation first. Recall that if \phi: G\to GL(V) is a representation, then it makes V into a kG-module. Let’s denote this module by V^\phi. Now we want to prove that, given two representations \phi, \sigma: G\to GL(V), we have V^\phi \cong V^\sigma if and only if there is an invertible linear transformation T: V \to V such that T\circ\phi(x)=\sigma(x)\circ T for every x\in G.

The proof of this is basically unwinding definitions. Let T: V^\phi \to V^\sigma be a kG-module isomorphism. Then T is in particular a vector space isomorphism, and we get T(xv)=xT(v) for x\in G and v\in V for free. Now note that the scalar multiplication in V^\phi is xv=\phi(x)(v) and in V^\sigma it is xv=\sigma(x)(v). So T(xv)=xT(v)\Rightarrow T(\phi(x)(v))=\sigma(x)(T(v)), which is what we needed to show. The converse is even easier: just check that such a T is a kG-module isomorphism by checking that it preserves scalar multiplication.

This should look really familiar (especially if you are picking a basis and thinking in terms of matrices). We’ll say that T intertwines \phi and \sigma. Essentially this is the same notion as similar matrices.

Now we will define some more concepts. Let \phi: G\to GL(V) be a representation. A subspace W\subset V is “\phi-invariant” if \phi(x)(W)\subset W for all x\in G. If the only \phi-invariant subspaces are 0 and V, then we say \phi is irreducible.

Let’s look at what happens if \phi is reducible. Let W be a proper non-trivial \phi-invariant subspace. Then we can take a basis for W and extend it to a basis for V such that each matrix \phi(x) has the block form \phi(x)=\left(\begin{matrix} A(x) & C(x) \\ 0 & B(x) \end{matrix}\right), where A(x) and B(x) are matrix representations of G (the degrees being dim W and dim(V/W) respectively).

In fact, given a representation \phi on V and a representation \psi on W, we have a representation \phi \oplus \psi on V\oplus W given in the obvious way: (\phi \oplus \psi)(x) : (v, w)\mapsto (\phi(x)v, \psi(x)w). The matrix representation in the basis \{(v_i, 0)\}\cup \{(0, w_j)\} is just \left(\begin{matrix}\phi(x) & 0 \\ 0 & \psi(x)\end{matrix}\right) (hence it is reducible, since it has both V\oplus 0 and 0\oplus W as invariant subspaces).
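A tiny worked example (my addition): take G=\{1, g\}\cong\mathbb{Z}/2 acting on \mathbb{Q}^2 by swapping coordinates, so \phi(g)=\left(\begin{matrix} 0 & 1 \\ 1 & 0\end{matrix}\right). In the basis \{(1,1), (1,-1)\} this becomes \left(\begin{matrix} 1 & 0 \\ 0 & -1\end{matrix}\right), i.e. \phi is the direct sum of the trivial representation and the sign representation — exactly the block decomposition above, with C(x)=0.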

I’m going to continue with representation theory, but I’ll start titling more appropriately now that the basics have sort of been laid out.



Liouville’s Theorem for Projective Varieties?

Wow. I hate looking at the dates on old posts. I think that maybe a few days have gone by, and I’m horrified to find that 11 or 12 days have passed. It is hard to keep track of time in grad school.

The goal of this post is to prove the theorem: If V is an irreducible projective variety over an algebraically closed field k, then every regular function on V is constant. Note this says that \mathcal{O}(V)\cong k. Also, an exercise is to think about how this relates to Liouville’s Theorem if our field is \mathbb{C}.

Proof: Let V be an irreducible projective variety in \mathbb{P}_k^n. WLOG V is not contained in a hyperplane, since otherwise we could eliminate a variable, work in \mathbb{P}_k^{n-1}, and repeat until V is not contained in any hyperplane.

Let f\in\mathcal{O}(V). Consider the affine covering from last time, V=V_0\cup\cdots\cup V_n. Note that f\big|_{V_i} is regular as a function on the affine variety V_i, so we can write f\big|_{V_i} as a polynomial in the x_j/x_i for 0\leq j\leq n, j\neq i. i.e. we can clear the powers of the denominator variable to get \displaystyle f\big|_{V_i}=\frac{g_i}{x_i^{N_i}}, where g_i\in S(V)=k[x_0, \ldots, x_n]/I(V) is homogeneous of degree N_i.

But we assumed V is irreducible, so I(V) is prime and hence S(V) is an integral domain. Let’s take its field of fractions, L=Frac(S(V)). Then \mathcal{O}(V), k(V) and S(V) are all embedded in L. So in L we can multiply by that denominator from before to get x_i^{N_i}f\in S_{N_i}(V).

Recall that S(V) is a graded ring, and I am denoting by S_{N}(V) the N-th graded piece. Each S_N(V) is a finite-dimensional k-vector space, spanned by the monomials of degree N. Now fix any integer N\geq\sum N_i.

Let m\in S_N(V) be a monomial. Then it is divisible by x_i^{N_i} for some i (if every x_i appeared to a power less than N_i, the degree of m would be less than \sum N_i\leq N), so mf=(m/x_i^{N_i})\cdot x_i^{N_i}f\in S_N(V). Thus S_N(V)f\subset S_N(V).

So we have a chain: S_N(V)f^q\subset S_N(V)f^{q-1}\subset \cdots \subset S_N(V)f\subset S_N(V). In particular x_0^Nf^q\in S_N(V) for any q\geq 1, so f^q\in x_0^{-N}S(V) and thus S(V)[f]\subset x_0^{-N}S(V)\subset L.

But x_0^{-N}S(V) is a Noetherian S(V)-module, since it is finitely generated as a module over the Noetherian ring S(V). So the submodule S(V)[f] is also finitely generated over S(V), and thus f is integral over S(V).

i.e. there are a_i\in S(V) such that f^m+a_{m-1}f^{m-1}+\cdots + a_0=0. Since f has degree 0 in L, comparing degree-0 parts of this equation lets us replace each a_i by its degree-0 component, so f is algebraic over S_0(V)\cong k. But k is algebraically closed, so f\in k, i.e. f is constant.



Krull Dimension

I didn’t actually want to take that long of a break before this post, but I had to do a final exam and give/grade a final, so that ate up lots of time. The next natural thing to move on to is something called Krull dimension. This is sort of annoying to define, but highly useful. I’ve also decided I’m going to stop “fraking” my \frak{p}‘s, since it is annoying to type and just use capital P’s for prime ideals.

First we need to define something I’ll call “height.” A prime chain is a strictly decreasing chain of prime ideals: P_0\supsetneq P_1 \supsetneq \cdots \supsetneq P_n. Now we define the height of a prime ideal P, ht(P), to be the length of the longest prime chain with P=P_0.

Some quick examples: It is easy to check that ht(P)=0 if and only if P is minimal, and hence in an integral domain ht(P)=0 if and only if P=\{0\}. Let R=k[x_1, x_2, \ldots] where k is a field. Then let P_i=(x_i, x_{i+1}, \ldots) be the prime ideal generated by those indeterminates (it is prime since R/P_i\cong k[x_1, \ldots , x_{i-1}], which is clearly an integral domain). Then for any n, we can make a prime chain P_1\supsetneq P_2\supsetneq \cdots \supsetneq P_{n+1}. Thus ht(P_1)=\infty.

Now for the actual definition we want to work with. I’ll denote the Krull dimension simply by “dim” rather than “Krulldim”. Then we define: dim(R)=\sup\{ht(P) : P\in Spec(R)\}. So our quick example here is that for integral domains, dim(R)=0 if and only if R is a field.
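One more quick example (my addition): \dim(\mathbb{Z})=1, since the prime ideals are \{0\} and the (p) for p prime, so the longest prime chains are (p)\supsetneq \{0\}. The same reasoning gives \dim(k[x])=1 for k a field.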

My goal for the day is to characterize all Noetherian rings of dimension 0. The claim is that for a Noetherian ring R, dim(R)=0 if and only if every finitely generated R-module M has a composition series. First suppose dim(R)=0. Since R is Noetherian, there are only finitely many minimal prime ideals. Since dim(R)=0, every prime ideal is minimal, and hence there are only finitely many; call them P_1, \ldots, P_n.

Let’s look at the nilradical: \sqrt{R}=\cap P_i. Since R is Noetherian, the nilradical is nilpotent, so there is some m such that (\sqrt{R})^m=\{0\}. So we define N=P_1\cdots P_n\subset P_1\cap\cdots\cap P_n=\sqrt{R}, and then N^m=\{0\}.

Let M be a finitely generated R-module. Then we have the chain M\supset P_1 M\supset P_1P_2M\supset\cdots\supset NM. Now note that each factor \frac{P_1\cdots P_{i-1}M}{P_1\cdots P_i M} is an R/P_i-module. But dim(R)=0 makes P_i maximal, so R/P_i is a field, and the factor is a vector space. Since M is finitely generated, each factor is finite-dimensional, so we can refine the chain so that all factors are simple.

Now we do the same trick on each of the chains N^jM\supset P_1N^jM\supset\cdots \supset N^{j+1}M for j=1, \ldots, m-1. Since the last of these ends at N^mM=\{0\}, splicing the refined chains together gives a composition series for M.

For the converse, suppose every finitely generated R-module has a composition series. Dimension zero is equivalent to R having no pair of prime ideals with P\supsetneq Q, so suppose such a pair exists. Let’s pass to the quotient R/Q and reinterpret our hypothesis: now (renaming R/Q to R) we have an integral domain R with a nonzero prime ideal and a composition series R\supset I_1\supset \cdots \supset I_d\neq \{0\}. So I_d is a minimal nonzero ideal. Let x\in I_d be any nonzero element. Then since xI_d\subset I_d and xI_d\neq \{0\} (we’re in a domain), by minimality we have xI_d=I_d. So there is a y\in I_d such that xy=x, i.e. y=1\in I_d, and hence I_d=R. But the chain was strictly decreasing, so this is only possible if the series is just R\supset\{0\}, i.e. R has no proper nonzero ideals. Thus R is a field, which contradicts our having a nonzero prime ideal.

Well, I think that is enough fun for one day. I may post again tomorrow, since my final is Wed.



Generalized HT90

I officially promise this is my last post on Hilbert’s Theorem 90, but because of that it is going to go really fast for those who have not seen group cohomology (it is really cool, so I couldn’t pass it up).

An abelian group A is a G-module (G a group) if for all \sigma\in G and a\in A there is a unique element \sigma(a)\in A satisfying two conditions: \sigma(a+b)=\sigma(a)+\sigma(b), and (\sigma\tau)(a)=\sigma(\tau(a)) for all \sigma, \tau\in G and a,b\in A.

Just check any algebra text for more information on modules.

Now define an n-cochain of G over A to be a function of n variables from G into A; if n=0, it is just an element of A. C^n(G, A) is the set of all n-cochains, and can be made into a group by the operation (f+g)(\sigma_1, \ldots, \sigma_n)=f(\sigma_1, \ldots, \sigma_n)+g(\sigma_1, \ldots, \sigma_n).

We can also get from C^n(G, A) to C^{n+1}(G, A) (which is what people who know about cohomology were hoping for), by the function \delta:

(\delta f)(\sigma_1, \ldots, \sigma_{n+1})=\sigma_1(f(\sigma_2, \ldots, \sigma_{n+1}))+\sum_{i=1}^n(-1)^i f(\sigma_1, \ldots, \sigma_i\sigma_{i+1}, \ldots, \sigma_{n+1})+(-1)^{n+1}f(\sigma_1, \ldots, \sigma_n).

OK. That looks bad, but really in some sense it is the natural choice. I’ll leave it to you to check that this is both a homomorphism and that \delta\delta f=0 (i.e. we have a chain complex).

Now if we label them \delta_0: A\to C^1(G, A), \delta_1: C^1(G, A)\to C^2(G, A), and in general \delta_n: C^n(G, A)\to C^{n+1}(G, A), then we form \displaystyle H^n(G, A)=\frac{\ker\delta_n}{\mathrm{im}\,\delta_{n-1}}=\frac{Z^n(G, A)}{B^n(G, A)}. We call the elements of Z^n(G, A) the n-cocycles and the elements of B^n(G, A) the n-coboundaries.
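As a quick sanity check on the definitions (my aside): in degree 0 we have (\delta_0 a)(\sigma)=\sigma(a)-a, so H^0(G, A)=\ker\delta_0=A^G, the subgroup of G-invariant elements of A.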

So if you don’t like that, we can scratch it now, since in HT90 we only care about H^1(G, A), so let’s take a closer look at that. We can completely classify the elements of Z^1(G, A): for any f\in Z^1(G, A) and any \sigma, \tau\in G we need (\delta f)(\sigma, \tau)=\sigma(f(\tau))-f(\sigma\tau)+f(\sigma)=0, which is to say that f(\sigma\tau)=\sigma(f(\tau))+f(\sigma).

Now let’s classify what B^1(G,A) looks like. Well, if g\in B^1(G, A), then g=\delta a for some a\in A=C^0(G,A), so g(\sigma)=(\delta a)(\sigma)=\sigma(a)-a. Well, I think you might be able to see the previous formulation of the theorem coming from unravelling these definitions.
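One tiny example to make this concrete (my addition): if G acts trivially on A, the cocycle condition reads f(\sigma\tau)=f(\tau)+f(\sigma), so Z^1(G, A)=Hom(G, A), while every coboundary \sigma(a)-a vanishes. Hence H^1(G, A)=Hom(G, A) for trivial actions.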

Theorem statement: If K/F is a finite Galois extension and G=Gal(K/F), then H^1(G, K^\times) is trivial. (Note that K^\times is a multiplicative G-module, so the cocycle condition now reads a(\sigma\tau)=a(\sigma)\cdot\sigma(a(\tau)).)

Proof: Let a be a cocycle. Define \alpha: K\to K by c\mapsto \sum_{\sigma\in G}a(\sigma)\sigma(c). As in the last post, \alpha is not identically 0 by linear independence of characters. So let c\in K be such that \alpha(c)\neq 0, and set b=\alpha(c).

Then for any \tau\in G we have

\displaystyle\tau(b)=\sum_{\sigma\in G}\tau(a(\sigma)\sigma(c))=\sum_{\sigma\in G}\tau(a(\sigma))(\tau\sigma)(c).

Now we use that a is a cocycle: a(\tau\sigma)=a(\tau)\cdot\tau(a(\sigma)), i.e. \tau(a(\sigma))=a(\tau)^{-1}a(\tau\sigma). So we can continue the equality as

\displaystyle\tau(b)=\sum_{\sigma\in G}a(\tau)^{-1}a(\tau\sigma)(\tau\sigma)(c)=a(\tau)^{-1}\sum_{\sigma\in G}a(\tau\sigma)(\tau\sigma)(c)=a(\tau)^{-1}b,

since \tau\sigma runs over all of G as \sigma does.

Aha, so a(\tau)=b\tau(b)^{-1}, which is a coboundary (in multiplicative notation it is \delta applied to b^{-1})! Thus every cocycle is a coboundary, so the quotient H^1(G, K^\times) is trivial.

Test your understanding by now trying to prove the other formulation as a corollary to this (remember you assume that G is cyclic in that version and have to relate it back to the norm).



QFT Take 2

Let’s actually try to make some progress on QFT today. There are three parts to a minimal definition. First, you need a module D over a *-commutative ring. So, to get a few definitions on the table: a *-ring R is pretty easy. You just have a ring with an involutive antiautomorphism *: R\to R. This means that (i) (a+b)^*=a^*+b^*, (ii) (ab)^*=b^*a^*, (iii) 1^*=1, and (iv) (x^*)^*=x. So if you’ve seen rings, this shouldn’t be out of grasp. An example would be the complex numbers with complex conjugation. A module is basically a generalization of a vector space.
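Another standard example of a *-ring, to have a noncommutative one on hand (my addition): the matrix ring M_n(\mathbb{C}) with * given by conjugate transpose, A^*=\bar{A}^T. The familiar identity (AB)^*=B^*A^* is exactly why the definition asks for an antiautomorphism rather than an automorphism.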

The second part is a Hermitian inner product (\cdot, \cdot): D\times D\to \mathbb{R}. Recall that Hermitian just means self-adjoint: you can think of this as saying that when you express the operator as a matrix, the conjugate transpose is itself again. Lots of operators satisfy this, like the momentum operator -i\,d/dx from quantum mechanics. Essentially, the Hermitian property is in place because observable quantities correspond to Hermitian operators.

The last part is that we need a *-algebra, \mathcal{A}, of operators acting on D. Let’s jump out to the bigger picture for a second. The details here are really the details of getting around a problem. What we want is basic: a Hilbert space H and a field satisfying the axioms we want, where the field \phi assigns to each point x\in M an operator \phi(x) on H. The problem we are skirting is how to make sense of the product \phi(x)\phi(y) when x and y get arbitrarily close (an uncertainty problem, as you might guess).

So we do the standard trick of “smoothing out the singularities.” Instead of points we will use bump functions. A bump function on M is just a smooth function with compact support. We then define the smeared operator \phi(f)=\int \phi(x)f(x)d^nx. Here is the connection to the big picture we were skirting around: \mathcal{A} is the algebra generated by the operators \phi(f).

Some examples will be instructive. Let G be a group and D an orthogonal representation; then \mathcal{A} is the group ring of G, with “*” given by g^*=g^{-1}. Or we could let L be a Lie algebra acting on a vector space D with an invariant symmetric inner product; the algebra can be \mathcal{A}=U(L) with a^*=a. Or we could take \mathcal{A} to be any C^*-algebra or von Neumann algebra and D any Hilbert space carrying a *-representation.

These three examples should make us notice something: they are not the things physicists typically work with (unless they are doing mathematical foundations of QFT or something similar). So despite having a definition in place, we might need to make some restrictions, or connect the definition to the computations being made down the road. These three examples are QFTs in the sense of the definition, but that is sort of weird, since we usually speak of “QFT” and not “a QFT” or “this QFT,” as if there were only one.
