## Blogging Birthday

This is just a quick post to celebrate my blog’s fifth birthday (aka blogiversary).

I thought this image was appropriate (to avoid getting sued: if you like it, or are in the market for other birthday cards, I found it in the greeting cards section at Cafepress). Let this be a warning to any potential new bloggers out there: if you stick it out for five years, you'll be embarrassed by your earlier posts. I'm sure that in five more years I'll be embarrassed by my current posts. This is good. If you go five years and aren't embarrassed, then what have you been doing for those five years? It's OK, but be warned.

Quick statistics: I've had 137,185 views and 561 comments. Far and away my most popular post is still the one analyzing Lost in the Funhouse by Barth. I feel bad for literature professors across the country who have to keep reading rehashes of my post every time they assign the book or story (some students even comment that this is exactly what they're doing).

## One small part of my research process

I don’t usually do this, but I’ll piggyback off a question from another blog, since it’s kind of fun and definitely useful to see how other people approach things. The question came from the AMS Grad Student Blog. Here’s a glimpse into my research process:

This was also an experiment to see if I could publish directly from a document created in Google Drive (which you can!). What’s your research process like?

## The Myth of a Close Election

Before presenting the argument for why the current US presidential race is not as close as it seems, let’s first get out of the way some of the major reasons why it seems close. First, there is the bias of the media. What is more exciting: watching a photo-finish win by .01 seconds, or watching someone take an early lead and never give up ground?

All of our news sources have a vested financial interest in portraying the presidential race as close. It keeps people coming to their webpages or turning on the news to see how things have changed. The myth was created because it sells. But this is far, far from the only reason someone would want to keep this myth alive.

If your candidate is the one with the worse odds, then you will want to portray the race as close to keep up hope. No one wants to admit (or even believe) that they are going to lose. This is about as old and established a cognitive bias as you can find. What if your candidate is the one that is ahead? Well, there is a very good reason to portray the race as close in this case as well. If everyone who plans to vote thinks it is an easy win, turnout may go down, causing a sudden upset that shouldn’t have happened.

This gives good reason for even non-news sources to keep up the impression that it is a close race. In fact, I can’t think of any reason someone would not have some interest in skewing the numbers to make the race seem closer than it is. In our weird electoral college system, it is actually quite easy to keep this myth up. You just give standard national polling data, and all of a sudden it looks like a dead-even match. One day one candidate is up, the next day the other. Back and forth it goes. How exciting!

It turns out that the most careful analysis out there, Nate Silver’s at FiveThirtyEight, as of yesterday gives Obama a 79% chance of winning and Romney a 21% chance. Before discussing what this means, I’ll first point out that this is a true professional statistical analysis. He uses tons of polls in all of the states (sometimes 10 for a single state!) and not just one that suits his purpose. He takes into account noise and how historically accurate the polls have been at different times leading up to the election. It is a fully developed statistical model (as opposed to places like Real Clear Politics, which take numbers straight from polls without filtering them through a model).

He has used his methods to predict sports and elections in the past and has an impeccable track record for accuracy. Now that that is out of the way, what do the numbers mean? Well, they mean what they say. A better tactic is to point out what they don’t mean. A 79% chance of a win is not a sure thing. In fact, people go to Vegas and play odds much, much (much, much) worse than 21% and win! Does this mean they had better odds than predicted? No. It means that sometimes you win even when you have a .0001% chance of winning.

This is statistics we’re talking about, so there is never any sort of guarantee. If Romney wins, will Nate Silver be wrong? No! And that is the crucial point. If the person with the 79% chance won every single time Silver gave 79/21 odds, that would actually prove the model incorrect. For the model to be well calibrated, the person with the 21% chance has to win about 21% of the times that he makes this 79/21 prediction.
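This calibration point is easy to see with a quick simulation (this is just an illustration of what a well-calibrated 79/21 forecast means, not anything resembling Silver's actual model):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def simulate(p_favorite=0.79, n_races=100_000):
    """Simulate many races where the favorite truly wins with
    probability p_favorite; return the underdog's observed win rate."""
    underdog_wins = sum(random.random() >= p_favorite for _ in range(n_races))
    return underdog_wins / n_races

rate = simulate()
print(f"Underdog win rate: {rate:.3f}")  # ~0.21
```

Out of 100,000 hypothetical races where the true odds are 79/21, the underdog wins about 21,000 of them. That is exactly what "accurate" means here: an upset is not evidence against the model.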

You can go to his blog and check out his methods for yourself, but his track record should give us some confidence that he knows what he’s doing. Now back to the original question: is the US presidential race close? Armed with these stats, the answer is subjective, and you can decide for yourself. To me it would be a lie to call it some sort of blowout, but by no stretch of the imagination are 4-1 odds close. I’m not a betting man, but I’d happily take 4-1 odds, and that says to me that it isn’t a very close race.

## Quick Update

I’ve updated the blog’s theme for those who actually visit the page itself. The old one was a year or two old and was causing issues because it was outdated.

## Mirror Symmetry

Well, I keep putting off writing a new post because I’m not sure what to do it on. I had an idea. I work with Calabi-Yau varieties a lot, so the term “mirror symmetry” inevitably appears all over the place. I’m mostly interested in arithmetic properties of Calabi-Yaus, where mirror symmetry doesn’t apply, so I know absolutely nothing about it. Since I’m curious what people mean when they use this term, I thought I’d do a series of posts trying to explain the main idea of mirror symmetry.

In fact, a week or so ago Matt Ballard (who graduated from my school the year I started grad school) posted a really nice introduction to the subject on the arXiv. This means I even have a nice reference to work with now. Here’s the problem: typing up something reasonable on the subject is going to be a major undertaking. I have very little old blog material that is relevant, so I’ll basically be starting from scratch. It will also be quite time consuming, since I know nothing about it.

This is my dilemma. I’d love to learn about it and blog about it, but I’m not sure it’s worth the time and effort it will take. I’m at the point of grad school where I probably shouldn’t go off on a long wild tangent just because I want to know what a term means, when it has basically nothing to do with my research. On the other hand, one of the purposes of this blog is to keep me doing things that aren’t directly related to my research, so that I maintain some sort of breadth.

I’m just throwing the idea out there for you all. What do you think? Do you have an interest in hearing about (homological) mirror symmetry? If there seems to be interest, then it is probably worth doing.

## Slowdown on Blogging

Recently I’ve been retyping probably a dozen of my past blog posts on heights and p-divisible groups as nLab articles. The original purpose of my blogging was to help myself learn things by explaining them to the internet. The things I was blogging about then were so basic that they existed in many, many forms all over the place, so it didn’t really matter where I typed them up.

Here’s the state of things right now. My blog doesn’t get much traffic, and it gets even less conversation. If my real purpose is to help myself learn, then I can do exactly what I’m doing on this blog over at the nLab instead. The difference is that the nLab gets more traffic, so it is a more useful place to publish. There is also the nForum, where I can ask questions and get more points of view about the things I’m writing, i.e. more conversation. Ultimately, that seems like it will be much more beneficial for my learning than this isolated blogging activity.

Lastly, I’m probably going to want to import lots of what I blog into the nLab anyway, which is what I’ve been finding myself doing for the past few days. It feels really annoying and wasteful to be redoing all these posts I’ve already done, so I’d just rather start there.

Here is what I plan to use the blog for (math-wise) from now on. If I have interesting examples or computations, I’ll probably put them here, since those don’t fit as easily into the nLab. If I’m in the really, really early stages of learning something, I’ll probably tinker with the idea here until I can come up with better ways of writing it up for the nLab. This means I’ll probably continue the crystalline stuff here, since it is just a mess of information right now.

In summary, I’ll still be blogging, but much less frequently.

## Fun Statistics From WordPress

WordPress emailed this to me. I definitely didn’t post as much as I had hoped: only 26 posts for the entire year. I don’t know why I even try. Since February of 2008, my post on Lost in the Funhouse has been the top post every single week. I may as well turn this into a completely literary blog. Anyway, enjoy; hopefully later today I’ll do another Stacks post.

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

## Crunchy numbers

The average container ship can carry about 4,500 containers. This blog was viewed about 22,000 times in 2010. If each view were a shipping container, your blog would have filled about 5 fully loaded ships.

In 2010, there were 26 new posts, growing the total archive of this blog to 244 posts. There were 9 pictures uploaded, taking up a total of 244kb. That’s about a picture per month.

The busiest day of the year was March 3rd with 189 views. The most popular post that day was Lost in the Funhouse.

## Where did they come from?

The top referring sites in 2010 were terrytao.wordpress.com, wiki.henryfarrell.net, amathew.wordpress.com, en.wordpress.com, and onlinedegree.net.

Some visitors came searching, mostly for lost in the funhouse, lost in the funhouse analysis, john barth lost in the funhouse analysis, normal basis theorem, and galois descent.

## Attractions in 2010

These are the posts and pages that got the most views in 2010.

1. Lost in the Funhouse (February 2009)
2. Me? (April 2008)
3. Measure Decomposition Theorems (July 2008)
4. The Normal Basis Theorem (August 2009)
5. The Tangent Bundle is Orientable (September 2009)

## What to do next

A while ago I was talking mainly about algebra. I was mostly following what I was reading in Matsumura, but the chapters I’ve been reading recently cover material I’ve already treated in previous posts, so I didn’t want to reopen those topics. This has been one of the main causes of my hiatus. My guess is that this quarter will generally be a lot sparser in posts.

One idea of where to go while I wait for my algebra reading to get back to a point where I can start posting again is to start in on some of the basic constructions underlying modern algebraic geometry, like developing sheaves and schemes. This seems to have been covered in a few other blogs, though, so I’m hesitant.

The other thing that would be really neat, and I’m pretty sure has not been covered by any other blog, is to talk about spectral sequences. Right now I’m in a homological algebra class that is essentially devoted to them. I think they are a fascinating and brilliant construction. The main thing I’d like to do on this blog is to see where they come up in algebraic geometry, since that particular application is sure not to come up in class.

The problem is that it would be an incredibly massive undertaking. Just motivating and explaining the construction would take a long time. I would also really need to figure out a good way of doing diagrams in WordPress.

In any case, just thought I’d check in and see if anyone had opinions one way or another.

I’m planning on moving tomorrow (the person helping me may defer it to Sunday… but that’s another story). One possibility I’m contemplating is not getting internet at my new place until after prelim exams, to force myself not to be constantly distracted. So you may not hear from me for a while. On the other hand, I’d probably still periodically update from the library or a coffee shop or something.

## Representation Theory II

I warned that I know nothing about representation theory and that the point of these posts was to learn more. Well, I must sincerely thank Zygmund (hopefully I’ll receive useful comments like that one throughout the series). Although I’m not surprised by the functorial relation, neither of the books I’m using talks about it at all. So I dug a little deeper today to figure it out.

It isn’t as straightforward as I thought (we aren’t working in the category of groups), but it also isn’t hard once you get your head in the right place. We want the representation to be a functor. But a representation is a homomorphism from a group to the invertible linear transformations of a vector space. Thus, our domain category is a category with one object whose morphisms are the group elements. Composition of these morphisms is naturally the group multiplication (note that all morphisms are invertible, since all group elements have inverses). Our target is the category of vector spaces over k ($Vect$).

Now our representation is a functor $\phi : G\to Vect$. We specify which vector space we land in by sending our one object there: $\phi(*)=V$. The other piece of data in the functor (the only data we cared about in the last post) is where morphisms are sent, i.e. group elements (the morphisms of our one-object category) get sent to invertible linear maps of V. Perfect. Representations are now packaged in the language of categories, which gives us something to work with.

Alright, I’ll now say a few words about what I actually promised to post on today. Maybe I’ll come back and rephrase things categorically as much as possible, but it definitely strays from both of the books I’ve been reading.

A quick observation: Any k-representation $\phi: G\to GL(V)$ equips V with the structure of a left kG-module, and any kG-module V determines a k-representation $\phi: G\to GL(V)$.

Proof of observation: Let $\phi: G\to GL(V)$ be a representation, and for ease write $\phi(g)=\phi_g$. We have an action $kG\times V\to V$ given by $\displaystyle \left(\sum_{g\in G}a_g g\right)v=\sum_{g\in G}a_g\phi_g(v)$, so with this multiplication V is a left kG-module.

Now let V be a left kG-module. Then $v\mapsto gv$ is a linear transformation of V with inverse $v\mapsto g^{-1}v$, so call this transformation $T_g\in GL(V)$. Then clearly $g\mapsto T_g$ is a k-representation.
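To make the group-algebra action concrete, here is a toy computation (my own illustration, not from any of the books): take $G=\mathbb{Z}/2=\{e,s\}$ acting on $k^2$ with $s$ swapping coordinates, represent an element of $kG$ as a dict of coefficients, and act by the formula $\left(\sum a_g g\right)v=\sum a_g\phi_g(v)$ from the proof above.

```python
def phi(g):
    """The representation of Z/2 on k^2: e acts as the identity,
    s swaps the two coordinates."""
    if g == "e":
        return lambda v: v
    if g == "s":
        return lambda v: (v[1], v[0])
    raise ValueError(f"unknown group element: {g}")

def act(x, v):
    """Action of a group-algebra element x = {g: a_g} on a vector v,
    computed as (sum a_g g) . v = sum a_g * phi_g(v)."""
    result = (0.0, 0.0)
    for g, a in x.items():
        w = phi(g)(v)
        result = (result[0] + a * w[0], result[1] + a * w[1])
    return result

# (2e + 3s) . (1, 0) = 2*(1, 0) + 3*(0, 1) = (2, 3)
print(act({"e": 2, "s": 3}, (1, 0)))
```

The point is just that the dict `{"e": 2, "s": 3}` is the formal sum $2e+3s\in kG$, and `act` is exactly the module multiplication from the observation.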

For two more quick examples from earlier posts: we clearly have the trivial kG-module determined by the trivial representation. A more interesting example is to let $G=Gal(E/k)$, where the field extension $E/k$ is Galois. Then E is a kG-module via $\displaystyle \left(\sum_{\sigma\in G}a_\sigma \sigma\right)(e)=\sum_{\sigma\in G} a_\sigma \sigma(e)$.

Next time we’ll want to figure out when two kG-modules determined by representations are isomorphic.