Bayesian vs Frequentist Statistics


I was tempted for Easter to do an analysis of the Resurrection narratives in some of the Gospels, since that is possibly even more fascinating (the differences are starker) than our analysis of the Passion narratives. But we’ll return to the Bayesian stuff. I’m not sure what more to add after this discussion, so this topic might end; continually presenting examples of Bayesian methods will get boring.

Essentially everything in today’s post comes from Chapter 8 of Nate Silver’s book The Signal and the Noise (again from memory, so hopefully I don’t make any major mistakes; if I do, don’t assume they are in the book). I should say the book is pretty good, but a large part of it is just examples of models, which might be cool if you haven’t been thinking about this for a while but feels repetitive. I still recommend it if you have an interest in how Bayesian models are used in the real world.

Today’s topic is an explanation of essentially the only real rival to Bayesianism: a method called “frequentism,” sometimes referred to as “classical statistics.” It is what you would learn in a first undergraduate course in statistics, and although it still seems to be the default method in most fields of study, Bayesian methods have been surging recently and may soon replace frequentist ones.

It turns out that frequentist methods are newer and in some sense an attempt to replace some of the wishy-washy guesswork of Bayesianism. Recall that Bayesianism requires us to form a prior probability. To apply Bayes’ theorem we need to assign a probability based on … well … prior knowledge? In some fields like history this isn’t so weird: you look at similar cases that have already been examined to get the number. It is a little more awkward in science, because when calculating P(C|E), the probability a conjecture is true given the evidence, you need P(C), your best guess at the probability the conjecture is true before seeing the evidence. It feels circular, as if you could rig the prior to steer your experiment toward a predetermined conclusion.
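To make that concrete, here is a toy illustration of my own (not from the book, and with made-up numbers) of how strongly the posterior P(C|E) depends on the prior P(C) you feed into Bayes’ theorem:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem: P(C|E) = P(E|C)P(C) / [P(E|C)P(C) + P(E|~C)P(~C)]."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Same evidence every time (P(E|C) = 0.8, P(E|~C) = 0.2); only the prior changes.
for prior in (0.01, 0.1, 0.5):
    print(f"P(C) = {prior:.2f}  ->  P(C|E) = {posterior(prior, 0.8, 0.2):.3f}")
```

The evidence is identical in all three runs, yet the posterior swings from roughly 0.04 to 0.8. That prior is exactly the knob the frequentist is worried about.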

The frequentist will argue that assigning this probability involves all sorts of bias and subjectivity on the part of the person doing the analysis. This argument has been going in circles for years, but we’ve already addressed it: the Bayesian can just use probabilities with a solid rationale that even opponents of the conclusion will agree to, or can work with a whole interval of possible priors. It is true that the frequentist has a point, though. The bias and subjectivity do exist, and an honest Bayesian admits this and takes precautions against it.

The frequentist method involves a rather simple idea (one that gets complicated fast, as anyone who has taken such a course knows). The idea is that we shouldn’t stack the odds for a conclusion by subjectively assigning some prior. We should just take measurements. Then, only after objective statistical analysis, should we make any such judgments. The problem is that when we take measurements, we only have a small sample of everything, and we need a way to take this into account.

To illustrate, consider a poll to see whom people will vote for in an election. We’re only going to poll a small number of people compared to everyone in the country, but the idea is that if we use a large enough sample size we can assume it will roughly match the whole population. In other words, we can assume (if the sampling was truly random) that we haven’t accidentally gotten a patch of the population that will vote significantly differently from the rest of the country. The larger the sample size, the smaller the margin of error.
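As a rough sketch of that last claim (mine, not Silver’s), the usual normal-approximation formula for a poll’s margin of error at 95% confidence is about 1.96 · sqrt(p(1 − p)/n), which shrinks as the sample size n grows:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a proportion p_hat estimated from n respondents."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Worst case is p_hat = 0.5; watch how slowly the margin shrinks as n grows.
for n in (100, 400, 1600, 6400):
    print(f"n = {n:5d}  ->  +/- {margin_of_error(0.5, n) * 100:.1f} points")
```

Quadrupling the sample only halves the margin of error, which is why real polls rarely get much more precise than a couple of percentage points.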

But built into this assumption we already have several problems. First, hidden behind the scenes is the assumption that the voting population falls into some nice distribution for our model (for example, a normal distribution). This is actually a major problem, because depending on what you are modelling there are different standards for what type of distribution to use. Moreover, we assume the sampling was random and falls into this distribution. These are two assumptions that usually can’t be well justified (at least until well after the fact, when we see whether the predictions turned out to be correct).

After that, we can figure out what our expected margin of error will be. This is exactly what we see in real political polling: the results are reported along with some margin of error. If you’ve taken statistics you’ve probably spent lots of time calculating these so-called “confidence intervals.” There are also numerical measures, such as p-values, that tell you how significant or trustworthy the statistic and the interval are.
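Continuing the polling sketch (invented numbers, same normal approximation), a confidence interval is just the poll result plus or minus that margin of error:

```python
import math

def confidence_interval(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion."""
    p_hat = successes / n
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe, p_hat + moe

# Say 520 of 1000 respondents favor candidate A.
low, high = confidence_interval(520, 1000)
print(f"Estimate 0.520, 95% CI ({low:.3f}, {high:.3f})")
```

Note the strictly frequentist reading of that interval: it is a statement about the procedure (intervals built this way capture the true proportion about 95% of the time under repeated sampling), not a statement that there is a 95% probability the true value lies in this particular interval. That latter, more natural-sounding claim is the Bayesian one.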

Richard Carrier seems to argue in Proving History that there isn’t really a big difference between these two viewpoints: Bayesianism is just epistemic frequentism, and each just hides the bias and subjectivity in a different place. I’d argue that Bayesian methods are superior for some simple reasons. First, the subjectivity can be quantified and put on the table for everyone to see and make their own judgments about. Second, Bayesian methods allow you to consistently update on new evidence and take into account that more extraordinary claims require more extraordinary evidence. Lastly, you are less likely to commit standard fallacies such as the correlation-implies-causation fallacy.
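To illustrate the updating point with a toy example of my own (the likelihood numbers are invented), each new piece of evidence is folded in by reusing the previous posterior as the next prior:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update: returns P(claim | new piece of evidence)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

belief = 0.05  # skeptical prior for an extraordinary claim
evidence = [(0.9, 0.3), (0.8, 0.4), (0.95, 0.1)]  # (P(E|claim), P(E|not claim)) per observation
for p_true, p_false in evidence:
    belief = update(belief, p_true, p_false)
    print(f"updated belief: {belief:.3f}")
```

Starting from a skeptical prior, it takes several pieces of reasonably strong evidence before the belief gets high, which is the “extraordinary claims require extraordinary evidence” behavior falling out of the arithmetic.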

For a funny (and, in my opinion, fairly accurate) summary that clearly advocates for Bayesian methods, see this:

[Image: Bayesian vs Frequentist]
