Now we return to decision theory. It is also a return to the thing that first made me interested in learning something about statistics a year or two ago. I had heard about John Ioannidis’ shocking article “Why Most Published Research Findings Are False” and started to investigate. To me, statistics was some settled thing that you hit your data with after doing an experiment. It told you whether or not your findings were real and how confident you could be in them.
Moreover, I believed that as long as you followed the prescriptions taught to you, you couldn’t mess it up. It was foolproof. Just look around and try to find one thing that science hasn’t touched. The scientific method has clearly led us to discover something about the world. That’s why stats seemed like an uninteresting part of math: people seemed to have figured it all out. Then I was shocked to find that article. I started to learn about all the fallacies and methodological problems that I’ve been pointing out over the past few months.
One of the main sources of these difficulties, particularly in science, is classical null hypothesis significance testing (NHST). One way to try to mitigate these difficulties is to rephrase our hypothesis test as a Bayesian decision theory problem. This is not the only Bayesian reformulation (Kruschke’s MCMC approach is pretty cool, and I might get to it someday), but it fits in as a nice example of the use of decision theory outside of the silly gambling problems I’ve been using.
Let’s start by seeing how to test a point null hypothesis. Think about the biased coin example. We want to test $H_0: \theta = 1/2$, i.e. is the coin unbiased? This is obviously a ridiculous type of hypothesis test, because in real life the term “unbiased” encompasses a range of values around $1/2$ within which we can’t tell the difference. This is actually the case in most scientific situations as well (there is only so much precision your instruments can achieve), and often scientists incorrectly use a point NHST when there should be a ROPE (region of practical equivalence).
Our first step is to take the previous paragraph’s discussion and cheat a little. Suppose we want to test $H_0: \theta = \theta_0$. The Bayesian way to do this would work out of the box using a ROPE. Unfortunately, if we want continuous densities for the probabilities, then we will always reject our null hypothesis. This is because a point has probability zero. The cheat is to just convert the continuous prior, $\pi(\theta)$, to a piecewise defined prior where we assign a point mass of probability $\pi_0 = \int_{\theta_0 - \varepsilon}^{\theta_0 + \varepsilon} \pi(\theta)\,d\theta$ to $\theta = \theta_0$ and the renormalized old prior otherwise. This is merely saying that we make a starting assumption that $\theta$ has true value $\theta_0$ with probability $\pi_0$, and hence no actual integral needs to be calculated; the integral is just an intuitive justification for the shape of the new prior. If this makes you uncomfortable, then use the uninformed prior of $\theta_0$ having probability $1/2$ and the alternative having a uniform distribution of mass $1/2$.
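To make the cheat concrete for the coin, here is a minimal sketch using exactly that fallback prior: a point mass of $1/2$ on $\theta = 1/2$ and a uniform prior on $\theta$ under the alternative. The function name and the example data are mine, purely for illustration.

```python
from math import comb

def posterior_null_coin(k, n_flips, pi0=0.5):
    """P(theta = 1/2 | k heads in n_flips) under a point-mass-plus-uniform prior.

    pi0 is the prior point mass on the exact null theta = 1/2; under the
    alternative, theta is uniform on [0, 1].
    """
    # Likelihood of the observed data under the exact null theta = 1/2.
    like_null = comb(n_flips, k) * 0.5 ** n_flips
    # Marginal likelihood under the alternative: integrating the binomial
    # likelihood against a uniform prior on [0, 1] gives exactly 1/(n_flips + 1).
    marginal_alt = 1.0 / (n_flips + 1)
    numer = pi0 * like_null
    return numer / (numer + (1 - pi0) * marginal_alt)

# Example: 60 heads in 100 flips.
print(posterior_null_coin(60, 100))  # about 0.52, so the fair coin is far from ruled out
```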
Let’s recap what we are trying to do. We have two hypotheses: the null, which is $H_0: \theta = \theta_0$, and the alternative, $H_1: \theta \neq \theta_0$. This type of NHST came up in the last post, where we wanted to experimentally test whether or not the acceleration due to gravity was $g = 9.8\ \mathrm{m/s^2}$. Our process should be clear if you’ve been following this sequence of posts. We just use our data to calculate the posterior probabilities $P(H_0 \mid x)$ and $P(H_1 \mid x)$. We must decide between these two by seeing which one has less risk (and that risk will come from a loss function which appropriately penalizes falsely accepting/rejecting each one).
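As a sketch of that decision step (the function name and the toy loss values are mine, and “risk” here is just posterior expected loss): compare the expected loss of accepting $H_0$ against the expected loss of rejecting it, and take the cheaper action.

```python
def choose_hypothesis(p_h0, loss_false_reject=1.0, loss_false_accept=1.0):
    """Pick the action with the smaller posterior expected loss.

    p_h0:              posterior probability of the null, P(H0 | data)
    loss_false_reject: cost of rejecting H0 when H0 is actually true
    loss_false_accept: cost of accepting H0 when the alternative is true
    Correct decisions are assumed to cost nothing.
    """
    p_h1 = 1.0 - p_h0
    risk_accept = loss_false_accept * p_h1  # accept H0, but H1 turns out to be true
    risk_reject = loss_false_reject * p_h0  # reject H0, but H0 turns out to be true
    return "accept H0" if risk_accept <= risk_reject else "reject H0"

print(choose_hypothesis(0.3))                           # reject H0
print(choose_hypothesis(0.3, loss_false_reject=10.0))   # accept H0: false rejection is too costly
```

With a symmetric loss this reduces to picking the more probable hypothesis; asymmetric losses are where the flexibility comes in.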
This approach is really nice, because depending on your situation you will want to penalize differently. If you are testing a drug for effectiveness, then it is better to harshly penalize falsely claiming a placebo to be effective (a false positive, or Type I error). If you are testing whether or not someone has a fatal disease, then you want to harshly penalize falsely claiming they have it, which would send them off to undergo dangerous, expensive, and unnecessary treatments. Maybe these aren’t the best examples, but you see how having a flexible system could be a lot more useful than blindly running an NHST.
Rather than going through some made-up example with fake, randomly generated data as I’ve been doing, let’s examine some differences at the theoretical level when we assume everything is normal. Suppose our data is a sample of $n$ points from a normal distribution. Any book on Bayesian statistics will have the details on working this out, so I’ll just get to the punch line.
If we denote the marginal density of the data under the alternative by $m(x)$, then the posterior probability of $H_0$ is given by

$P(H_0 \mid x) = \left(1 + \dfrac{1-\pi_0}{\pi_0}\cdot \dfrac{m(x)}{f(x \mid \theta_0)}\right)^{-1}.$

In the normal distribution case (we assume the prior has standard deviation $\tau$ and the data has standard deviation $\sigma$, and both the prior and the null are centered at $\theta_0$) we get something much more specific:

$P(H_0 \mid x) = \left(1 + \dfrac{1-\pi_0}{\pi_0}\left(1 + \dfrac{n\tau^2}{\sigma^2}\right)^{-1/2}\exp\!\left(\dfrac{z^2}{2}\cdot \dfrac{n\tau^2/\sigma^2}{1 + n\tau^2/\sigma^2}\right)\right)^{-1},$
where $z = \sqrt{n}\,(\overline{x} - \theta_0)/\sigma$. This term actually appears in classical NHST as well: it is the usual $z$-statistic. Let’s look at the differences. For the purpose of getting some numbers down, let’s assume $\pi_0 = 1/2$ and $\tau = \sigma$. In a two-tailed test, let’s assume that we observe a $z = 2.576$ and hence would very, very strongly and confidently reject $H_0$. This corresponds to a $p$-value of $0.01$. In this case, if $n$ is small, i.e. in the 10–100 range, then the posterior probability of $H_0$ is around $0.14$ to $0.27$. This means that we would likely want to reject $H_0$, because it is quite a bit more unlikely than $H_1$ (this will of course depend on the specifics of our loss function).
Shockingly, if $n$ is large, we lose a lot of confidence. If $n = 1000$, then the posterior probability of $H_0$ is about $0.53$. Whoops. The Bayesian approach says that $H_0$ is actually more likely to be true than $H_1$, but our NHST gives us 99% confidence for rejecting it (i.e. the usual, if sloppy, reading that there is a 99% chance our observed data were not a fluke and the effect that causes us to reject $H_0$ is real).
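Those numbers are easy to reproduce from the formula above. Here is a minimal sketch (the function name is mine); the only inputs are the $z$-statistic and the sample size, with $\pi_0 = 1/2$ and $\tau = \sigma$ baked in as above.

```python
from math import exp

def posterior_null_normal(z, n, pi0=0.5):
    """Posterior probability of H0: theta = theta0 for normal data.

    Assumes the setup above: the prior standard deviation tau equals the data
    standard deviation sigma, a point mass pi0 sits on theta0, and z is the
    usual statistic sqrt(n) * (xbar - theta0) / sigma.
    """
    # Ratio m(x) / f(x | theta0) from the formula in the text, with tau = sigma.
    ratio = (1 + n) ** -0.5 * exp(z ** 2 * n / (2 * (1 + n)))
    return 1.0 / (1.0 + (1 - pi0) / pi0 * ratio)

# Same z = 2.576 (two-tailed p = 0.01) at different sample sizes.
for n in (10, 100, 1000):
    print(n, round(posterior_null_normal(2.576, n), 2))
# Prints roughly 0.14, 0.27, and 0.53: at n = 1000 the null is the *more* likely hypothesis.
```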
As we see, by working with the Bayesian framework, we get posterior probabilities for how likely $H_0$ and $H_1$ are given our observations of the data. This allows us to do a suitable analysis. The classical framework feels very limited, because even when we get extreme $p$-values that give us lots of confidence, we could accidentally be overlooking something that would be obvious if we worked directly with how likely each hypothesis is to be true.
To end this post, I’ll just reiterate that careful scientists are completely aware that a $p$-value is not to be interpreted as a probability against $H_0$. One can certainly apply classical methods and end up with a solid analysis. On the other hand, this sloppiness, or, less generously, this misunderstanding of what is actually going on, is quite widespread.