Surviving Upper Division Math

It’s that time of the year. Classes are starting up. You’re nervous and excited to be taking some of your first “real” math classes called things like “Abstract Algebra” or “Real Analysis” or “Topology.”

It goes well for the first few weeks as the professor reviews some stuff and gets everyone on the same page. You do the homework and seem to be understanding.

Then, all of a sudden, you find yourself sitting there, watching an hour-long proof of a theorem you can’t even remember the statement of, using techniques you’ve never heard of.

You panic. Is this going to be on the test?

We’ve all been there.

I’ve been that teacher, I’m sad to say, where it’s perfectly clear in my head that the students are not supposed to regurgitate any of this. The proof is merely there for rigor and exposure to some ideas. It’s clear in my head which ideas are the key ones, though I maybe forgot to point it out carefully.

It’s a daunting situation for the best students in the class and a downright nightmare for the weaker ones.

Then it gets worse. Once your eyes glaze over that first time, it seems the class gets more and more abstract as the weeks go by, filled with more and more of these insanely long proofs and no examples to illuminate the ideas.

Here’s some advice for surviving these upper division math classes. I’m sure people told me this dozens of times, but I tended to ignore it. I only learned how effective it was when I got to grad school.

Disclaimer: Everyone is different. Do what works for you. This worked for me and may only end up frustrating someone with a different learning style.

Tip Summary: Examples, examples, examples!

I used to think examples were something given in a textbook to help me work the problems. They gave me a model of how to do things.

What I didn’t realize was that examples are how you’re going to remember everything: proofs, theorems, concepts, problems, and so on.

Every time you come to a major theorem, write out the converse and the inverse, switch some quantifiers, remove hypotheses, weaken hypotheses, strengthen conclusions, and whatever else you can think of to mess it up.

When you do this you’ll produce a bunch of propositions that are false! Now come up with examples to show they’re false (and get away from that textbook when you do this!). Maybe some rearrangement of the theorem turns out to be true, and so you can’t figure out a counterexample.

This is good, too! I cannot overstate how much you will drill into your memory by merely trying unsuccessfully to find a counterexample to a true statement. You’ll start to understand and see why it’s probably true, which will help you follow along with the proof.
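For instance (my example, not one from any particular course): take the calculus theorem “if f is differentiable at a point, then f is continuous there.” The converse says “if f is continuous at a point, then f is differentiable there.” Hunting for a counterexample turns up f(x) = |x|, which is continuous at x = 0 but not differentiable there. One tiny example like that, and you’ll never mix up which direction the theorem goes.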

As someone who has taught these classes, I assure you that a huge number of the problems students have on a test would be solved by doing this. Students try to memorize too much, and then when they get to a test, they start to question: was that a “for every” or a “there exists”? Does the theorem go this way or that?

You must make up your own examples, so when you have a question like that, the answer comes immediately. It’s so easy to forget the tiniest little hypothesis under pressure.

It’s astounding the number of times I’ve seen someone get to a point in a proof where it looks like everything is in place, but it’s not. Say you’re at a step where f: X\to Y is a continuous map of topological spaces, and X is connected. You realize you can finish the proof if Y is connected.

You “remember” this is a theorem from the book! You’re done!

Whoops. It turns out that f has to be surjective to make that true.

But now imagine, before the test, you read that theorem and you thought: what’s a counterexample if I remove the surjective hypothesis?

The example you came up with was so easy and took no time at all. It’s f: [0,1] \to \{0\} \cup \{1\} given by f(x) = 1. The map is continuous and the domain is connected, but the image is only \{1\}, while the codomain is disconnected. This example being in your head saves you from bombing that question.

If you just try to memorize the examples in the book or that the professor gives you, that’s just more memorization, and you could run into trouble. By going through the effort of making your own examples, you’ll have the confidence and understanding to do it again in a difficult situation.

A less talked-about benefit is that having a bunch of examples you understand gives you something concrete to think about when watching these proofs. So when the epsilons and deltas and neighborhoods of functions and uniform convergence and on and on start to make your eyes glaze over, you can picture the examples you’ve already constructed.

Instead of thinking in abstract generality, you can think: why does that step of the proof work or not work if f_n(x) = x^n?

Lastly, half the problems on undergraduate exams are going to be examples. So, if you already know them, you can spend all your time on the “harder” problems.

Other Tip: Partial credit is riskier than in lower division classes.

There’s this thing that a professor will never tell you, but it’s true: saying wrong things on a test is worse than saying nothing at all.

Let me add another disclaimer: being wrong and confused is soooo important to the process of learning math. You have to be unafraid to try things out on homework and quizzes and tests and in office hours and on your own.

Then you have to learn why you were wrong. When you’re wrong, make more examples!

Knowing a bunch of examples will make it almost impossible for you to say something wrong.

Here’s the thing. There comes a point every semester where the professor has to make a judgment call on how much you understand. If they know what they’re doing, they’ll wait until the final exam.

The student that spews out a bunch of stuff in the hopes of partial credit is likely to say something wrong. When we’re grading and see something wrong (like misremembering that theorem above), a red flag goes off: this student doesn’t understand that concept.

A student who writes nothing on a problem, or only a very small amount that is totally correct, will be seen as superior. This is because it’s okay to be unable to do a problem as long as you understand that you didn’t know how to do it. That’s a way to demonstrate your understanding. In other words: know what you don’t know.

Now, you shouldn’t be afraid to try, and this is why the first tip is so much more important than this other tip (and will often depend on the instructor/class).

And the best way to avoid using a “theorem” that’s “obviously wrong” is to test any theorem you quote against your arsenal of examples. As you practice this, it will become second nature and make all of these classes far, far easier.


The Three-Body Problem is Awesome

If you’ve been around this blog this year, then you know I fell into a bit of a slump. I was reading things, but nothing seemed to connect. In fact, it all seemed derivative, flat, and downright bad.

I’ve gotten out of that somehow, and I seem to have hit a period where most things I read (or movies I see) draw me in immediately and seem imaginative and fresh. I’m not planning on making a bunch of “…is Awesome” posts, but that’s where I’m at right now.

The Three-Body Problem by Liu Cixin is unlike anything I’ve read before. It’s pretty difficult to explain why, because I don’t want to spoil anything. Part of the fun of this trilogy is that there are M. Night Shyamalan type twists (things that make you rethink everything that happened before and make it all make sense).

When these types of plot twists happen once at the end of a book or movie, it feels like a cheap gimmick and can be off-putting. When they happened dozens of times across this book trilogy, they left me in awe of the structure of the narrative.

You’ll think you’ve finally got a grasp on things near the end of Book 2, and then you learn that you had no idea what was really going on. Like I said, there are dozens of these, and each time you think it can’t happen again, it somehow does.

The books are also filled with lots of neat ideas (even if not scientific). I can describe one that happens in the first book that won’t ruin any plot points.

The first idea is to notice what happens if you “unfold” a two-dimensional object into one dimension. Picture a solid square being pulled apart, strip by strip, into one long string.
Now, convince yourself this is the case whenever you take a higher dimensional object and “unfold” it into lower dimensions. You’ll always get an arbitrarily large new thing.
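A quick back-of-the-envelope version (my own arithmetic, not from the book): cut a unit square into n strips of width 1/n and lay them end to end. You get a shape of length n and width 1/n. As n \to \infty, the width vanishes and the length grows without bound, so the one-dimensional “string” can be made as long as you like.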

Next, he takes the concept of string theory seriously and says: what if a proton is actually a six-dimensional string curled up into compactified dimensions? Well, with super good technology and a full understanding of the physics, maybe the proton could be unfolded into an arbitrarily large three dimensional object.

In that case, we could store infinite amounts of information in it. We could even make it the best supercomputer AI ever made. Then we could fold it back up, and it would be roughly the size of a proton again. Just imagine what that could do!

The trilogy is truly an “ideas” book. It’s kind of fascinating that the ideas alone were strong enough to keep me wanting to read. The plot definitely waned at points, and character motivations were weak, but I didn’t really care.

To me, this book was essentially the opposite of Seveneves. Seveneves was a bunch of cool ideas that got tedious to read, because none of them served the plot. They were just Neal Stephenson spewing every idea he ever had into a plotless mess.

In contrast, every single cool idea in The Three-Body Problem series advances the plot in a meaningful way, and wow, there’s a ton of them.

I can’t recommend this trilogy enough if you’re into hard sci-fi (and my warning about the characters and dragging plot doesn’t completely scare you off).

Why It Works: Primer

A series in which I oversimplify one concept from a work of literature to make you a better writer.

Time travel sucks as a genre. It’s a bit of a pet peeve of mine. Yes, the whole genre.

Everyone knows about the grandfather paradox: if you travel back in time and kill your grandfather before your parent is conceived, there would be no you to go back in time and kill him.

But many people misinterpret the paradox as being about specific inconsistencies you can trace, when in fact it’s more of a chaos theory issue: the tiniest change of the past could radically change the “present” in unforeseeable ways.

This could happen if the person goes to the past and doesn’t even physically interact with anyone. Merely being seen by a person could alter their day, which leads to change after change after change…

Pretty much every book or movie I’ve seen with time travel has been terrible. It either ignores this problem, has the problem but tries to explain it in an unsatisfactory way, or it succeeds in explaining it but destroys the story in the process.

I honestly believe no one should ever write a time travel story, because it’s going to be a disaster no matter how hard you try. It’s not worth the effort. If I ran an SF magazine, my first rule of submissions would be: no time travel stories (rule 2 would be: no first-contact stories).

But then we wouldn’t have Primer, which actually kind of works. Let’s look at why.

The first thing is that when the main characters go back in time, it’s accidental. This is very important in not creating a causal loop. If your character has to go back in time to change something to save the world, then when they succeed, there will be no reason for them to go back. Hence, the paradoxical loop. Making the initial travel accidental is an interesting way to solve that problem.

The second thing is the physicality. There’s something strange about old-school time travel (think The Time Machine), where a person and/or machine materializes out of nowhere in the past. This doesn’t seem like a problem until you think about it a lot. If the machine wasn’t there in the past, what does it mean that it suddenly is? This is a much deeper philosophical issue than people give it credit for.

Primer brilliantly fixes this problem by making the machine a box that you have to turn on at the time you want to travel back to. So if you turn on the box right now, you can’t use it to travel back before that time. You get in the box at the future time and travel back without running into the physicality problem. You are physically in the box the whole time you’re traveling back.

Primer also handles the problem of interacting with the world: the characters isolate themselves so that they only interact with the world once. This means they aren’t changing the past. They’re living it out for the first time as they travel back.

But here’s the most important reason Primer succeeds. It is way too confusing to ever know if they’ve run into a paradox. It succeeds because there’s always more to figure out on subsequent viewings.

This sounds like cheating: make your story so confusing that no one knows if there’s a problem. It sounds like bad writing.

But let’s put it in comparison to every other time travel story where it’s immediately obvious that it all falls apart for philosophical and paradoxical reasons. I’d rather be left with the fun journey of trying to piece it together than a pile of unsatisfying nonsense.

If you’ve read a book that handles time travel well, I’d like to hear about it. Despite time travel being a pet peeve of mine, I still masochistically seek these stories out in hopes of being proved wrong someday.

Year of Short Fiction Part 6: Cosmicomics

I’ve sort of been dreading this one, but it’s the only thing remaining on my short fiction list that I own. Three years ago I wrote up my interpretation of Italo Calvino’s If on a winter’s night a traveler. Calvino can be strange and highly symbolic, but that book’s meaning jumped out at me with little effort. He had constructed a condensed history of critical theory through the story.

I had a vague familiarity with Cosmicomics, so I knew it would be harder. The stories all feature or are told by a character named Qfwfq. Each story starts with a tidbit of science such as:

Situated in the external zone of the Milky Way, the Sun takes about two hundred million years to make a complete revolution of the galaxy.

The story that follows is usually related to this somehow. The collection as a whole can be read as a symbolic retelling of the history of the universe. Calvino has taken real science and created mythologies that actually fit the data.

But it’s more than that. The stories often have a moral to them or a symbolic quality. They aren’t just fictionalizations of the events of the early universe. They’re almost parables like classic mythology. He’s achieved something odd with these.

The collection came out in 1965, fairly early in Calvino’s career, and well before the highly experimental If on a winter’s night a traveler. Calvino believed realism to be dead, and these stories mark his foray into a new type of fiction. He held on to pieces of realism but incorporated highly fantastical elements.

That’s enough of an overview; let’s dig into my favorite story to see these elements at work. “All at One Point” is a story about the Big Bang. More specifically, it’s about the time when the universe existed in a single point.

The beginning of the story comically plays with the idea that “we were all there.” On a scientific level, this is obviously true. Every atom in the universe existed in the singular point “before” the Big Bang. This includes every atom in our bodies, so we were physically there.

Calvino cleverly takes this statement to its extreme form and personifies us as actually existing at one point. The narrator, Qfwfq, says, “…having somebody unpleasant like Mr Pber^t Pber^t underfoot all the time is the most irritating thing.”

The story spends quite a bit of time in a Flatland-type thought experiment. Through humorous interactions, Calvino teases apart a lot of odd ideas about what it actually would mean to collapse the universe to a single point. For example, one couldn’t count how many people were there, because that would require pulling apart, no matter how slightly.

One family, the Z’zu, got labelled “immigrants.” This, of course, makes no sense, because there is no such thing as outside or inside the point. There is no such thing as before or after the point. Time only started at the Big Bang. So the family couldn’t have come from somewhere else.

The humor in this surface-level reading of the story is already worth it, and I won’t spoil any of the other awkward moments shared by these people from all occupying the same point.

Then the story turns its attention to Mrs Ph(i)Nk_o. She is one of the Z’zu, the family everyone hated. But she’s different. She is pure happiness and joy, and no one can say anything bad about her.

In an act of epic generosity, despite what people say about her family, she says:

Oh, if I only had some room, how I’d like to make some tagliatelle for you boys!

That’s what causes the Big Bang. The universe is made and expands and the Sun and planets and everything. It all happened because of a true act of selflessness and love. The phrasing of the final paragraph is very moving. I won’t quote it here, because I think it must be read in context to be appreciated.

The theme, when condensed to a pithy phrase, is something like “love can make universes.” It sounds really cliche and cheesy, and I think this is one of the things that makes these stories so brilliant. In the moment of reading, they feel profound and fresh.

Calvino’s use of vivid space imagery takes you on a grand journey. These cliche themes are the same that one can find in all the great ancient stories. They only feel tired when done in modern stories. By creating his own mythology, Calvino is able to revisit these sorts of themes without embarrassment.

For the Year of Short Fiction, I do want to return to the question of: why short? In other words, does great short fiction have a genuine uniqueness to it, or is it essentially the same as a novel, just shorter?

I think here we can definitively say that this type of writing can only work in short stories. Even expanding one of these to a novella length would be too much. These stories each revolve around a conceit and a theme. The conceit would grow tiresome if done for too long. I cannot imagine a novella of jokes about everyone existing on top of each other. They would lose their impact.

What excites me about Cosmicomics is that this is the first thing I’ve read this year that I feel this way about. I could imagine the novellas I’ve read and even Cthulhu working as full novels. They wouldn’t be as tightly written, but they’d still work. The very nature of Cosmicomics is that they are short stories. I’m glad to have finally found this.

I should stipulate, though, that one can read the entire collection of stories as a novel: an autobiography of Qfwfq’s life and fictionalization of the history of the universe. This is also an interesting and unique aspect, because almost every short story collection I can think of has separate, unrelated stories. This full collection should be read together to get the best experience.

Become a Patron!

I’ve come to a crossroads recently.

I write a blog post every week, and it takes time. The last one was close to 2,000 words and required reading a book. For the past three years I’ve been writing full time, so blogging has become a burden that cuts into that work with no monetary reward.

This blog is now over nine years old, and I’ve done nothing to monetize it. I think this is mostly a good thing. I do not and will not run any sort of advertisements. Even upon the release of my first book, I only did a brief mention and then no promotion afterward (and as far as I can tell, this converted to literally 0 sales).

I want this to be about the blog content. I do not want it to turn into some secret ad campaign to sell my work. I can think of many authors who have done this, and I ended up unsubscribing from them.

This brings me to the point. Putting this much work into something is not really sustainable anymore without some sort of support, so I’ve started a Patreon page. As you’ll see, my initial goal is quite modest and will barely cover the expenses to run my blog and website. But without anything, I will slowly phase out writing here regularly.

If this concept is new to you, Patreon is a site dedicated to supporting creative work. Patrons can pledge money to support people creating content they like. It can be as little as $1 a month (or as many podcasters say: “less than a coffee a month”), and in return, you not only help the site to keep running, you’ll receive bonus content as well.

Because of the scattered nature of my posts, I know a lot of you are probably scared to support, because you might not get content of interest for the month. Some of you like the math and tune out for the writing advice. Some of you like the critical analysis of philosophy and wish the articles on game mechanics didn’t exist.

For consistency, the vast majority of posts from now on will be ones that would be tagged “literature.” Anything else will appear once a month or less, and probably never two months in a row (i.e., six per year, spread out evenly). The “literature” tag includes, but is not limited to, most posts on philosophy that touch on narrative or language somehow, editing rules, writing advice, book reviews, story structure analysis, examining pro’s prose, movie reviews, and so on.

Again, the core original vision for the blog included game and music and math posts, but these will be intentionally fewer now. If you check the past few years, I basically already did this anyway, but this way you know what you’re signing up for.

I think people are drawn to my literature analysis because I’m in a unique position. This month I’m about to submit my fifth romance novel under a pseudonym. This is the “commercial” work I do for money, and it’s going reasonably well. I’ve come to understand the ins and outs of genre fiction through this experience, and it has been a valuable part of learning the craft of writing for me.

My main work under my real name is much more literary. I’ve put out one novel of literary fiction. Next month I’ll put out my second “real” novel, which is firmly in the fantasy genre but hopefully doesn’t give up high-quality prose.

These two opposite experiences have given me an eye for what makes story work and what makes prose work. All over this blog I’ve shown that I love experimental writing, but I’ve also been one of the few people to unapologetically call out BS where I see it.

As you can imagine, writing several genre novels and a “real” novel every year makes it tough to justify this weekly blog for the fun of it.

If I haven’t convinced you that the quality here is worth supporting, I’ll give you one last tidbit. I get to see incoming links thanks to WordPress, so I know that more than one graduate seminar and MFA program has linked to various posts I’ve made on critical theory and difficult literature. Since I’m not in those classes, I can’t be sure of the purpose, but graduate programs tend to only suggest reading things that are worth reading. There just isn’t enough time for anything else.

I know, I know. Print is dead. You’d rather support people making podcasts or videos, but writing is the easiest way to get my ideas across. I listen to plenty of podcasts on writing, but none of them get to dig into things like prose style. The format isn’t conducive to it. One needs to see the text under analysis to really get the commentary on it.

Don’t panic. I won’t decrease blog production through the end of 2017, but I’m setting an initial goal of $100 per month. We’ll go from there, because even that might not be a sustainable level long-term. If it isn’t met, I’ll have to adjust accordingly. It’s just one of those unfortunate business decisions. Sometimes firing someone is the right move, even if they’re your friend.

I’ve set up a bunch of supporter rewards, and I think anyone interested in the blog will find them well worth it. I’m being far more generous than most Patreon pages making similar content. Check out the page for details. The rewards include video of me editing a current project with live commentary (putting into practice what I talk about here), extra fiction I write for free, free copies of my novels, extra “Examining Pro’s Prose” articles, and more!

I hope you find the content here worth supporting (I’m bracing myself for the humiliation of getting $2 a month and knowing it’s from my parents). If you don’t feel you can support the blog, feel free to continue reading and commenting for free. The community here has always been excellent.

Mathematical Reason for Uncertainty in Quantum Mechanics

Today I’d like to give a fairly simple account of why Uncertainty Principles exist in quantum mechanics. I thought I already did this post, but I can’t find it now. I often see in movies and sci-fi books (not to mention real-life discussions) a misunderstanding about what uncertainty means. Recall the classic form that says that we cannot know the exact momentum and position of a particle simultaneously.

First, I like this phrasing a bit better than a common alternative: we cannot measure the momentum and position perfectly at the same time. Although I suppose this is technically true, it has a different flavor. It makes it sound like we don’t have good enough measuring equipment. Maybe in a hundred years our tools will get better, and we will be able to make more precise measurements and do both at once. This is wrong and completely misunderstands the principle.

Even from a theoretical perspective, we cannot “know.” There are issues with that word as well. In some sense, the uncertainty principle should say that it makes no sense to ask for the momentum and position of a particle (although this again is misleading because we know the precise amount of uncertainty in attempting to do this).

It is like asking: Is blue hard or is blue soft? It doesn’t make sense to ask for the hardness property of a color. To drive the point home, it is even a mathematical impossibility, not just some physical one. You cannot ever write down an equation (a wavefunction for a particle) that has a precise momentum and position at the same time.

Here’s the formalism that lets this fall out easily. To each observable quantity (for example, momentum or position) there corresponds a Hermitian operator. If you haven’t seen this before, don’t worry. The only fact we need is that “knowing” or “measuring” or “being in” a definite observable state corresponds to the wavefunction of the particle being an eigenfunction of this operator.

Suppose we have two operators {A} and {B} corresponding to observable quantities {a} and {b}, and suppose it makes sense to say that a particle in the state {\Psi} has both properties simultaneously. This means there are two numbers {\lambda_1} and {\lambda_2} such that {A\Psi = \lambda_1 \Psi} and {B\Psi = \lambda_2 \Psi}. That is the definition of being an eigenfunction.

This means that the commutator applied to {\Psi} has the property

{[A,B]\Psi = AB\Psi - BA\Psi = A\lambda_2 \Psi - B\lambda_1 \Psi = \lambda_2\lambda_1 \Psi - \lambda_1\lambda_2 \Psi = 0}.

Mathematically speaking, a particle that is in a state for which it makes sense to talk about having two definite observable quantities attached must be described by a wavefunction in the kernel of the commutator. Therefore, it never makes sense to ask for both if the commutator has no kernel. This is our proof. All we must do is compute the commutator of the momentum and position operator and see that it has no kernel (except for the 0 function which doesn’t correspond to a legitimate wavefunction).

You could check Wikipedia or a textbook, but the position operator is given by {\widehat{x}f = xf} and the momentum operator by {\widehat{p}f = -i\hbar f_x}.

Thus,

\displaystyle \begin{array}{rcl} [\widehat{x}, \widehat{p}]f & = & -i\hbar x f_x + i\hbar \frac{d}{dx}(xf) \\ & = & -i\hbar x f_x + i\hbar (f + x f_x) \\ & = & i\hbar f \end{array}

This shows that the commutator is a nonzero constant times the identity operator. It has no kernel (besides the zero function), so it never makes sense to ask for a definite position and momentum of a particle simultaneously. There isn’t even some crazy, purely theoretical construction that can have that property. Checking other pairs of operators in the same way produces all sorts of other uncertainty principles.
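If you don’t trust my product rule, here’s a minimal sympy sketch (my own check, not part of the original derivation) that applies the commutator to an arbitrary function:

import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

# Position operator: multiply by x. Momentum operator: -i*hbar*(d/dx).
xhat = lambda g: x * g
phat = lambda g: -sp.I * hbar * sp.diff(g, x)

# Apply [x, p] = xp - px to the arbitrary function f.
commutator = xhat(phat(f)) - phat(xhat(f))
print(sp.simplify(commutator))  # prints I*hbar*f(x)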

Statistical Oddities 5: Sequential Testing

Our next decision theory post will be on how to rephrase hypothesis testing in terms of Bayesian decision theory. We already saw in our last statistical oddities post that {p}-values can cause problems if you are not careful. This oddity makes the situation even worse. We’ll show that if you use a classical null hypothesis significance test (NHST), even at {p=0.05}, and your experimental design is to check for significance after each new observation, then as the sample size increases, you will falsely reject the hypothesis more and more often.

I’ll reiterate that this is more of an experimental design flaw than a statistical problem, so a careful statistician will not run into the problem. On the other hand, lots of scientists are not careful statisticians and do make these mistakes. These mistakes don’t exist in the Bayesian framework (advertisement for the next post). I also want to reiterate that the oddity is not that you sometimes falsely reject hypotheses (this is obviously going to happen, since we are dealing with a degree of randomness). The oddity is that as the sample size grows, your false rejection rate will tend to 100%! Usually people think that a higher sample size will protect them, but in this case it exacerbates the problem.

To avoid offending people, let’s assume you are a freshman in college attending your very first physics lab. Of course, it will be to drop a ball. You measure how long it takes to drop from various heights. You want to determine whether or not the acceleration due to gravity is really {9.8 \text{ m/s}^2}. You took a statistics class in high school, so you recall that you can run an NHST at the {p=0.05} level and impress your professor with this knowledge. Unfortunately, you haven’t quite grasped experimental methodology, so you rerun your NHST after each trial of dropping the ball.

When you see {p<0.05} you get excited because you can safely reject the hypothesis! This happens and you turn in a lab write-up claiming that with greater than {95\%} certainty the true acceleration due to gravity is NOT {9.8}. Let's make the nicest assumptions possible and see that it was still likely for you to reach that conclusion. Assume {g=9.8} exactly. Also, assume that your measurements are pretty good and hence form a normal distribution with mean {9.8}. I wrote the following code to simulate exactly that:

import random
import pylab
from scipy import stats

# Generate one observation: normally distributed around the true value 9.8
def norm():
    return random.normalvariate(9.8, 1)

# Run the experiment; return 1 if we ever falsely reject and 0 otherwise
def experiment(num_samples, p_val):
    x = []

    # One by one we append an observation to our list
    for i in range(num_samples):
        x.append(norm())

        # The t-test needs at least two observations
        if len(x) < 2:
            continue

        # Rerun the t-test at p_val significance after every observation
        t, p = stats.ttest_1samp(x, 9.8)
        if p < p_val:
            return 1
    return 0

# Check the proportion of false rejections at various sample sizes
rej_proportion = []
for j in range(10):
    f_rej = 0
    for i in range(5000):
        f_rej += experiment(10 * j + 1, 0.05)
    rej_proportion.append(f_rej / 5000)

# Plot the results
axis = [10 * j + 1 for j in range(10)]
pylab.plot(axis, rej_proportion)
pylab.title('Proportion of Falsely Rejecting the Hypothesis')
pylab.xlabel('Sample Size')
pylab.ylabel('Proportion')
pylab.show()

What is this producing? With a sample size of 1, what is the probability that you reject the null hypothesis? Basically {0}, because the test knows this isn't enough data to make a firm conclusion. With a sample size of 10, what is the probability that at some point along the way you reject the null hypothesis? It has gone up a bit. On and on this goes, up to samples of around 100, where you have nearly a 40% chance of rejecting the null hypothesis with this method. This should make you uncomfortable, because this is ideal data where the mean really is 9.8 exactly! The rejections aren't coming from imprecise measurements or anything like that.

The trend actually continues, but rerunning the test after every new observation makes the simulation's cost grow quadratically with the sample size, so it was taking a while to run and I cut it off. As you accumulate more and more observations, you become more and more likely to reject the hypothesis:

[Figure: the proportion of false rejections continues to climb as the sample size grows]

Actually, if you think about this carefully, it isn’t so surprising. The fault is that you recheck whether or not to reject after each sample. Recall that the {p}-value tells you how likely it is, supposing the null hypothesis is true, to see results at least this extreme by random chance. That probability is not {0}, so with enough checks you will eventually witness such a fluke. If you have a sample size of {100} and you recheck your NHST after each sample is added, then you give yourself 100 chances to see this randomness manifest rather than checking once with all {100} data points. As your sample size increases, you give yourself more and more chances to see the randomness, and hence as your sample size goes to infinity, your probability of falsely rejecting the hypothesis tends to {1}.
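To get a rough feel for the numbers (a back-of-the-envelope estimate of mine; the sequential checks share data and are strongly correlated, so the real rate is lower than this), pretend the 100 checks were independent:

{\displaystyle P(\text{at least one false rejection in } 100 \text{ independent checks}) = 1 - (1 - 0.05)^{100} \approx 0.994.}

The correlation between overlapping samples is why the simulation shows roughly 40% instead, but the same effect is what drives the rate toward {1}.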

We can modify the above code to just track the p-value over a single 1000-sample experiment (the word “trial” in the plot title refers to dropping the ball in the physics experiment). This shows that if you had cut your experiment off at almost any point and then run your NHST once, you would not have rejected the hypothesis. It is only because you kept watching the p-value until it dipped below 0.05 that a mistake was made:
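Here’s one way that modification might look (a sketch of mine; the variable names and plotting details are my own):

import random
import pylab
from scipy import stats

observations = []
p_values = []

# Track the p-value as each of 1000 observations arrives
for i in range(1000):
    observations.append(random.normalvariate(9.8, 1))
    if len(observations) >= 2:  # the t-test needs at least two points
        t, p = stats.ttest_1samp(observations, 9.8)
        p_values.append(p)

# Plot the p-value trajectory against the 0.05 threshold
pylab.plot(range(2, 1001), p_values)
pylab.axhline(0.05, color='red')
pylab.title('p-value Over a Single 1000-Trial Experiment')
pylab.xlabel('Sample Size')
pylab.ylabel('p-value')
pylab.show()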

[Figure: the p-value trajectory over a single 1000-sample experiment, only occasionally dipping below 0.05]