I’ve done quite a few decision theory posts at this point, and I think I’m mostly done with the topic. So to conclude the series, I thought I’d leave you with some fun thought experiments having to do with decision theory. You can use your new skills to try to analyze them.
The first thought experiment I want to present has been around since at least the late 1960s. It is generally referred to as Newcomb’s paradox. Here’s the setup. Suppose you encounter a strange being in the forest that can predict your decisions (it’s telepathic or something; just go with it for the purposes of the thought experiment).
They offer you a deal. They present Box A, which contains $1,000, and Box B, which contains either $0 or $1,000,000. You are allowed to take either Box B by itself or both Box A and Box B. The being’s prediction of your choice determines the contents of Box B: if they predict that you will take only Box B, then they put the $1,000,000 in it, and if they predict that you will take both, then they put $0 in it. All of this is done ahead of time (because they also correctly predicted that you would walk through this random area of the forest).
An important part of the setup is that the predictor puts the money in ahead of time, so the contents are not determined after you make a decision. The contents cannot change. There are only four total possibilities, so if you use your decision theory skills, it should be a pretty straightforward calculation to figure out how to maximize your profit. Strangely, this is often referred to as a paradox, because two equally valid-sounding arguments lead to different answers.
Here’s one analysis. Suppose the being predicts that you will pick both boxes. If you actually pick only B, then you get nothing. If you actually pick both, then you get $1,000. Thus picking both gets you a better result in that case. Now suppose it predicts that you will pick only B. If you actually pick both, you get $1,001,000. If you pick only B, then you get $1,000,000. Thus picking both leads to a better result in that case as well. In fact, picking both gets you more money no matter what the prediction was. Thus picking both maximizes your profit.
The other analysis says that the first one ignored vital information. We can throw out two of the four possibilities, because by the assumption of the thought experiment the prediction is never wrong. Thus the only two possibilities are that you pick both, in which case you get $1,000, or you pick only B, in which case you get $1,000,000. Therefore picking only B maximizes your profit.
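Both analyses are easy to check against the payoff table. Here’s a small sketch in Python (the numbers come from the setup above; the variable names are my own):

```python
# Payoffs from the setup: keys are (the predictor's prediction, your
# actual choice), values are how much money you walk away with.
payoff = {
    ("both", "both"): 1_000,      # predictor was right: Box B is empty
    ("both", "B"):    0,          # Box B is empty and you left Box A behind
    ("B",    "both"): 1_001_000,  # Box B is full and you also took Box A
    ("B",    "B"):    1_000_000,  # predictor was right: Box B is full
}

# First analysis (dominance): for each fixed prediction, "both" pays
# strictly more than "B".
for prediction in ("both", "B"):
    assert payoff[(prediction, "both")] > payoff[(prediction, "B")]

# Second analysis: the prediction is never wrong, so only the two
# "predictor was right" outcomes are reachable -- and there "B" wins.
reachable = {choice: payoff[(choice, choice)] for choice in ("both", "B")}
print(reachable)  # {'both': 1000, 'B': 1000000}
```

Note that the two arguments disagree only about which rows of the table count as possible, not about any of the numbers in it.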
I won’t present any of the attempted resolutions of this, because I’ve given you some tools to think about it on your own for a while. I’ll just say that if you Google this, you will find that tons of famous philosophers and mathematicians have attempted to resolve it. So answers are really easy to find if you get stuck or are curious to read more about it. If you aren’t sure where to start, I highly recommend the stuff that Eliezer Yudkowsky has written on it. I dare say he has thought about this more deeply than most people.
Another fun and related issue is the idea that acting randomly can be the best decision. Suppose you are playing a game in which, if you make moves at random, you have a 1/2 probability of winning, but if your opponent can guess your next move, you only have a 1/4 chance of winning. Games like this are pretty easy to construct; the exact rules matter less than the fact that being predictable hurts you.
In such a situation, if you run your decision theory algorithm and come up with a deterministic set of moves that maximizes your chance of winning, then you will do worse, because your opponent can figure out which moves you need to make to win and hence predict which moves you are going to make. The only way to maximize your chance of winning is to ensure that your moves never follow a rule your opponent could figure out, i.e. picking your moves at random maximizes your chance of winning.
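To see this concretely, here is a sketch in Python. It is not the exact game described above: in this toy version you win when your bit differs from the opponent’s, so a perfectly predicted player wins with probability 0 rather than 1/4, but the phenomenon is the same. The setup and all the names here are my own invention:

```python
import random

def play(your_strategy, rounds=100_000, seed=0):
    """Win rate against a predictor that runs a copy of your strategy."""
    rng = random.Random(seed)
    wins = 0
    for i in range(rounds):
        your_move = your_strategy(i, rng)
        # The predictor simulates your strategy; it can only be wrong
        # when the strategy uses randomness the predictor cannot see.
        predicted_move = your_strategy(i, random.Random(seed + 1))
        wins += your_move != predicted_move  # you win on a mismatch
    return wins / rounds

def deterministic(i, rng):
    return i % 2  # ignores rng: any fixed rule can be simulated

def randomized(i, rng):
    return rng.randint(0, 1)  # a fair coin the predictor cannot see

print(play(deterministic))  # 0.0: the predictor matches you every round
print(play(randomized))     # close to 0.5: randomness defeats prediction
```

The deterministic rule here is deliberately simple, but any fixed rule suffers the same fate, since the predictor just runs the rule itself.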
In some sense, if you make your decision according to some random mechanism external to yourself, then you prevent the game from becoming a “Newcomb-like problem.” In fact, some people try to resolve the Newcomb problem with such randomness. Anyway, I thought it would be fun to end this series with something a little lighter.