A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Why Play Roguelikes?

In the past I’ve written about video games as an important experience for people who value art. In honor of junethack, a month-long NetHack tournament, I want to defend roguelike games in general, and NetHack in particular, as a means of providing an experience that is difficult to get from most art.

I should first tell you what a roguelike game is. Roughly speaking, it is a game that reproduces a few of the key innovations of Rogue, a game released in 1980! There’s a lot of debate about what constitutes a roguelike game, but that is beside the point of this post. For us, there are two main ideas.

The first is that the levels are randomly generated. This means that every time you play you will have no idea what the levels will look like. This makes the process of discovery fun even after playing a hundred times. In fact, your character is often randomly generated, along with items, weapons, and so on. This means that no matter how many times you play, you will have to think on your feet about how best to deal with the situations you are given (this will come up later).

The other main, and arguably the most important, feature is so-called “permadeath,” short for permanent death. This means that if your character dies, then the game is over. You must start all over again from scratch. You can’t save and restart from where you made that mistake. You don’t have multiple lives. This feature is probably what turns most modern gamers off the genre. It is brutal and unforgiving. One small mistake made while you zoned out for a few seconds can cost you hours or even days of work.

Despite the seemingly unfair nature of these games (randomness plus permadeath can mean something totally out of your control ends hours of work), they still seem to thrive. People in my circles enjoy modern indie roguelikes such as The Binding of Isaac, Spelunky, and FTL: Faster Than Light. Every year there is the 7 Day Roguelike Challenge, in which participants create a roguelike in seven days. There are even cash-prize tournaments, such as this recent one for Isaac.

This brings me to the current NetHack tournament (which is just for fun). NetHack is one of the earliest roguelikes. It was released in 1987, yet its difficulty and complexity keep it widely played to this day. I wouldn’t put its artistic merit in the same camp as the games in those earlier posts, which focus on traditional aspects like story, music, and visuals. Don’t get me wrong: you’d better be familiar with classic literature and mythology if you play this. The references range from basic things (don’t look at Medusa or you’ll be turned to stone) to more obscure things (don’t touch a cockatrice, or you’ll also be turned to stone). Overall, though, the internal story is not its strong point.

I think the reason to play roguelikes in general, and NetHack in particular, is what they teach you about the impermanence of all things. This is such an important life lesson that many Eastern religions make it a focal point. For example, take Tibetan Buddhists. They have a ritual where they painstakingly craft an incredibly detailed sand mandala. The work takes days. Then they destroy it.

Many modern roguelikes downplay the pain of permadeath by making the game fairly short. If you die, you lose 20-40 minutes of work in a worst-case scenario. NetHack embraces the philosophy of permadeath. You can put in 12 hours or more carefully crafting a character. You stay fully mindful at every moment. You’ve thought about every one of the 40,000 keystrokes you’ve made to that point.

Then maybe you get impatient and press left one too many times and die. Maybe you stop paying careful attention for just a tiny moment. Maybe you just didn’t realize that a sea creature could grab you and pull you in. Maybe it is completely out of your control and a black dragon spawns due to pure randomness and blasts you with disintegration. All your work is gone forever.

Maybe I just think about stuff too much, but when this happens to me it really forces me to confront the idea of impermanence. All of the scenarios I just listed have corresponding counterparts in real life. Maybe you are a really careful driver, but in a moment of impatience you don’t look both ways and you are in a serious accident. It only took that tiny moment. Maybe you didn’t realize that the intersection gave the other person the right of way. Maybe you were careful, and randomness put you in a position where the other person was drunk.

The point is that your actions and choices have consequences that are sometimes irreversible. Randomness has consequences that are sometimes irreversible. Just as the Buddhist ritual teaches you this lesson and lets you think about it before the consequences are real, NetHack also teaches you this lesson. This may seem silly to people who haven’t experienced putting a whole day of effort into something that gets lost forever, but it really makes you think about these issues and which of your choices led you there.

I know. It’s just a game. But that is my case for why you should play roguelikes, especially long and involved ones like NetHack. They force you to confront the consequences of risky decisions. They force you to encounter impermanence. They teach you about the regret that comes when you are careful for hours and then impatiently slip up. They teach you what type of person you are when these things happen. This is their artistic value, and it is a type of experience that is hard to find in more traditional art forms.



Critical Theory through If on a winter’s night a traveler

I recently read Italo Calvino’s If on a winter’s night a traveler. Before trying to explain what made this book so entertaining for me, I’ll sketch an outline of it in case you haven’t heard of it. The overall form consists of alternating chapters. Half the chapters are in second person and refer to “you,” the reader. They tell you how you are reading the other half of the chapters and what you are doing between readings.

The other half of the chapters consists of “short stories” which are fragments of novels. Thus the whole book is, in a sense, a novel, because it has one overarching story in which you are the protagonist. But it is also a book of short stories that runs the gamut of styles and genres. The frustrating thing is that you keep getting stopped right when each story starts to get interesting. There is no closure. The reasons for being interrupted become weirder and sillier (and we’ll see there is good reason for that).

It starts with a bad binding. You go to the store to replace it. Each time you get what you think is the full version of the book, only to find out that it is actually a different book. At one point you are in a college seminar, and the seminar only needs part of the book to do its analysis, so no one has the full thing. By the end, the reasons become much stranger as you enter a Kafkaesque prison situation. The absurdity of the reasons, and even the conspiracy behind them, should keep a smile on your face. As you approach the end of the book, it reads like Pynchon.

Let’s answer an easy question first. What’s up with the title? Part of what is nice about the form of the book is that it tells you what to think sometimes. The book as a whole is a commentary on the falseness of novels. Classical novelists try to give you the sense that what they write is a neat and tidy story. There is a beginning, a middle, and an end. In reality, you are just getting a snippet of the characters’ lives.

Calvino writes this explicitly near the middle of the book, “Everything has already begun before, the first line of the first page of every novel refers to something that has already happened outside the book.” The book could almost be read as satire in how it comically exaggerates this point by giving you a bunch of fragments of books that never amount to anything. This is the point of the title. All of the books get cut off with no sense of closure, so why not emphasize the point by making a title that feels cut off?

I think basically everyone who reads this book will have gotten that far. They will be aware of how the literary devices fit right in with other postmodern works of the time (late ’70s, early ’80s). It is subtly self-referential and comments on what you are reading as you read it. People will probably pick up on the fact that the book is filled with imitation. Allusions to Borges, with his infinite regressions, labyrinths, and huge libraries, are all over the place.

I can tell this will be a long post, because at this point I haven’t even started commenting on what I think most people will miss. I now want to argue that the book takes you on a historical tour of critical theory by example. By this I mean that each segment presents a different mode of reading a text and a theory behind the relationship between writer and reader. As you move through the book, you see the evolution of these ideas.

The book starts with a very simplistic and intuitive approach which can be linked back to Aristotle’s Poetics. The writer writes a book, and the reader reads it. Novels consist of mythos, ethos, etc. Good books make you feel something, and this is catharsis. The book doesn’t use these terms, but “you” the reader essentially describe the reading process with another character in classical pre-modern critical terms (plot, character, etc).

Soon you go to a place where books are made and your simple philosophy of reading starts to become confused. “Now you understand Ludmilla’s refusal to come with you; you are gripped by the fear of having also passed over to ‘the other side’ and of having lost that privileged relationship with books which is peculiar to the reader: the ability to consider what is written as something finished and definitive, to which there is nothing to be added, from which there is nothing to be removed.” This is the start of the problem of hermeneutics, perhaps as seen by Heidegger and Gadamer. The book starts introducing these early problems of getting at meaning and whether authorial intent is important in interpretation.

We then start moving on to the “New Criticism.” We get to something along the lines of Wimsatt and Beardsley’s famous essay “The Intentional Fallacy.” The main character starts to believe that meaning comes from the reader. Calvino writes, “If you think about it, reading is a necessarily individual act, far more than writing. If we assume that writing manages to go beyond the limitations of the author, it will continue to have a meaning only when it is read by a single person and passes through his mental circuits.”

We then start moving on to the structuralism of Levi-Strauss. In “The Structural Study of Myth” he shows that you can put texts into categories based on which mythological structure they follow. Calvino writes, “What is the reading of a text, in fact, except the recording of certain thematic recurrences, certain insistences of forms and meanings?” This is a succinct way of summarizing that essay.

Then we get a parody of the Derrida school and the deconstructionist response. This comes in the form of giving such a close reading that the text gets pulled apart into just a list of the words that appear most frequently. This part of the book is pretty interesting, because as is noted, you feel that you do have a good sense of what the book is about based on merely a close, fragmented study of the words it uses.

Then we move on to the school of Deleuze and postmodernism. This is where foundations were ripped apart. In what I imagine to be a parody of the dense, confusing style of these writers, Calvino writes, “Perhaps my true vocation was that of author of apocrypha, in the several meanings of the term: because writing always means hiding something in such a way that it then is discovered; because the truth that can come from my pen is like a shard that has been chipped from a great boulder by a violent impact, then flung far away; because there is no certitude outside falsification.”

By the end, Calvino starts to backpedal a bit. Despite this being a book without conclusions, I think he wants to take this quick tour through the critical tradition and pull out of the endless trap it sets up. His conclusion is interesting, because it seems to foreshadow the “New Historicism,” which wasn’t yet a movement at the time he wrote this. He writes, “The conclusion I have reached is that reading is an operation without object; or that its true object is itself. The book is an accessory aid, or even a pretext.”

It would be interesting for someone to take the time and make a more convincing argument that this is what he is doing. I think a much stronger case can be made, and even a finer tuning of the trends in thought can be found. Since this is merely a blog post, I didn’t have the space or energy to do that. Examples that I think fit would be to add in Lacan/Freud, Marx, and Adorno/Horkheimer.



How Bad is External Fragmentation under a First-Fit Algorithm?

Sometimes I read about operating systems, because I want to know how my computer works. I’ve run Linux distributions as my primary OS for about 10 years, so having a familiarity with these concepts allows me to greatly customize and optimize my system. One topic that the general public is probably interested in is “What is fragmentation, and why am I supposed to defrag my computer sometimes?”

What I’m going to describe is not how memory management is actually done on any modern operating system, but it will give a feel for why fragmentation happens (and what this even is). I realized this really simple algorithm would be easy to program a simulation for, so we can see just how bad it can get.

The first-fit algorithm is exactly what it sounds like. You visualize memory as a bar that fills up. Each program has a size, and you look for the first hole big enough for it to fit. Here’s a picture I made to illustrate:

[Figure: colored blocks (blue, red, green, purple) being placed into and deleted from a bar of memory]

What’s going on? You download the blue program, and it goes into the first area no problem. You download the red, and the first available hole is right next to the blue. Same for the green. Then you realize you don’t want the red anymore, so you delete it. This leaves a hole. When you download the purple program, your memory management notices the hole and tries to fit it there. Whoops. The hole was too small, so it moves on to find the first hole it fits in (hence the name “first-fit algorithm”). Eventually you may delete the green as well, and something might go in the big hole left over from the red and green, but it probably won’t fill the whole thing.

Once these holes start appearing, they are hard to get rid of. This is external fragmentation. I wrote a simulation of this to see just how bad it can get. Although the program does nothing more than the simple thing I just showed, it is kind of long and confusing, so let’s take it one step at a time.

from matplotlib import pyplot as pp
import numpy as np
import random

def randomSize():
	# Size of a downloaded program, in memory units.
	return random.randint(1, 10)

def randomTime():
	# Number of time steps until the program gets deleted.
	return random.randint(1, 20)

class Program(object):
	def __init__(self):
		self.size = randomSize()
		self.position = -1  # -1 means not yet placed in memory
		self.time = randomTime()

	def getSize(self):
		return self.size
	def getPosition(self):
		return self.position
	def getTime(self):
		return self.time

	def decrement(self):
		self.time -= 1
I made two classes. The first one allowed me to pretend that I was downloading some program. Each program came with a random size from 1 to 10, the position where it was placed into memory, and the amount of time until I deleted it (a random number from 1 to 20). I’m still really confused about whether or not it is proper Python style to use getter methods rather than just directly accessing attributes…
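As an aside on that question: the usual Python convention is to access attributes directly, reaching for a property only when you later need a computed value. A hypothetical sketch (ProgramAlt is not part of the simulation):

class ProgramAlt(object):
	def __init__(self):
		self.size = randomSize()  # callers just write p.size
		self.position = -1

	@property
	def end(self):
		# Computed read-only attribute, still accessed as p.end; this is
		# essentially the helper the update at the bottom arrives at.
		return self.position + self.size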

class Memory(object):
	def __init__(self, size):
		self.size = size
		self.contents = []  # programs in memory, kept sorted by position

	def getContents(self):
		return self.contents
	def numPrograms(self):
		return len(self.contents)
	def getSize(self):
		return self.size

	def checkHole(self, i, p):
		# Does p fit in the hole between the i-th and (i+1)-th programs?
		holeStart = self.contents[i].getPosition() + self.contents[i].getSize()
		return p.getSize() <= self.contents[i+1].getPosition() - holeStart

	def addProgram(self, p):
		# First-fit: place p in the first hole big enough for it.
		# If nothing fits, p is silently dropped.
		n = self.numPrograms()
		if n == 0:
			if p.getSize() <= self.size:
				p.position = 0
				self.contents.append(p)
			return
		# Check the hole before the first program separately.
		if self.contents[0].getPosition() >= p.getSize():
			p.position = 0
			self.contents.append(p)
			return
		# Then the holes between consecutive programs.
		for i in range(n - 1):
			if self.checkHole(i, p):
				p.position = self.contents[i].getPosition() + self.contents[i].getSize()
				self.contents.append(p)
				return
		# And finally the hole after the last program.
		lastEnd = self.contents[n-1].getPosition() + self.contents[n-1].getSize()
		if p.getSize() <= self.getSize() - lastEnd:
			p.position = lastEnd
			self.contents.append(p)

	def sort(self):
		self.contents = sorted(self.contents, key=lambda p: p.getPosition())

	def removeProgram(self):
		# Filter into a new list; removing while iterating skips elements.
		self.contents = [p for p in self.contents if p.getTime() > 0]
I’m sorry that this is probably the worst Python code you’ve ever seen. This class is just the stick of memory. It has a total size, which we will take to be 100 in our simulations, and a list of the programs currently in memory (the programs keep track of their own positions). I made a helper function that checks whether or not a given program fits into the hole between the i-th and (i+1)-th programs.

Originally I thought this helper function would make the code much simpler, but it is almost entirely useless, which is why this looks so bad. Then I made the function that adds a program into memory. This just looks for the first available hole to stick it into. Unfortunately, I didn’t see a slick uniform way to do it and cobbled together something by cases. I check the hole before the first program separately and the hole after the last program separately.

I made a sort method, because when I append a new program to the list it just gets stuck at the end, despite having a position that could be in the middle. This just orders them by position. Lastly, there is a method that removes each program once its time runs out.

def simulate(memSize, numTimeSteps):
	m = Memory(memSize)
	for i in range(numTimeSteps):
		m.addProgram(Program())
		for p in m.getContents():
			p.decrement()
		m.removeProgram()
		m.sort()
	# Print the fraction of memory in use at the end.
	used = sum(p.getSize() for p in m.getContents())
	print(float(used) / memSize)
	# Mark each memory cell as occupied (1) or free (0) and plot it.
	memArray = [0] * memSize
	for p in m.getContents():
		for j in range(p.getPosition(), p.getPosition() + p.getSize()):
			memArray[j] = 1
	x = range(memSize)
	ymin = [0] * memSize
	pp.vlines(x, ymin, np.array(memArray))
	pp.ylim(0, 1)
	pp.show()

simulate(100, 100)

Lastly, I simulate for 100 time steps. At each time step I try to stick in one randomly generated program, decrement all the time-to-deletion counters, remove any program whose counter has hit 0, sort the list, and repeat. At the end I print the fraction of memory in use and plot which cells are occupied.

Here are some of the final results:

[Figure frag1: final memory layout from one run of the simulation]

This one had 66% memory use, so external fragmentation led to about 34% waste.

[Figure frag2: final memory layout from a second run]

This one was slightly better at 69% use.

[Figure frag3: final memory layout from a third run]

This one was quite a bit worse at 43% use and hence 57% waste. The full code can be found here. It might be fun to play around with different sizes of memory and lengths of time to delete stuff. Fragmentation will obviously not be a huge problem if the amount of space you try to use is sufficiently less than your total memory.

On the other hand, as these simulations show, the answer to the title question is that it can be pretty bad. These simulations are actually accurate (if my textbook is correct), because you can mathematically prove that the expected fraction of space wasted under this algorithm is roughly 33% (I ran a lot more simulations than I showed you, and this was what I usually saw).
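If you want to check that figure yourself, here is a minimal sketch (averageUse is my name, not part of the code above) that reruns the core loop many times without any plotting and averages the final utilization:

def averageUse(memSize, numTimeSteps, trials):
	total = 0.0
	for t in range(trials):
		m = Memory(memSize)
		for i in range(numTimeSteps):
			m.addProgram(Program())
			for p in m.getContents():
				p.decrement()
			m.removeProgram()
			m.sort()
		total += sum(p.getSize() for p in m.getContents()) / float(memSize)
	return total / trials

print(averageUse(100, 100, 1000))  # if the textbook is right, around 0.66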

There is no need to fear, though. Old operating systems did use some variant of this, but all modern operating systems use something called “paging,” which completely avoids external fragmentation (there is also something called internal fragmentation that I didn’t talk about). I hope you enjoyed this change of pace. I had fun doing it. It may look like it was a lot of work, but the program was done in one sitting. If anyone knows how to make an “animation” where you can watch the whole process unfold, that might be cool. I googled a bit and couldn’t find an obvious way to do it.
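One candidate that would probably work is matplotlib’s animation module. A minimal sketch, assuming simulate were modified to record a 0/1 occupancy list after each time step (the snapshots argument below is hypothetical):

from matplotlib import pyplot as pp
from matplotlib.animation import FuncAnimation

def animateSnapshots(snapshots, memSize):
	fig, ax = pp.subplots()
	bars = ax.bar(range(memSize), snapshots[0], width=1.0)
	ax.set_ylim(0, 1)

	def update(frame):
		# Redraw each memory cell as occupied (1) or free (0).
		for bar, h in zip(bars, snapshots[frame]):
			bar.set_height(h)
		return bars

	anim = FuncAnimation(fig, update, frames=len(snapshots), interval=100)
	pp.show()
	return anim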

Update 5/17/14: I pushed a cleaned up version of this code to github. It is amazing what one night of sleeping on something will do. I got rid of most of the indices using itertools, and I realized that the proper helper function was to just make a method that returns the right endpoint of the position of a program in memory. Then finding a hole size was just subtracting and could be reused everywhere.

Just for fun, I decided I would “prove” to you that this algorithm is only bad when you use a sufficiently large fraction of your memory (i.e., probably never on a modern computer!). Here’s such a simulation result, where the programs are too small for the wasted space to ever matter:

[Figure frag4: a run with small programs, showing almost no wasted space]
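For the curious, a run like this can be reproduced (the exact parameters behind the figure are my guess) by shrinking the program sizes relative to the total memory:

def randomSize():
	return random.randint(1, 3)  # tiny programs relative to memSize = 100

simulate(100, 100)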



The Myth of a Great Seminar

Sometimes I peruse the debates at Intelligence Squared to see if any catch my eye. There was one this time that seemed really interesting to me. It was a debate on whether or not MOOCs are reasonable replacements for actual in-class and campus college experiences. You can see the full thing here.

This was interesting to me, because I’ve actually gone through a few MOOCs from start to finish and found them to be extremely good experiences. I was curious whether the debate would mention any research on the effectiveness of one or the other. It was pretty disappointing in this regard. The main anti-MOOC argument was built around how wonderful small seminars are and how you can’t get this in a MOOC. That’s why I want to write a response to this mythical seminar.

Before talking about why I think such seminars don’t really exist in this Platonic, pristine state at any university, I want to first address the fact that the existence of seminars at all is pretty mythical. I decided to check the University of Washington’s Spring 2014 schedule. The senior-level literature classes ranged from 25 to 40 students, but most had about 30. Should I consider a 30-person class a “small seminar?” I get it. We’re a gigantic school, so I fully admit that small liberal arts colleges probably do have a lot of small seminars. But most students at most schools will graduate with few or no small seminars among their classes.

Even middle-level courses like Introduction to the Theory of Literature at Ivy League schools are gigantic. That class probably has 100 students or more in it, and those are the types of courses that are offered as MOOCs. I think the comparison is a bit disingenuous when you take some capstone seminar and compare it to an “intro” MOOC. The MOOC side of the debate also responded to this criticism and pointed out that some MOOCs offer small group breakout sessions which actually do simulate small seminars. So the point doesn’t even stand.

Now that that rant is over, let’s pretend the comparison is fair. Here are some of the myths I heard and why I think they are mostly myth (I’ll grant that maybe a few seminars run according to plan):

Let’s suppose for the sake of argument that the teacher is practically invisible in this mythical seminar and the students are all enraptured in high-level critical conversation about Dostoevsky or some such nonsense. This seems to be the ideal the seminar aspires to. This is going to sound extremely cynical, but just how interesting can this conversation actually be? The seminar is going to be made up of an incredibly homogeneous group. Everyone is going to be about 20, never having had to make a living. They are all educated at the same school, which means they have roughly the same cultural experience, have read the same books, and have developed the same theories about how to analyze books.

What’s so great about this perfect conversation in comparison with a MOOC? When you take the exact same course as a MOOC, you will probably have a math professor in India, a farmer in the American midwest, a retired middle school teacher in Scotland, etc. The conversation about the same books is going to be infinitely more interesting and enlightening, because the perspectives will be so varied.

Now let’s back up a little from the perfect situation and get a little more realistic. We’ve all been to these seminar classes before. The free-flowing and enlightening conversation essentially never happens. You have some people who didn’t read the stuff. You have people who aren’t very good at articulating their thoughts on the spot. The whole thing usually turns into the professor calling on someone, a brief sentence or two is mumbled, and then the professor carries on along that point. The “conversation” is forced, and the student input is more like a prompt for the professor to riff on.

Depending on the day and material, the degree to which this is the case will vary, but I think the overall sentiment is what happens most days in most seminars. This is why I think a written discussion board in a MOOC is actually a far better method for discussion than a live conversation in a seminar.

First off, there are hundreds more topics and conversations going on at a discussion board than in class. This means that you can search around for conversations that you really want to participate in. Second, you have to write your thoughts down. This gives you time to figure out what you are going to say rather than awkwardly spewing out some muddled nonsense while everyone stares at you. It also gives you time to figure out what other people mean before responding to them.

It is amazing how many times you start typing a response, and then, when you go back to what was actually said, you realize you misunderstood at first. That brings me to my next point: a discussion board records all of it. You can continually return to conversations as your understanding of a topic develops. The conversation doesn’t end at the end of the hour. Once you leave the physical setting of a seminar, it probably only takes a few hours to forget most of what most people said. The discussion board allows you to go back whenever you want to recall certain parts of certain conversations.

To summarize, I think most courses most people take are not seminars, so it is pointless to use them as a main argument against MOOCs. I also think that the MOOC setup is actually a better platform for enlightening discussion in almost every respect than an actual seminar. That being said, I think the anti-MOOC side has a point when they say that communication skills are developed in class discussion. Unfortunately, even small seminars tend not to have real “discussions,” so I don’t find that compelling (along with the fact that some MOOCs are incorporating small group live chat sessions now).

Don’t get me wrong. I don’t think all university education should be relegated to the online setting. I’m just saying that using some idealized small seminar as the main argument is a highly flawed way to go about it.



Correlation Does not Imply Causation

I’ve never done this before in six years and well over 400 posts. I’m going to direct your attention to a webpage rather than write a post. As they say, “A picture is worth 1000 words,” so consider this a 1000 word post:

The full page is here.

This is exactly why it is so dangerous to conclude a causal relationship from statistically significant correlations. Even phenomena with direct, known causal relationships tend not to have 0.99 correlation. Peruse the rest of that webpage at your own risk. It is quite addicting (who knew that so many people died from getting tangled in their bed sheets every year?).
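If you want to see how easy it is to manufacture a huge correlation with no causal link, here is a quick sketch: two independent series that both happen to trend upward correlate almost perfectly.

import numpy as np

np.random.seed(0)
t = np.arange(100.0)
a = t + np.random.normal(0, 5, 100)      # one trending series plus noise
b = 2 * t + np.random.normal(0, 8, 100)  # an unrelated trending series
print(np.corrcoef(a, b)[0, 1])           # typically around 0.97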



Rorty’s Pragmatism

Today I’d like to talk about Richard Rorty. He was an American philosopher who became famous in the late ’70s and ’80s for advocating a new form of pragmatism. I thought this might be a timely topic, because we’ve been spending a lot of time on making sense of data. Modern society has become polarized on a bunch of issues which basically stem from more fundamental questions: what is knowledge, and what is truth?

On the one side we have radical scientism. This side argues that in order to count something as knowledge, it must be falsifiable, formulated as a scientific hypothesis, and demonstrated with 95% certainty. There are of course much milder variants on this side. For example, one might stipulate that all questions that naturally have a scientific formulation must meet scientific standards before we consider the answers reliable, but that science doesn’t have much to say about non-scientific questions.

The other side is radical skepticism or postmodernism (I know these are not at all the same thing). The radical skeptics claim that all knowledge is impossible, so we should be skeptical of all things that we hear (even if they were proven by a scientific study). I have a lot of sympathy for this side. Facebook alone makes me skeptical of basically anything anyone says, because I know that half of the interesting things I’m told probably come from a totally false Facebook post someone made. Everyone has bias and/or funding which skews results including supposedly objective scientific ones.

Postmodernism gives a bit more substance to this argument. It essentially says that we have no foundations anymore. Science can’t prove that science is getting at truth, so we shouldn’t treat it as a special class of knowledge. This “lack of foundations” argument ends up giving merit to a lot of dangerous ideas. Since the scientific method is no longer seen as the most reliable way to truth, maybe new age spirituality or alternative medicine actually works and is just as effective.

I’ll state my bias right up front. I tend to agree with the scientism viewpoint (although I’d probably call my stance “naturalism,” but let’s not get into that). Both sides make really good critiques of the other when done by a careful thinker. Science has assumptions that cannot be justified. It is merely building models. Maybe our model of gravity is totally wrong, but just happens to consistently give really accurate predictions when tested.

Science critiques the other positions as well. Skepticism is not self-consistent, because it requires you to be skeptical of skepticism. The lack of foundations in postmodernism does not mean that all things are equally likely to be true.

These differing foundations manifest in huge shouting matches: evolution vs intelligent design, medicine vs alternative medicine, atheism vs theism, and on and on. The main reason I err on the side of science is because all people seem to think that science provides the best answers until those answers disagree with their previously held beliefs. It is only then that the lack of foundations is pointed out or the bias of the researcher is brought up. See also this post which shows why the scientific method is needed to surpass bias and this post for an ethical reason to err on the side of science.

Anyway, we’ve passed 500 words already and I’m still just setting up why Rorty is such an important thinker. His views seem to gain importance as data sets keep getting bigger and we get confused about who we should believe. Rorty basically comes up with a middle ground, sometimes called neopragmatism. He entered the scene at a time when both sides seemed both right and wrong. His position is that the postmodernists are right that there are no foundations, but this doesn’t matter, because some systems are useful. Let’s unpack this a bit.

First off, if this interests you, then go read Philosophy and the Mirror of Nature. A quick blog post cannot do it justice. It is quite complex and subtle. One side says that they’ve built a fantastic pillar called science on the solid foundations of peer review, objectivity, etc. The other side says that all our institutions can be knocked down, because there are no solid foundations.

Rorty has a somewhat shocking response that both sides are wrong. There are no foundations (i.e. external objective standards), but this doesn’t mean the pillars are unstable. It just means that the rules of the game depend on which game we’re playing. When playing tennis, we must follow the rules of tennis. When doing science, we must play by the rules of science. There is no universal, correct rule set for all games. It is just dependent on the game. That’s okay. None are more “right” than another, because this concept doesn’t even make sense.

So what is truth? Rorty says that we can think about justification, but not about truth. How we justify beliefs is dependent on the system we are in. We know how to use the word true in each system, so we don’t have to define it. This is a very classic pragmatic response. When speaking of scientific truth, we have a collection of things we mean. When speaking of literary truth we have another. These truths are dependent on time and place (e.g. “It is a truth universally acknowledged, that a single man in possession of a good fortune must be in want of a wife.”)

So how is this different from the extreme relativism of postmodernism? Well, Rorty would say that usefulness has to be taken into account. There is no way to get at objective truth, but some systems are more useful for certain purposes than others. For example, at this point in time, science seems to be the most useful system to answer scientific questions. Your computer is working, polio was eradicated, we put people on the moon, etc, etc. As the internet meme goes, “Science. It works, bitches!” And so even though we don’t know if science is getting at truth (which reasonable scientists fully admit, by the way), it does consistently get at something useful. There may be other contexts in which scientific rigor is not the most useful system.

Rorty develops a theory that fully admits that the postmodernists are right when they say that we have no basis for foundations anymore. But he doesn’t descend into extreme relativism. He leaves room for some systems of thought to be more useful than others. They don’t have a monopoly on truth, because we don’t even know what that means. Relativism doesn’t even really make sense from Rorty’s viewpoint, because you can never leave your current context from which to make a relative judgement. And that’s why I think he’s so important. He points out that our shouting matches aren’t about content or truth. They are about coming at the same question from different systems.



Fun with Decision Theory

I’ve done quite a few decision theory posts at this point, and I think I’m mostly done with it. So to conclude the section I thought I’d leave you with some fun thought experiments having to do with decision theory. You can use your new skills to try to analyze them.

The first thought experiment I want to present has been around since at least the late 60’s. It is generally referred to as Newcomb’s paradox. Here’s the setup. Suppose you encounter a strange being in the forest that can predict your decisions (it’s telepathic or something, just go with it for the purposes of the thought experiment).

They offer you a deal. They present Box A, which contains $1,000, and Box B, which contains either $0 or $1,000,000. You are allowed to take Box B by itself or both Box A and Box B. The being predicts which choice you will make and determines the contents of Box B accordingly. If they predict that you will take only Box B, then they put the $1,000,000 in it. If they predict that you will take both, then they put $0 in. All of this is done ahead of time (because they also correctly predicted that you would walk through this random area of the forest).

An important part of the setup is that the predictor puts the money in ahead of time, so the contents are not determined after you make a decision. The contents cannot change. There are only four total possibilities, so if you use your decision theory skills, it should be a pretty straightforward calculation to figure out how to maximize your profit. Strangely, this is often referred to as a paradox, because two equally valid-sounding arguments lead to different answers.
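For concreteness, here are those four possibilities written out as a little payoff table, a sketch in Python using the dollar amounts from the setup above:

# Dollars you walk away with for each (prediction, actual choice) pair.
payoff = {
	("predicts both", "take both"): 1000,
	("predicts both", "take B"):    0,
	("predicts B",    "take both"): 1001000,
	("predicts B",    "take B"):    1000000,
}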

Here’s one analysis. Suppose this being thinks you will pick both boxes. If you actually pick B, then you get nothing. If you actually pick both, then you get $1,000. Thus picking both gets you a better result in that case. Suppose it thinks you will only pick B. Then if you actually pick both, you get $1,001,000. If you only pick B, then you only get $1,000,000. Thus picking both leads to a better result in that case as well. In fact, picking both clearly gets you more money no matter what the prediction was. Thus picking both maximizes your profit.

The other analysis says that the first one ignored vital information. We can throw out two possibilities, because by the assumption of the thought experiment the prediction will never be wrong. Thus the only two possibilities are that you pick both, in which case you get $1,000, or you pick only B, in which case you get $1,000,000. Therefore picking only B maximizes your profit.
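One way to see where the two analyses come apart is to relax the setup and give the predictor an accuracy acc between 0 and 1 (this parameter is my addition; the thought experiment itself takes acc = 1). A minimal sketch of the expected-value comparison:

def expectedValue(choice, acc):
	# acc = probability the being predicted your actual choice.
	if choice == "one-box":
		return acc * 1000000               # B is full only if you were predicted correctly
	else:
		return 1000 + (1 - acc) * 1000000  # A always pays; B pays only on a miss

for acc in (0.5, 0.9, 0.999, 1.0):
	print(acc, expectedValue("one-box", acc), expectedValue("both", acc))

At acc = 0.5 the prediction carries no information and taking both boxes wins; taking only Box B pulls ahead once acc exceeds about 0.5005.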

I won’t present any of the attempted resolutions of this, because I’ve given you some tools to think about it on your own for awhile. I’ll just say that if you Google this, then you will find that tons of famous philosophers and mathematicians have attempted to resolve it. So answers are really easy to find if you get stuck or are curious to read more about it. If you aren’t sure where to start, I highly recommend stuff that Eliezer Yudkowsky has written on it. I dare say he has probably thought about this more deeply than most people.

Another fun and related issue is the idea that acting randomly can be the best decision. Suppose you are playing a game in which, if you make moves at random, you have a 1/2 probability of winning, but if your opponent can guess what your next move will be, you only have a 1/4 chance of winning. Games like these are pretty easy to construct, and writing one down isn’t as important as the fact that games with this feature exist.

In such a situation, if you run your decision theory algorithm and come up with a deterministic set of moves that maximizes your chance of winning, then you will almost surely lose. This is because your opponent could figure out what moves you need to make to win and hence figure out which moves you are going to make. The only way to maximize your chance of winning is to ensure that you never make moves according to some rule your opponent could figure out; in other words, picking a move at random maximizes your chance of winning.
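To make this concrete, here is a quick sketch of matching pennies, which has the same flavor (the exact 1/2-versus-1/4 game above is left unspecified, so the numbers here differ): random play wins half the rounds, while a deterministic pattern that the opponent reads wins none.

import random

def playRandom(rounds):
	# You win a round when your bit differs from the opponent's guess.
	wins = 0
	for r in range(rounds):
		me = random.randint(0, 1)
		opp = random.randint(0, 1)  # opponent can do no better than guess
		wins += (me != opp)
	return wins / float(rounds)

def playPredictable(rounds):
	wins = 0
	for r in range(rounds):
		me = r % 2   # a deterministic pattern...
		opp = r % 2  # ...which the opponent predicts and matches
		wins += (me != opp)
	return wins / float(rounds)

print(playRandom(10000))       # about 0.5
print(playPredictable(10000))  # exactly 0.0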

In some sense, if you make your decision according to some random mechanism external to yourself, then you prevent the game from becoming a “Newcomb-like problem.” In fact, some people try to resolve the Newcomb problem with such randomness. Anyway, I thought it would be fun to end this series with something a little lighter.
