A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Thoughts on ToME’s Adventure Mode

I’ve done several posts explaining why I think roguelikes are a great genre of game to play. Probably the most important feature of a roguelike for me is permadeath; see this post for reasons why.

If you aren’t up on roguelikes, there are only a handful of games that stand out as the “giants” most people have heard of. One of these is called ToME (aka ToME 4; aka Tales of Maj’Eyal). There are more interesting features in ToME than could fit in a single blog post. Someday I may come back and post about these.

I’ll fully admit that my views on permadeath have evolved a bit, possibly due to my age. I think the older someone gets, the more likely they are to view losing all progress in a game as too punishing to be worth it. You tend to grow out of liking the more extreme and hardcore elements of certain games.

Anyway, I stand by my original post. I’ll recall some key points. Permadeath is a great game mechanic, because it forces you to contemplate the consequences of your actions. It gives weight to the game. It makes you become better at it in order to win. You can’t just “save scum” until you get through a particularly difficult section.

Before you take this the wrong way, ToME is possibly the most well-balanced roguelike I’ve played. Every death feels like my own fault and not me getting screwed by the randomness. But when a game involves as much randomness as any of the great classic roguelikes, you are bound to get the occasional unavoidable death that is not your fault.

This becomes more and more pronounced as a game’s design is less thoroughly vetted for imbalances. Part of ToME’s excellent balance comes from people who have put in thousands of hours of play and can spot these things. The developer takes their opinions seriously, which makes the game fairer.

ToME has three modes of play: roguelike, adventure, and explore. Roguelike has traditional permadeath: once you die, you must start the entire game over. Adventure gives you five lives; once those five are gone, you start the game over. Explore is deathless.
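To make the difference concrete, here is a minimal sketch of how the three modes might be handled mechanically. The mode names come from the game, but the structure is my own illustration and not ToME’s actual code.

```python
# A minimal sketch of how the three modes might handle death.
# The mode names come from ToME; the structure is hypothetical, not the game's code.
class Character:
    def __init__(self, mode):
        self.mode = mode
        self.lives = {"roguelike": 1, "adventure": 5, "explore": float("inf")}[mode]

    def on_death(self):
        self.lives -= 1
        if self.lives <= 0:
            return "game over: start a new character"   # permadeath kicks in
        return f"resurrected ({self.lives} lives left)"

# Adventure mode collapses into roguelike mode once the extra lives are spent.
hero = Character("adventure")
for _ in range(5):
    print(hero.on_death())
```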

The main point I’ve been contemplating is whether Adventure mode ruins the permadeath experience of a roguelike. This will be a highly controversial statement, but I think it keeps all the original greatness of the mechanic and eliminates the negative aspects.

If you only have five lives, then each one is still precious. You’ll play the early game as if you only have one life, because if you waste one early, you will probably restart anyway. This makes the beginning just as intense as if you only had one life.

Let’s put it this way. If you don’t play as if you only have one life, then you will probably quickly lose them all anyway and revert to roguelike mode. So nothing really changes. In the middle and late game, if you are really good and don’t lose any lives, then it didn’t matter anyway. If you’re like me, you’ll probably be back to one life by that point and get all the benefits of roguelike mode.

It seems to me that Adventure mode merely serves to alleviate the annoyance and wasted time that comes from getting killed in one hit by some out-of-depth enemy that randomly appeared through no fault of your own. It keeps all the intensity and pressure of permadeath, but gives some much-needed buffer against the extreme amount of randomness in roguelikes.

I’d be quite happy to see some other roguelikes incorporate this as an option, but I’d also be totally understanding if they saw it as a compromise on the quality of the play experience.



Texas Sharpshooter Fallacy

In the world of big data that constantly bombards us with fancy graphics, the statistical fallacy that I think we are most likely to fall for is called the Texas Sharpshooter Fallacy. What makes this fallacy so dangerous is that it is propped up by solid, correct statistics which can be hard to argue against.

Here’s the idea. A person goes into the yard and shoots their rifle at random at their barn. Maybe even say the person is drunk, so the holes have no underlying pattern to them. The person then goes to the barn and draws a bullseye around the tightest cluster of holes after the fact, making it look like they are a competent sharpshooter.

The fallacy is that if you look at a large enough amount of data with good enough visualization tools, you will probably start to find patterns that aren’t actually there by strategically drawing artificial boundaries. Let’s make the example a bit more real.

Suppose you want to better understand the causes of Disease X, a newly discovered condition that occurs naturally in 10% of the population. You plot the cases in a nearby town of 10,000 people to see if you can find a pattern.

Here is the plot (I used a uniform distribution so we know any clumps have no underlying cause):

[Plot: simulated cases of Disease X across the town, generated from a uniform distribution]

Your eye gets drawn to an oddly dense clump of cases of Disease X. You circle it and then run a statistical test to see if the number of cases is significant. You’re shocked! Your properly run statistical test shows that the increased number of cases is significant, and with 95% confidence you conclude it isn’t just a fluke.
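Here is a rough simulation of that procedure. The 10% rate and the town of 10,000 come from the example above; the 10×10 grid and random seed are my own choices, not from any real data set.

```python
# Simulate the sharpshooter procedure: scatter cases with no spatial cause,
# let the "eye" pick the densest cell, then test that cell as if it had been
# chosen in advance of looking at the data.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n_people, base_rate = 10_000, 0.10
x, y = rng.uniform(0, 1, n_people), rng.uniform(0, 1, n_people)
sick = rng.random(n_people) < base_rate       # 10% disease rate, pure noise

# Carve the town into a 10x10 grid and count people and cases per cell.
cell = (np.floor(x * 10).astype(int), np.floor(y * 10).astype(int))
cases = np.zeros((10, 10), int)
people = np.zeros((10, 10), int)
np.add.at(people, cell, 1)
np.add.at(cases, cell, sick)

# "Circle" the densest-looking cell and run a one-sided binomial test on it.
i, j = np.unravel_index(np.argmax(cases / np.maximum(people, 1)), cases.shape)
result = binomtest(int(cases[i, j]), int(people[i, j]), base_rate, alternative="greater")
print(f"densest cell: {cases[i, j]}/{people[i, j]} cases, p = {result.pvalue:.4f}")
# The p-value usually comes out below 0.05 even though the data is uniform noise,
# because we quietly compared 100 regions and reported only the winner.
```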

So what do you do? You start looking for causes. Of course you’ll be able to find one. Maybe that clump of houses has a power station nearby, or they drink from the same well water source or whatever. When you are looking for something in common, you’ll be able to find it.

When this happens, you’ve committed the Texas Sharpshooter Fallacy. It might be okay to use this data exploration to look for a cause if you merely intend to turn it into a hypothesis to be tested. So you hypothesize that it is radon in the water that caused the spike of cases in that cluster.

Now do real science: run a randomized controlled study to actually test your null hypothesis. Doing statistics on big data is risky business, because a clever person can construct correlations from a large enough data set that, first, may not actually be there and, second, are almost surely not causally related.

Another way to think about why this is a fallacy: at 95% certainty, 5 out of 100 tests will falsely find a correlation where none exists. So if your data set is large enough to draw 100 different boundaries, then by random chance about 5 of those will show false correlations. When your eye catches the cluster, that is your brain being good at finding patterns; it probably rejected 100 non-clusters to find that one.
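The arithmetic behind that claim is worth seeing once (100 boundaries is just the illustrative number from above):

```python
# With a 5% false-positive rate per test, many candidate boundaries nearly
# guarantee that something looks "significant" by chance alone.
alpha, n_boundaries = 0.05, 100
print(alpha * n_boundaries)               # expected false positives: 5.0
print(1 - (1 - alpha) ** n_boundaries)    # chance of at least one: ~0.994
```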

This is scary in today’s world, because lots of news articles do exactly this. They claim some crazy thing, and they use statistics people don’t understand to “prove” its legitimacy (numbers can’t lie don’t you know). But really it is just this fallacy at work. The media don’t want to double check it because “Cancer rate five times higher near power station” is going to get a lot of hits and interest.

Actually, cancer is particularly susceptible to this type of fallacy: dozens of examples of such studies getting publicity despite no actual correlation (let alone causation!) are documented in George Johnson’s (excellent) The Cancer Chronicles and in an older New Yorker article called “The Cancer-Cluster Myth.”

So the next time you read about one of these public health outcries, pay careful attention to whether the article commits this fallacy. For example, the vaccines-cause-autism myth also originated this way.

Probably the most egregious example is The China Study, a highly praised vegan propaganda book. It takes the largest diet study ever done (367 variables) and pulls out the correlations that support the hypothesis “meat is bad.”

What the book doesn’t tell you is that the study found over 8000 statistically significant correlations, many contradicting the ones presented in the book. This is why large studies of observational epidemiology always have to be treated with caution. The larger the study, the more likely you will be able to find a way to support your hypothesis.
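As a rough back-of-the-envelope check (this assumes naively testing every pairwise correlation at the 5% level, which is not exactly how the study was analyzed), 367 variables leave an enormous amount of room for chance findings:

```python
# Back-of-the-envelope only: naively testing every pairwise correlation among
# 367 variables at the 5% level would produce thousands of chance "hits"
# even if no real relationships existed.
from math import comb

n_vars = 367
n_pairs = comb(n_vars, 2)          # 67,161 possible pairwise correlations
print(n_pairs, 0.05 * n_pairs)     # ~3,358 expected false positives from noise alone
```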

If you don’t believe me, and you want to protect marriage in Maine, then make sure you eat less margarine this year:

[Chart: the divorce rate in Maine plotted against per capita margarine consumption, a spurious correlation]



Thoughts on Roth’s American Pastoral

The first time I read Philip Roth’s American Pastoral, I had nothing but criticism for it. I’ll try to set the stage for my first reading. It was my early undergraduate days about 10 years ago.

I had had a fairly sheltered childhood. I grew up in a highly apolitical house. At that point, I had not been of age to vote during a major election, and so the extent of my political knowledge was the ability to name the president.

Despite this, I read the book at the height of my reading career. No offense to the university I attended, but I breezed through (with a perfect 4.0 finishing GPA) with almost no work. This meant I supplemented my studies by reading a lot.

By this I mean I sometimes read 2 novels a week. I read Infinite Jest and Gravity’s Rainbow during this time. I wanted to read every book anyone had ever recommended to me or had said was “unreadable” (is that a description or a challenge?).

So what were my complaints? Well, it read like realism, yet nothing in the book struck me as realistic. It seemed filled with hyperbole and extreme character overreaction. Here are a few of the things I remember saying, but there were probably more:

1. How could anyone be so upset over politics to do something so extreme?
2. How could someone’s perception of someone else be so skewed?
3. How could one event cause someone to change so radically and suddenly?
4. The pacing is too slow.
5. The second half is too bizarrely different from the first to create something coherent.

Anyway, I decided to reread it and was shocked to find how much 10 years can change your perspective. The book is a delicate portrait of how a tragedy wrecked a family’s life.

What I originally perceived as pacing that was too slow turned out to be a striking dive into the psyche of a man torn by conflicting and paradoxical emotions. The book tries to answer the question: how does one grapple with continuing to love someone after they have done something horrible? It is heartbreaking to witness.

What I originally thought of as a radical and sudden change in a character turned out to be a perfectly natural shift in values and priorities. It’s happened to me. It’s happened to people I know. It happens to everyone. With a catalyst of such magnitude as occurs in the book, it doesn’t seem at all extreme to me anymore.

I’ve learned a lot about bias and the human mind since last reading the book. Now the seemingly inconceivable false perception one person can have of another strikes a chord of truth in me.

In fact, none of my initial criticisms ring true anymore. The book presents all of these complicated human interactions and emotions in a unified, compelling story.

The thing I most love about Roth’s style (at least in the second Zuckerman trilogy) is ever present in American Pastoral. He has the ability to lead you down a somewhat illogical, yet fully natural series of thoughts to land on a beautifully constructed gem of a sentence to contemplate.

It is hard to describe or give an example, because to pull the quote out of context removes how striking it is to read in real time. I often found myself having to stop and contemplate how illuminating the paragraph was. I could always relate to a time when I had a similar thought process. I first noticed this style in The Human Stain which drove me to the other Roth novels.

Needless to say, I loved this book. At a time when our politics seem more divided and more extreme than ever, and the outrage and violence surrounding them have become more public (the recent Ferguson protests come to mind), a book of such introspection on the topic has only grown in its importance among the ranks of American literature of the past fifty years.



Does Bayesian Epistemology Suffer Foundational Problems?

I recently had a discussion about whether Bayesian epistemology suffers from the problem of induction, and I think some interesting things came from it. If these words make you uncomfortable, think of epistemology as the study of how we form beliefs and gain knowledge. Bayesian epistemology means we model it probabilistically using Bayesian methods. This old post of mine talks a bit about it but is long and unnecessary to read to get the gist of this post.

I always think of the problem of induction in terms of the classic swan analogy. Someone wants to claim that all swans are white. They go out and see swan after swan after swan, each confirming the claim. Is there any point at which the person can legitimately say they know that all swans are white?

Classically, the answer is no. The problem of induction is crippling to classic epistemologies, because we can never be justified in believing any universal claim (at least using empirical methods). One of the great things about probabilistic epistemologies (not just Bayesian ones) is that they circumvent this problem.

Classical epistemologies require you to have 100% certainty to attain knowledge. Since you can’t ever be sure you’ve encountered every instance of a universal, you can’t be certain there is no instance that violates the universal. Hence the problem of induction is an actual problem. But note it is only a problem if your definition of knowledge requires you to have absolute certainty of truth.

Probabilistic epistemologies lower the threshold. They merely ask that you have 95% (or 98%, etc) confidence (or that your claim sits in some credible region, etc) for the justification. By definition, knowledge is always tentative and subject to change in these theories of knowledge.

This is one of the main reasons to use a probabilistic epistemology. It is the whole point. They were invented to solve this problem, so I definitely do not believe that Bayesian epistemology suffers from the problem of induction.
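As a toy illustration of that tentativeness (the uniform Beta(1, 1) prior and the swan counts are my own choices), here is the swan problem treated as a Bayesian update on the fraction of swans that are white:

```python
# Toy Bayesian take on the swan problem: a uniform Beta(1, 1) prior on the
# fraction of white swans, updated after observing only white swans.
# Confidence climbs toward 1 but never reaches it, and a single black swan
# would immediately pull the belief back down, so the knowledge stays tentative.
from scipy.stats import beta

a, b = 1, 1                              # uniform prior
for n_white in (10, 100, 1000):
    posterior = beta(a + n_white, b)     # posterior after n_white white swans, 0 black
    print(n_white, round(posterior.mean(), 4))   # posterior mean fraction of white swans
```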

But it turns out I had misunderstood. The point the other person tried to make was much more subtle. It had to do with the other half of the problem of induction (which I always forget about, because I usually consider it an axiom when doing epistemology).

This other problem is referred to as the principle of the uniformity of nature. One must presuppose that the laws of nature are consistent across time and space. Recall that a Bayesian has prior beliefs and then upon encountering new data they update their beliefs factoring in both the prior and new data.

This criticism has to do with any application of Bayes’ theorem whatsoever. In order to consider the prior relevant enough to factor in at all, you must believe it is … well, relevant! At that step you have implicitly assumed the uniformity of nature. If you don’t believe nature is consistent across time, then you should not factor prior beliefs into the formation of knowledge.

Now a Bayesian will often try to use Bayesian methods to justify the uniformity of nature. We start with a uniform prior so that we haven’t assumed anything about the past or its relevance to the future. Then we merely note that billions of people across thousands of years have only ever observed a uniformity of nature, and hence it is credible to believe the axiom is true.

Even though my gut buys that argument, it is a bit intellectually dishonest. You can never, ever justify an axiom by using a method that relies on that axiom. That is the quintessential begging the question fallacy.

I think the uniformity of nature issue can be dismissed on different grounds. If you want to dismiss an epistemology based on the uniformity of nature issue, then you have to be willing to dismiss every epistemology that allows you to come to knowledge.

It doesn’t matter what the method is. If you somehow come to knowledge, then one second later all of nature could have changed and hence you no longer have that knowledge. Knowledge is impossible if you want to use that criticism. All this leaves you with is radical skepticism, which of course leads to self-contradiction (if you know you can’t know anything, then you know something, a contradiction).

This is why I think of the uniformity of nature as a necessary axiom for epistemology. Without some form of it, epistemology is impossible. So at least in terms of the problem of induction, I do not see foundational problems for Bayesian epistemology.



7DRL Takeaway Lessons

I promise this will be my last 7DRL post, but I thought I should record some thoughts I’ve had about it now that it is over.

This post could alternately be called “Why I Will Not Use MonoGame in the Foreseeable Future.”

First, the motto of MonoGame is “Write Once, Play Everywhere.” This sounded promising to me, because my first game had issues deploying almost anywhere. Of course, what they mean in practice is that you can play anywhere that Windows is supported.

There are ways to get builds onto other platforms so that Mac and Linux users can play, but I only had 7 days, and I’m not a trained computer scientist, so that wasn’t going to happen. This was a little frustrating, and also one Windows user already complained they couldn’t get it installed.

Second, MonoGame is effectively a re-implementation of XNA 4. Unfortunately, some of the pipelines (especially with an up-to-date Visual Studio) are broken and require hacks to work. This caused major problems with sound, so much so that I scrapped all sound from the game.

I know that with enough time, I probably could have gotten this working (because lots of games made with it have sound), but I couldn’t waste the time in those 7 days to fight with it. This was frustrating, because one of the main reasons to use MonoGame was to make all of that streamlined and easy.

I also felt trapped into using Visual Studio as an IDE. This is certainly a fine IDE, but I’m most comfortable with Sublime Text 2. Switching editors wastes a lot of time, especially when you “downgrade.”

By this I mean VS doesn’t have half the great features of ST2 (including my most used feature: multiple cursors). In retrospect, I should have edited in ST2 and built in VS.

All this being said, MonoGame was a good enough framework to produce a full working game in 7 days, so I can’t complain too much. Also, if I were part of a larger team with a longer time frame, many of these issues would disappear. So I admit these complaints stem specifically from the 7-day constraint.

If I do this again, I will almost certainly try Cocos2d-x. It looks like this will fix all the complaints I had. First, Cocos2d-x is open source. This means I can actually see what the engine is doing if I need to! What a concept.

Second, it is C++, so it will deploy across platforms much more easily. Lastly, it isn’t in some transition period like MonoGame, where content management seems to use outdated, unsupported pipelines. I’m also more familiar with C++ than C#, which would have solved a few headaches.



Theseus: a 7DRL 2015 Completed!

Final Stats:

One week.
About 90 hand-drawn 64×64 pixel tiles.
Over 2000 lines of code.
A completed one-HP strategy roguelike.

Proof that it can be beaten came during the stream where I tested for bugs (the video quality isn’t the best).

Actually, I think I’ve finally gotten it balanced so that if you know what you are doing you should be able to win every time, except for extraordinarily rare scenarios dictated by the randomness.

7DRL 2015 Success.

[Edit:] I’ve set up an itch page: http://hilbert90.itch.io/theseus
Download the game to play for free here: https://www.dropbox.com/sh/1ywjy7s3y72bq5k/AABOZWsPOFBYs0qgcxW5Ccqxa?dl=0



7DRL: Theseus, Alpha Testing Phase

Alright. I have to leave for the day. I threw a few more things in at the last minute this morning and then built the whole game.

If anyone wants to test it and give feedback, that would be much appreciated. I’ll still do a lot of polishing tomorrow.

Sorry, as of right now I have a build that I’m 99% positive will work on any Windows 7 or 8 machine. Earlier versions of Windows probably work, but haven’t been tested. I haven’t made any other builds (the opposite of my first game, which people could only get to work on Linux).

I’ve learned my lesson. If I do this again, I’ll probably switch to Cocos2d-x, which is open source and builds easily for just about any platform.

Types of things I’m looking for:

1. The whole thing crashes. If this happens, try to remember exactly what happened and leave a comment.

2. The setup doesn’t install it on your computer, but you are using Windows. Let me know something about your computer if this happens.

3. The game is too easy (for example, you consistently scale up too quickly and bash your way to victory with no thought).

4. You can let me know if it is too hard, but I probably won’t change that.

5. Anything else that seems like it reasonably needs a fix, given that this is Day 6 of developing a game (you can say the art is not polished, but that will just hurt my feelings … it’s Day 6 and I had a lot of other things to do, for goodness’ sake).

6. The Readme didn’t have enough information.

Tonight or tomorrow, I’ll probably add two more items as a reference for what will be added before the final release.

Thanks. Have fun.

I’ll think of a better way of distributing this later. For now, go here: https://github.com/wardm4/Theseus

Press Download Zip (button to the right). Ignore all the files except the one called “Theseus.zip.” Right-click on that and save it somewhere (your Downloads folder or wherever). Unzip it and then double-click setup.exe. That will prompt a standard install.

Better yet, I’ve added just the zip to a Dropbox here if you don’t want to grab the source code too: https://www.dropbox.com/sh/1ywjy7s3y72bq5k/AABOZWsPOFBYs0qgcxW5Ccqxa?dl=0
