Critical Postmodern Readings, Part 1: Lyotard

I’m over nine years into this blog, so I think most readers know my opinions and worldview on many issues in philosophy. I roughly subscribe to a Bayesian epistemology, and in practical terms this amounts to something like being a rational humanist and skeptic.

I believe there is an objective world and science can get at it, sometimes, but we also have embodied minds subject to major flaws, and so we can’t experience that world directly. Also, with near 100% probability, we experience many aspects of it in a fundamentally different way than they “actually” exist. This puts me somewhat in line with postmodernists.

I believe there are valid and invalid ways to interpret art. This puts me in stark contrast to postmodernists. Postmodernism, as a school of thought, seems to have made a major comeback in academic circles. I’ve also written about the dangers posed by these types of ideas. For more information, search “philosophy” on the sidebar. These opinions have been fleshed out over the course of tens of thousands of words.

I first read famous postmodernists and proto-postmodernists like Baudrillard, Foucault, Lyotard, Derrida, Hegel, and so on as an undergrad (i.e. before this blog even existed). At that time, I had none of the worldview above. I basically read those philosophers with the reaction: “Whoa, dude, that’s deep.” I went along with the other students, pretending to understand the profound thoughts of continental philosophy.

I’ve never returned to them, because I didn’t think they were relevant anymore. I kind of thought we were past the idea of “post-truth.” Now I’m not so sure. This whole intro is basically a way to say that I want to try to tackle some of these texts with a more critical approach and with the added knowledge and experience I’ve gained.

I know this will ruffle a lot of feathers. Part of the postmodernists’ “thing” is to dismiss any criticism with “you’re not an expert, so you just don’t understand it.” That’s fine. I’m going to make an honest effort, though, and if you love this stuff and think I’m misunderstanding, let me know. I’m into learning.

Today we’ll tackle Jean-François Lyotard’s The Postmodern Condition: A Report on Knowledge. This is arguably the most important work on the subject and is often cited as the work that defined “postmodernism.” Since I’ve already wasted a bunch of space on the setup, we’ll only cover the Introduction for now. I recall having to read the Introduction for a class, and I’m pretty sure that’s the extent to which we covered Lyotard at all.

The Introduction is primarily focused on giving an explanation of what Lyotard means by “the postmodern condition,” and how we know we are living in it. There is something important and subtle here. The section is descriptive rather than prescriptive. Modern (liberal arts) academia tends to think in prescriptive terms. We’ll get to that later.

I guess I’ll now just pull some famous quotes and expound on them.

Science has always been in conflict with narratives.

I don’t think this is that controversial. He’s saying science is one narrative for how we arrive at knowledge. The narrative might be called the Enlightenment Values narrative. It’s based on empiricism and rational argument.

This narrative is so pervasive that we often forget it is a narrative. We usually equate science with knowledge, but these values didn’t always exist in the West. There is a substantial body of work from Descartes to Kant that had to make the case for rationality and empiricism as a foundation for knowledge. That’s the definition of a narrative.

The fact that science comes into conflict with other narratives should be readily obvious. There are science vs religion debates all the time to this day. Lyotard also points out another vital concept we often overlook. There are lots of institutions and political forces behind what we call science, and each of these has its own metanarrative that might come into conflict with forming knowledge.

I define postmodern as incredulity toward metanarratives. This incredulity is undoubtedly a product of progress in the sciences: but that progress in turn presupposes it.

This is a bit deeper than it looks, but only because I know the context of Lyotard’s writing. Taken with the first quote above, one might just think that he’s saying the progress of science has led to people questioning the metanarratives of their lives, like the religion they were brought up in.

Part of the reason Lyotard has chosen the term “postmodern” to describe this condition is because of the artistic movements known as postmodernism. The utter destruction of World War I and World War II brought a destabilization to people’s lives.

Technology created this destruction, and it was fueled by science. Not only did people question the traditions they were brought up in, but they began to question if science itself was good. Much of the postmodern art produced in the decades after WWII focused on highly disjointed narratives (Lost in the Funhouse), the horrors of war (Gravity’s Rainbow), involved utter chaos and randomness (Dadaism), or emphasized futility and meaninglessness (Waiting for Godot).

All these aspects overthrew narratives and traditions. They weren’t just radical because of the content; they often questioned whether we even knew what a novel or a play or a poem or a piece of music was. If we no longer knew what these longstanding artistic forms and narratives were, how could we trust any of the narratives that gave our lives meaning?

And I’ll reiterate, there is a pretty direct link from the science that brought the destruction to this “postmodern condition” people found themselves in.

The rest of the Introduction gets pretty jargony.

Where, after the metanarratives, can legitimacy reside?

There is a danger that people will seize upon any stabilizing force once in this position. Authority figures can even ride this to power (we just watched this happen in the U.S.). They tell us stories that make sense and make us feel better, so we put them in power. This is an endless cycle, because once in power, they control the narrative.

How do we form truth and knowledge in such a society? That is the subject of Lyotard’s book and is not answered merely in the Introduction.

I’ll end today’s post by pointing out something very important. Lyotard seems to believe in truth and knowledge and science. He seems concerned by people’s rejection of these concepts due to the postmodern condition.

When people describe themselves as postmodernists, they tend to mean they reject the notion of truth. They say that all we have are narratives, and each is equally valid. Maybe this is because Lyotard isn’t a postmodernist? He merely describes what is going on.

I think more likely it’s that this label has changed from descriptive to prescriptive. Current postmodernists think of the postmodern condition as being good. If science starts to dominate as a narrative, these people want to reject that. In some sense they see this as “liberation” from the “imperialist white capitalist patriarchy” that has dominated the West and caused so much suffering.

I’m very curious to see whether these attitudes actually crop up in the writings of postmodernist philosophers or whether this view is some corruption of those thinkers’ ideas.

What is an Expert?

I’ll tread carefully here, because we live in a strange time in which the motives and knowledge of experts are questioned to bolster every bizarre conspiracy theory under the sun. No one trusts any information anymore. It’s not even clear whether trusting (or doubting) expert opinion is anti-intellectual or hyper-intellectual. But that isn’t today’s topic.

I listen to quite a few podcasts, and several of them have made me think about expertise recently.

For example, Gary Taubes was on the Sam Harris podcast and both of them often get tarred with the “you don’t have a Ph.D. in whatever, so you’re an unknowledgeable/dangerous quack” brush. Also, Dan Carlin’s Hardcore History podcast is insanely detailed, but every ten minutes he reminds the audience “I’m not a historian …”

Many people who value the importance of expertise think the degree (the Ph.D. in particular, but maybe an MFA for the arts) is the be-all and end-all of the discussion. If you have the Ph.D., then you’re an expert. If you don’t, then you’re not.

The argument I want to present is that if you believe this, you really should be willing to extend your definition of expertise to a wider group of people who have essentially done the equivalent work of one of these degrees.

Think of it this way. Person A goes to Subpar University, scrapes by with the minimal work, kind of hates it, and then teaches remedial classes at a Community College for a few years. Person B has a burning passion for the subject, studies all of the relevant literature, and continues to write about and develop novel ideas in the subject for decades. I’d be way more willing to trust Person B as an expert than Person A despite the degree differences.

Maybe I’ve already convinced you, and I need not go any further. Many of you are probably thinking, yeah, but there are parts to doing a degree that can’t be mimicked without the schooling. And others might be thinking, yeah, but Person B is merely theoretical. No one in the real world exists like Person B. We’ll address each of these points separately.

I think of a Ph.D. as having three parts. Phase 1 is a demonstration of competence in the basics. This is often called the Qualifying or Preliminary Exam. Many students don’t fully understand the purpose of this phase while going through it. They think they must memorize and compute. They think of it as a test of basic knowledge.

At least in math and the hard sciences, this is not the case. It is almost a test of attitude. Do you know when you’re guessing? Do you know what you don’t know? Are you able to admit this or will you BS your way through something? Is the basic terminology internalized? You can pass Phase 1 with gaps in knowledge. You cannot pass Phase 1 if you don’t know where those gaps are.

Phase 2 is the accumulation of knowledge of the research done in your sub-sub-(sub-sub-sub)-field. This basically amounts to reading thousands of pages, sometimes from textbooks to get a historical view, but mostly from research papers. It also involves talking to lots of people engaged in similar, related, or practically the same problems as your thesis. You hear their opinions and intuitions about what is true and start to develop your own intuitions.

Phase 3 is the original contribution to the literature. In other words, you write the thesis. To get a feel for the difficulty and time commitment of each step: in a five-year Ph.D., Phase 1 would ideally be done in around a year, Phase 2 takes two to four years, and Phase 3 takes around a year (there is overlap between phases).

I know a lot of people aren’t going to like what I’m about to say, but the expertise gained from a Ph.D. is almost entirely the familiarization with the current literature. It’s taking the time to read and understand everything being done in the field.

Phase 1 is basically about not wasting people’s time and money. If someone isn’t going to understand what they read in Phase 2 and is going to make careless mistakes in Phase 3, it’s best to weed them out with Phase 1. But you aren’t gaining any expertise in Phase 1, because it’s all just the basics still.

One of the main reasons people don’t gain Ph.D.-level expertise without actually doing the degree is that being in such a program forces you to compress all that reading into a small time frame (yes, reading for three years is short). It’s going to take someone doing it as a hobby two or three times longer, and even then, they’ll be tempted to just give up without the external motivation of the degree looming over them.

Also, without a motivating thesis problem, you won’t have the narrow focus to make the reading and learning manageable. I know everyone tackles this in different ways, but here’s how it worked for me. I’d take a paper on a related topic, and I’d try to adapt its techniques and ideas to my problem. This forced me to really understand what made those techniques work, which often involved learning a bunch of stuff I wouldn’t have if I had just read through the paper for the results.

Before moving on, I’d like to add that upon completion of a Ph.D. you know pretty much nothing outside of your sub-sub-(sub-sub-sub)-field. It will take many years of continued teaching and researching and reading and publishing and talking to people to get any sense of your actual sub-field.

Are there people who complete the equivalent of the three listed phases without an actual degree?

I’ll start with the more controversial example of Gary Taubes. He got a physics undergrad degree and a master’s in aerospace engineering. He then went into science journalism. He stumbled upon how complicated and shoddy the science of nutrition was, and started to research a book.

Five years later, he had read and analyzed pretty much every single nutrition study done. He interviewed six hundred doctors and researchers in the field. If this isn’t Phase 2 of a Ph.D., I don’t know what is. Most students won’t have gone this in-depth to learn the state of the field in an actual Ph.D. program.

Based on all of this, he then wrote a meticulously cited book Good Calories, Bad Calories. The bibliography is over 60 pages long. If this isn’t Phase 3 of a Ph.D., I don’t know what is. He’s continued to stay abreast of studies and has done at least one of his own in the past ten years. He certainly has more knowledge of the field than any fresh Ph.D.

Now you can disagree with his conclusions all you want. They are quite controversial (but lots of Ph.D. theses have controversial conclusions; this is partially how knowledge advances). Go find any place on the internet with a comments section that has run something about him, and you’ll find people who write him off because “he got a physics degree, so he’s not an expert on nutrition.” Are we really supposed to ignore 20 years of work just because it wasn’t done at a university and the degree earned years earlier was in an unrelated field? It’s a very bizarre sentiment.

A less controversial example is Dan Carlin. Listen to any one of his Hardcore History podcasts. He loves history, so he obsessively reads about it. Those podcasts are each an example of completing Phase 3 of the Ph.D. And he clearly knows the literature, constantly referencing hundreds of pieces of research per episode off the top of his head. What is a historian? Supposedly it’s someone who has a Ph.D. in history. But Dan has completed all the same phases; it just wasn’t at a university.

(I say this is less controversial, because I think pretty much everyone considers Dan an expert on the topics he discusses except for himself. It’s a stunning display of humility. Those podcasts are the definition of having expertise on a subject.)

As a concluding remark/warning: there are a lot of cranks out there who try to pass themselves off as experts but really aren’t. It’s not easy for most people to tell the difference, so when you’re not sure, it’s definitely best to err on the side of the degree that went through the gatekeeping of a university.

But also remember that Ph.D.’s are human too. There are plenty of people like Person A in the example above. You can’t just believe a book someone wrote because a degree is listed after their name. They might have made honest mistakes. They might be conning you. Or, more likely, they might not have a good grasp on the current state of knowledge in the field they’re writing about.

What is an expert? To me, it is someone who has dedicated themselves with enough seriousness and professionalism to get through the phases listed above. This mostly happens with degree programs, but it also happens a lot in the real world, often because someone moves into a new career.

On Google’s AlphaGo

I thought I’d get away from critiques and reviews and serious stuff like that for a week and talk about a cool (or scary) development in AI research. I won’t talk about the details, so don’t get scared off yet. This will be more of a high level history of what happened. Many of my readers are probably unaware this even exists.

Let’s start with the basics. Go is arguably the oldest game in existence. And despite appearances, it’s one of the simplest. Each player takes a turn placing a stone on the intersections of a 19×19 board. If you surround a stone or group of stones of your opponent, you capture them (remove them from the board). If you completely surround other intersections, that counts as your “territory.”

The game ends when both sides pass (no more moves can be made to capture or surround territory). The side that has more territory + captures wins. There’s no memorization of how pieces move. There are no other rules to learn (except ko, which basically says you can’t repeat a position and make the game loop forever). It’s really that simple.

And despite the simplicity, humans have continued to get better and produce more and more advanced theory about the game for over 2,500 years.

Let’s compare Go to Chess for a moment, because most people in the West think of Chess as the gold standard of strategy games. One could study Chess for a whole lifetime and still pale in comparison to the top Grandmasters. When Deep Blue beat Kasparov in 1997, it felt like a blow to humanity.

If you’re at all in touch with the Chess world, you will have succumbed to the computer overlords by now. We can measure the time since Deep Blue’s victory in decades. Chess AIs have improved so much since then that it is commonly accepted across the whole community that a human will never again be able to win against a machine at Chess.

A few years ago, we could at least have said, “But wait, there’s still Go.” To someone who doesn’t have much experience with Go, it might be surprising to learn that computers weren’t even close to winning against a human a few years ago.

Here’s the rough idea why. Chess can be won by pure computation of future moves. There is no doubt that humans use pattern recognition, positional judgment, and basic principles when playing, but none of that stands a chance against a machine that simply calculates out vast numbers of future move combinations and picks the best one.

Go, on the other hand, has pattern recognition as a core element of the strategy. One might try to argue that this is only because the calculations are so large, no human could ever do them. Once we have powerful enough computers, a computer could win by pure forward calculation.

As far as I understand it, this is not true. And it was the major problem in making an AI strong enough to win. Even at a theoretical level, the branching factor is so large that brute-force look-ahead explodes almost immediately (the number of possible Go positions famously exceeds the number of atoms in the known universe). A dozen moves in Chess is half the game. A dozen moves in Go tells you nothing; it wouldn’t even cover a short opening sequence.
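
For a rough sense of scale (these are the commonly cited ballpark branching factors, not figures from the post): looking ahead a dozen moves for each player gives on the order of

35^{24} \approx 10^{37} \text{ positions in Chess} \qquad \text{versus} \qquad 250^{24} \approx 10^{57} \text{ positions in Go,}

and the total number of legal Go positions is estimated at around 2 \times 10^{170}, compared to roughly 10^{80} atoms in the observable universe.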

Go definitely has local sections of the game where pure “reading ahead” wins you the situation, but there is still the global concept of surrounding the most territory to consider. It’s somewhat hard to describe in words to someone unfamiliar with the game what exactly I mean here.

[Image: a board position showing the san-ren-sei opening]

Notice how on the right the black stones sort of surround that area. That could quickly turn into territory by fully surrounding it. So how do you get an AI to understand this loose, vague surrounding of an area? One could imagine much, much looser and vaguer surrounding as well. Humans can instantly see it, but machines cannot, and no amount of calculating further sequences of moves will help.

For years, every winter break from college, I’d go home and watch famous and not-so-famous people easily win matches against the top AI. Even as late as 2014, it wasn’t clear to me that I’d ever see a computer beat a human. The problem was that intractable.

Along came Google. They used a machine learning technique called “Deep Learning” to teach an AI to develop these intuitions. The result was the AlphaGo AI. In March 2016, AlphaGo beat Lee Sedol, arguably the top Go player in the world. It was a five-game match, and AlphaGo won 4-1. This gave humanity some hope that the top players could still manage a win here and there (unlike in Chess).

But then the AI was secretly put on an online Go server under the name “Master.” It went on to play pretty much every top pro in the world and won every single game, finishing with a record of about 60-0. It is now believed that humans will never win against it, just like in Chess.

More theory has been developed about Go than any other game. We’ve had 2,500 years of study. We thought we had figured out sound basic principles and opening theory. AlphaGo has shaken this up. It will often play moves that look bad to a trained eye, but we’re coming to see that many of the basics we once thought of as optimal are not.

It’s sort of disturbing to realize how quickly the machine learned the history of human development and then went on to innovate its own superior strategies. It will be interesting to see if humans can adapt to these new strategies the AI has invented.

Those Words Are Different?

Here’s a list of words I routinely have to look up. Many of these I used incorrectly until quite recently, because I didn’t even realize they were different. A few others I’ve seen other people use incorrectly, so they were on my mind. We won’t rehash the to/too/two or there/their/they’re nonsense, because everyone knows those are different even if they mess it up sometimes. These are words many people don’t even realize are different words.

Lull vs Loll:

Lull means to put to sleep.
Ex: I lulled the baby to sleep.

This is easy to remember, because you can think lullaby.

Loll means to recline or dangle loosely.
Ex: The baby’s head lolled to the side as I lulled him to sleep.

The more common mistake seems to be writing “lull” when “loll” should be used.

Clamber vs Clamor:

Clamber means to climb up with all your body parts.
Ex: I clambered up the fire pole at the first sound of the alarm.

Clamor is an outcry or loud noise.
Ex: The protesters clamored their demands.

This distinction also has a trick. Climb has a “b” and so does clamber, so clamber means to climb. I’m not sure the error happens one way or the other more often, because it’s not clear to me most people even realize these are different words.

Pour vs Pore:

Pour means to dump a liquid, usually onto or into something else.
Ex: I poured myself a glass of orange juice for breakfast.

Pore means to gaze or study with much attention.
Ex: I pored over the photograph of a person pouring orange juice for a clue to the mystery.

I think the trick here is to remember that pore is a word. It seems to me most people use “pour” for everything without realizing the other one exists and is different. If you do confuse them, pour has a “u” just like dump and liquid.

Palate vs Palette:

Palate is the roof of your mouth.
Ex: You have a refined palate to be able to distinguish Merlot from Cabernet by taste alone.

Palette is the board you mix paint on.
Ex: Bob Ross sets up his palette carefully before he begins any painting.

I must admit that I wrote a whole short story about a painter where I accidentally used “palate” everywhere. I caught it upon revision, but I was alarmed at how unaware of this I was. I’ve yet to come up with an easy way to remember the difference, but this is probably another case of being aware that “palette” exists.

Flare vs Flair:

Flare refers to a bright light.
Ex: The motor on the boat died, so we used an emergency flare to signal help.

Flair refers to a talent or style.
Ex: My job as a server requires me to wear thirty-seven pieces of flair on my uniform.

The most common place I see this misused is in the expression: she has a flair for writing. Do not use “flare” in that case. Otherwise, I think people mostly know these are different words and what the difference is.

Cattle vs Chattel:

Cattle are bovine livestock, in other words, a group of cows.
Ex: I trained my dog to herd the cattle.

Chattel is mostly a legal term referring to movable possessions.
Ex: My cattle are my most valuable chattel.

Pretty much no one misuses cattle and pretty much no one has a need to use chattel, so you’re probably safe here. Various unsavory internet message boards can get them confused. For example, 19th century English Common Law had married women as legal chattel of their husband (this was called coverture). If you bring this up while arguing on the internet, it’s best not to use the word “cattle.”

For the record, they both derive from the Middle English “chatel,” meaning “personal property.”

Gantlet vs Gauntlet:

One “runs the gantlet” for punishment, and one “throws down the gauntlet” as a challenge. Let’s not dwell on this or argue over it. These are expressions, and the words are rarely used outside of those two expressions. And yes, the famous 1985 arcade game was misnamed.

All right vs Alright:

This is a trick! “Alright” is not a word. Always use “all right” when you feel yourself about to write “alright.”

I can think of a few more, but they fall more into the “I know they’re different but can’t remember which is which” category (born/borne, hoard/horde, tortuous/torturous, etc.). I wanted to keep this post to words many people might not realize are different at all.

Year of Short Fiction Part 5: The Call of Cthulhu

Somehow I went my whole life without reading a single thing by H.P. Lovecraft. Since we’re still doing short fiction from the early 20th century, I decided to rectify that. I’m not much of a reader of horror, but there’s certainly a lot any writer can learn by studying the genre. And let’s face it, The Call of Cthulhu is one of the most important works of horror ever written, from both a literary and a cultural perspective.

There is a joy in experiencing this story with little knowledge of the plot, so I’ll word things in a vague way to keep the secrets untold.

The first thing to jump out at me was the dense prose style. The first two sentences already indicate this is not your average pulp genre writing:

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.

I had to look up a few words in the first pages, though some of these might have been more standard back when it was written (e.g. bas-relief). These opening lines set up much of what is to come. The main character has to piece together various found stories to get the full picture (i.e. “correlate all its contents”). Later we get a scene set on infinite black seas. So these lines were written with the full intention of setting up later parts of the story.

I was a little surprised by how real it was. One might say it is written in a hyperrealist style. The level of detail provided is almost distracting. At times, it was hard to remember the story was fiction rather than actual travel logs and notes. There are many names, and each of these people has precise degrees and jobs and even full addresses (7 Thomas St., Providence, R.I.) associated with them.

In other places, we’re given exact coordinates of various sightings: S. Latitude 34° 21′, W. Longitude 152° 17′. This gives the reader precise information about the settings of various events, but at the same time, it’s kind of useless unless you pull yourself out of the story to Google it (as I did). These details mostly serve the purpose of making everything as real as possible.

This story really hits upon one of the things I wanted to encounter when I started the series. There’s close to a full novel’s worth of material in it, but it’s somehow packed tightly into a single short story.

This hyperrealism is part of what makes this possible. Instead of getting lots of lengthy “show don’t tell” descriptions that usually flesh out a single moment into a full short story, Lovecraft presents several detailed fragments that the reader must piece together on her own. In this way, we get years of events in a few pages, and it all feels natural since we’re just reading a few primary sources along with the main character.

This makes it hard to tell exactly what is happening, but this is done to give the reader the same experience as the narrator, who also doesn’t know what’s happening.

And now we’re in horror. It’s often said that the most suspenseful and horrifying things are those things we can’t see or understand. The structure of the story brilliantly puts you in the unsettled feeling of the unknown. It opens with a vague description based on a symbolic representation of the monster:

If I say that my somewhat extravagant imagination yielded simultaneous pictures of an octopus, a dragon, and a human caricature, I shall not be unfaithful to the spirit of the thing.

This cleverly lets the reader’s mind run wild over the first half of the story about what exactly this Cthulhu is. Lovecraft proceeds to add mystery upon mystery: sudden deaths, cults, people going mad, and conspiracy. It’s somewhat brilliant in how it continuously adds suspense without resolving earlier mysteries.

Lovecraft keeps you guessing with that unsettled feeling. Is the main character interpreting this correctly? Is he putting together a set of unrelated things? Is he going mad? Or maybe, worst of all, he’s right, and all of this has been hidden from the rest of us.

Overall, I think a lot can be learned from studying this story. The dense and flowing prose style is impressive on its own. I may have to do a whole “Examining Pro’s Prose” on it. Moreover, the tension and forward motion Lovecraft creates through mystery and hidden information is excellent. Lastly, he brilliantly packs in so much information through the use of non-linear structure.

Elements of Writing that Annoy Me Part 2

I wrote the first of these something like three years ago. Maybe I’m just in a bad mood or the writing I read really is getting worse, but certain things have been getting on my nerves a lot. It’s time to pick this up again!

1. Not trusting your reader. This is a typical flaw of first-time novelists. They have a beautiful idea and execute it in a clever, original way, but they are so fearful the reader will miss what they’ve put all this work into that they overdo it.

It’s like if someone were to tell you a joke, you laugh, and then they say, “Did you get it? Here, let me tell you why it was funny.” There’s never a reason to do this. If someone didn’t get your art without you telling them, then it failed. Telling them what it’s about doesn’t fix that. For everyone else, they already got it, so there is no need to re-explain it.

The example that jumps out to me the most is the movie A Single Man. I thought this movie was brilliant when it came out, but the ending made me cringe a little. A new character comes in right at the end and explains it all to you. I haven’t seen it since it was in theaters, so maybe I’d feel differently now.

The other way this manifests is in thoughts and exposition. I hate when a book explains how a character feels right after it was demonstrated.

Sally yelled, “I hate you!” Fred annoyed her so much, and she was beginning to hate him.

That’s obviously not a real example, and I exaggerated it to illustrate the point. But I’ve seen things almost this bad.

2. Alliteration. I have a theory about alliteration. When you’re in a flow state of writing, the brain makes a lot of weird connections. So when you get to a noun like “book” and you want more description, the brain naturally jumps to something like “boring” or “bothersome” or “bad.”

I have no evidence to support this theory. I’ve noticed in my own writing that this is when it tends to creep in. Don’t get me wrong. Alliteration is a literary device that can be used to great effect when done right. But if you find it in a first draft, it should pretty much never make it to the final draft. It was probably an accident.

I view the misuse of alliteration as a mistake on par with a grammar mistake. I know this sounds unfair, since it’s only a prose style error. It falls under the category known as “diction.” I’m not sure why standards have gotten so lax in this category. You will never find this error in great writers of the past, but it’s everywhere now.

It’s hard to say what annoys me so much about it. I think it comes from thinking about why it happens: it’s either laziness on the writer’s part, lack of knowledge on the writer’s part, laziness or lack of knowledge on the editor’s part, or the writer ignoring the editor’s advice. All of these are pretty annoying reasons.

3. Semi-dangling modifiers. Okay. I made this up. It’s not a real thing. If a book is traditionally published, it should go through an editor good enough to not allow any actual dangling modifiers. A dangling modifier is when you start a sentence with a clause that modifies a subject not actually present in the sentence.

An example: Having eaten a large breakfast, lunch was unappetizing. The first clause has an implied person as its subject. The second clause has “lunch” as its subject. This is an easy fix: Having eaten a large breakfast, I found lunch unappetizing. Now the implied subject of the modifying clause matches the subject of the sentence.

Beginning with modifying clauses in general can be grating. If this were in something I was editing, I would strongly suggest the change: I found lunch unappetizing, because I ate a large breakfast. It puts the main clause and its subject up front, and it clarifies the logic.

Now I’m going to pick on a real book to illustrate what I mean by “semi-dangling modifiers.” I’ve been reading The Bees by Laline Paull, and she does this all the time. I don’t want to pick on her too much, because I actually see this in a lot of what I read. I just happen to have that book on my desk right now. Chapter 21 begins with this sentence:

Shocked at her own act, Flora was among the first out.

When I read this, I had no idea what act it referred to, because I had put the book down at the chapter break. But let’s not dwell on that (this might be the opposite of point 1: trusting your reader too much by opening a chapter with a reference to the last event of the previous chapter).

The modifier is not dangling, because Flora is the subject of the sentence. I call this “semi-dangling,” because the clause has no logical connection to the main sentence. When a sentence begins with a modifying clause, it is implied that the sentence could be rearranged to make clear how the clause contains information relevant to the rest of the sentence. In the example I gave above, we learned why I found lunch unappetizing.

In this example, the clause could be deleted without losing anything, and so it should be deleted! It’s semi-dangling in the sense that the clause itself never refers to something relevant to the rest of the sentence.

People, stop semi-dangling your modifiers. If the clause is irrelevant, delete it. If it is important information but has no logical connection to the rest of the sentence, make it a whole new sentence.

Prismata Review

A few months ago I reviewed a game by David Sirlin called Codex. It is an attempt to convert a real-time strategy game, like StarCraft, into a card game. And it actually does a really good job (see the post for details).

I’ll try to not talk about StarCraft very long, because the words will be indecipherable to anyone who hasn’t played it (which is probably 99% of people reading this). There is a really old and interesting question about the game: if you strip away everything but the strategy aspect, is it still an interesting game?

This may sound weird to people unfamiliar with the game, because, well, it’s a real-time “strategy” game.

The first ridiculous thing when starting StarCraft is how much there is to learn. There are probably close to 100 hotkeys you have to know. There’s the tech tree structure. There are around 60 units, and for each you should know the cost, attack types, damage, health, shields, and spell-casting abilities. Knowing those things, you’ll then need to learn what counters what and why.

And you might be thinking: but I’ll just click through stuff during a game to find the information; there’s no need to memorize it. That brings up the other crazy aspect of the game: apm (actions per minute). You are going to need 200-300 apm (i.e. clicking or pressing a keyboard key roughly 3 to 5 times per second on average for an entire 15+ minute match), so you just don’t have time to look stuff up during a match even though that information is available.

If you’re not a StarCraft player, hopefully you’re getting a sense of why the question doesn’t have an obvious answer. You have to play for months just to internalize the hotkeys and learn enough to get to the point of forming any sort of strategy.

If you strip out the memorization; if you strip out dividing the opponent’s attention and distracting them; if you strip out the fog of war; if you strip out having to execute 5 actions per second perfectly for an entire match: is there an interesting strategy game left? In other words, is the winner just someone who clicks faster?

Codex went a long way to answering that question in the affirmative. Sirlin brilliantly left in an aspect of the fog of war and tech trees. But the fact that it is a card game messes with the answer a little. There’s still some luck and some blind countering and some memorization to know what possible answers your opponent will have.

Okay, so this post is supposed to be about Prismata. To me, Prismata gives us a near perfect game for answering the question. There is absolutely no hidden information. All the units and their costs and their abilities are listed on the side at the start of the game. A beginner can play matches with slow enough time controls to carefully read all of this and formulate a plan before making moves.

As soon as your opponent buys a unit, it goes onto the board. So there is no random hidden information from shuffling it into a deck, as there is in Codex. Despite its appearance, Prismata is NOT a card game. There is no deck or randomness in gameplay at all.

The only randomness is in what units you are allowed to choose from during setup, and I think this is absolutely brilliant. In traditional strategy games like Chess or Go or even StarCraft, there are set openings that one must memorize to play at the top level because every game starts the same. This takes the strategy out of the opening.

In Prismata, every game is different. You have to look at the board you’ve been given and start planning a strategy on Turn 1. It’s a really exciting and fresh idea for a strategy game. It’s like if Chess or Go started with some randomized board state. You couldn’t go into a game with a plan to play a Queen’s gambit or the Kobayashi opening or something. You have to develop a plan on the fly based on the board. It’s a true battle of skill.

Before this review gets too far, I have to bring up the last comparison to Codex. Codex is a card/board game. There is no real way to play online. I played quite a bit by forum, and this might be tolerable for some people. The community is certainly very active, and you won’t have trouble finding a match. But it brought too much fatigue for me, and I stopped liking it for a while.

Prismata is computer only (eventually through Steam and a separate client and web browser, though I’m not sure if all will continue to be supported after Steam release). If Codex had a computer version, it might compete for my attention. As it is, it’s a game that is played in person, occasionally.

Prismata has an excellent set of tutorials and basic bots and “story” to play through to get a newcomer up to speed. The game looks horrifically complicated, but it is actually very easy to learn and difficult to master. I promise if you play through the basic stuff, you’ll have a full grasp of the basics and even have a few basic strategic ideas. Do not be intimidated by a cluttered screenshot if this game sounds at all interesting to you.

Prismata is a game for people who like strategy and/or card games but who don’t like some of the ridiculous aspects of both. Many strategy games have too much hidden information to make good decisions, or require too much technical execution to carry out a strategic plan. And card games, well, the online ones at least, have way too much randomness. There’s also that super annoying way card games completely change every few months when new cards get released, and you have to dump a ton of money into them to stay relevant.

Did I mention Prismata is truly free to play? Since it’s not a card game, you’ll be playing the real game every game. Neither side will have an advantage merely from grinding out hundreds of hours or paying hundreds of dollars to unlock some legendary thing.

Right now, if you want to try it, you’ll need to request an alpha tester key here. It should release on Steam very soon, though, and I promise to reblog this with the link at the top to remind anyone interested.

The Carter Catastrophe

I’ve been reading Manifold: Time by Stephen Baxter. The book is quite good so far, and it presents a fascinating probabilistic argument that humans will go extinct in the near future. It is sometimes called the Carter Catastrophe, because Brandon Carter first proposed it in 1983.

I’ll use Bayesian arguments, so you might want to review some of my previous posts on the topic if you’re feeling shaky. One thing we didn’t talk all that much about is the idea of model selection. This is the most common thing scientists have to do. If you run an experiment, you get a bunch of data. Then you have to figure out the most likely reason for what you see.

Let’s take a basic example. We have a giant tub of golf balls, and we can’t see inside the tub. There could be 1 ball or a million. We’re told the owner accidentally dropped a red ball in at some point. All the other balls are the standard white golf balls. We decide to run an experiment where we draw a ball out, one at a time, until we reach the red one.

First ball: white. Second ball: white. Third ball: red. We stop. We’ve now generated a data set from our experiment, and we want to use Bayesian methods to give the probability of there being three total balls or seven or a million. In probability terms, we need to calculate the probability that there are x balls in the tub given that we drew the red ball on the third draw. Any time we see this language, our first thought should be Bayes’ theorem.

Define A_i to be the model of there being exactly i balls in the tub. I’ll use “3” inside of P( ) to be the event of drawing the red ball on the third try. We have to make a finiteness assumption, and although this is one of the main critiques of the argument, we can examine what happens as we let the size of the bound grow. Suppose for now the tub can only hold 100 balls.
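
One step the post leaves implicit (my own addition, using the drawing-without-replacement setup just described): the chance of first seeing the red ball exactly on the third draw, given i total balls, is

P(3|A_i) = \frac{i-1}{i}\cdot\frac{i-2}{i-1}\cdot\frac{1}{i-2} = \frac{1}{i} \quad \text{for } i \geq 3, \qquad P(3|A_i) = 0 \quad \text{for } i < 3.

This is where the 1/3 and the sum starting at i=3 in the computation below come from.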

A priori, we have no idea how many balls are in there, so we’ll assume all “models” are equally likely. This means P(A_i)=1/100 for all i. By Bayes’ theorem we can calculate:

P(A_3|3) = \frac{P(3|A_3)P(A_3)}{\sum_{i=1}^{100}P(3|A_i)P(A_i)} = \frac{(1/3)(1/100)}{(1/100)\sum_{i=3}^{100}1/i} \approx 0.09

So there’s around a 9% chance that there are only 3 balls in the tub. That bottom summation remains exactly the same when computing P(A_n | 3) for any n and equals about 3.69, and the (1/100) cancels out every time. So we can compute explicitly that for n > 3:

P(A_n|3)\approx \frac{1}{n}(0.27)

This is a decreasing function of n, and this shouldn’t be surprising at all. It says that as we guess there are more and more balls in the tub, the probability of that guess goes down. This makes sense, because it’s unreasonable to think we’d see the red one that early if there are actually 100 balls in the tub.
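
To make the numbers concrete, here is a minimal Python sketch (my own illustration, not from the original post) that recomputes the posterior for any assumed maximum tub size N; the function and variable names are just for exposition.

def posterior(draw, N):
    """P(A_n | red ball first appears on the given draw), for n = 1..N, with a uniform prior."""
    # Likelihood: the red ball can first appear on draw `draw` only if n >= draw,
    # and in that case the probability is 1/n (see the small derivation above).
    likelihood = [0.0 if n < draw else 1.0 / n for n in range(1, N + 1)]
    prior = 1.0 / N
    evidence = sum(p * prior for p in likelihood)
    return [p * prior / evidence for p in likelihood]

post = posterior(draw=3, N=100)
print(round(post[2], 3))   # P(A_3 | 3)  -> about 0.09
print(round(post[9], 3))   # P(A_10 | 3) -> about 0.027, matching (1/10)(0.27)

Changing N to something much larger is all it takes to explore the “millions of balls” variant in the next paragraph.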

There’s lots of ways to play with this. What happens if our tub could hold millions but we still assume a uniform prior? It just takes all the probabilities down, but the general trend is the same: It becomes less and less reasonable to assume large amounts of total balls given that we found the red one so early.

You could also care only about this “earliness” idea and redo the computations asking how likely A_n is given that we found the red ball by the third try. This is actually the more typical way the problem is formulated in Doomsday arguments. It’s more complicated, but the same idea pops out, and this should make intuitive sense.

Part of the reason these computations were somewhat involved is that we tried to get a distribution on the natural numbers. But we could have made a cruder comparison to get a super clear answer (homework for you). What if we only had two choices, “small number of total balls (say 10)” or “large number of total balls (say 10,000)”? You’d find there is around a 99% chance that the “small” hypothesis is correct.
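
And here is the two-hypothesis version of that homework, sketched the same way (again my own illustration; it assumes equal priors on the two hypotheses, and the exact posterior shifts with whatever priors you pick):

p_small = p_large = 0.5
like_small = 1 / 10       # P(red ball first appears on draw 3 | 10 balls total)
like_large = 1 / 10_000   # P(red ball first appears on draw 3 | 10,000 balls total)
evidence = like_small * p_small + like_large * p_large
print(like_small * p_small / evidence)   # about 0.999: the "small" hypothesis dominates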

Here’s the leap. Now assume the fact that you exist right now is random. In other words, you popped out at a random point in the existence of humans. So the totality of humans to ever exist are the white balls and you are the red ball. The same type of argument above applies, and it says that the most likely thing is that you aren’t born at some super early point in human history. In fact, it’s unreasonable from a probabilistic standpoint to think that humans will continue much longer at all given your existence.

The “small” total population of humans is far, far more likely than the “large” total population, and the interesting thing is that this remains true even if you mess with the uniform prior. You could assume it is much more likely a priori for humans to continue to make improvements and colonize space and develop vaccines giving a higher prior for the species existing far into the future. But unfortunately the Bayesian argument will still pull so strongly in favor of humans ceasing to exist in the near future that one must conclude it is inevitable and will happen soon!

Anyway. I’m travelling this week, so I’m sorry if there are errors in those calculations. I was in a hurry and never double checked them. The crux of the argument should still make sense even if you don’t get my exact numbers. There’s also a lot of interesting and convincing rebuttals, but I don’t have time to get into them now (including the fact that unlikely hypotheses turn out to be true all the time).

Year of Short Fiction Part 4: Breakfast at Tiffany’s

Breakfast at Tiffany’s is one of those weird cultural staples that literally everyone has heard of. Most people over a certain age have probably seen the movie, but ask them what it’s about, and they probably have no idea. It’s kind of fascinating to think how a novella/film gets to such a point. I can’t even think of another cultural phenomenon of this type.

I was pretty excited going into this for a few reasons. I, too, had seen the movie enough years ago to not remember it. Oh, there’s the long cigarette, and a crazy cat, and a wacky party girl, and singing “Moon River,” but what was it about? What was the plot? The other reason I was excited was that Truman Capote’s In Cold Blood is one of two books that have ever made me cry. The way he writes is breathtaking.

The first thing to jump out at me was the vulgarity of the language. It was published in 1958, so we’ve moved past short fiction that hides indiscretions. But I still must imagine this novella pushed what was acceptable for the time. It openly talks about prostitution and homosexuality and a 14-year-old girl getting married to an adult man. Plus, Holly’s language is very direct and crude (I don’t recall if she swears, though).

Lolita came out a few years before Breakfast at Tiffany’s, and Tiffany’s doesn’t compare in disturbing imagery to that. So I guess I shouldn’t have been too surprised. It had more to do with tone than imagery, though.

The novella is basically a long character study, and it does an excellent job at this. Holly has to be one of the strangest characters of all time. Capote’s attention to detail is incredible. Almost every sentence that has Holly in it is crafted to expose some tiny piece of how her mind works. An early example is that the location on her business card is: traveling.

At first, it comes off as chaos. Nothing about the character makes sense, and the sentences she speaks come out in a stream-of-consciousness level of confusion. But then, by about halfway or so, she’ll do something weird, and you find yourself thinking: that’s so Holly. There appears to be a deep internal logic to it. Holly feels very real and knowable.

The plot itself is fairly melodramatic. It goes by at rapid-fire pace. This short novella has Holly being in love with and engaged to several people. She travels to probably a dozen places, often not even in the U.S. There’s parties. She’s involved with a scheme to smuggle drugs orchestrated by a man in prison. She gets pregnant and miscarries. It’s almost impossible to take stock of all that happens in this, and there’s almost no real emotion behind any of it.

Capote clearly did this on purpose. Holly’s character is flighty, and she often jumps into things without any thought. If we think of the novella as a character study, then all these crazy events occurring is part of the brilliance of the novella. The plot doesn’t have weight for the main character, so it would be a mistake to have the events play a significant role to the reader. Holly moves on, and so should the reader.

And now we come full circle. No one remembers the plot to Breakfast at Tiffany’s by design. We’re only meant to remember Holly. Even her last name is “Golightly.”

The only moments of emotional poignancy are when the narrator reflects on it all, and when we see beneath Holly’s shell. He falls in love with Holly for real (this is a bit of a theme to the book: what is love?). This is quite well done, because it contrasts so starkly with Holly’s indifference and shows how devastating her indifference can be as she tears through people’s lives.

Capote gives Holly one piece of depth that prevents her from being some caricature of a socialite. She cares deeply about her brother, and it is probably the only real human connection she’s ever had. A lot of her carefree attitude stems from a disturbing fact dropped subtly in tiny details. She runs from human connection because of the psychological trauma of being a child bride.

Overall, the novella was way better than I expected in terms of character development. It was also sort of disappointing in a way. I went in expecting it to be a romance between the narrator and Holly done in a brilliant literary Capote-esque way. It’s not that at all. But once you get over the initial shock (and genre confusion), it’s brilliant.

The Book of the New Sun

It took me three months, but I finally finished The Book of the New Sun by Gene Wolfe. It was published as four novels, but it is clearly one giant novel. Each one practically ends in the middle of a sentence, and none are standalone. There’s so much to say about this, and yet it basically defies talking about.

The initial critical reception was quite good. It was published throughout 1980-1983, so it fits into a transition time for SF/F. The pulps had died off by this point, and a lot of experimentation happened in the ’60s and ’70s, but the genre hadn’t fully evolved into the literary phenomenon it would become by the end of the ’90s.

This book is very much ahead of its time in this sense. The Washington Post said Gene Wolfe is “the finest writer the science fiction world has yet produced.” Maybe. But the genre has taken the best of both worlds: fast-paced genre action/adventure/fun and quality literary writing that imparts deeper meaning on subsequent readings.

Anyone who has been reading this blog for any sufficient amount of time will know my views on abstract, difficult, or avant-garde art, especially writing and music. I love it. I love having to dig in and listen to a piece of music 10+ times to start to understand what’s going on.

These types of pieces often give the listener the most rewarding artistic experiences. As DFW once said (I paraphrase), art is a relationship between artist and viewer. Relationships can’t be meaningful if all the work is done by one side. The more you put into experiencing a work of art, the more you get out of it.

Anyway, I won’t rehash that argument any further. My views when it comes to long novels have evolved a bit. There’s something of a difference between getting more on repeated readings and requiring multiple readings. It’s a respect thing. I respect an artist who promises more depth on another visit. An artist is disrespecting my time if I spend three months experiencing their art only to be told at the end that I couldn’t have understood it the first time through and that I absolutely must spend another three months rereading it to make that first pass meaningful.

So that’s where The Book of the New Sun ends. The novel intentionally draws the reader out of the story many times. Two of the most difficult points for me were the long play within the novel in Book 2 and the sequence of short stories told by various characters in Book 4. Yes, I get that they are vital pieces to that underlying secret story that couldn’t be understood the first time. But they’re pretty obnoxious if you aren’t on that second read.

Overall, don’t let this dissuade you from reading these. The first read is pretty good outside of those complaints and a few meandering bits. The futuristic society Wolfe creates is shockingly deep and remains fresh and original today despite the number of dystopian/dying earth novels that have come out since then.

The writing is incredible. Wolfe is often too good, I’d say. First off, he has created an SF/F series with a bunch of weird terms that sound oddly fitting. It turns out that every strange word in the book is actually a legitimate English word that has fallen by the wayside of history. This is an incredible way to create a sound that is ancient and strange yet also feels very familiar. Same thing with the names of characters. They look all fantasy-like, but they are all names that were common at one point in history and have since fallen out of fashion.

The dense, precise writing often challenges the reader to stay in the story rather than contemplate what it says:

War is not a new experience; it is a new world. Its inhabitants are more different from human beings than Famulimus and her friends. Its laws are new, and even its geography is new, because it is a geography in which insignificant hills and hollows are lifted to the importance of cities.

Many genre writers, to the extent that they think about prose, might want to show the horror of war by having the description be short, choppy, and crude like the thing it is describing. How many times have you read something like: “War is hell—horror everywhere. It changes your world.” This is lazy and cliched writing.

Wolfe’s elegant imagery does so much to bring the terror to the reader’s mind. War is a new world. This hinges on the cliche, but the follow-up prose doubles down on the imagery by precisely describing the geography of this new world: insignificant hills are lifted to the importance of cities. I get chills when I’m transported to such a devastating world. And then I’m off thinking about this and pulled out of the story. It’s almost a catch-22: write too well and it might be a distraction to the reader. I’m only half joking about this.

The astute reader is presented with some difficulties early on. The narrator claims to have a perfect memory. Later on, we start to get contradictory information about what happened. So either he lied about his memory or he’s lying to us about parts. This isn’t a logic puzzle. We have 100% confidence that the narrator is unreliable at that point, which puts the reader in an awkward position.

Since I recently read Imajica, I was struck by the similarities. I’m pretty sure Barker was not inspired by New Sun, but the archetypes and structure are the same. Barker has the Reconciliation and Wolfe has the Conciliator. I guess these, or similar terms, are bound to come up in any grand savior plot.

Will I reread this? I’m not sure. It won’t be anytime soon for sure. Do I recommend it? I’ll cautiously say yes. It’s very, very good. As Neil Gaiman said, “The best SF novel of the last century.” I’m not willing to go that far.

My main reservation is that you’ll certainly struggle at points, and you might be disappointed that everything changes at the end, requiring another reading. On the other hand, if you want to sink a few years of your life into discovering the hidden depths of an excellently written book, this is probably your best bet (seriously, peruse urth.net for a half hour to see the truth of this).