Critical Postmodern Readings, Part 2: Finishing Lyotard

Last time we looked at the introduction to Lyotard’s The Postmodern Condition: A Report on Knowledge. That introduction already contained much of what gets fleshed out in the rest of the short book, so I’m going to mostly summarize stuff until we hit anything that requires serious critical thought.

The first chapter goes into how computers have changed the way we view knowledge. At the time, this was probably an insight that required real argument; now it’s obvious to everyone. Humans used to gain knowledge by reading books and talking to each other. It was a somewhat qualitative experience. The nature of knowledge has shifted with (big) data and machine learning. It’s very quantitative. It’s also a commodity to be bought and sold (think Facebook/Google).

It is a little creepy to see the extent of Lyotard’s prescience. He basically predicts that multinational corporations will have the money to buy this data, and that owning the data gives them real-world power. He predicts knowledge “circulation” in a way analogous to money circulation. Here’s a part of the prediction:

The reopening of the world market, a return to vigorous economic competition, the breakdown of the hegemony of American capitalism, the decline of the socialist alternative, a probable opening of the Chinese markets …

Other than the decline of the socialist alternative (which seems to have had a recent surge), Lyotard’s prediction of how the computerization of knowledge would actually affect the world has proven remarkably accurate over the 40 years since he wrote this.

Chapter two reiterates the idea that scientific knowledge (i.e. the type discussed above) is different from, and in conflict with, “narrative” knowledge. There is also a legitimation “problem” in science. The community as a whole must choose gatekeepers seen as legitimate who decide what counts as scientific knowledge.

I’ve written about why I don’t see this as a problem like Lyotard does, but I’ll concede the point that there is a legitimation that happens, and it could be a problem if those gatekeepers change the narrative to influence what is thought of as true. There are even known instances of political biases making their way into schools of scientific thought (see my review of Galileo’s Middle Finger by Alice Dreger).

Next Lyotard sets up the framework for thinking about this. He uses Wittgenstein’s “language game” concept. The rules of the game can never legitimate themselves. Even small modifications of the rules can greatly alter meaning. And lastly (I think this is where he differs from Wittgenstein), each speech act is an attempt to alter the rules. Since agreeing upon the current set of rules is a social contract, it is necessary to understand the “nature of social bonds.”

This part gets a little weird to me. He claims that classically society has been seen either as a unified whole or divided in two. The rules of the language games in a unified whole follow standard entropy (they get more complicated and chaotic and degenerate). The divided in two conception is classic Marxism (bourgeoisie/proletariat).

Even if it gets a bit on the mumbo-jumbo side through this part, I think his main point is summarized by this quote:

For it is impossible to know what the state of knowledge is—in other words, the problems its development and distribution are facing today—without knowing something of the society within which it is situated.

This doesn’t seem that controversial to me considering I’ve already admitted that certain powers can control the language and flow of knowledge. Being as generous as possible here, I think he’s just saying we have to know how many of these powers there are and who has the power and who legitimated that power before we can truly understand who’s forming these narratives and why.

In the postmodern world, we have a ton of different institutions all competing for their metanarrative to be heard. Society is more fractured than just the two divisions of the modern world. But each of these institutions also has a set of rules for their language games that constrains them. For example, the language of prayer has a different set of rules from an academic discussion at a university.

Chapters 7-9 seem to me to be where the most confusion can occur, on the part of both Lyotard and the reader. He dives into the concepts of narrative truth and scientific truth. You can already feel Lyotard trying to position scientific truth as less valuable than it is and narrative truth as more valuable.

Lyotard brings up the classic objections to verification and falsification (namely a variant on Hume’s Problem of Induction). How does one prove that one’s proof and evidence for a theory are sound? How does one know the laws of nature are consistent across time and space? How can one say that a (scientific) theory is true merely because it has not yet been falsified?

These were much more powerful objections in Lyotard’s time, but much of science now takes a Bayesian epistemology (even if they don’t admit to this terminology). We believe what is most probable, and we’re open to changing our minds if the evidence leads in that direction. I addressed this more fully a few years ago in my post: Does Bayesian Epistemology Suffer Foundational Problems?
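To make that concrete, here’s a toy sketch of the kind of Bayesian update I mean. The numbers are invented purely for illustration, not drawn from any real study:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    marginal = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # P(E)
    return p_e_given_h * prior / marginal

# Invented numbers: a hypothesis we start out doubting...
prior = 0.2            # P(H): initial credence in the hypothesis
p_e_given_h = 0.9      # P(E|H): chance of seeing this evidence if H is true
p_e_given_not_h = 0.3  # P(E|~H): chance of seeing it anyway if H is false

posterior = bayes_update(prior, p_e_given_h, p_e_given_not_h)
print(round(posterior, 3))  # 0.429 -- the evidence raised our credence
```

Feed the posterior back in as the next prior and your credences keep tracking the evidence, which is all “open to changing our minds” really amounts to.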

… drawing a parallel between science and nonscientific (narrative) knowledge helps us understand, or at least sense, that the former’s existence is no more—and no less—necessary than the latter’s.

These sorts of statements are where things get tricky for me. I buy the argument that narrative knowledge is important. One can read James Baldwin and gain knowledge of, and empathy for, a gay black man’s perspective in a way that changes one’s life and the way one sees the world. Or maybe you read Butler’s performative theory of gender and suddenly understand your own gender expression in a new way. Both of these types of narrative knowledge could even be argued to be a “necessary” and vital part of humanity.

I also agree science is a separate type of knowledge, but I also see science as clearly more necessary than narrative knowledge. If we lost all of James Baldwin’s writings tomorrow, it would be a tragedy. If we lost the polio vaccine tomorrow, it would be potentially catastrophic.

It’s too easy to philosophize science into this abstract pursuit and forget just how many aspects of your life it touches (your computer, the electricity in your house, the way you cook, the way you get your food, the way you clean yourself). Probably 80% of the developed world would literally die off in a few months if scientific knowledge disappeared.

I’ll reiterate that Lyotard thinks science is vastly important. He is in no way saying the problems of science are crippling. The above quote is more about raising narrative knowledge to the same importance as science than about devaluing science (Lyotard might point to the disastrous consequences of convincing a nation of the narrative that the Aryan race is superior). For example, he says:

Today the problem of legitimation is no longer considered a failing of the language game of science. It would be more accurate to say that it has itself been legitimated as a problem, that is, as a heuristic driving force.

Anyway, getting back to the main point. Lyotard points out that the problem of legitimating knowledge is essentially modern, and though we should be aware of the difficulties, we shouldn’t be too concerned with it. The postmodern problem is the grand delegitimation of various narratives (and one can’t help but hear Trump yell “Fake News” while reading this section of Lyotard).

Lyotard spends several sections developing a theory of how humans do science, and he develops the language of “performativity.” It all seems pretty accurate to me, and not really worth commenting on (i.e. it’s just a description). He goes into the issues Gödel’s Incompleteness Theorem caused for positivists. He talks about the Bourbaki group. He talks about the seeming paradox of having to look for counterexamples while simultaneously trying to prove the statement to be true.

I’d say the most surprising thing is that he gets this stuff right. You often hear about postmodernists hijacking math/science to make their mumbo-jumbo sound more rigorous. He brings up Brownian motion and modeling discontinuous phenomena with differentiable functions to ease analysis and how the Koch curve has a non-whole number dimension. These were all explained without error and without claiming they imply things they don’t imply.
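The Koch curve fact he cites is easy to check for yourself. Each iteration replaces every segment with 4 copies, each scaled to 1/3 its length, so its self-similarity dimension is log 4 / log 3, which is not a whole number:

```python
import math

def similarity_dimension(copies, scale):
    """Self-similarity dimension of a fractal built from `copies`
    pieces, each scaled down by a factor of `scale`."""
    return math.log(copies) / math.log(scale)

# Koch curve: 4 sub-segments, each 1/3 the length of the original.
koch_dim = similarity_dimension(4, 3)
print(round(koch_dim, 4))  # 1.2619 -- strictly between a line (1) and a plane (2)
```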

Lyotard wants to call these unintuitive and bizarre narratives about the world that come from weird scientific and mathematical facts “postmodern science.” Maybe it’s because we’ve had over forty more years to digest this, but I say: why bother? To me, this is the power of science. The best summary I can come up with is this:

Narrative knowledge must be convincing as a narrative; science is convincing despite the unconvincing narrative it suggests (think of the EPR paradox in quantum mechanics or even the germ theory of disease when it was first suggested).

I know I riffed a bit harder on the science stuff than a graduate seminar on the book would. Overall, I thought this was an excellent read. It seems more relevant now than when it was written, because it cautions about the dangers of powerful organizations buying a bunch of data and using that to craft narratives we want to hear while delegitimating narratives that hurt them (but which might be true).

We know now that this shouldn’t be a futuristic, dystopian fear (as it was in Lyotard’s time). It’s really happening with targeted advertising and the rise of government propaganda and illegitimate news sources propagating through our social media feeds. We believe what the people with money want us to believe, and it’s impossible to free ourselves from it until we understand the situation with the same level of clarity that Lyotard did.

Difficult Subject Matter in 90’s Song Lyrics

I don’t want to make one of those clickbait “the 90’s had the best music EVER!!” posts. One can find really terrible music and really excellent music in any decade. It would be a futile task to claim one decade had the best music.

I went down a strange rabbit hole the other day, though. I just put up a song on YouTube and let the autoplay happen while I worked on some other things. It shifted into some sort of 90’s nostalgia playlist, and I kept hearing very surprising lyrics. They were songs I knew from living through the time, but they handled difficult subject matter in subtle and beautiful ways I hadn’t noticed.

I’d be surprised if songs like these could get on the radio today, but I distinctly remember hearing both of these songs on the radio in the 90’s.

Let’s start with “Round Here” by Counting Crows. First off, I’d like to point out that the song is through-composed, already something that could never happen today. The song appears to be about a depressed girl who attempts suicide. But it’s also about the disillusionment of growing up and finding out all those things you were told in childhood probably didn’t matter.

If you think it’s far-fetched to have so much in one “pop” song, listen to it a few times. It’s all in there and more. A quick Google search brings up wild, yet convincing, interpretations. This “universality” is the hallmark of great song art. Everyone listens to it and thinks it’s about their experience.

Here’s the opening:

Step out the front door like a ghost
Into the fog where no one notices
The contrast of white on white.
And in between the moon and you
The angels get a better view
Of the crumbling difference between wrong and right.

It opens with a beautiful simile. Sometimes pop songs have similes, but they tend to be funny or ironic. It’s hard to think of any current ones that do the hard work of writing something real. “Like a ghost into the fog” is such apt imagery for the point he’s making. Ghosts are white and ethereal. Fog is white and ethereal. A ghost that steps into fog loses all sense of self and no one else can see the person. They’re lost.

Then angels see a crumbling of the difference between wrong and right. This sort of moral ambiguity is another thing it would be hard to find in today’s lyrics. In the context of one of the interpretations I provided, this is probably in reference to how adults tell children right and wrong with clear certainty. As one grows up, one learns that it’s never that obvious.

The lyrics just keep getting better from there.

Next up is “The Freshmen” by the Verve Pipe. This song hit number 5 on the Billboard Hot 100. Fifteen years ago, I thought I understood this song. Now I hear it from a totally different perspective.

Originally, I thought it was about a girl who broke up with the singer and then killed herself over it. The singer is racked with guilt. But the lyrics, when carefully analyzed, paint a slightly different picture.

Here’s the opening:

When I was young I knew everything
She a punk who rarely ever took advice
Now I’m guilt stricken,
Sobbing with my head on the floor
Stop a baby’s breath and a shoe full of rice

The singer is a typical freshman. He thinks he knows everything. This is part of what has changed for me in the song. I was pretty modest as a freshman, but now I can look back and it terrifies me how much I thought I knew. I’ve heard this feeling only gets worse as you age.

The key to the song is given right up front. “Stop a baby’s breath” is a reference to his girlfriend getting an abortion, and how this led to a fight and breakup. “A shoe full of rice” is about how they were even planning on getting married. Again, this is subtle imagery that blows by early on in the song. It requires careful attention if one is to understand the rest of the song.

I can’t be held responsible

This is something he tells himself, but he doesn’t believe it. This is a shift in voice, because it goes from narration of the story to internal thoughts. If one takes this line at face value without understanding this shift, one will misinterpret it. Here’s the chorus:

For the life of me I cannot remember
What made us think that we were wise and
We’d never compromise
For the life of me I cannot believe
We’d ever die for these sins
We were merely freshmen

Here’s another reference to his youthful arrogance. He thought he knew everything, and convinced his girlfriend to get the abortion. He refused to compromise and it destroyed their relationship. If you don’t know this song, it’s worth a listen to the rest. It progressively complicates as the guilt reverberates. He can’t sustain other relationships out of fear of it happening again.

There’s something haunting about the reiteration of “we were merely freshmen” at the end of each phrase. When we’re young, we think we can do anything without much lasting consequence, but the singer learns the hard way that one devastating mistake can haunt you forever.

To wrap this up, I want to reiterate that it isn’t the difficulty of the subject matter that I find so amazing about these 90’s hits. Plenty of current hits have difficult subject matter. It’s the delicacy with which the lyrics handle the subject. It’s poetic and abstract so that the feeling comes through but the listener interprets it to apply to their own life.

Critical Postmodern Readings, Part 1: Lyotard

I’m over nine years into this blog, so I think most readers know my opinions and worldview on many issues in philosophy. I roughly subscribe to a Bayesian epistemology, and in practical terms this amounts to something like being a rational humanist and skeptic.

I believe there is an objective world and science can get at it, sometimes, but we also have embodied minds subject to major flaws, and so we can’t experience that world directly. Also, with near 100% probability, we experience many aspects of the world in a fundamentally different way than they “actually” exist. This puts me somewhat in line with postmodernists.

I believe there are valid and invalid ways to interpret art. This puts me in stark contrast to postmodernists. Postmodernism, as a school of thought, seems to have made a major comeback in academic circles. I’ve also written about the dangers posed by these types of ideas. For more information, search “philosophy” on the sidebar. These opinions have been fleshed out over the course of tens of thousands of words.

I first read famous postmodernists and proto-postmodernists like Baudrillard, Foucault, Lyotard, Derrida, Hegel, and so on as an undergrad (i.e. before this blog even existed). At that time, I had none of the worldview above. I basically read those philosophers with the reaction: “Whoa, dude, that’s deep.” I went along with the other students, pretending to understand the profound thoughts of continental philosophy.

I’ve never returned to them, because I didn’t think they were relevant anymore. I kind of thought we were past the idea of “post-truth.” Now I’m not so sure. This whole intro is basically a way to say that I want to try to tackle some of these texts with a more critical approach and with the added knowledge and experience I’ve gained.

I know this will ruffle a lot of feathers. Part of postmodernists’ “thing” is to dismiss any criticism with “you’re not an expert, so you just don’t understand it.” That’s fine. I’m going to make an honest effort, though, and if you love this stuff and think I’m misunderstanding, let me know. I’m into learning.

Today we’ll tackle Jean-François Lyotard’s The Postmodern Condition: A Report on Knowledge. This is arguably the most important work in the subject, and is often cited as the work that defined “postmodernism.” Since I’ve already used a bunch of space with the setup, we’ll only cover the Introduction for now. I recall having to read the Introduction for a class, and I’m pretty sure that’s the full extent to which we covered Lyotard.

The Introduction is primarily focused on giving an explanation of what Lyotard means by “the postmodern condition,” and how we know we are living in it. There is something important and subtle here. The section is descriptive rather than prescriptive. Modern (liberal arts) academia tends to think in prescriptive terms. We’ll get to that later.

I guess I’ll now just pull some famous quotes and expound on them.

Science has always been in conflict with narratives.

I don’t think this is that controversial. He’s saying science is one narrative for how we arrive at knowledge. The narrative might be called the Enlightenment Values narrative. It’s based on empiricism and rational argument.

This narrative is so pervasive that we often forget it is a narrative. We usually equate science with knowledge, but these values didn’t always exist in the West. There is a substantial body of work from Descartes to Kant that had to make the case for rationality and empiricism as a foundation for knowledge. That’s the definition of a narrative.

The fact that science comes into conflict with other narratives should be readily obvious. There are science vs religion debates all the time to this day. Lyotard also points out another vital concept we often overlook. There are lots of institutions and political forces behind what we call science, and each of these has its own metanarrative that might come into conflict with forming knowledge.

I define postmodern as incredulity toward metanarratives. This incredulity is undoubtedly a product of progress in the sciences: but that progress in turn presupposes it.

This is a bit deeper than it looks, but only because I know the context of Lyotard’s writing. Taken with the first quote above, one might just think that he’s saying the progress of science has led to people questioning the metanarratives of their lives, like the religion they were brought up in.

Part of the reason Lyotard has chosen the term “postmodern” to describe this condition is because of the artistic movements known as postmodernism. The utter destruction of World War I and World War II brought a destabilization to people’s lives.

Technology created this destruction, and it was fueled by science. Not only did people question the traditions they were brought up in, but they began to question if science itself was good. Much of the postmodern art produced in the decades after WWII focused on highly disjointed narratives (Lost in the Funhouse), the horrors of war (Gravity’s Rainbow), involved utter chaos and randomness (Dadaism), or emphasized futility and meaninglessness (Waiting for Godot).

All these aspects overthrew narratives and traditions. They weren’t just radical because of the content, they often questioned whether we even knew what a novel or a play or a poem or a piece of music was. If we no longer knew what these longstanding artistic forms and narratives were, how could we trust any of the narratives that gave our life meaning?

And I’ll reiterate, there is a pretty direct link from the science that brought the destruction to this “postmodern condition” people found themselves in.

The rest of the Introduction gets pretty jargony.

Where, after the metanarratives, can legitimacy reside?

There is a danger that people will seize upon any stabilizing force once in this position. Authority figures can even ride this to power (we just watched this happen in the U.S.). They tell us stories that make sense and make us feel better, so we put them in power. This is an endless cycle, because once in power, they control the narrative.

How do we form truth and knowledge in such a society? That is the subject of Lyotard’s book and is not answered merely in the Introduction.

I’ll end today’s post by pointing out something very important. Lyotard seems to believe in truth and knowledge and science. He seems concerned by people’s rejection of these concepts due to the postmodern condition.

When people describe themselves as postmodernists, they tend to mean they reject the notion of truth. They say that all we have are narratives, and each is equally valid. Maybe this is because Lyotard isn’t a postmodernist? He merely describes what is going on.

I think more likely it’s that this label has changed from descriptive to prescriptive. Current postmodernists think of the postmodern condition as being good. If science starts to dominate as a narrative, these people want to reject that. In some sense they see this as “liberation” from the “imperialist white capitalist patriarchy” that has dominated the West and caused so much suffering.

I’m very curious to see if these attitudes actually crop up in the writings of postmodernist philosophers or if this view is some corruption of these thinkers.

What is an Expert?

I’ll tread carefully here, because we live in a strange time of questioning the motives and knowledge of experts to bolster every bizarre conspiracy theory under the sun. No one trusts any information anymore. It’s not even clear if trusting/doubting expert opinion is anti/hyper-intellectual. But that isn’t the subject of today’s topic.

I listen to quite a few podcasts, and several of them have made me think about expertise recently.

For example, Gary Taubes was on the Sam Harris podcast and both of them often get tarred with the “you don’t have a Ph.D. in whatever, so you’re an unknowledgeable/dangerous quack” brush. Also, Dan Carlin’s Hardcore History podcast is insanely detailed, but every ten minutes he reminds the audience “I’m not a historian …”

Many people who value the importance of expertise think that the degree (the Ph.D. in particular, but maybe an MFA for the arts) is the be-all and end-all of the discussion. If you have the Ph.D., then you’re an expert. If you don’t, then you’re not.

The argument I want to present is that if you believe this, you really should be willing to extend your definition of expertise to a wider group of people who have essentially done the equivalent work of one of these degrees.

Think of it this way. Person A goes to Subpar University, scrapes by with the minimal work, kind of hates it, and then teaches remedial classes at a Community College for a few years. Person B has a burning passion for the subject, studies all of the relevant literature, and continues to write about and develop novel ideas in the subject for decades. I’d be way more willing to trust Person B as an expert than Person A despite the degree differences.

Maybe I’ve already convinced you, and I need not go any further. Many of you are probably thinking, yeah, but there are parts to doing a degree that can’t be mimicked without the schooling. And others might be thinking, yeah, but Person B is merely theoretical. No one in the real world exists like Person B. We’ll address each of these points separately.

I think of a Ph.D. as having three parts. Phase 1 is demonstration of competence of the basics. This is often called the Qualifying or Preliminary Exam. Many students don’t fully understand the purpose of this phase while going through it. They think they must memorize and compute. They think of it as a test of basic knowledge.

At least in math and the hard sciences, this is not the case. It is almost a test of attitude. Do you know when you’re guessing? Do you know what you don’t know? Are you able to admit this or will you BS your way through something? Is the basic terminology internalized? You can pass Phase 1 with gaps in knowledge. You cannot pass Phase 1 if you don’t know where those gaps are.

Phase 2 is the accumulation of knowledge of the research done in your sub-sub-(sub-sub-sub)-field. This basically amounts to reading thousands of pages, sometimes from textbooks to get a historical view, but mostly from research papers. It also involves talking to lots of people engaged in similar, related, or practically the same problems as your thesis. You hear their opinions and intuitions about what is true and start to develop your own intuitions.

Phase 3 is the original contribution to the literature. In other words, you write the thesis. To get a feel for the difficulty and time commitment of each step, if you do a five year Ph.D., ideally Phase 1 would be done in around a year, Phase 2 is 2-4 years, and Phase 3 is around a year (there is overlap between phases).

I know a lot of people aren’t going to like what I’m about to say, but the expertise gained from a Ph.D. is almost entirely the familiarization with the current literature. It’s taking the time to read and understand everything being done in the field.

Phase 1 is basically about not wasting people’s time and money. If you’re going to not understand what you’re reading in Phase 2 and make careless mistakes in Phase 3, it’s best to weed those people out with Phase 1. But you aren’t gaining any expertise in Phase 1, because it’s all just the basics still.

One of the main reasons people don’t gain Ph.D.-level expertise without actually doing the degree is because being in such a program forces you to compress all that reading into a small time-frame (yes, reading for three years is short). It’s going to take someone doing it as a hobby two or three times longer, and even then, they’ll be tempted to just give up without the external motivation of the degree looming over them.

Also, without a motivating thesis problem, you won’t have the narrow focus needed to make the reading and learning manageable. I know everyone tackles this in different ways, but here’s how it worked for me. I’d take a paper on a related topic, and I’d try to adapt the techniques and ideas to my problem. This forced me to really understand what made those techniques work, which often involved learning a bunch of stuff I wouldn’t have if I had just read through it to see the results.

Before moving on, I’d like to add that upon completion of a Ph.D. you know pretty much nothing outside of your sub-sub-(sub-sub-sub)-field. It will take many years of continued teaching and researching and reading and publishing and talking to people to get any sense of your actual sub-field.

Are there people who complete the equivalent of the three listed phases without an actual degree?

I’ll start with the more controversial example of Gary Taubes. He got a physics undergrad degree and a masters in aerospace engineering. He then went into science journalism. He stumbled upon how complicated and shoddy the science of nutrition was, and started to research a book.

Five years later, he had read and analyzed pretty much every single nutrition study done. He interviewed six hundred doctors and researchers in the field. If this isn’t Phase 2 of a Ph.D., I don’t know what is. Most students won’t have gone this in-depth to learn the state of the field in an actual Ph.D. program.

Based on all of this, he then wrote a meticulously cited book Good Calories, Bad Calories. The bibliography is over 60 pages long. If this isn’t Phase 3 of a Ph.D., I don’t know what is. He’s continued to stay abreast of studies and has done at least one of his own in the past ten years. He certainly has more knowledge of the field than any fresh Ph.D.

Now you can disagree with his conclusions all you want. They are quite controversial (but lots of Ph.D. theses have controversial conclusions; this is partially how knowledge advances). Go to any place on the internet with a comments section that has run something about him and you’ll find people who write him off because “he got a physics degree, so he’s not an expert on nutrition.” Are we really supposed to ignore 20 years of work just because it wasn’t done at a university, and because four years before it began he got an unrelated degree? It’s a very bizarre sentiment.

A less controversial example is Dan Carlin. Listen to any one of his Hardcore History podcasts. He loves history, so he obsessively reads about it. Those podcasts are each an example of completing Phase 3 of the Ph.D. And he clearly knows the literature, as he constantly references hundreds of pieces of research per episode off the top of his head. What is a historian? Supposedly it’s someone who has a Ph.D. in history. But Dan has completed all the same phases; it just wasn’t at a university.

(I say this is less controversial, because I think pretty much everyone considers Dan an expert on the topics he discusses except for himself. It’s a stunning display of humility. Those podcasts are the definition of having expertise on a subject.)

As a concluding remark/warning. There are a lot of cranks out there who try to pass themselves off as experts who really aren’t. It’s not easy to tell for most people, and so it’s definitely best to err on the side of the degree that went through the gatekeeper of a university when you’re not sure.

But also remember that people with Ph.D.’s are human too. There are plenty of people like Person A in the example above. You can’t just believe a book someone wrote because that degree is listed after their name. They might have made honest mistakes. They might be conning you. Or, more likely, they might not have a good grasp on the current state of knowledge of the field they’re writing about.

What is an expert? To me, it is someone who has dedicated themselves with enough seriousness and professionalism to get through the phases listed above. This mostly happens with degree programs, but it also happens a lot in the real world, often because someone moves into a new career.

Thoughts on Barker’s Imajica

I believe I read a Clive Barker novel about fifteen years ago, but I have no idea what it was. A few years ago, I read some of his short stories, and this reinforced the conception I had of Barker as a horror writer, which isn’t really my thing. Still, Imajica came up on my radar for some reason, and I decided to give it a go.

Wow. I’m so glad I did. It’s going to be fairly difficult to describe anything about this book. It’s very weird, but in a wildly inventive and wonderful way. There are some gory images here and there, but I’d most certainly not classify it as horror. It’s more of a surrealist examination of spirituality? So kind of like The Holy Mountain.

I’ll try to set up the premise to give you an idea of the bizarreness, though the whole point of the novel shifts by about a tenth of the way through. There are Five Dominions. Earth, as we humans know it, is the Fifth Dominion. We’ve never seen the other magical places.

There’s a longtime conspiracy of people (I use this term lightly) making up a secret society to keep the Fifth Dominion separate from the other four. There is a way in though.

The novel begins with a man who is so in love with a woman, Judith, that he hires an assassin to kill her after she breaks up with him (obviously so she can’t be with anyone else). He has second thoughts and contacts Judith’s ex, Gentle, to stop the assassin. Gentle succeeds. The assassin, Pie, is a being from one of these other dominions that doesn’t really have a gender. It becomes basically whatever its lover wants to see in it.

Pie seduces Gentle by appearing to be Judith. Gentle learns of what it did, and Pie takes Gentle into the other dominions. They gradually fall in love. Also, a billion other things are going on by this point, so don’t think that’s “really” the story. It’s about revelation, separation, unity, isolation, love, sex, power, God, redemption, finding meaning, culture, and on and on.

Don’t panic. It’s not done in a way that tries to be about everything and ends up being about nothing. This novel really tackles the big questions in a focused and metaphorical way. It just so happens that these big questions encompass all those other things.

Here are some things I think the book does really well. There is a gigantic amount of information hidden from the reader: the conspiracy, how these other dominions run, the cultures there, the background on the conflicts, why the Fifth is separated, and so on.

Barker manages to slip this information to the reader in gradual and subtle doses over 600 or more pages. This keeps the novel story-centric and engaging, with almost no information dumps. It’s actually kind of brilliant how he does it. Often you will hear things that make no sense, which forces you to reconcile your view of what’s going on with your existing theory. Only after you’ve done this many times does the full picture come into focus.

Another thing I didn’t expect was how good the prose was. I expected genre horror writing full of stock prose: nothing bad but nothing great either. Instead, I found excellent execution of register shifting (often thought to be one of the most advanced and subtle techniques of prose style).

Register shifts refer to changing the type of language used to adapt to a situation. For example, if you’re hanging out with some friends, you might say, “‘Sup?” This is an informal register. If you’re at a job interview, you might say, “Hello. How are you doing? It is very good to meet you.” This is a formal register.

The thing that makes this so difficult in prose writing is that the context of the scene must determine the proper register. When you first try to do this, it will probably be overdone, and this will change the voice. It must be done with enough subtlety that the voice remains consistent and only the register of the voice changes.

Most people will never notice if a writer has done this well. It is usually obvious when a writer doesn’t do it or overdoes it. We tend to say the writing fell “flat” in an absence of register shifts (a great term because there weren’t any up or down shifts in register).

The register tends to reflect the dominion we’re in. This is because as the dominions get closer to the First, the people get closer to God. The register shifts up to indicate the formality and ritualistic nature of religion. Take an early scene.

Gentle took off his heavy coat and laid it on the chair by the door, knowing when he returned it would be warm and covered with cat hairs. Klein was already in the living room, pouring wine. Always red.

This is quite low. There’s even a sentence fragment. The sentences are simple and to the point. The descriptors are common.

Now take a midway scene in a different dominion.

Like the theater districts of so many great cities across the Imajica, whether in Reconciled Dominions or in the Fifth, the neighborhood in which the Ipse stood had been a place of some notoriety in earlier times, when actors of both sexes had supplemented their wages with the old five-acter—hiring, retiring, seduction, conjunction, and remittance—all played hourly, night and day.

This single sentence is almost double the length of the entire three sentences above. The structure is quite complicated: subordinate clause, appositive, etc. This is an elevated register. The same sentence in a lower register would be “Whores could be found on the streets of the city in which the Ipse stood.” We could lower it even more or raise it to more formal levels than what was written. But it strikes a delicate balance of beautiful description in elevated voice.

I know it’s kind of mind-boggling to think that Barker did all this, but I noticed it early and then paid close attention. It is consistent throughout, which makes me think it is not some accident or coincidence.

Lastly, the symbolism is amazing. It draws on and reinterprets many famous Biblical stories. I can’t get into it, because I don’t want to give anything away if you haven’t read the book. It is some of the best of this type of writing I’ve seen. It isn’t so direct as to be cringe-worthy, and it is all done in an inventive re-imagining.

It’s kind of sad I didn’t read this during my Year of Giant Novels. It possibly would have been the Number 1 book of the year.

The Infinite Cycle of Gladwell’s David and Goliath

I recently finished reading Malcolm Gladwell’s David and Goliath: Underdogs, Misfits, and the Art of Battling Giants. The book is like most Gladwell books. It has a central thesis, and then interweaves studies and anecdotes to make the case. In this one, the thesis is fairly obvious: sometimes things we think of as disadvantages have hidden advantages and sometimes things we think of as advantages have hidden disadvantages.

The opening story makes the case from the Biblical story of David and Goliath. Read it for more details, but roughly he says that Goliath’s giant strength was a hidden disadvantage because it made him slow. David’s shepherding was a hidden advantage because it made him good with a sling. It looks like the underdog won that fight, but it was really Goliath who was at a disadvantage the whole time.

The main case I want to focus on is the chapter on education, since that is something I’ve talked a lot about here. The case he makes is both interesting and poses what I see as a big problem for the thesis. There is an infinite cycle of hidden advantages/disadvantages that makes it hard to tell if the apparent (dis)advantages are anything but a wash.

Gladwell tells the story of a girl who loves science. She does so well in school and is so motivated that she gets accepted to Brown University. Everyone thinks of an Ivy League education as being full of advantages. It’s hard to think of any way in which there would be a hidden disadvantage that wouldn’t be present in someplace like Small State College (sorry, I don’t remember what her actual “safety school” was).

It turns out that she ended up feeling like a completely inadequate failure despite doing reasonably well. The people around her were so amazing that she got impostor syndrome and quit science. If she had gone to Small State College, she would have felt amazing, gotten a 4.0, and become a scientist like she wanted.

It turns out we have quite a bit of data on this subject, and this is a general trend. Gladwell then goes on to make just about the most compelling case against affirmative action I’ve ever heard. He points out that letting a minority into a college that they otherwise wouldn’t have gotten into is not an advantage. It’s a disadvantage. Instead of excelling at a smaller school and getting the degree they want, they’ll end up demoralized and quit.

At this point, I want to reiterate that this has nothing to do with actual ability. It is entirely a perception thing. Gladwell is not claiming the student can’t handle the work or some nonsense. The student might even end up an A student. But even the A students at these top schools quit STEM majors because they perceive themselves to be not good enough.

Gladwell implies that this hidden disadvantage is bad enough that the girl at Brown should have gone to Small State College. But if we take Gladwell’s thesis to heart, there’s an obvious hidden advantage within the hidden disadvantage. Girl at Brown was learning valuable lessons by coping with (perceived) failure that she wouldn’t have learned at Small State College.

It seems kind of insane to shelter yourself like this. Becoming good at something always means failing along the way. If girl at Brown had been a sheltered snowflake at Small State College and graduated with her 4.0 never being challenged, that seems like a hidden disadvantage within the hidden advantage of going to the “bad” school. The better plan is to go to the good school, feel like you suck at everything, and then have counselors to help students get over their perceived inadequacies.

As a thought experiment, would you rather have a surgeon who was a B student at the top med school in the country, constantly understanding their limitations, constantly challenged to get better, or the A student at nowhere college who was never challenged and now has an inflated sense of how good they are? The answer is really easy.

This gets us to the main issue I have with the thesis of the book. If every advantage has a hidden disadvantage and vice-versa, this creates an infinite cycle. We may as well throw up our hands and say the interactions of advantages and disadvantages is too complicated to ever tell if anyone is at a true (dis)advantage. I don’t think this is a fatal flaw for Gladwell’s thesis, but I do wish it had been addressed.

Five Predictions for a Trump Presidency

I thought I’d write this post so it’s on the record. Here’s five predictions for the Trump presidency. These are merely things he’s been telling us he will do. I hope I eat my words in four years.

The number of uninsured will skyrocket. 

He has said he will repeal the Affordable Care Act on Day 1 in office. He’s given us no indication as to his replacement except “competition, free markets, mumble, mumble …” This is in stark contrast to years of Republican policy. Republicans have run on the idea of personal responsibility. The ACA finally achieved this by mandating that everyone buy their own insurance.

Trump wants to repeal this. Now millions of people will be uninsured but still have health costs. These costs will shift to the public. As a side note, I showed why competition doesn’t work in this post. I also predict the cost of health insurance will skyrocket. We can’t be confused when this happens. It is very well understood.

If you are a freelancer or your employer doesn’t subsidize your insurance, I urge you to read the terms of whatever looks affordable very carefully after the repeal of the ACA. I predict the only affordable insurance will be junk. Don’t get duped.

There will be a global recession.

Trump doesn’t seem to understand America’s unique position in the world. He plans to add over $5 trillion to the debt. The Committee for a Responsible Federal Budget estimates Trump’s plan would raise the debt to over 105% of our GDP. This has vast consequences for a country. The self-proclaimed “King of Debt” has been able to leverage these risky scenarios in his personal businesses by declaring bankruptcy and letting the business take the fall.

You can’t do this with a country. The high debt to GDP ratio will likely lead to more expensive loans and at least a minor debt spiral along the lines of Greece. The U.S. will enter a bad recession, and this will lead to a global recession as well.

Also, he plans to tariff the crap out of other countries. Many historians argue that the Smoot-Hawley tariffs were the primary cause of the Great Depression. Whether that is true or not, you can decide, but we should learn from history. It is doubtful Trump knows enough about this history to judge whether his plan is dangerous.

The price of goods will rise much faster than inflation.

Trump has proposed several ideas to bring manufacturing jobs back. It’s likely no jobs will come to the U.S. because of this, but that isn’t officially one of my predictions. He plans to incentivize companies to produce in the U.S. by making it very expensive to produce outside the U.S.

There are basically only two ways this could go. Companies keep producing outside the U.S. (likely), in which case the prices of these goods have to increase to make up for the tariffs. Or they return to the U.S., where labor is more expensive, and the prices of the goods must increase to make up for the cost of labor.

A sub-prediction here is that many businesses will go under because of this. Once prices rise, they’ll sell less. If they don’t sell enough, they go bankrupt. I know Trump sees bankruptcy as a thing to be celebrated, but I’m not sure the people that want their manufacturing jobs back will be too happy with this one.

Middle and middle-upper income brackets will see a tax increase.

I think this one frustrates me the most. As far as I can tell, my in-laws voted for Trump solely on the fear-mongering tactic of Trump yelling “She’s going to raise your taxes.” Guess what? Hillary had a detailed plan, and it did not involve raising anyone’s taxes except the very top, top tiny percent.

On the other hand, my in-laws might be surprised when their personal exemptions disappear. Trump’s plan increases the standard deduction but removes all personal exemptions, and the conservative-leaning Tax Foundation estimates about 7.8 million households in the $60K – $100K income range would see a slight increase in federal income tax.

Whoops. I guess they should have looked up his actual proposal instead of listening to someone who’s made his living off of swindling people.

A nuclear weapon will be used.

Our culture has become a bit desensitized to this grave issue. We see images of nuclear mushroom clouds all the time from cartoons to movies. I think it bears taking a moment and contemplating just how terrible nuclear weapons are. I know this prediction sounds like hysteria, but hear me out.

Trump, more than any other prediction on this list, has repeatedly, almost at every single opportunity, shown complete disregard for the horrifying consequences of the use of nuclear weapons. He doesn’t understand why we can’t use them. He doesn’t understand why other countries can’t develop them. He doesn’t understand how current treaties deter the use and proliferation of them.

Trump also has very thin skin. A single tweet can send him into a rampage. This is a dangerous combination.

I’m also not going to be so bold as to say we will be the one to use the nuclear weapon. My prediction is merely that someone will. He has said we are renegotiating deals across the board. This will create global instability. The dangerous combination of treaties in flux and Trump waving the threat of nuclear weapons will lead someone to pull the trigger.

I see two likely scenarios. The first is that Trump tears up the Iran Nuclear Accord. Iran develops nuclear weapons in the interim, and they are the first to use them from a threat in the Middle East. The other is that Trump lets South Korea develop nuclear weapons, and the unstable situation in North Korea leads one side to be too worried.

Trump has this phrase: Peace through strength. Let’s ignore the fact that tearing up NATO and promoting nuclear proliferation actually weakens us. I say: Peace through stability.

Some extra non-predictions.

I’ll reiterate that all of the above has been readily available information for anyone who cared to look it up. These are predictions based on promises he ran on. If he doesn’t hold to his promises, they won’t happen. Here are some things he said he’ll do that I don’t foresee happening.

He won’t build the wall. It’s a terrible idea. It’s expensive. It probably wouldn’t decrease the number of illegal immigrants by much (some models predict it will increase the number). I don’t think any of these facts will go into the decision not to build it. I just think he swindled his supporters by playing up a big image.

He will not implement massive deportation on the scale he claimed. Tearing millions of families apart would be a PR nightmare for him, and we all know Trump wants to be loved. His approval ratings would plummet when newspapers made the cover images children crying as their parents are forcibly taken in front of them.

As a final note, I have no idea what “Make America Great Again” could possibly mean. On basically every measure of greatness, America has never been greater (income inequality is worse, but Trump has no care for this). If we try to return to some past moment, it will, almost by definition, be worse.

Clifford’s Ethics of Belief

Last post I took as a starting point the fact that people should want to hold true beliefs. It turns out that W. K. Clifford (yes, of Clifford fame in mathematics) wrote a famous essay in 1876 on the ethical implications of this idea called The Ethics of Belief. In general, the essay argues that it is immoral to hold beliefs which cannot be verified through sufficient evidence. I won’t go into his epistemology because I think we have better foundations such as that presented in the previous post.

The argument essentially boils down to making a case based on an example (or thought-experiment if you will). I’ll give some modern day real examples to point out that his idea seems to be warranted. Let it be said that Clifford presents the example in much more poetic language (well worth reading in my opinion: full copy here). To prevent the post from going on too long, I’m going to just distill out the key points.

John is an immigration ship-owner (in the late 19th century). He knows his ship is old and not well built. He knows it probably needs repairs after its many journeys. The key point of this setup is that there is enough evidence to cast legitimate doubt on the ship’s seaworthiness. Still, his cognitive biases start flaring when he thinks about the time and money the repairs would take.

John starts rationalizing away his fears. The ship has made the journey many times, so why suspect it won’t make it this time? He has faith that God will protect the innocent people on the journey. The repair people must be overstating the problems just to scam him out of money. And so on. He comes to be sure that the ship is safe for travel.

As presented above, it looks like John had control over his biases and intentionally argued himself into a position that was easier and cheaper for him, at the possible cost of other people’s lives. This is not the case at all. We know that cognitive biases like these work without our knowledge (see the previous post). In this thought experiment, John truly believes the ship is seaworthy, and he does not know that he came to this belief through faulty means.

Everyone knows how this story ends. The ship sinks in a storm and lots of innocent people die. Now we have a difficult moral dilemma to unpack. Is John morally responsible for their deaths? Clifford argues that he is. He argues that it is a moral responsibility to rigorously examine available evidence to come to a belief that is most likely to be true.

Clifford then alters the scenario: the ship keeps making journeys successfully, and the faulty belief never causes harm. He argues that it is still immoral for John to hold a belief that would not be supported by rigorous examination of the evidence, because we have no idea when our faulty beliefs will cause harm. It is our moral responsibility to keep intentionally casting doubt on our held beliefs and seriously reexamining them.

In the case that the ship continues to make safe journeys, it is actually doing good in helping impoverished people immigrate to make a better life (or at least we assume so to make a more striking case). Clifford argues that the act of not examining the belief is still immoral. We cannot judge whether or not an action is moral based on accidental consequences even if those consequences produce good for society. To rephrase yet again, the ethics of whether or not it is moral to hold a belief is not based on the truth or falseness of the belief but on whether or not you have sufficiently good reason to believe it is true. It is always immoral to believe something on faith regardless of the good it does.

This has the interesting consequence that it is more moral to hold a false belief on good evidence than a true belief on bad evidence. Even though the next example will take this relatively neutral post to more inflammatory levels, I think it is important to see that Clifford’s thought experiment is not pure ivory tower speculation. There are real people who are genuinely good people attempting to do good in the world, but whose false beliefs lead them into doing some truly terrible things.

In 2008, an 11-year-old girl named Madeline Neumann collapsed to the floor. She had a treatable form of diabetes. Her parents had plenty of time to seek medical help and save her, but instead they prayed. They believed that prayer would cure her. They watched their daughter die. This is not some random isolated incident. These types of deaths happen all the time, and for a coherent reason: if you truly believe that prayer works, then this is how you should behave. Madeline’s parents truly believed they were helping their daughter.

If you believe prayer works but you wouldn’t behave this way, then you need to take a serious look at your belief that it works. Clifford would say you carry just as much moral guilt as Madeline’s parents. The belief may never cause real harm for you, but the accidental consequences of a belief are not how we judge whether holding it is moral. If you wouldn’t behave as Madeline’s parents did, then you probably don’t truly believe prayer works; you just haven’t examined the belief closely enough to overcome the social pressure of whatever community you belong to.

Clifford himself sums it up nicely:

To sum up: it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.

…”But,” says one, “I am a busy man; I have no time for the long course of study which would be necessary to make me in any degree a competent judge of certain questions, or even able to understand the nature of the arguments.”

Then he should have no time to believe.

Bayes’ Theorem and Bias

I wonder how many times I’ll say this. This will be my last Bayes’ theorem post. At this point a careful reader should be able to extract most of the following post from the past few, but it is definitely worth spelling out in detail here. We’ve been covering how academics have used Bayes’ theorem in their work. It is also important to see how Bayes’ theorem could be useful to you in your own life.

For this post I’m going to take as my starting point that it is better in the long run for you to believe true things rather than false things. Since this is an academic blog and most academics are seeking truth in general, they hold to some sort of rational or skeptic philosophy. Whole books have been written defending why this position will improve society, but wanting to believe true things shouldn’t be that controversial of a position.

Honestly, to people who haven’t spent a lot of time learning about bias, it is probably impossible to overestimate how important a role it plays in making decisions. Lots of well-educated, logical, careful people can look at both sides of the evidence of something and honestly think they are making an objective decision about what is true based on the evidence, but in reality they are just reconfirming a belief they made for totally irrational reasons.

We’ve all seen this type of picture before: the classic Müller-Lyer illusion, in which two lines of equal length appear to be different lengths because of the arrow fins on their ends.

Even though you know the following things:

1. What the optical illusion (i.e. bias) is.
2. How and why it works.
3. The truth, which is easily verified using a ruler, that the lines are the same length.

This knowledge does not give you the power to overcome the illusion/bias. You will still continue to see the lines as different lengths. If bias can do this for a sense as objective as sight, think about how easily tricked you can be if you go off of intuition or feelings.

This exercise makes us confront a startling conclusion. In order to form a true belief, we must accept the conclusion that looks and feels wrong. We must trust the fact we came to through the verifiably more objective means. This is true of your opinions/beliefs as well. You probably have false beliefs that will still look and feel true to you even once you’ve changed your mind about them. You need to trust the evidence and arguments.

A Bayesian analysis of this example might run as follows. You have the belief that the lines are different lengths from looking at them. In fact, you could reasonably set the prior probability that this belief is true pretty high, because although your eyesight has been wrong in the past, you estimate that around 99% of the time it wouldn’t make such an obvious and large error. The key piece of evidence you acquired is when you measured with a ruler. You find the lines are the same length. This evidence is so strong in comparison with your confidence in your eyesight that it vastly outweighs the prior probability, and you confidently conclude your first belief was false.
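The eyesight-versus-ruler update above can be sketched numerically. This is a minimal illustration, and the specific probabilities (99% trust in your eyesight, a near-infallible ruler) are made-up numbers chosen just for the sketch:

```python
def posterior(prior_b, p_e_given_b, p_e_given_not_b):
    """Bayes' theorem: P(B|E) = P(E|B) P(B) / P(E)."""
    p_e = p_e_given_b * prior_b + p_e_given_not_b * (1 - prior_b)
    return p_e_given_b * prior_b / p_e

# B: "the lines are different lengths" (what my eyes report).
# E: "the ruler says they are equal."
prior = 0.99               # eyes rarely make such a large, obvious error
p_e_if_different = 0.001   # a careful measurement almost never reads
                           # "equal" when the lines really do differ
p_e_if_equal = 0.999       # ...and almost always reads "equal" when they are

print(posterior(prior, p_e_if_different, p_e_if_equal))  # ≈ 0.09
```

Even a 99% prior in your eyesight collapses to about a 9% chance the lines differ, once the single strong measurement is accounted for.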

You probably came to many of the beliefs you have early on in life. Maybe your parents held them. Maybe your childhood friends influenced you. Maybe you saw some program on TV that got you thinking a certain way. In any case, all of these are bad reasons to believe something. Now you’ve grown up, and you think that you’ve re-evaluated these beliefs and can justify them. In reality, you’ve probably just reconfirmed them through bias.

Once you’ve taken a position on something, your brain has an almost incomprehensible number of tricks it can do in order to prevent you from changing your mind. This is called bias and you will be totally unaware of it happening. The rational position is to recognize this happens and try to remove it as much as possible in order to change an untrue belief to a true belief. Trust me. If done right, this will be a painful process. But if you want true beliefs, it must be done and you must be willing to give up your most cherished beliefs since childhood even if it means temporary social ostracization (spell check tells me this isn’t a real word, but it feels right).

What this tells us is that if we really want true beliefs we need to periodically revisit our beliefs and do a thorough sifting of the evidence in as objective a way as possible to see if our beliefs have a chance at being true.

Since there are hundreds of cognitive biases we know about, I can’t go through all the ones you might encounter, but here are a few. One is confirmation bias: when you look at evidence for and against a belief, you will tend to remember only the evidence that confirmed your already-held belief (even if the evidence against is vastly bigger!). It is difficult to reiterate this enough: you will not consciously throw the evidence out. You will not be aware that this happened to you. You will feel as if you evenly weighed both sides of the evidence.

One of my favorite biases, which seems to receive less attention, is what I call the many-against-one bias (I’m not sure if it has a real name). Suppose you have three solid pieces of evidence for your belief. Suppose the counter-evidence is much better: there are seven solid pieces of it. What you will tend to do is look at the first piece of counter-evidence and think, “Well, my side has three pieces of evidence, so although that looks good, it isn’t as strong as my side.” Then you move on to piece of counter-evidence two and do the same thing.

All of a sudden you’ve dismissed tons of good evidence that when taken together would convince you to change your mind, but since it was evaluated separately in a many-against-one (hence the name!) fashion you’ve kept your old opinion. Since you can’t read all the counter-evidence simultaneously, and you probably have your own personal evidence well-reinforced, it is extremely difficult to avoid this fallacy.

And on and on and on and on and on … it goes. Seriously. This should not be thought of as “bad” or something. Just a fact. It will happen, and you will not be aware of it. If you just simply look at both sides of the argument you will 99.99% of the time just come out believing the same thing. You need to take very careful precautions to avoid this.

Enter Bayes’ theorem. Do not misconstrue what I’m saying here as this being a totally objective way to come to the truth. This is just one way that you could try as a starting point. Here’s how it works. You take a claim/belief which we call B. Now you look at the best arguments and evidence for the claim you can find. You write each one down, clearly numbered, with lots of space between. Now you go find all the best counterarguments and evidence you can find to those claims and write those down next to the original ones. Now do the exact same thing with the best arguments/evidence you can find against the claim/belief.

One at a time you totally bracket off all your feelings and thoughts about the total question at hand. Just look at evidence 1 together with its counter-evidence. Supposing the claim is true, what are the chances that you would have this evidence? This is part of your P(E|B) calculation. Don’t think about how it will affect the total P(B|E) calculation. Stay detached. Find people who have the opposite opinion as you and try to convince them of your number just on this one tiny thing. If you can’t, maybe you aren’t weighting it right.

Go through every piece of evidence this way, weighing it totally on its own merits and not in relation to the whole argument. Having everything written down ahead of time will help you overcome confirmation bias. Evaluating the probabilities one at a time will help you overcome the many-against-one bias (you’ll probably physically feel this bias as you start to think, “But it isn’t that good in relation to this.”). This will also help overcome numerous other biases, especially ones involving poor intuitions about probability. But do not think you’ve somehow overcome them all, because you won’t.

One of the hardest steps is then to combine your calculations into Bayes’ theorem. You should think about whether or not pieces of evidence are truly independent if you want a proper calculation. But overall you’ll get the probability that your belief is true given the evidence, and it will probably be pretty shocking. Maybe you were super confident (99.99% or something) that there was no real reason to doubt it, but you find out it is more like 55%.
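The combination step can be made concrete with a small sketch in odds form, which is valid when the pieces of evidence are independent given the belief. The numbers here are invented for illustration: a 99.99% prior confidence and seven pieces of counter-evidence, each only moderately unlikely under the belief.

```python
def update_odds(prior_prob, likelihood_ratios):
    # Odds form of Bayes: posterior odds = prior odds × ∏ P(E_i|B)/P(E_i|¬B),
    # assuming the pieces of evidence are conditionally independent.
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)  # convert odds back to a probability

# Seven independent pieces of counter-evidence, each with
# P(E_i|B) = 0.2 versus P(E_i|not B) = 0.8, so each ratio is 0.25.
lrs = [0.2 / 0.8] * 7
print(update_odds(0.9999, lrs))  # ≈ 0.38
```

No single piece of counter-evidence comes close to overturning the 99.99% prior on its own, but taken together they drag the belief down to roughly a coin flip. That is exactly the effect the many-against-one bias hides from you.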

Maybe something you believe only has a 5% chance of being true and you’ve just never weighed the evidence in this objective a way. You need to either update what you think is true, or at very least if it still seems to be able to go either way, be much more honest about how sure you are. I hope more people start doing this as I am one of those people that think the world would be a much better place if people stopped confidently clinging to their beliefs taught to them from childhood.

Changing your mind should not have the cultural stigma it does. Currently, people who change their minds are perceived as weak and not knowing what they are talking about. At the very least, they give the impression that since their opinion changes, it shouldn’t be taken seriously, as it might change again soon. What needs to happen is that we come to recognize changing one’s beliefs as an honest endeavor with academic integrity, something that anyone who really seeks to hold true beliefs does frequently. These people should be held up as models, not the other way around.