Critical Postmodern Readings, Part 2: Finishing Lyotard

Last time we looked at the introduction to Lyotard’s The Postmodern Condition: A Report on Knowledge. That introduction already contained much of what gets fleshed out in the rest of the short book, so I’m going to mostly summarize stuff until we hit anything that requires serious critical thought.

The first chapter goes into how computers have changed the way we view knowledge. This was probably an insight that required serious argument at the time; now it's obvious to everyone. Humans used to gain knowledge by reading books and talking to each other, a somewhat qualitative experience. The nature of knowledge has shifted with (big) data and machine learning. It's very quantitative. It's also a commodity to be bought and sold (think Facebook/Google).

It is a little creepy how prescient Lyotard was. He basically predicts that multinational corporations will have the money to buy this data, and that owning the data gives them real-world power. He predicts knowledge "circulation" analogous to money circulation. Here's a part of the prediction:

The reopening of the world market, a return to vigorous economic competition, the breakdown of the hegemony of American capitalism, the decline of the socialist alternative, a probable opening of the Chinese markets …

Other than the decline of the socialist alternative (which seems to have had a recent surge), Lyotard almost perfectly predicted how the computerization of knowledge would affect the world in the 40 years since he wrote this.

Chapter two reiterates the idea that scientific knowledge (i.e. the type discussed above) is different from, and in conflict with, "narrative" knowledge. There is also a legitimation "problem" in science. The community as a whole must choose gatekeepers seen as legitimate who decide what counts as scientific knowledge.

I’ve written about why I don’t see this as a problem like Lyotard does, but I’ll concede the point that there is a legitimation that happens, and it could be a problem if those gatekeepers change the narrative to influence what is thought of as true. There are even known instances of political biases making their way into schools of scientific thought (see my review of Galileo’s Middle Finger by Alice Dreger).

Next Lyotard sets up the framework for thinking about this. He uses Wittgenstein's "language game" concept. The rules of the game can never legitimate themselves. Even small modifications of the rules can greatly alter meaning. And lastly (I think this is where he differs from Wittgenstein), each speech act is an attempt to alter the rules. Since agreeing upon the current set of rules is a social contract, it is necessary to understand the "nature of social bonds."

This part gets a little weird to me. He claims that society has classically been seen either as a unified whole or as divided in two. The rules of the language games in a unified whole follow standard entropy (they get more complicated, chaotic, and degenerate). The divided-in-two conception is classic Marxism (bourgeoisie/proletariat).

Even if it gets a bit on the mumbo-jumbo side through this part, I think his main point is summarized by this quote:

For it is impossible to know what the state of knowledge is—in other words, the problems its development and distribution are facing today—without knowing something of the society within which it is situated.

This doesn’t seem that controversial to me considering I’ve already admitted that certain powers can control the language and flow of knowledge. Being as generous as possible here, I think he’s just saying we have to know how many of these powers there are and who has the power and who legitimated that power before we can truly understand who’s forming these narratives and why.

In the postmodern world, we have a ton of different institutions all competing for their metanarrative to be heard. Society is more fractured than just the two divisions of the modern world. But each of these institutions also has a set of rules for their language games that constrains them.  For example, the language of prayer has a different set of rules from an academic discussion at a university.

Chapters 7-9 seem to me to be where the most confusion can occur, both on Lyotard's part and on the reader's. He dives into the concepts of narrative truth and scientific truth. You can already feel Lyotard trying to position scientific truth as less valuable than it is and narrative truth as more valuable.

Lyotard brings up the classic objections to verification and falsification (namely a variant on Hume's Problem of Induction). How does one prove that one's proof and evidence for a theory are sound? How does one know the laws of nature are consistent across time and space? How can one say that a (scientific) theory is true merely because it has not been falsified?

These were much more powerful objections in Lyotard's time, but much of science now takes a Bayesian epistemology (even if its practitioners wouldn't use that terminology). We believe what is most probable, and we're open to changing our minds if the evidence leads in that direction. I addressed this more fully a few years ago in my post: Does Bayesian Epistemology Suffer Foundational Problems?

… drawing a parallel between science and nonscientific (narrative) knowledge helps us understand, or at least sense, that the former’s existence is no more—and no less—necessary than the latter’s.

These sorts of statements are where things get tricky for me. I buy the argument that narrative knowledge is important. You can read James Baldwin and gain knowledge of, and empathy for, a gay black man's perspective in a way that changes your life and how you see the world. Or maybe you read Butler's performative theory of gender and suddenly understand your own gender expression in a new way. Both of these types of narrative knowledge could even be argued to be a "necessary" and vital part of humanity.

I agree that science is a separate type of knowledge, but I see it as clearly more necessary than narrative knowledge. If we lost all of James Baldwin's writings tomorrow, it would be a tragedy. If we lost the polio vaccine tomorrow, it would be potentially catastrophic.

It’s too easy to philosophize science into this abstract pursuit and forget just how many aspects of your life it touches (your computer, the electricity in your house, the way you cook, the way you get your food, the way you clean yourself). Probably 80% of the developed world would literally die off in a few months if scientific knowledge disappeared.

I’ll reiterate that Lyotard thinks science is vastly important. He is in no way saying the problems of science are crippling. The above quote is more in raising narrative knowledge to the same importance of science than the devaluing of science (Lyotard might point to the disastrous consequences that happened as a result of convincing a nation of the narrative that the Aryan race is superior). For example, he says:

Today the problem of legitimation is no longer considered a failing of the language game of science. It would be more accurate to say that it has itself been legitimated as a problem, that is, as a heuristic driving force.

Anyway, getting back to the main point. Lyotard points out that the problem of legitimating knowledge is essentially modern, and though we should be aware of the difficulties, we shouldn't be too concerned with them. The postmodern problem is the grand delegitimation of various narratives (and one can't help but hear Trump yell "Fake News" while reading this section of Lyotard).

Lyotard spends several sections developing a theory of how humans do science, and he develops the language of "performativity." It all seems pretty accurate to me, and not really worth commenting on (i.e. it's just a description). He goes into the issues Gödel's Incompleteness Theorem caused for positivists. He talks about the Bourbaki group. He talks about the seeming paradox of having to look for counterexamples while simultaneously trying to prove the statement true.

I’d say the most surprising thing is that he gets this stuff right. You often hear about postmodernists hijacking math/science to make their mumbo-jumbo sound more rigorous. He brings up Brownian motion and modeling discontinuous phenomena with differentiable functions to ease analysis and how the Koch curve has a non-whole number dimension. These were all explained without error and without claiming they imply things they don’t imply.

Lyotard wants to call these unintuitive and bizarre narratives about the world that come from weird scientific and mathematical facts “postmodern science.” Maybe it’s because we’ve had over forty more years to digest this, but I say: why bother? To me, this is the power of science. The best summary I can come up with is this:

Narrative knowledge must be convincing as a narrative; science is convincing despite the unconvincing narrative it suggests (think of the EPR paradox in quantum mechanics or even the germ theory of disease when it was first suggested).

I know I riffed a bit harder on the science stuff than a graduate seminar on the book would. Overall, I thought this was an excellent read. It seems more relevant now than when it was written, because it cautions about the dangers of powerful organizations buying up data and using it to craft narratives we want to hear while delegitimating narratives that hurt them (but which might be true).

We know now that this shouldn't be a futuristic, dystopian fear (as it was in Lyotard's time). It's really happening with targeted advertising and the rise of government propaganda and illegitimate news sources propagating through our social media feeds. We believe what the people with money want us to believe, and it's impossible to free ourselves from it until we understand the situation with the same level of clarity that Lyotard did.

Bayes’ Theorem 2: Bayesian Epistemology

Last time I ended by saying we'd look at an example from the philosophy of math. We'll get to that later, but I realized that even though we did an example of applying Bayes' theorem, I gave no feel for what it might mean to "be a Bayesian."

The word Bayesian has been stuck in front of basically any branch of study you can think of (just look at the Wikipedia disambiguation page for Bayesian). The modification means the same thing in every field: you recast your arguments in a way that allows you to use Bayes' theorem to make inferences.

Today’s post will attempt to show what this means by recasting the scientific method in terms of Bayesian inference. I’ve been told that the philosopher of science Ian Hacking was the first to do this, but I don’t have a reference and haven’t read his stuff to know if this post will match how he uses the term.

Let’s just recap what the scientific method is briefly. Well, this will depend on who you ask, but for our purposes let’s just say it is the following. You form a hypothesis. This hypothesis allows you to make predictions. You design a carefully controlled experiment to see whether or not those predictions are valid. The experiment gives you evidence for or against your hypothesis. Based on this evidence you decide to accept or reject the hypothesis.

If we want to apply Bayesian inference to the scientific method, then we should re-interpret the example from last time in terms of “prior knowledge” and “evidence.” Recall that we had a test for a disease that was 99% accurate, but we also knew that only 1% of the population had the disease. You got tested and came up positive for the disease, and then Bayesian probability told us that there was only a 50% chance that you actually had the disease.
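As a refresher, here's that computation as a minimal sketch, assuming "99% accurate" means the test has both a 1% false-positive rate and a 1% false-negative rate (the numbers from last time):

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E).
prior = 0.01        # P(disease): 1% of the population has it
sensitivity = 0.99  # P(positive | disease)
false_pos = 0.01    # P(positive | no disease)

# P(positive) by the law of total probability:
evidence = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / evidence
print(posterior)  # 0.5: a positive test only gets you to a coin flip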

Again, since I only mean to give a rough idea of what is meant by the term "Bayesian," I'll ignore some of the subtleties, such as whether proper scientific method requires a null hypothesis and whether you test for or against the predictions, and just focus on framing this as simply as possible.

In the example from last time, we'll say that our hypothesis is that you have the disease, and our experiment is to run the test. We go into the experiment with some prior scientific knowledge: namely, how often the test gives the wrong answer and how many people have the disease. In other words, before running any test we have 99% confidence that our hypothesis is wrong.

Now we run the experiment to gather evidence. The evidence is a single instance of the test telling us that you have the disease. Bayes’ theorem tells us that we can only be 50% confident that the hypothesis is correct. Using Bayesian methods, I would hope that any scientist would say that the experiment was inconclusive.

Let’s consider a modified experiment. It consists of doing the disease test twice. After that first positive, all of a sudden all you get a negative. Bayes’ theorem gives us 99% confidence that you don’t have the disease and we could with scientific certainty (above 95% is the typical scientific cut-off) reject the hypothesis. This happens because the chance of you having the disease is so low and the chance of that negative result being wrong is so low it totally outweighs that positive result. Bayes’ theorem tells us that there is a 99% chance that the positive was a false positive (N.B. this is because of using two pieces of evidence from our experiment and not because false positives only happen 1% of the time).

Let’s consider getting a different result from our experiment. If we got both tests to come up positive, then Bayes’ theorem tells us that the probability of actually having the disease is 99%. So we can say that the hypothesis is true. There is just no way that the test came up with 2 false positives when there is such a small chance of a false positive.

Here’s the moral of all of this. Bayesian inference gives a mathematically precise way to make sense of the following phrase which is central to the scientific method: The more extraordinary the hypothesis (re: hypotheses that are counter to prior scientific knowledge) the more extraordinary the evidence must be.

Do you see how Bayesian inference does this? We started with the hugely extraordinary hypothesis that you had the disease, despite the fact that we went into the experiment with 99% certainty that this was incorrect. So we needed extremely good evidence in order to affirm the hypothesis. In the first experiment, our evidence was a single positive test. That might seem like good evidence considering the 99% accuracy of the test, but the evidence had to overcome the huge obstacle of an extraordinary hypothesis.

Bayes’ theorem then told us that that evidence just wasn’t good enough for that hypothesis. So the Bayesian interpretation of the scientific method says we should look at how confident we are in the various pieces of prior scientific knowledge that confirm and reject our hypothesis as is. Then we do an experiment and get some evidence. We plug all that into Bayes’ theorem and see whether or not that evidence was good enough to have a high level of confidence in either accepting or rejecting the hypothesis.