Critical Postmodern Readings, Part 2: Finishing Lyotard

Last time we looked at the introduction to Lyotard’s The Postmodern Condition: A Report on Knowledge. That introduction already contained much of what gets fleshed out in the rest of the short book, so I’m going to mostly summarize stuff until we hit anything that requires serious critical thought.

The first chapter goes into how computers have changed the way we view knowledge. At the time, this was probably an excellent insight that required argument; now it’s obvious to everyone. Humans used to gain knowledge by reading books and talking to each other, a somewhat qualitative experience. The nature of knowledge has shifted with (big) data and machine learning. It’s very quantitative. It’s also a commodity to be bought and sold (think Facebook/Google).

It is a little creepy to see how prescient Lyotard was. He basically predicts that multinational corporations will have the money to buy this data, and that owning the data gives them real-world power. He predicts knowledge “circulation” in a similar way to money circulation. Here’s a part of the prediction:

The reopening of the world market, a return to vigorous economic competition, the breakdown of the hegemony of American capitalism, the decline of the socialist alternative, a probable opening of the Chinese markets …

Other than the decline of the socialist alternative (which seems to have had a recent resurgence), Lyotard perfectly predicted how the computerization of knowledge would affect the world in the 40 years since he wrote this.

Chapter two reiterates the idea that scientific knowledge (i.e. the type discussed above) is different from, and in conflict with, “narrative” knowledge.

There is also a legitimation “problem” in science. The community as a whole must choose gatekeepers seen as legitimate who decide what counts as scientific knowledge.

I’ve written about why I don’t see this as a problem like Lyotard does, but I’ll concede the point that there is a legitimation that happens, and it could be a problem if those gatekeepers change the narrative to influence what is thought of as true. There are even known instances of political biases making their way into schools of scientific thought (see my review of Galileo’s Middle Finger by Alice Dreger).

Next Lyotard sets up the framework for thinking about this. He uses Wittgenstein’s “language game” concept. The rules of the game can never legitimate themselves. Even small modifications of the rules can greatly alter meaning. And lastly (I think this is where he differs from Wittgenstein), each speech act is an attempt to alter the rules. Since agreeing upon the current set of rules is a social contract, it is necessary to understand the “nature of social bonds.”

This part gets a little weird to me. He claims that classically society has been seen either as a unified whole or divided in two. The rules of the language games in a unified whole follow standard entropy (they get more complicated and chaotic and degenerate). The divided in two conception is classic Marxism (bourgeoisie/proletariat).

Even if it gets a bit on the mumbo-jumbo side through this part, I think his main point is summarized by this quote:

For it is impossible to know what the state of knowledge is—in other words, the problems its development and distribution are facing today—without knowing something of the society within which it is situated.

This doesn’t seem that controversial to me, considering I’ve already admitted that certain powers can control the language and flow of knowledge. Being as generous as possible here, I think he’s just saying we have to know how many of these powers there are, who holds the power, and who legitimated that power before we can truly understand who’s forming these narratives and why.

In the postmodern world, we have a ton of different institutions all competing for their metanarrative to be heard.

Society is more fractured than just the two divisions of the modern world. But each of these institutions also has a set of rules for their language games that constrains them.  For example, the language of prayer has a different set of rules from an academic discussion at a university.

Chapters 7-9 seem to me to be where the most confusion can occur, on the part of both Lyotard and the reader. He dives into the concepts of narrative truth and scientific truth. You can already feel Lyotard trying to position scientific truth as less valuable than it is and narrative truth as more valuable.


Lyotard brings up the classic objections to verification and falsification (namely a variant on Hume’s Problem of Induction). How does one prove that one’s proof and evidence for a theory are true? How does one know the laws of nature are consistent across time and space? How can one say that a (scientific) theory is true merely because it hasn’t yet been falsified?

These were much more powerful objections in Lyotard’s time, but much of science now takes a Bayesian epistemology (even if scientists don’t use this terminology). We believe what is most probable, and we’re open to changing our minds if the evidence leads in that direction. I addressed this more fully a few years ago in my post: Does Bayesian Epistemology Suffer Foundational Problems?
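To make that concrete, here is a minimal sketch of Bayesian updating. The hypothesis, prior, and likelihoods are numbers I invented purely for illustration; they aren’t from Lyotard or my earlier post.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start moderately skeptical of some hypothesis H...
belief = 0.2
# ...then observe evidence that is far likelier if H is true.
belief = posterior(belief, 0.9, 0.1)  # rises to ~0.69
# An independent repetition of the same kind of evidence pushes it further.
belief = posterior(belief, 0.9, 0.1)  # ~0.95
```

The point is exactly the one above: nothing is ever “proved,” but belief tracks the evidence, and the same machinery would revise it back down if disconfirming evidence arrived.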

… drawing a parallel between science and nonscientific (narrative) knowledge helps us understand, or at least sense, that the former’s existence is no more—and no less—necessary than the latter’s.

These sorts of statements are where things get tricky for me. I buy the argument that narrative knowledge is important. You can read James Baldwin and gain knowledge of, and empathy for, a gay black man’s perspective in a way that changes your life and the way you see the world.

Or maybe you read Butler’s performative theory of gender and suddenly understand your own gender expression in a new way. Both of these types of narrative knowledge could even be argued to be a “necessary” and vital part of humanity.

I also agree science is a separate type of knowledge, but I also see science as clearly more necessary than narrative knowledge.

If we lost all of James Baldwin’s writings tomorrow, it would be a tragedy. If we lost the polio vaccine tomorrow, it would be potentially catastrophic.

It’s too easy to philosophize science into this abstract pursuit and forget just how many aspects of your life it touches (your computer, the electricity in your house, the way you cook, the way you get your food, the way you clean yourself). Probably 80% of the developed world would literally die off in a few months if scientific knowledge disappeared.

I’ll reiterate that Lyotard thinks science is vastly important. He is in no way saying the problems of science are crippling. The quote above is more about raising narrative knowledge to the same importance as science than about devaluing science (Lyotard might point to the disastrous consequences of convincing a nation of the narrative that the Aryan race is superior). For example, he says:

Today the problem of legitimation is no longer considered a failing of the language game of science. It would be more accurate to say that it has itself been legitimated as a problem, that is, as a heuristic driving force.

Anyway, getting back to the main point. Lyotard points out that the problem of legitimating knowledge is essentially modern, and though we should be aware of the difficulties, we shouldn’t be too concerned with them. The postmodern problem is the grand delegitimation of various narratives (and one can’t help but hear Trump yell “Fake News” while reading this section of Lyotard).

Lyotard spends several sections developing a theory of how humans do science, and he develops the language of “performativity.”

It all seems pretty accurate to me, and not really worth commenting on (i.e. it’s just a description). He goes into the issues Gödel’s Incompleteness Theorem caused for positivists. He talks about the Bourbaki group. He talks about the seeming paradox of having to look for counterexamples while simultaneously trying to prove the statement to be true.

I’d say the most surprising thing is that he gets this stuff right. You often hear about postmodernists hijacking math/science to make their mumbo-jumbo sound more rigorous. He brings up Brownian motion, modeling discontinuous phenomena with differentiable functions to ease analysis, and the fact that the Koch curve has a non-integer dimension. These were all explained without error and without claiming they imply things they don’t.
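The Koch curve fact is easy to check yourself: a self-similar shape built from N copies of itself, each scaled down by a factor r, has similarity dimension log N / log r, and the Koch curve replaces every segment with 4 copies at 1/3 scale. (A quick sketch; the function name is mine.)

```python
import math

def similarity_dimension(copies, scale):
    """Solve copies = scale**D for D, the similarity dimension."""
    return math.log(copies) / math.log(scale)

# Koch curve: each segment becomes 4 segments, each 1/3 as long.
d = similarity_dimension(4, 3)
print(round(d, 4))  # 1.2619, strictly between a curve (1) and a surface (2)
```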

Lyotard wants to call these unintuitive and bizarre narratives about the world that come from weird scientific and mathematical facts “postmodern science.” Maybe it’s because we’ve had over forty more years to digest this, but I say: why bother? To me, this is the power of science. The best summary I can come up with is this:

Narrative knowledge must be convincing as a narrative; science is convincing despite the unconvincing narrative it suggests (think of the EPR paradox in quantum mechanics or even the germ theory of disease when it was first suggested).

I know I riffed a bit harder on the science stuff than a graduate seminar on the book would. Overall, I thought this was an excellent read. It seems more relevant now than when it was written, because it cautions about the dangers of powerful organizations buying a bunch of data and using that to craft narratives we want to hear while delegitimating narratives that hurt them (but which might be true).

We know now that this shouldn’t be a futuristic, dystopian fear (as it was in Lyotard’s time). It’s really happening with targeted advertising and the rise of government propaganda and illegitimate news sources propagating through our social media feeds. We believe what the people with money want us to believe, and it’s impossible to free ourselves from it until we understand the situation with the same level of clarity that Lyotard did.


Critical Postmodern Readings, Part 1: Lyotard

I’m over nine years into this blog, so I think most readers know my opinions and worldview on many issues in philosophy. I roughly subscribe to a Bayesian epistemology, and in practical terms this amounts to something like being a rational humanist and skeptic.

I believe there is an objective world and science can get at it, sometimes, but we also have embodied minds subject to major flaws, so we can’t experience that world directly. Also, with near 100% probability, we experience many aspects of it in a fundamentally different way than they “actually” exist. This puts me somewhat in line with postmodernists.

I believe there are valid and invalid ways to interpret art. This puts me in stark contrast to postmodernists. Postmodernism, as a school of thought, seems to have made a major comeback in academic circles. I’ve also written about the dangers posed by these types of ideas. For more information, search “philosophy” on the sidebar. These opinions have been fleshed out over the course of tens of thousands of words.

I first read famous postmodernists and proto-postmodernists like Baudrillard, Foucault, Lyotard, Derrida, Hegel, and so on as an undergrad (i.e. before this blog even existed). At that time, I had none of the worldview above. I basically read those philosophers with the reaction: “Whoa, dude, that’s deep.” I went along with the other students, pretending to understand the profound thoughts of continental philosophy.

I’ve never returned to them, because I didn’t think they were relevant anymore.

I kind of thought we were past the idea of “post-truth.”

Now I’m not so sure. This whole intro is basically a way to say that I want to try to tackle some of these texts with a more critical approach and with the added knowledge and experience I’ve gained.

I know this will ruffle a lot of feathers. Part of postmodernists’ “thing” is to dismiss any criticism as “you’re not an expert, so you just don’t understand it.” That’s fine. I’m going to make an honest effort, though, and if you love this stuff and think I’m misunderstanding, let me know. I’m into learning.

Today we’ll tackle Jean-François Lyotard’s The Postmodern Condition: A Report on Knowledge. This is arguably the most important work in the subject, and is often cited as the work that defined “postmodernism.” Since I’ve already used a bunch of space with the setup, we’ll only cover the Introduction for now. I recall having to read the Introduction for a class, and I’m pretty sure that’s the extent to which we covered Lyotard at all.

The Introduction is primarily focused on giving an explanation of what Lyotard means by “the postmodern condition,” and how we know we are living in it. There is something important and subtle here. The section is descriptive rather than prescriptive. Modern (liberal arts) academia tends to think in prescriptive terms. We’ll get to that later.

I guess I’ll now just pull some famous quotes and expound on them.

Science has always been in conflict with narratives.

I don’t think this is that controversial.

He’s saying science is one narrative for how we arrive at knowledge.

The narrative might be called the Enlightenment Values narrative. It’s based on empiricism and rational argument.

This narrative is so pervasive that we often forget it is a narrative. We usually equate science with knowledge, but these values didn’t always exist in the West. There is a substantial body of work from Descartes to Kant that had to make the case for rationality and empiricism as a foundation for knowledge. That’s the definition of a narrative.

The fact that science comes into conflict with other narratives should be readily obvious. There are science vs religion debates all the time to this day. Lyotard also points out another vital concept we often overlook. There are lots of institutions and political forces behind what we call science, and each of these has its own metanarrative that might come into conflict with forming knowledge.

I define postmodern as incredulity toward metanarratives. This incredulity is undoubtedly a product of progress in the sciences: but that progress in turn presupposes it.

This is a bit deeper than it looks, but only because I know the context of Lyotard’s writing. Taken with the first quote above, one might just think that he’s saying the progress of science has led to people questioning the metanarratives of their lives, like the religion they were brought up in.

Part of the reason Lyotard has chosen the term “postmodern” to describe this condition is because of the artistic movements known as postmodernism. The utter destruction of World War I and World War II brought a destabilization to people’s lives.

Technology created this destruction, and it was fueled by science.


Not only did people question the traditions they were brought up in, but they began to question if science itself was good. Much of the postmodern art produced in the decades after WWII focused on highly disjointed narratives (Lost in the Funhouse), the horrors of war (Gravity’s Rainbow), involved utter chaos and randomness (Dadaism), or emphasized futility and meaninglessness (Waiting for Godot).

All these aspects overthrew narratives and traditions. They weren’t just radical because of their content; they often questioned whether we even knew what a novel or a play or a poem or a piece of music was. If we no longer knew what these longstanding artistic forms and narratives were, how could we trust any of the narratives that gave our life meaning?

And I’ll reiterate, there is a pretty direct link from the science that brought the destruction to this “postmodern condition” people found themselves in.

The rest of the Introduction gets pretty jargony.

Where, after the metanarratives, can legitimacy reside?

There is a danger that people will seize upon any stabilizing force once in this position. Authority figures can even ride this to power (we just watched this happen in the U.S.). They tell us stories that make sense and make us feel better, so we put them in power. This is an endless cycle, because once in power, they control the narrative.

How do we form truth and knowledge in such a society? That is the subject of Lyotard’s book and is not answered merely in the Introduction.

I’ll end today’s post by pointing out something very important. Lyotard seems to believe in truth and knowledge and science. He seems concerned by people’s rejection of these concepts due to the postmodern condition.

When people describe themselves as postmodernists, they tend to mean they reject the notion of truth. They say that all we have are narratives, and each is equally valid. Maybe this is because Lyotard isn’t a postmodernist? He merely describes what is going on.

I think more likely it’s that this label has changed from descriptive to prescriptive. Current postmodernists think of the postmodern condition as being good. If science starts to dominate as a narrative, these people want to reject that. In some sense they see this as “liberation” from the “imperialist white capitalist patriarchy” that has dominated the West and caused so much suffering.

I’m very curious to see if these attitudes actually crop up in the writings of postmodernist philosophers or if this view is some corruption of these thinkers.

The Ethics of True Knowledge

This post will probably be a mess. I listen to lots of podcasts while running and exercising. There was a strange confluence of topics that seemed to hit all at once from several unrelated places. Sam Harris interviewed Neil deGrasse Tyson, and they talked a little about recognizing alien intelligence and the rabbit hole of postmodernist interpretations of knowledge (more on this later). Daniel Kaufman talked with Massimo Pigliucci about philosophy of math.

We’ll start with a fundamental fact that must be acknowledged: we’ve actually figured some things out. In other words, knowledge is possible. Maybe there are some really, really, really minor details that aren’t quite right, but the fact that you are reading this blog post on a fancy computer is proof that we aren’t just wandering aimlessly in the dark when it comes to the circuitry of a computer. Science has succeeded in many places, and it remains the only reliable way to generate knowledge at this point in human history.

Skepticism is the backbone of science, but there is a postmodernist rabbit hole one can get sucked into by taking it too far. I won’t make the standard rebuttals to radical skepticism, but instead I’ll make an appeal to ethics. I’ve written about this many times, two of which are here and here. It is basically a variation on Clifford’s paper The Ethics of Belief.

The short form is that good people will do good things if they have good information, but good people will often do bad things unintentionally if they have bad information. Thus it is an ethical imperative to always strive for truth and knowledge.

I’ll illuminate what I mean with an example. The anti-vaccine people have their hearts in the right place. They don’t intend to cause harm. They actually think vaccines are harmful, so it is the bad information causing them to act unethically. I picked this example because it exemplifies the main problem I want to get to.

It is actually very difficult to criticize their arguments in general terms. They are skeptical of the science for reasons that are usually good. They claim big corporations stand to lose a lot of money, so they are covering up the truth. Typically, this is one of the times it is good to question the science, because there are actual examples where money has led to bad science in the past. Since I already mentioned Neil deGrasse Tyson, I’ll quote him for how to think about this.

“A skeptic will question claims, then embrace the evidence. A denier will question claims, then deny the evidence.”

This type of thing can be scary when we, as non-experts, still have to figure out what is true or risk unintentional harm in less clear-cut examples. No one has time to examine all of the evidence for every issue to figure out what to embrace. So we have to rely on experts to tell us what the evidence says. But then the skeptic chimes in and says that an appeal to authority is a logical fallacy, and that those experts are paid in ways that create a conflict of interest.

Ah! What is one to do? My answer is to go back to our starting point. Science actually works for discovering knowledge. Deferring to scientific consensus on issues is the ethically responsible thing to do. If they are wrong, it is almost certainly going to be an expert within the field that finds the errors and corrects them. It is highly unlikely that some Hollywood actor has discovered a giant conspiracy and also has the time and training to debunk the scientific papers that go against them.

Science has been wrong; anything is possible, but one must go with what is probable.

I said this post would be a mess and brought up philosophy of math at the start, so how does that have anything to do with what I just wrote? Maybe nothing, but it’s connected in my mind in a vague way.

Some people think mathematical objects are inherent in nature. They “actually exist” in some sense. This is called Platonism. Other people think math is just an arbitrary game where we manipulate symbols according to rules we’ve made up. I tend to take the embodied mind philosophy of math as developed by Lakoff and Núñez.

They claim that mathematics itself is purely a construct of our embodied minds, but it isn’t an “arbitrary” set of rules like chess. We’ve struck upon axioms (Peano or otherwise) and logic that correspond to how we perceive the world. This is why it is useful in the real world.
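A toy sketch of what a non-arbitrary construction means here: the Peano-style rules below are pure symbol manipulation (the tuple encoding is my own choice, not anything from Lakoff and Núñez), yet they reproduce ordinary arithmetic, which is why the game is useful rather than arbitrary like chess.

```python
# Numbers are built from a chosen zero and a successor operation.
ZERO = ()

def succ(n):
    return (n,)  # S(n): wrap n one level deeper

def add(m, n):
    # The two Peano rules for addition:
    #   m + 0    = m
    #   m + S(n) = S(m + n)
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

def to_int(n):
    """Decode a numeral back to a familiar int, for display only."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
print(to_int(add(two, two)))  # 4
```

Nothing in nature hands us the successor rule; we chose it because it mirrors how embodied creatures count, and that choice is what makes the formal game track the world.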

To put it more bluntly: aliens, whose embodied experience of the world might be entirely different, might strike upon an entirely different mathematics, one we might not even recognize as math but that would be equally effective at describing the world as they perceive it. Therefore, math is not mind independent or even universal among all intelligent minds, but it is still useful.

To tie this back to the original point, I was wondering if we would even recognize aliens as intelligent if their way of expressing it was so different from our own that we couldn’t recognize their math as math. Would they be able to express true knowledge that was inaccessible to us? What does this mean in relation to the ethics of belief?

Anyway, I’m thinking about making this a series on the blog. Maybe I’ll call it RRR: Random Running Ramblings, where I post random questions that occur to me while listening to something while running.

Arguments on Religious Exemptions to Nondiscrimination Law

[This post is a day late. Excuses: I got bogged down looking these laws up. I accidentally had “insert” on, and I deleted large chunks without realizing it.]

It’s been in the news recently that a few states have (re)issued “Religious Freedom Laws.” The most recent being Mississippi’s Protecting Freedom of Conscience from Government Discrimination Act. I won’t get into the details on this particular act but instead try to present some arguments from both sides of this debate that I think are good and bad.

Like most liberals, I tend to react from my gut and conclude these things are horrible. This is mostly an attempt to take a step back and figure out the actual arguments. I’ll take a philosophical approach and assume for the sake of argument that the laws are written in a reasonable way to achieve their intended goal rather than pick apart the specific language of any one of them.

This means we’ll assume that there is a (local) law that says public accommodations must serve people regardless of race, sex, sexual orientation, etc, and that the newer religious freedom law exempts people from following the nondiscrimination policy based on a “deeply held religious belief.”

Good pro argument: It’s not a big deal for people who are denied service for any reason to go somewhere else. I actually think this is a pretty good argument. When I was planning my wedding, I think back to how I would have felt if someone would have said, “You know, we’re family owned, and we prefer not to do gay weddings for religious reasons. We can put you in contact with several other local bakeries that do them instead if you don’t mind.”

Wedding plans have lots of setbacks. This one would be the least of my worries. I find it hard to care that much. I’d think they were homophobic jerks, but if the laws were written in a way that said: if you deny service for a religious reason, you must provide contact information for a service of equal quality within a reasonable distance, that seems to be a good trade-off. In order to discriminate, they have to advertise for their main competition. If there isn’t competition, they wouldn’t be allowed the exception.

Good anti argument: It’s not a big deal for the provider to just provide the service. Part of being an adult who engages the public is to deal with people you don’t like or who make you uncomfortable.

Let’s take two examples. The first is the clerk who hands the documentation to the gay couple. This should be easy. It’s just a piece of paper. I don’t even think that clerk has to sign it. They literally just hand it to you. That person isn’t condoning anything or celebrating anything or participating in anything. In other words, it’s just not a big deal to provide that service regardless of religious belief.

A slightly trickier example is baking a cake for a wedding that “goes against your religion.” Let’s remove the gay wedding from the example. This becomes a question of whether the person has developed the normal adult cognitive faculty of separating a pure business transaction from their personal life. It’s childish to care so much about how a cake is going to be used.

I’m sure there are Catholic bakers who believe getting remarried without an annulment is a sin. Or heck, they probably believe all marriages should be through the Catholic Church. Yet they manage to provide cakes for all sorts of weddings that aren’t a part of their religion.

My guess is that they have the ability to forget about it once it is baked. If it bothers you so much, stop thinking about it so much. The cake baker is not “participating” in the wedding. They are not “condoning” or “approving” of the wedding. They won’t even be attending. The ego needed to think so highly of their service is staggering.

I maintain that this is true for every scenario the law is intended to cover. Grow up. It’s not a big deal to provide the service. Sometimes, in real life, you have to deal with people you don’t like. That’s just part of running a business. It’s special pleading to get a law to shield you from these people.

Consistent libertarian pro argument: All services should be allowed to discriminate however they want. It’s 2016! The market will weed out the discriminatory services fairly quickly, because everyone will boycott them for discriminating. Then, without any government intervention, we will be in the same situation that the anti side wanted.

I have no idea whether this is empirically true, but I respect the consistency of the argument. This brings up a bad argument on the pro side. Some people say the law is okay because it has “targeted language.” This is a horrible argument. It is an attempt to get around the slippery slope of allowing all people exemptions to all things based on vague “religious beliefs.” But being targeted is admitting that the law is specifically designed to legally discriminate against one targeted group. That is unacceptable. Religious bigotry cannot be written into law. The consistent libertarian pro argument is much better.

Slippery slope anti argument: Allowing religious exemptions will lead to chaos. The language of “deeply held religious belief” is too vague. That could mean anything. Maybe it’s my deeply held religious belief that it’s a sin for Asians to eat pork. Am I allowed to deny them service at my all-pork restaurant based on that?

This gets back to the libertarian argument. I think if the pro side wants to be consistent, they have to say this is allowed. The fact that they won’t go this far is a sign that their argument isn’t very good if made in this way. So ultimately I think I have to come down on the anti side unless there was solid empirical evidence that the libertarian argument would work.

 

On Modern Censorship

I don’t want to wade into the heavy politics of things like GamerGate, MetalGate, and so on, but those movements certainly got me thinking about these issues a few months ago. A few days ago, I read a New York Times article about twitter shaming people out of their careers over basically nothing. This brought some clarity to my thoughts on the issue.

I’ll try to keep examples abstract, at the cost of readability, to not spur the wrath of either side. I haven’t done a post on ethics in a while, and this is an interesting and difficult subject.

First, let me say there are clear cases where censorship is good. For example, children should not be allowed to watch pornography (of course, there could be a dispute over the age where this becomes blurry, but everyone has an age where it is too young). There are also clear cases where censorship is bad. For example, a group of concerned Christian parents succeeds in a petition to ban Harry Potter from their children’s school.

Many arguments about censorship boil down to this question of societal harm. To start our thought experiment, let’s get rid of that complication and assume that whatever work is in question is fine. In other words, we will assume that censorship is bad in the sense that the marketplace of ideas should be free. If something offends you, then don’t engage with it. You shouldn’t go out of your way to make it so no one can engage with it.

In the recent controversies, there has been an underlying meta-dialogue that goes something like this:

Person A: If you don’t like the sexism/racism/homophobia/etc (SRHE) in this game/book/movie/etc (GBME), then don’t get the media. Stop trying to censor it so that I can’t engage with it. I happen to enjoy it.

Person B: I’m not trying to censor anything. I’m just raising social awareness as to the SRHE. It is through media that these types of things are perpetuated, and the first step to lessen this is to raise awareness.

What made this issue so difficult for me is that I understand both points of view. Person A is reiterating the idea that if you don’t like something, then don’t engage with it. There is no need to ruin it for everyone else. It is also hard to argue with Person B if they are sincere. Maybe they agree that censorship is bad, but they want to raise awareness as to why they don’t like the media in question.

The main point of this post is to present a thought experiment where Person B is clearly in the wrong. The reason to do this is that I think the discussion often misses a vital point: in our modern age of twitter storms and online petitions, Person B can commit what might be called “negligent censorship.” Just like in law, negligence is not an excuse that absolves you of the ethical consequences of censoring something.

Thought experiment: Small Company starts making its GBME. In order to fund the project, they get the support of Large Company that is well-known for its progressive values. In the age of the internet, news of this new GBME circulates early.

Person B happens to be a prominent blogger and notices some SRHE in the GBME. Note, for the purposes of this discussion, it doesn’t really matter whether the SRHE is real or imagined (though, full disclosure, I personally believe that people whose job it is to sniff out SRHE in media tend to exaggerate [possibly subconsciously] SRHE to find it where it maybe doesn’t really exist).

Let’s make this very clear cut. Person B knows that they can throw their weight around enough to get a big enough twitter storm to scare the Large Company backer out of funding the Small Company’s project. Person B does this, and sure enough, the project collapses and never gets finished or released.

This is clear censorship. Person B acted with the intent to squash the GBME. Sadly, Person B can still claim the nobler argument given earlier, and it is hard to argue against that. I think this is part of what infuriates Person A so much. You can’t prove their interior motivation was malicious.

But I think you don’t need to. Now let’s assume Person B does all of this with the good-natured intention of merely “raising awareness.” The same outcome occurs. Your intent shouldn’t matter, because your actions led to the censorship (and also hurt the livelihood of some people which has its own set of moral issues).

If you write something false about someone that leads to their harm, even if you didn’t realize it, you can still be charged with libel. Negligence is not an excuse. I’m not saying it is a crime to do what Person B did (for example, the SRHE may actually be there so the statements Person B made were true). I’m only making an analogy for thinking about negligence.

You can claim you were only trying to raise awareness, but I claim that you are still ethically responsible. This is especially true now that we’ve seen this happen in real life many times. If Person B is an adult, they should know that writing such things often has this effect.

To summarize, if you find yourself on Person B’s side a lot, try to get inside the head of Small Company for a second. Whether intended or not, Person B caused their collapse. It is not an excuse to say Small Company should have been more sensitive to the SRHE in their GBME if they wanted to stay afloat.

This is blaming the victim. If Large Company said upfront they wouldn’t back the project if Small Company made their proposed GBME, it would be Small Company’s fault for taking the risk. If a group of people who don’t agree with the content of the GBME cause it to collapse, it is (possibly negligent) censorship.

Under our assumption that censorship is bad, I think Person B has serious ethical issues and Person A is clearly in the right. The problem is that in real life, Person B tries to absolve their wrong by implicitly appealing to a utilitarian argument.

A (non-malicious) Person B will truly believe that the short-term harm of censoring is outweighed by the long-term good of fighting SRHE. If the evidence were perfectly clear about the causation/correlation between SRHE in mass media and real life, Person B would have a pretty good ethical argument for their position.

What makes this such a contested issue is that we are in some middle ground. There is correlation, which may or may not be significant, but causation is far less clear. Maybe the arrow points the other way: the SRHE in society comes out in art because it is already present in society, not the reverse that Person B claims.

This is why, even though, with my progressive values, I am highly sympathetic to the arguments and sentiments of Person B, I have to side with Person A most of the time. Person B has a moral responsibility to make sure they raise awareness in a way that does not accidentally lead to censorship. This has become an almost impossible task with our scandal-obsessed social media.

For the debates to calm down a bit, I think side B has to understand side A a bit better. I think most people on side A understand the concerns of side B, but they just don’t buy the argument. Many prominent speakers on side B dismiss side A as a bunch of immature white boys who don’t understand their media has SRHE in it. Side B needs to realize that there is a complicated ethical argument against their side, even if it rarely gets articulated.

I’m obviously not calling for self-censorship (which is always the catch-22 of speaking about these issues), but being a public figure comes with certain responsibilities. Here are the types of things I think a prominent writer on SRHE issues should think more critically about before writing:

1. Do I influence a lot of people’s opinion about SRHE topics? For example, having 200K twitter followers might count here.
2. Do my readers expect me to point out SRHE in GBME on a regular basis? If so, you might be biased towards finding it. Ask someone familiar with the GBME whether you are taking clips or quotes out of context to strengthen your claims before making a public accusation.
3. Are my words merely bringing awareness to an issue, or am I also making a call to action to censor the GBME?

Clifford’s Ethics of Belief

Last post I took as a starting point the premise that people should want to hold true beliefs. It turns out that W. K. Clifford (yes, of Clifford fame in mathematics) wrote a famous 1877 essay on the ethical implications of this idea called “The Ethics of Belief.” In general, the essay argues that it is immoral to hold beliefs which cannot be supported by sufficient evidence. I won’t go into his epistemology, because I think we have better foundations, such as the one presented in the previous post.

The argument essentially boils down to making a case based on an example (or thought-experiment if you will). I’ll give some modern day real examples to point out that his idea seems to be warranted. Let it be said that Clifford presents the example in much more poetic language (well worth reading in my opinion: full copy here). To prevent the post from going on too long, I’m going to just distill out the key points.

John owns an emigrant ship (in the late 19th century). He knows his ship is old and not well built. He knows it probably needs repairs after its many journeys. The key point of this setup is that there is enough evidence to cast legitimate doubt on the ship’s seaworthiness. Still, John’s cognitive biases start flaring when he thinks about the time and money repairs would take.

John starts rationalizing away his fears. The ship has made the journey many times, so why suspect it won’t make it this time? He has faith that God will protect the innocent people on the journey. He tells himself the repair people are overstating the problems just to scam him for money. And so on. He comes to be sure that the ship is safe for travel.

As presented above, it looks like John had control over his biases and intentionally argued himself into a position that was easier and cheaper for him at the possible cost of other people’s lives. This is not the case at all. We know that cognitive biases like these work without our knowledge (see the previous post). In this thought experiment, John truly believes the ship is seaworthy, and he does not know that he came to this belief through faulty means.

Everyone knows how this story ends. The ship sinks in a storm and lots of innocent people die. Now we have a difficult moral dilemma to unpack. Is John morally responsible for their deaths? Clifford argues that he is. He argues that it is a moral responsibility to rigorously examine available evidence to come to a belief that is most likely to be true.

Clifford then alters the scenario: the ship keeps making its journeys successfully, and the faulty belief never causes harm. He argues that it is still immoral for John to hold a belief that rigorous examination of the evidence would not support, because we have no idea when our faulty beliefs will cause harm. We have a moral responsibility to keep intentionally casting doubt on our held beliefs and seriously reexamining them.

In the case that the ship continues to make safe journeys, it is actually doing good by helping impoverished people emigrate to a better life (or at least we assume so to make a more striking case). Clifford argues that the act of not examining the belief is still immoral. We cannot judge whether an action is moral based on accidental consequences, even if those consequences produce good for society. To rephrase yet again: whether it is moral to hold a belief depends not on the truth or falsity of the belief but on whether you have sufficiently good reason to believe it is true. It is always immoral to believe something on faith, regardless of the good it does.

This has the interesting consequence that it is more moral to hold a false belief on good evidence than a true belief on bad evidence. Even though the next example takes this relatively neutral post to more inflammatory levels, I think it is important to see that Clifford’s thought experiment is not pure ivory-tower speculation. There are genuinely good people attempting to do good in the world whose false beliefs lead them to do some truly terrible things.

In 2008, an 11-year-old girl named Madeline Neumann collapsed to the floor. She had a treatable form of diabetes. Her parents had plenty of time to seek medical help and save her, but instead they prayed. They believed that prayer would cure her. They watched their daughter die. This is not some random isolated incident. These types of deaths happen all the time, and for good reason: if you truly believe that prayer works, then this is how you should behave. Madeline’s parents truly believed they were helping their daughter.

If you believe prayer works but you wouldn’t behave in this way, then you need to take a serious look at your belief that it works. Clifford would say that you bear just as much moral guilt as Madeline’s parents. The belief may never cause real harm for you, but the accidental consequences of a belief are not how we judge whether holding it is moral. If you wouldn’t behave as Madeline’s parents did, then you probably don’t truly believe prayer works; you just haven’t examined the belief closely enough to overcome the societal pressure of whatever community you belong to.

Clifford himself sums it up nicely:

To sum up: it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.

…”But,” says one, “I am a busy man; I have no time for the long course of study which would be necessary to make me in any degree a competent judge of certain questions, or even able to understand the nature of the arguments.”

Then he should have no time to believe.

Mathematical ethicists?

I should make it a New Year’s vow not to get into ethical debates with mathematicians. I’m not going to lie: mathematicians seem much harder to argue with than philosophers. They are more rigorous, they accept far fewer axioms, and they are willing to bite crazy bullets just to avoid conceding a contradiction in their position.

Where is this all coming from? Well, on Friday I got into a few debates with a certain person, the most interesting of which was on whether or not agnosticism is a defensible position. This continued today with a different person. Amazingly enough, the characterizations given above apply equally well to both of these people, yet they each argued different points (I stuck to my one position).

It went a little bit like the following. I claim that agnosticism is not a defensible position. In my brief encounters with the subject, it seems as if mathematicians tend to believe that it is the only position that you can defend from a rational standpoint. We can never know whether there is a God or not, and hence we cannot make a definitive statement one way or the other.

I claim this is nonsense. An agnostic formulates definitive beliefs about the world all the time instead of claiming to “not know.” Most contradictory is that they are not agnostic with respect to most gods created in human history: most agnostics are actually atheists with respect to the Greek or Norse gods. Less contradictory, but even more absurd, is that an agnostic who actually practices what they preach should say that we cannot know whether or not an invisible pink unicorn follows them around everywhere they go. We have no evidence for or against it, so the truth of its existence cannot be known for sure. (You may as well consider this line of reasoning a variant of Occam’s Razor.)

Alright, well that is just one of my arguments, but it turns out that one of the two people was actually willing to agree to not knowing things like the pink unicorn. On further pressing, such as whether or not my eyes were deceiving me and instead of solid ground three feet in front of me there was actually a cliff, he admitted we could not know that for sure either.

At this point you may be wondering what this has to do with ethics. And here it is. I decided to shift to an ethical argument. I say that an agnostic must ethically make the shift to atheism, because to not do so is to endorse unethical behavior. A person goes and kills someone and says that God told them to do it. The agnostic has to accept that this is possible. In fact, we could use the old standard of the categorical imperative and think about a world of all agnostics. Someone is on trial for a murder. Their case is that God told them to do it. They must be let off free. What if they are telling the truth? Who are we humans to condemn someone carrying out God’s command? Thus I claim that agnosticism tacitly supports an ideological system that allows for immoral behavior to be confused with moral behavior.

The person on Friday bought this argument but then decided that the same case could be made against atheism. I don’t wish to go into detail, since it completely changes the topic, but essentially the tangent dealt with moral relativism vs. absolutism and whether or not a case could objectively be made against nihilism (as you may be able to piece together, the argument was that a purely absolute ethics cannot exist, so an atheist system of ethics tacitly supports a nihilistic ethics which devalues human life unless there is a sound argument against it). I’m still thinking about it, but it is a harder case to make.

As I’ve probably stated in the past, my general view is that an absolute ethics does exist, and we can know parts of it, but we will probably never know all of it.