Critical Postmodern Readings, Part 3: Baudrillard

Jean Baudrillard is one of those postmodernist philosophers whom people can name but probably don’t know much about. He’s most famous for his work Simulacra and Simulation, in which he argues that we’ve replaced everything real in our society with symbols (more on this later). If you’re thinking of the movie The Matrix, then you’ve got the idea. That movie gets misattributed to a lot of different philosophers, but its underlying concept is basically a fictionalization of Baudrillard’s ideas.

I thought we’d tackle his paper “The Masses: The Implosion of the Social in the Media,” published in New Literary History, Vol. 16, No. 3 (1985). The paper tackles issues relevant to our current media environment, so I thought it’d be interesting to see how it holds up now.

He begins with an observation about the media:

…they are what finally forbids response, what renders impossible any process of exchange (except in the shape of a simulation of a response, which is itself integrated into the process of emission, and that changes nothing in the unilaterality of communication).

For the media of the ’80s, as well as for traditional media today, this is certainly true. There’s no way to comment on, or engage in a dialogue with, the people presenting information on TV, radio, or even podcasts, newspapers, and blogs with closed comments. Traditional media gets to define the conversation, and when a “response” to what they say is permitted, it’s still controlled by them, and they still distribute that response to you.

Baudrillard frames this as a power imbalance: the media have a monopoly on information. Only when a response is allowed does the exchange of ideas become more balanced.

Baudrillard brings up the case of an opinion poll as an example to motivate the next part of the paper. He points out that this distribution of information is merely symbolic of the state of opinion. There is a complicated interaction where the information itself changes opinion, rendering itself obsolete. This type of distribution of information introduces uncertainty on many fronts:

We will never know if an advertisement or opinion poll has had a real influence on individual or collective wills—but we will never know either what would have happened if there had been no opinion poll or advertisement.

Here, I have to say this analysis is a bit dated. The statement was probably accurate in the ’80s, but with Google and other big-data analytics companies tracking so much of our lives, we can often determine with confidence whether particular advertisements or polls have influenced individual and collective wills.

This point is mostly peripheral to the overall thesis of Baudrillard’s article, though. He goes on to make an astute observation that can cause a bit of anxiety if you dwell on it too much: we don’t have a good way to separate reality from the “simulative projection in the media.”

It’s a complicated way of saying that we simply can’t check a lot of things. If we see on the news that there was a minor earthquake in Japan, we believe it. But we weren’t there. All we get is the simulation of reality as provided by the news. Of course, there are ways to check that particular fact, say by consulting public seismic-activity records.

But there are other narratives and simulations that are much harder to check, and in any case, we are bombarded by so much information that we don’t have time to check it. We believe the narrative as presented. If we come across a competing narrative, we only become uncertain. It doesn’t actually clarify the situation (here we get back into Lyotard territory).

Baudrillard would later write a book-length analysis of this about the Gulf War (entitled The Gulf War Did Not Take Place) in which he claims that the American public only received propaganda about the war through the media. The war took place, but the simulated reality the public received did not accurately reflect the events that occurred. Moreover, there were pretty much no sources outside this propaganda to learn about the actual events.

We live in an age of hyperinformation, and the more closely we track how everything is changing, the worse our understanding gets. This isn’t Baudrillard’s wording, but I can see how it makes sense: when we pay too close attention, we mistake noise for signal and get trapped in our own little information bubbles. “Hyperinformation” (his term) can lead to more uncertainty, not less.

I think we’ve come to a point where hyperinformation is at least somewhat good. Yes, for the reasons listed, it can be paralyzing if you want the truth. But at the same time, it means the truth might be out there to discover. We no longer get only the corporate media narrative. There are independent reporters and journalists working hard to present viable alternatives. Seeing through the noise isn’t hopeless now, as it was back in the ’80s.

Baudrillard says we can get out of the despair of all this by treating it like a “game of irresponsibility, of ironic challenge, of sovereign lack of will, of secret ruse.” The media manipulate and the masses resist, or better yet, respond.

I’ll just reiterate that what Baudrillard identifies as the central problem here has been partially solved in the modern day. The masses have Twitter and Facebook and comment sections and their own blogs and YouTube channels. The masses have a way to speak back now. Unfortunately, this has opened up a whole new set of problems, and I wish Baudrillard were still around. He’d probably have some interesting things to say about it.

Now that I’ve been doing this Critical Postmodern Reading series, I’m coming to believe these postmodernists were unjustly maligned, and that we should keep two terms distinct. The “postmodernist philosopher” analyzes the issues of the postmodern condition. The “postmodern academic” exploits the confusion brought on by the postmodern condition to push their own narrative.

It’s easy to look at the surface of Baudrillard and claim he’s some crackpot history denier who thinks there’s no such thing as objective reality, so we all make our own truth.

But if you read him carefully, he seems to be saying some very important true things. He thinks there is an objective, true reality, and it’s dangerous that we all simulate different versions of it (i.e. we filter the news through an algorithm that tells us the world is how we think it is). The truth gets hijacked by narratives. He sees the monopoly the media has on these narratives as damaging and even simulating a false reality.

His writing doesn’t even slip into incomprehensible postmodernist jargon to obscure the argument. I found this article illuminating and comprehensible. The only parts that no longer feel applicable are where he failed to predict how technology would develop.


Critical Postmodern Readings, Part 2: Finishing Lyotard

Last time we looked at the introduction to Lyotard’s The Postmodern Condition: A Report on Knowledge. That introduction already contained much of what gets fleshed out in the rest of the short book, so I’m going to mostly summarize stuff until we hit anything that requires serious critical thought.

The first chapter goes into how computers have changed the way we view knowledge. It was probably an excellent insight that required argument at the time. Now it’s obvious to everyone. Humans used to gain knowledge by reading books and talking to each other. It was a somewhat qualitative experience. The nature of knowledge has shifted with (big) data and machine learning. It’s very quantitative. It’s also a commodity to be bought and sold (think Facebook/Google).

Lyotard’s prescience here is a little creepy. He basically predicts that multinational corporations will have the money to buy this data, and that owning the data will give them real-world power. He predicts knowledge “circulation” analogous to money circulation. Here’s part of the prediction:

The reopening of the world market, a return to vigorous economic competition, the breakdown of the hegemony of American capitalism, the decline of the socialist alternative, a probable opening of the Chinese markets …

Other than the decline of the socialist alternative (which seems to have had a recent resurgence), Lyotard’s prediction of how the computerization of knowledge would affect the world over the forty years since he wrote this is nearly perfect.

Chapter two reiterates the idea that scientific knowledge (i.e., the type discussed above) is different from, and in conflict with, “narrative” knowledge. There is also a legitimation “problem” in science: the community as a whole must choose gatekeepers, seen as legitimate, who decide what counts as scientific knowledge.

I’ve written about why I don’t see this as a problem like Lyotard does, but I’ll concede the point that there is a legitimation that happens, and it could be a problem if those gatekeepers change the narrative to influence what is thought of as true. There are even known instances of political biases making their way into schools of scientific thought (see my review of Galileo’s Middle Finger by Alice Dreger).

Next, Lyotard sets up the framework for thinking about this. He uses Wittgenstein’s “language game” concept. The rules of the game can never legitimate themselves. Even small modifications of the rules can greatly alter meaning. And lastly (I think this is where he differs from Wittgenstein), each speech act is an attempt to alter the rules. Since agreeing upon the current set of rules is a social contract, it is necessary to understand the “nature of social bonds.”

This part gets a little weird to me. He claims that society has classically been seen either as a unified whole or as divided in two. In the unified-whole view, the rules of the language games follow standard entropy (they get more complicated, chaotic, and degenerate). The divided-in-two conception is classic Marxism (bourgeoisie/proletariat).

Even if it gets a bit on the mumbo-jumbo side through this part, I think his main point is summarized by this quote:

For it is impossible to know what the state of knowledge is—in other words, the problems its development and distribution are facing today—without knowing something of the society within which it is situated.

This doesn’t seem that controversial to me, considering I’ve already admitted that certain powers can control the language and flow of knowledge. Being as generous as possible here, I think he’s just saying we have to know how many of these powers there are, who holds the power, and who legitimated that power before we can truly understand who’s forming these narratives and why.

In the postmodern world, we have a ton of different institutions all competing for their metanarrative to be heard. Society is more fractured than just the two divisions of the modern world. But each of these institutions also has a set of rules for its language games that constrains it. For example, the language of prayer has a different set of rules from an academic discussion at a university.

Chapters 7–9 seem to me to be where the most confusion, on the part of both Lyotard and the reader, can occur. He dives into the concepts of narrative truth and scientific truth. You can already feel Lyotard trying to position scientific truth as less valuable than it is, and narrative truth as more valuable.

Lyotard brings up the classic objections to verification and falsification (namely a variant on Hume’s Problem of Induction). How does one prove that one’s proof and evidence of a theory are true? How does one know the laws of nature are consistent across time and space? How can one say that a (scientific) theory is true merely because it cannot be falsified?

These were much more powerful objections in Lyotard’s time, but much of science now takes a Bayesian epistemology (even if they don’t admit to this terminology). We believe what is most probable, and we’re open to changing our minds if the evidence leads in that direction. I addressed this more fully a few years ago in my post: Does Bayesian Epistemology Suffer Foundational Problems?
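To make the Bayesian stance concrete: the update rule at its core is a single standard formula (this is textbook probability, not something from Lyotard or my earlier post):

```latex
% Bayes' rule: the degree of belief in hypothesis H is revised by evidence E
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Under this view nothing is ever “verified” in the strict sense; each new piece of evidence just raises or lowers the probability assigned to a theory, which is exactly why the induction-style objections lose much of their bite.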

… drawing a parallel between science and nonscientific (narrative) knowledge helps us understand, or at least sense, that the former’s existence is no more—and no less—necessary than the latter’s.

These sorts of statements are where things get tricky for me. I buy the argument that narrative knowledge is important. One can read James Baldwin and gain knowledge of, and empathy for, a gay Black man’s perspective in a way that changes one’s life and one’s view of the world. Or maybe you read Butler’s performative theory of gender and suddenly understand your own gender expression in a new way. Both of these types of narrative knowledge could even be argued to be “necessary” and vital parts of humanity.

I also agree science is a separate type of knowledge, but I see it as clearly more necessary than narrative knowledge. If we lost all of James Baldwin’s writings tomorrow, it would be a tragedy. If we lost the polio vaccine tomorrow, it would be potentially catastrophic.

It’s too easy to philosophize science into this abstract pursuit and forget just how many aspects of your life it touches (your computer, the electricity in your house, the way you cook, the way you get your food, the way you clean yourself). Probably 80% of the developed world would literally die off in a few months if scientific knowledge disappeared.

I’ll reiterate that Lyotard thinks science is vastly important; he is in no way saying the problems of science are crippling. The above quote is more about raising narrative knowledge to the same importance as science than about devaluing science (Lyotard might point to the disastrous consequences of convincing a nation of the narrative that the Aryan race is superior). For example, he says:

Today the problem of legitimation is no longer considered a failing of the language game of science. It would be more accurate to say that it has itself been legitimated as a problem, that is, as a heuristic driving force.

Anyway, getting back to the main point: Lyotard points out that the problem of legitimating knowledge is essentially modern, and though we should be aware of the difficulties, we shouldn’t be too concerned with them. The postmodern problem is the grand delegitimation of various narratives (and one can’t help but hear Trump yell “Fake News” while reading this section of Lyotard).

Lyotard spends several sections developing a theory of how humans do science, and he develops the language of “performativity.” It all seems pretty accurate to me, and not really worth commenting on (i.e., it’s just a description). He goes into the issues Gödel’s incompleteness theorem caused for positivists. He talks about the Bourbaki group. He talks about the seeming paradox of having to look for counterexamples while simultaneously trying to prove a statement true.

I’d say the most surprising thing is that he gets this stuff right. You often hear about postmodernists hijacking math and science to make their mumbo-jumbo sound more rigorous. He brings up Brownian motion, modeling discontinuous phenomena with differentiable functions to ease analysis, and how the Koch curve has a non-whole-number dimension. These are all explained without error and without claiming they imply things they don’t.
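For concreteness, the Koch curve fact he cites follows directly from self-similarity (this computation is standard fractal geometry, not in Lyotard’s text): the curve is made of four copies of itself, each scaled down by a factor of three, so its similarity dimension is

```latex
D = \frac{\log N}{\log(1/r)} = \frac{\log 4}{\log 3} \approx 1.26
```

That is, strictly more than a one-dimensional curve but less than a two-dimensional region, which is exactly the “non-whole number dimension” he refers to.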

Lyotard wants to call these unintuitive and bizarre narratives about the world that come from weird scientific and mathematical facts “postmodern science.” Maybe it’s because we’ve had over forty more years to digest this, but I say: why bother? To me, this is the power of science. The best summary I can come up with is this:

Narrative knowledge must be convincing as a narrative; science is convincing despite the unconvincing narrative it suggests (think of the EPR paradox in quantum mechanics or even the germ theory of disease when it was first suggested).

I know I riffed a bit harder on the science stuff than a graduate seminar on the book would. Overall, I thought this was an excellent read. It seems more relevant now than when it was written, because it cautions about the dangers of powerful organizations buying up data and using it to craft narratives we want to hear while delegitimating narratives that hurt them (but which might be true).

We know now that this shouldn’t be a futuristic, dystopian fear (as it was in Lyotard’s time). It’s really happening with targeted advertising and the rise of government propaganda and illegitimate news sources proliferating in our social media feeds. We believe what the people with money want us to believe, and it’s impossible to free ourselves from it until we understand the situation with the same level of clarity that Lyotard did.

On Modern Censorship

I don’t want to wade into the heavy politics of things like GamerGate, MetalGate, and so on, but those movements certainly got me thinking about these issues a few months ago. A few days ago, I read a New York Times article about Twitter shaming people out of their careers over basically nothing. This brought some clarity to my thoughts on the issue.

I’ll try to keep examples abstract, at the cost of readability, to not spur the wrath of either side. I haven’t done a post on ethics in a while, and this is an interesting and difficult subject.

First, let me say there are clear cases where censorship is good. For example, children should not be allowed to watch pornography (the exact cutoff age is debatable, but everyone agrees some age is too young). There are also clear cases where censorship is bad. For example, a group of concerned Christian parents succeeds in a petition to ban Harry Potter from their children’s school.

Many arguments about censorship boil down to this question of societal harm. To start our thought experiment, let’s get rid of that complication and assume that whatever work is in question is fine. In other words, we will assume that censorship is bad in the sense that the marketplace of ideas should be free. If something offends you, then don’t engage with it. You shouldn’t go out of your way to make it so no one can engage with it.

In the recent controversies, there has been an underlying meta-dialogue that goes something like this:

Person A: If you don’t like the sexism/racism/homophobia/etc (SRHE) in this game/book/movie/etc (GBME), then don’t get the media. Stop trying to censor it so that I can’t engage with it. I happen to enjoy it.

Person B: I’m not trying to censor anything. I’m just raising social awareness as to the SRHE. It is through media that these types of things are perpetuated, and the first step to lessen this is to raise awareness.

What made this issue so difficult for me is that I understand both points of view. Person A is reiterating the idea that if you don’t like something, then don’t engage with it. There is no need to ruin it for everyone else. It is also hard to argue with Person B if they are sincere. Maybe they agree that censorship is bad, but they want to raise awareness as to why they don’t like the media in question.

The main point of this post is to present a thought experiment where Person B is clearly in the wrong. The reason to do this is that I think the discussion often misses a vital point: in our modern age of twitter storms and online petitions, Person B can commit what might be called “negligent censorship.” Just like in law, negligence is not an excuse that absolves you of the ethical consequences of censoring something.

Thought experiment: Small Company starts making its GBME. In order to fund the project, they get the support of Large Company that is well-known for its progressive values. In the age of the internet, news of this new GBME circulates early.

Person B happens to be a prominent blogger and notices some SRHE in the GBME. Note, for the purposes of this discussion, it doesn’t really matter whether the SRHE is real or imagined (though, full disclosure, I personally believe that people whose job it is to sniff out SRHE in media tend to exaggerate [possibly subconsciously] SRHE to find it where it maybe doesn’t really exist).

Let’s make this very clear cut. Person B knows that they can throw their weight around enough to get a big enough twitter storm to scare the Large Company backer out of funding the Small Company’s project. Person B does this, and sure enough, the project collapses and never gets finished or released.

This is clear censorship. Person B acted with the intent to squash the GBME. Sadly, Person B can still claim the nobler argument given earlier, and it is hard to argue against that. I think this is part of what infuriates Person A so much. You can’t prove their interior motivation was malicious.

But I think you don’t need to. Now let’s assume Person B does all of this with the good-natured intention of merely “raising awareness.” The same outcome occurs. Your intent shouldn’t matter, because your actions led to the censorship (and also hurt the livelihoods of some people, which carries its own set of moral issues).

If you write something false about someone that leads to their harm, even if you didn’t realize it, you can still be charged with libel. Negligence is not an excuse. I’m not saying it is a crime to do what Person B did (for example, the SRHE may actually be there so the statements Person B made were true). I’m only making an analogy for thinking about negligence.

You can claim you only were trying to raise awareness, but I claim that you are ethically still responsible. This is especially true now that we’ve seen this happen in real life many times. If Person B is an adult, they should know writing such things often has this effect.

To summarize, if you find yourself on Person B’s side a lot, try to get inside the head of Small Company for a second. Whether intended or not, Person B caused their collapse. It is not an excuse to say Small Company should have been more sensitive to the SRHE in their GBME if they wanted to stay afloat.

This is blaming the victim. If Large Company said upfront they wouldn’t back the project if Small Company made their proposed GBME, it would be Small Company’s fault for taking the risk. If a group of people who don’t agree with the content of the GBME cause it to collapse, it is (possibly negligent) censorship.

Under our assumption that censorship is bad, I think Person B has serious ethical issues and Person A is clearly in the right. The problem is that in real life, Person B tries to absolve their wrong by implicitly appealing to a utilitarian argument.

A (non-malicious) Person B will truly believe that the short-term harm of censoring is outweighed by the long-term good of fighting SRHE. If the evidence were perfectly clear about the causation/correlation between SRHE in mass media and real life, Person B would have a pretty good ethical argument for their position.

What makes this such a contested issue is that we are in some middle ground. There is correlation, which may or may not be significant. But the causation is unknown, and it may run the other way: the SRHE in society comes out in art because it is already present in society, not the direction Person B claims.

This is why, even though, with my progressive values, I am highly sympathetic to the arguments and sentiments of Person B, I have to side with Person A most of the time. Person B has a moral responsibility to make sure they raise awareness in a way that does not accidentally lead to censorship. This has become an almost impossible task with our scandal obsessed social media.

For the debates to calm down a bit, I think side B has to understand side A a bit better. I think most people on side A understand the concerns of side B, but they just don’t buy the argument. Many prominent speakers on side B dismiss side A as a bunch of immature white boys who don’t understand their media has SRHE in it. Side B needs to realize that there is a complicated ethical argument against their side, even if it rarely gets articulated.

I’m obviously not calling for self-censorship (which is always the catch-22 of speaking about these issues), but being a public figure comes with certain responsibilities. Here are the types of things I think a prominent writer on SRHE issues should think more critically about before writing:

1. Do I influence a lot of people’s opinions about SRHE topics? For example, having 200K Twitter followers might count here.
2. Do my readers expect me to point out SRHE in GBME on a regular basis? If so, you might be biased towards finding it. Ask someone familiar with the GBME whether you are taking clips or quotes out of context to strengthen your claims before making a public accusation.
3. Are my words merely bringing awareness to an issue, or am I also making a call to action to censor the GBME?

Amazon controversy

Just in case any of my readers aren’t addicted to the internet, and hence have not come across it yet, there is a big controversy going on about Amazon.com. I think it fits the realm of my topics, since it has to do with censorship and literature. I was waiting on this post to see if any new non-hearsay information would be released, but it is taking too long, so….

Amazon has decided to label a whole bunch of LGBT (lesbian, gay, bisexual, and transgender) literature (and, I think, films as well) as “adult,” which means the sales ranking is removed. This may not seem all that important, except the ranking is used in the search function. So now horrifically incorrect results pop up if you search for certain books, and some don’t come up at all.

Now, this might not seem like such a bad idea if you think adult material shouldn’t appear when children enter search terms into Amazon, except there is a very clear bias in the censorship going on. Very explicit heterosexual material has not been affected, nor have some sex toys, yet literature that is either nonfiction about gay people, or fiction with subtle gay themes and no sexual scenes at all, has been censored out.

Anyway, I don’t care to post on this forever, since any quick search will bring up lots of people who know way more about the situation than I do, and I need to get back to homework rather than research what is really going on. Just thought I’d keep up-to-date those who don’t have their Google Reader bombarded with hundreds of items a day.