Year of Required Reading, Part 0

As I said in a previous post, I’m going to make 2019 the “Year of Required Reading.” I want to take books that I was required to read in school and read them again to get a fresh perspective and to evaluate whether I think they are good choices.

I’m posting my list this early so people have time to give some input before I begin. I have some rules for how I’m choosing books. I don’t want to pick something I’ve read in the past ten years.

I’ve gone back to The Great Gatsby, Ethan Frome, A Separate Peace, and many others since school. These won’t make the list.

I also want to put a few “new” books on the list. Many schools have tried to encourage reading with modern YA books. These didn’t exist as required reading back when I was a kid. I’m also curious how these are.

My list so far:

Re-read:

The Grapes of Wrath – John Steinbeck. I decided fear shouldn’t be the reason I avoid a book. And let me tell you, I don’t have good memories of trudging through this as a kid. But I’m optimistic that I’ll appreciate it this time.

To Kill a Mockingbird – Harper Lee. It’s a little embarrassing that I’ve never gone back to this. I’m expecting good things.

The Hunchback of Notre-Dame (Notre-Dame de Paris) – Victor Hugo. Yes, I actually read the full, unabridged version of this book in high school for summer reading. I remember being completely blown away by it. I’d never read anything with such narrative depth and tragedy. It changed how I thought about books. And yet, I never re-read it for some reason.

Newfangled stuff:

Speak – Laurie Halse Anderson. I can’t actually tell if this is “high school” level or lower, but I found this to be the most common contemporary book on the lists I looked up.

Looking for Alaska – John Green. I’m not sure there’s a more famous YA author out there right now. Let’s judge this! (Sorry, I promise to keep an open mind.)

Monster – Walter Dean Myers. This one is on here purely for me. It sounds seriously dark. It's one of the only contemporary books on these lists whose premise I found appealing.

And that’s it so far. I plan on adding probably just one more in each category.

My knowledge of Shakespeare is quite weak. I’m tempted to add one of his plays. Let me know what you think. There’s still plenty of time to change any of these and add stuff.

Previous years:

Year of Giant Novels (2016)

Year of Short Fiction (2017)

Year of Mystery Novels (2018)


Social Media’s Negative Effect on Prose Style

I’ve noticed this strange trend in the prose style of novels, and I’ve wondered why it’s happening. I finally realized that the prevalence of social media is almost certainly the culprit.

I won’t try to write a whole thesis on the history of this type of thing, but I think it’s pretty commonly accepted that social trends can be a big influence on prose style.

Dickens was wordy because of how serialization worked. The modernists produced intentionally difficult and confusing writing to show how difficult and confusing the world was after the world wars.

And so on.

Here’s the old rule that’s been changing:

Show emotions using subtext.

Stating emotions outright is considered a bit of a faux pas for a reason. It’s a special case of the “show, don’t tell” rule.

When I first started working on my fiction, this came up everywhere. There’s a time and place where it’s acceptable to write: “Joe was mad.” But mostly, this is the result of lazy or untrained writing.

One can show so much more nuance with subtext. Why is he mad? How mad is he? How is it affecting his actions and choices?

Not to mention, showing the emotion makes Joe more human and three-dimensional, because there will be complexity and confusion about the answers to all the questions. It can also turn a passive statement of fact into action.

Joe threw the book across the room at Susie’s head.

The reader can bring their own interpretation of the emotions to this. Most of the time when I see this rule violated, it’s because the writer doesn’t trust the reader. It shows up in addition to the action:

Susie plopped down next to Robert and placed her hand on his leg. She wanted to upset Joe by flirting with someone else. She locked eyes with Joe and gave him a seductive smile. Joe threw the book across the room at Susie’s head, because he was mad.

Yeah. All that telling can be read into the scene given the context of the rest of the novel or story. Here’s what it sounds like with the telling removed:

Susie plopped down next to Robert and placed her hand on his leg. She locked eyes with Joe and gave him a seductive smile. Joe threw the book across the room at Susie’s head.

In real life, I’d try to flesh it out a bit more, but you see the point. Now, I obviously can’t prove that social media has caused this shift, but it makes perfect sense. As a culture, we’re inundated with people’s status updates.

Most of the stories we read are people blatantly telling us their emotions with no subtext. A quick search for the word “mad” on Twitter turns up post after post of exactly that.

This is pretty typical. Social media posts tend to be a raw, unfiltered dump of how someone feels. Most social media posts are accompanied by a picture or video. This does the hard work of apt description for the writer.

I don’t want to come off as saying this is “bad.” Social media posts are a completely different medium and serve a different purpose than long-form writing. It’s okay to have different conventions.

What I am saying is that these conventions have started to bleed into longer writing merely because it is what we’re used to now. Very few people read blogs or books or magazines or newspapers these days.

Almost all reading is now done from social media posts.

I have no statistics to back up such a claim, but leisure reading is at an all-time low and the average American spends over 2 hours a day on social media. So I’d say it’s likely true.

I know what you’re thinking: 99% of everything is crap. I shouldn’t jump to conclusions about trends in case I’m reading a string of crap that no one thinks of as good. Well, the inspiration for this post came from reviewer trends.

Many prominent reviewers in the genres I follow have written that certain books have “good writing” when these blatant “mistakes” are all over the book. It got me wondering why reviewers have been desensitized to the emotion dump and have even come to think of it as good writing.

Unfortunately, I blame social media.

Bullet Journal: Becoming Intentional

I do something called Bullet Journaling. I’ve done it for several years as a way to stay organized. If you look this up and you’ve never heard of it before, you’ll probably be overwhelmed by how complicated it is.

But it only looks that way. Once you do it for a few months, you start to see how simple and beautiful the system is.

The word “journal” is a bit confusing. It’s not a place where I write my feelings or whatever. A bullet journal should be thought of more like a highly efficient planner designed to help you achieve large, unmanageable goals by breaking them into simple tasks.

I can’t imagine writing a novel without this method anymore.

What does this have to do with intention?

Intention is one of those concepts that got a bad reputation from the New Age gurus of the ’90s. I can almost hear Deepak Chopra saying something like: “Set an intention for the day and it will be manifested.”

That’s not quite what I’m referring to. One of my favorite writers, Annie Dillard, wrote in The Writing Life:

How we spend our days is, of course, how we spend our lives. What we do with this hour, and that one, is what we are doing. A schedule defends from chaos and whim.

The concept is so obvious that it’s easy to forget. We often think that as long as we have long-term plans and goals, the meaningless tasks of the everyday don’t really get in the way. But, without intention, your days will fill with these tasks and activities, and then, all of a sudden, you’ve spent a whole life that can’t be gotten back doing essentially nothing you consider valuable.

Okay, so we can answer the question now. Intention, to me, is simply taking stock of the way in which you spend your day, so that you end up spending your life the way you intend.

This is why I opened by talking about the Bullet Journal method. The design of that system forces you to rethink what’s important on a day-to-day, month-to-month, and year-to-year basis.

It has you “migrate” tasks. When you do this, you ask yourself: is this vital? Is this important? Why?

If the answers are: no, no, I don’t know, then you remove it from your life. Don’t overthink it. As soon as you start making excuses, you start filling your life with stuff that doesn’t matter to you. This means you’re committing to living a life that isn’t meaningful to you.

Make sure you’re intentional about how you fill your day.

Let’s take a simple example. Maybe you’ve always wanted to learn to play the piano, but you’re too busy. Somehow the day just gets away from you. In your daily log, start tracking your hours to find out where you’re doing things that aren’t intentional.

You follow some Twitter feeds that curate your news articles. This was meant to save time in the beginning. But now you realize a bad habit has formed where you go down the rabbit hole of comments and trending topics and on and on. The first hour of your day is shot, you’re filled with rage, and you haven’t even read any actual news articles yet.

You relax with some Netflix at night. But you started that one series that everyone loves. You just don’t get it. It adds no value to your life. But you keep going, because you might as well finish it now that you’ve started it.

And there was that time you wanted to know how hard it would be to make French Onion Soup from scratch, so you looked up a YouTube video on it. The sidebar recommended Binging with Babish and Alex French Guy Cooking and French Cooking Academy (all excellent, by the way).

All of a sudden, you’re subscribed to a dozen great cooking channels giving you hours of video every week. You feel compelled to at least watch a few, because, hey, you subscribed. There’s like, some sort of obligation there, right?

Maybe that last one wasn’t you (hint: it was me).

But you get the point.

Little things become habits really fast. Habits expand to fill those gaps in your day. If you were to ever stop and take stock of this, you’d find several hours a day you could have been learning piano. That Netflix series alone commits you to 60 meaningless hours of your life: gone forever. Sixty hours can get you through the beginner stage—easily.

Ask yourself, was that worth it? In twenty years, will you think it was worth it when you still haven’t even sat down at the piano, and now it feels too late? (It’s not too late; this is just another excuse.)

And maybe you’re thinking: but turning my brain off after a stressful day, watching something I don’t care about is exactly what I need to sleep better. Getting frustrated learning the fingering of a B-flat scale is the opposite of relaxing (seriously, that’s a messed up scale compared to literally all the others).

Great! You’ve answered the why. The Netflix series has value to you. You’re doing it with intention. Don’t cross that off your list. Maybe it’s that Twitter hour in the morning you can cut back on. Maybe right now isn’t the time to learn an instrument, and that’s okay, too.

Intention is what matters.

I’m not advocating everyone use this method.

This was actually just an extremely long-winded introduction to say I’m getting intentional about a few things I haven’t questioned for a while.

Every year, I put up a Goodreads tracker on the blog to show my progress on reading 52 books a year. For something like five years, I’ve read 60+ books a year. As a young, immature writer, this was hugely important.

As I learned about prose style, genre conventions, story structure, characterization, dialogue, etc., I was constantly testing it all against a huge variety of books. I saw people who followed convention, people who didn’t, whether it worked, and why.

In other words, when I started this practice, it was extremely useful. It had value to me. I did it with intention.

Recently, I’ve re-evaluated this practice. I’m getting rid of it. At this point, I find myself stressing about reading books I don’t enjoy just to check off an arbitrary counter. I’m obviously going to still read, but it will be more intentionally chosen and at whatever pace fits that book.

And let’s face it. I’ll probably still get through 40+ books a year. I’m just not going to have the stress associated with it anymore.

I get that I’m being a bit hypocritical or even egotistical with this, because I will continue to recommend other writers do the high volume method. I think most writers greatly undervalue the process of critical reading for the improvement of their writing. Quantity trumps quality until you reach a certain threshold.

Another intentional practice was mentioned in this post. I’m cutting out forced blog posts and only doing ones that I think add true value to the blog: no more stressing about “Examining Pro’s Prose” or “Found Clunkers.” All of my most read and liked posts were one-off things I was inspired to write anyway.

I’ll also use this time to announce next year’s reading series. I’m still getting value from the “Year of…” series, because I’m focusing on and learning about a very specific thing when doing it.

Ironically, in honor of intentional reading, I’m going to do the “Year of Required Reading.” I want to revisit some books I was required to read in high school and see what I think of them now. I also want to read some books required of students that I didn’t read to see if these modern additions are good ones.

I think it will be a fun series, though it might cause some comments from concerned parents if I think a required book just doesn’t live up to the hype.

So far I’ve only decided on To Kill a Mockingbird and something by Steinbeck (leaning toward Of Mice and Men but could be convinced to do The Grapes of Wrath with argument).

I’ve gotten intentional about a few other personal things that don’t need to be discussed here. But I thought I’d give a bit more explanation about some of the changes.

For those curious, it’s worth looking up an overview of the Bullet Journal method.

Are the Self-Publishing Gurus Out of Touch?


Let me start by saying that I appreciate all the free content people in the self-publishing world put out. It’s quite generous and high quality. There must be ten or more hours of podcasts each week that come to my phone, not to mention blog posts and YouTube videos and e-mails. And that’s just me, meaning it’s only a small fraction of what’s actually produced.

If you wanted to be a student of this stuff, you’d have more class time than a full-time college student, with no breaks for summer or winter. And there’s so much to learn that it could probably fill an entire major.

So, thank you to all those content providers.

That being said, I have two theories I’d like to present.

I’m going to call these people “gurus.” You either know who they are or don’t, but I don’t want to name names (google “podcasts for writers” or something if you really don’t know).

Every person I have in mind left their job at some point to be a full-time self-published writer. I think each of them makes at least six figures. This happened years ago for each of them.

Also, each of them has a “thing” they attribute their success to: cover design that perfectly fits the genre, great copywriting on the product description, Amazon keywords and categories, Amazon marketing, Facebook marketing, writing to market, price surging, box sets, e-mail auto-responders, mailing list magnets, promotion services, etc.

Note, I’m not saying they push a well-rounded approach to improving each of the above; I’m saying their entire shtick is that once you get that one thing right, you will take off and have huge success.

This seems weird and crazy, so why would they, at least nominally, believe this? The cynical theory would be that they wanted to corner a niche in the market, so they figured this one thing out and pushed it hard as the expert.

I don’t believe this. I think there’s a much more obvious explanation. Like almost all successful authors, they started to gain traction, little by little. They were experimenting with all the above techniques to see if anything could get them a little edge.

As Gladwell explains in The Tipping Point, they just hit a critical mass of followers and readers at some point. This caused them to shoot from obscurity into prominence.

Human brains being what they are, these writers then attributed their success to the most recent major change they made rather than a natural progression to a tipping point. This is how we get people who are convinced you only need to get that one thing right to get success.

And this is fine as long as you don’t take that claim seriously when listening to them. The advice they give on that one thing is going to be pretty solid and useful. It will help keep you crawling upward toward your own tipping point.

I do think some people get frustrated when they work really hard at that one thing, and they only see marginal gains despite doing everything right.

Here’s my second theory based on the first theory. The gurus out there with the biggest platform have been wildly successful for years, and this actually makes them a bit out of touch with how things really are.

My theory is that they could launch a book to number one in their category while following none of the advice they give: no ads, no pre-release, no notification of the mailing list, a sloppy and vague product description, a less-than-stellar cover, etc.

They have so many followers that news would spread of their release, and it would make thousands of dollars in the first month and be an Amazon bestseller.

If I’m correct about this, that means they actually have no idea if the advice they’re giving is correct. Don’t take this the wrong way. I’m not saying their advice is incorrect (quite the opposite)—but it’s just a fact of their prominence that they can’t know how much of an effect their advice has anymore.

I’m not sure if there’s much of a point to this post. I guess it’s that you shouldn’t put too much stock in any one thing you hear about self-publishing. Success is going to be a slow growth attributed to hundreds of things.

Writing a better product description might get you five more sales. Improving the cover: five more. An experiment with AMS ads: five more. Suddenly, these have added up to enough that you snatch a true fan who leaves a glowing review.

This review converts to twenty more sales, and the new fan starts you one person further along on the next book.

So it’s all interconnected and not traceable to any one thing. After a bunch of books of doing this, you find yourself starting with a hundred fans who buy on launch day, which gets you to bestseller status and days of free advertising in your genre. These “organic” sales translate to new fans, etc.

That’s the tipping point. It can look like a sudden spike in success, but it’s not the most recent marketing trick you tried. It’s dozens of things synergizing to create the effect.

So, most of all, take everything the gurus say with a grain of salt and don’t be afraid to experiment with your own ideas. What worked for these gurus several years ago may not work in today’s market or your genre. Or it might. You’ll have to be the judge of that.

Surviving Upper Division Math

It’s that time of the year. Classes are starting up. You’re nervous and excited to be taking some of your first “real” math classes called things like “Abstract Algebra” or “Real Analysis” or “Topology.”

It goes well for the first few weeks as the professor reviews some stuff and gets everyone on the same page. You do the homework and seem to be understanding.

Then, all of a sudden, you find yourself sitting there, watching an hour-long proof of a theorem you can’t even remember the statement of, using techniques you’ve never heard of.

You panic. Is this going to be on the test?

We’ve all been there.

I’ve been that teacher, I’m sad to say: the one to whom it’s perfectly clear that the students aren’t supposed to regurgitate any of this. The proof is merely there for rigor and exposure to some ideas. It’s clear in my head which ideas are the key ones, though maybe I forgot to point that out carefully.

It’s a daunting situation for the best students in the class and a downright nightmare for the weaker ones.

Then it gets worse. Once your eyes glaze over that first time, it seems the class gets more and more abstract as the weeks go by, filled with more and more of these insanely long proofs and no examples to illuminate the ideas.

Here’s some advice for surviving these upper division math classes. I’m sure people told me this dozens of times, but I tended to ignore it. I only learned how effective it was when I got to grad school.

Disclaimer: Everyone is different. Do what works for you. This worked for me and may only end up frustrating someone with a different learning style.

Tip Summary: Examples, examples, examples!

I used to think examples were something given in a textbook to help me work the problems. They gave me a model of how to do things.

What I didn’t realize was that examples are how you’re going to remember everything: proofs, theorems, concepts, problems, and so on.

Every time you come to a major theorem, write out the converse and the inverse, switch some quantifiers, remove hypotheses, weaken hypotheses, strengthen conclusions, and whatever else you can think of to mess it up.

When you do this, you’ll produce a bunch of propositions that are false! Now come up with examples to show they’re false (and get away from that textbook when you do this!). Maybe some rearrangement of the theorem turns out to be true, and so you can’t figure out a counterexample.

This is good, too! I cannot overstate how much you will drill into your memory by merely trying unsuccessfully to find a counterexample to a true statement. You’ll start to understand and see why it’s probably true, which will help you follow along to the proof.
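
As a sketch of what this exercise looks like (my example, not one from any particular class), take the extreme value theorem: if f: [a,b] \to \mathbb{R} is continuous, then f attains a maximum on [a,b].

Remove the closed interval: f(x) = x on (0,1) is continuous but attains no maximum. Remove continuity: f(x) = x for x \in [0,1) with f(1) = 0 has supremum 1 but never attains it. And the converse, “if f attains a maximum, then f is continuous,” fails for any step function that attains a maximum. A few minutes of this and the hypotheses stick.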

As someone who has taught these classes, I assure you that a huge number of the problems students have on a test would be solved by doing this. Students try to memorize too much, and then when they get to a test, they start to question: was that a “for every” or a “there exists”? Does the theorem go this way or that?

You must make up your own examples, so when you have a question like that, the answer comes immediately. It’s so easy to forget the tiniest little hypothesis under pressure.

It’s astounding the number of times I’ve seen someone get to a point in a proof where it looks like everything is in place, but it’s not. Say you’re at a step where f: X\to Y is a continuous map of topological spaces, and X is connected. You realize you can finish the proof if Y is connected.

You “remember” this is a theorem from the book! You’re done!

Whoops. It turns out that f has to be surjective to make that true.

But now imagine, before the test, you read that theorem and you thought: what’s a counterexample if I remove the surjective hypothesis?

The example you came up with was so easy and took no time at all. It’s f: [0,1] \to \{0\} \cup \{1\} given by f(x) = 1. This example being in your head saves you from bombing that question.

If you just try to memorize the examples in the book or that the professor gives you, that’s just more memorization, and you could run into trouble. By going through the effort of making your own examples, you’ll have the confidence and understanding to do it again in a difficult situation.

A lesser talked about benefit is that having a bunch of examples that you understand gives you something concrete to think about when watching these proofs. So when the epsilons and deltas and neighborhoods of functions and uniform convergence and on and on start to make your eyes glaze over, you can picture the examples you’ve already constructed.

Instead of thinking in abstract generality, you can think: why does that step of the proof work or not work if f_n(x) = x^n?
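
To spell out why f_n(x) = x^n on [0,1] is such a good example to carry around (a standard computation, worth redoing yourself): the pointwise limit is f(x) = 0 for 0 \le x < 1 and f(1) = 1. Each f_n is continuous but the limit isn’t, so the convergence can’t be uniform, since uniform limits of continuous functions are continuous. You can also see it directly: \sup_{x \in [0,1]} |f_n(x) - f(x)| = \sup_{x \in [0,1)} x^n = 1 for every n, which never shrinks to 0.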

Lastly, half the problems on undergraduate exams are going to be examples. So, if you already know them, you can spend all your time on the “harder” problems.

Other Tip: Partial credit is riskier than in lower division classes.

There’s this thing that a professor will never tell you, but it’s true: saying wrong things on a test is worse than saying nothing at all.

Let me add another disclaimer. Being wrong and confused is soooo important to the process of learning math. You have to be unafraid to try things out on homework and quizzes and tests and in office hours and on your own.

Then you have to learn why you were wrong. When you’re wrong, make more examples!

Knowing a bunch of examples will make it almost impossible for you to say something wrong.

Here’s the thing. There comes a point every semester where the professor has to make a judgment call on how much you understand. If they know what they’re doing, they’ll wait until the final exam.

The student that spews out a bunch of stuff in the hopes of partial credit is likely to say something wrong. When we’re grading and see something wrong (like misremembering that theorem above), a red flag goes off: this student doesn’t understand that concept.

A student who writes nothing on a problem, or writes only a very small amount that is totally correct, will be seen as superior. This is because it’s okay to be unable to do a problem if you recognize that you didn’t know how to do it. That’s a way of demonstrating your understanding. In other words: know what you don’t know.

Now, you shouldn’t be afraid to try, and this is why the first tip is so much more important than this other tip (and will often depend on the instructor/class).

And the best way to avoid using a “theorem” that’s “obviously wrong” is to test any theorem you quote against your arsenal of examples. As you practice this, it will become second-nature and make all of these classes far, far easier.

Critical Postmodern Readings, Part 4: Derrida

This series hasn’t been as rough as I expected so far. The first few parts showed some of the classic postmodernists to be clearer and more prescient than many give them credit for. This changes all that. Derrida is rough going. I chose his most famous work, Differance, for today.

I couldn’t remember if I had read this before, and then I read the first paragraph. Oh, boy. It all came flooding back. Not only had I read this, I’d watched a whole lecture or two on it as well. I’m not sure it’s fair to be super critical of lack of clarity when reading a translation, but in this case, I think it’s warranted (more on that later).

The article begins with Derrida making up a new word: differance. It is phonetically related to “defer,” meaning to be displaced in time, and to “differ,” meaning to not be identical.

So what is differance? Here’s the clearest explanation I’ve come up with after reading the opening paragraphs many, many times. Derrida wants to say that two things can only differ if there is a relationship between them for comparison. Anything that has such a relationship must have something in common. So:

We provisionally give the name differance to this sameness which is not identical.

Derrida then starts dancing around what he means. I don’t say this to be demeaning. His whole point is that language can’t describe the thing he is getting at, so he must dance around it to give you some idea about what he wants differance to mean.

To defer makes use of time, but differance is outside time. Differance produces the differences between the differences. It isn’t a word. It isn’t a concept. I’m going to describe differance as a “tool” or even “thought experiment” to get at Derrida’s particular form of deconstruction even though he doesn’t exactly say that.

Now I’m supposed to be doing “critical” readings of these texts, so I’ll admit this type of thing makes me feel a little funny. On the one hand, I’m okay with being a bit loose on definitions if the only point is to perform a thought experiment. On the other hand, I fear there will be a switch at some point where we start attributing a bunch of unknowable things without argument to a term that has such nonsensical properties as “being outside time.”

So, I want to carefully keep track of that.

Derrida moves on to the relationship between spoken and written language.

In French, “differance” and “difference” have the same pronunciation. He spends far too long talking about how difficult it will be to talk about this, since he’ll have to indicate verbally “with an a” each time (this paper was originally a talk given to the Société Française de Philosophie).

He next spends quite a bit of time explaining some very precise etymological concerns about the word differance:

But while bringing us closer to the infinitive and active core of differing, “differance” with an a neutralizes what the infinitive denotes as simply active, in the same way that “parlance” does not signify the simple fact of speaking, of speaking to or being spoken to.

This type of thing is a pretty silly game. He freaking made the word up! There is no etymology or historical context to analyze. It’s a pure fiction about the word. I hear the Derrida defenders here saying that this is precisely the point, because every word is made up and we play an etymological game to understand them.

Maybe, but I don’t really see what’s gained by presenting that idea in this way.

Derrida then recaps some of Saussure’s structuralist ideas about language: the signifier/signified distinction. The word “chair” is a mere symbol that signifies an actual chair. The connection is arbitrary. We could easily make up a word like “cuilhseitornf” and have it symbolize the exact same thing. (All of that was the Saussure recap.)

Derrida wants to now say that actually it’s not so simple as Saussure thinks.

Words don’t have a one-to-one correspondence to things (concepts, etc.). In fact, meaning comes from being in a linguistic system, and the meaning of a word comes from understanding what the other words are not signifying. He wants to call this negative space “differance.” Again, I’m worried about how much we’re loading into this one made-up word.


But overall, if I clarify this point to a degree Derrida would probably hate, I’m somewhat sold on the concept.

Think about removing the word “chair” from the English language (i.e. a linguistic system). If you think about something that is different from all the remaining words, you’ll probably get something close to “chair,” because it’s partly defined by the differences between it and all the other words in the system. This seems an okay statement to make, if a little confusing as to its importance to theory.

Derrida introduces the concept of “trace” to make the above point. Basically, the trace of a signifier is the collection of all the sameness and differance left on it by the other words in the linguistic system.

Overall, I don’t get what real contribution this paper makes. To me, it is essentially a reiteration of Wittgenstein’s ideas about words in linguistic systems/games with a single, seemingly unnecessary, mystical layer that comes through the meta-concept of “differance.” Maybe if I were to read some of Derrida’s later work, it would become clearer why he needs this, but at this point I don’t get it.

Derrida is less confusing than I remember. He’s not hard to read because of obscurity or complex sentences or big words. He’s hard to read because he just meanders too much. There are entire pages that can be thrown out with nothing lost, because they are pure reiteration of minor tangential points.

Critical Postmodern Readings, Part 2: Finishing Lyotard

Last time we looked at the introduction to Lyotard’s The Postmodern Condition: A Report on Knowledge. That introduction already contained much of what gets fleshed out in the rest of the short book, so I’m going to mostly summarize stuff until we hit anything that requires serious critical thought.

The first chapter goes into how computers have changed the way we view knowledge. It was probably an excellent insight that required argument at the time. Now it’s obvious to everyone. Humans used to gain knowledge by reading books and talking to each other. It was a somewhat qualitative experience. The nature of knowledge has shifted with (big) data and machine learning. It’s very quantitative. It’s also a commodity to be bought and sold (think Facebook/Google).

It is a little creepy to see Lyotard’s prescience. He basically predicts that multinational corporations will have the money to buy this data, and that owning the data gives them real-world power. He predicts knowledge “circulation” in a similar way to money circulation. Here’s a part of the prediction:

The reopening of the world market, a return to vigorous economic competition, the breakdown of the hegemony of American capitalism, the decline of the socialist alternative, a probable opening of the Chinese markets …

Other than the decline of the socialist alternative (which seems to have had a recent surge), Lyotard perfectly predicted how the computerization of knowledge would affect the world in the 40 years since he wrote this.

Chapter two reiterates the idea that scientific knowledge (i.e., the type discussed above) is different from, and in conflict with, “narrative” knowledge.

There is also a legitimation “problem” in science. The community as a whole must choose gatekeepers seen as legitimate who decide what counts as scientific knowledge.

I’ve written about why I don’t see this as a problem like Lyotard does, but I’ll concede the point that there is a legitimation that happens, and it could be a problem if those gatekeepers change the narrative to influence what is thought of as true. There are even known instances of political biases making their way into schools of scientific thought (see my review of Galileo’s Middle Finger by Alice Dreger).

Next, Lyotard sets up the framework for thinking about this. He uses Wittgenstein’s “language game” concept. The rules of the game can never legitimate themselves. Even small modifications of the rules can greatly alter meaning. And lastly (I think this is where he differs from Wittgenstein), each speech act is an attempt to alter the rules. Since agreeing upon the current set of rules is a social contract, it is necessary to understand the “nature of social bonds.”

This part gets a little weird to me. He claims that classically society has been seen either as a unified whole or divided in two. The rules of the language games in a unified whole follow standard entropy (they get more complicated and chaotic and degenerate). The divided in two conception is classic Marxism (bourgeoisie/proletariat).

Even if it gets a bit on the mumbo-jumbo side through this part, I think his main point is summarized by this quote:

For it is impossible to know what the state of knowledge is—in other words, the problems its development and distribution are facing today—without knowing something of the society within which it is situated.

This doesn’t seem that controversial to me considering I’ve already admitted that certain powers can control the language and flow of knowledge. Being as generous as possible here, I think he’s just saying we have to know how many of these powers there are and who has the power and who legitimated that power before we can truly understand who’s forming these narratives and why.

In the postmodern world, we have a ton of different institutions all competing for their metanarrative to be heard.

Society is more fractured than just the two divisions of the modern world. But each of these institutions also has a set of rules for their language games that constrains them.  For example, the language of prayer has a different set of rules from an academic discussion at a university.

Chapters 7-9 seem to me to be where the most confusion, on the part of both Lyotard and the reader, can occur. He dives into the concepts of narrative truth and scientific truth. You can already feel Lyotard trying to position scientific truth as less valuable than it is and narrative truth as more valuable.


Lyotard brings up the classic objections to verification and falsification (namely, a variant on Hume’s Problem of Induction). How does one prove one’s proof and evidence for a theory is true? How does one know the laws of nature are consistent across time and space? How can one say that a (scientific) theory is true merely because it cannot be falsified?

These were much more powerful objections in Lyotard’s time, but much of science now takes a Bayesian epistemology (even if its practitioners don’t admit to this terminology). We believe what is most probable, and we’re open to changing our minds if the evidence leads in that direction. I addressed this more fully a few years ago in my post: Does Bayesian Epistemology Suffer Foundational Problems?
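
For concreteness, the update rule at the heart of that epistemology is just Bayes’ theorem (a standard formula, not something specific to Lyotard): P(H \mid E) = P(E \mid H) P(H) / P(E), where H is a hypothesis and E is the new evidence. You provisionally hold whichever hypotheses have high posterior probability and revise as evidence accumulates; nothing ever needs to be verified with absolute certainty.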

… drawing a parallel between science and nonscientific (narrative) knowledge helps us understand, or at least sense, that the former’s existence is no more—and no less—necessary than the latter’s.

These sorts of statements are where things get tricky for me. I buy the argument that narrative knowledge is important. You can read James Baldwin and gain knowledge of, and empathy for, a gay black man’s perspective in a way that changes your life and the way you see the world.

Or maybe you read Butler’s performative theory of gender and suddenly understand your own gender expression in a new way. Both of these types of narrative knowledge could even be argued to be a “necessary” and vital part of humanity.

I agree science is a separate type of knowledge, but I also see science as clearly more necessary than narrative knowledge.

If we lost all of James Baldwin’s writings tomorrow, it would be a tragedy. If we lost the polio vaccine tomorrow, it would be potentially catastrophic.

It’s too easy to philosophize science into this abstract pursuit and forget just how many aspects of your life it touches (your computer, the electricity in your house, the way you cook, the way you get your food, the way you clean yourself). Probably 80% of the developed world would literally die off in a few months if scientific knowledge disappeared.

I’ll reiterate that Lyotard thinks science is vastly important. He is in no way saying the problems of science are crippling. The above quote is more about raising narrative knowledge to the same importance as science than about devaluing science (Lyotard might point to the disastrous consequences of convincing a nation of the narrative that the Aryan race was superior). For example, he says:

Today the problem of legitimation is no longer considered a failing of the language game of science. It would be more accurate to say that it has itself been legitimated as a problem, that is, as a heuristic driving force.

Anyway, getting back to the main point. Lyotard points out that the problem of legitimating knowledge is essentially modern, and though we should be aware of the difficulties, we shouldn’t be too concerned with it. The postmodern problem is the grand delegitimation of various narratives (and one can’t help but hear Trump yell “Fake News” while reading this section of Lyotard).

Lyotard spends several sections developing a theory of how humans do science, and he develops the language of “performativity.”

It all seems pretty accurate to me and not really worth commenting on (i.e., it’s just a description). He goes into the issues Gödel’s Incompleteness Theorem caused for the positivists. He talks about the Bourbaki group. He talks about the seeming paradox of having to look for counterexamples while simultaneously trying to prove the statement to be true.

I’d say the most surprising thing is that he gets this stuff right. You often hear about postmodernists hijacking math/science to make their mumbo-jumbo sound more rigorous. He brings up Brownian motion and modeling discontinuous phenomena with differentiable functions to ease analysis and how the Koch curve has a non-whole number dimension. These were all explained without error and without claiming they imply things they don’t imply.
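
The Koch curve claim, for instance, is a quick similarity-dimension computation (standard, and included here only for concreteness): each iteration replaces a segment with N = 4 copies scaled by a factor of r = 1/3, so the dimension is \log 4 / \log 3 \approx 1.26, a non-whole number strictly between a curve’s dimension of 1 and a surface’s dimension of 2.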

Lyotard wants to call these unintuitive and bizarre narratives about the world that come from weird scientific and mathematical facts “postmodern science.” Maybe it’s because we’ve had over forty more years to digest this, but I say: why bother? To me, this is the power of science. The best summary I can come up with is this:

Narrative knowledge must be convincing as a narrative; science is convincing despite the unconvincing narrative it suggests (think of the EPR paradox in quantum mechanics or even the germ theory of disease when it was first suggested).

I know I riffed a bit harder on the science stuff than a graduate seminar on the book would. Overall, I thought this was an excellent read. It seems more relevant now than when it was written, because it cautions about the dangers of powerful organizations buying a bunch of data and using it to craft narratives we want to hear while delegitimating narratives that hurt them (but which might be true).

We know now that this shouldn’t be a futuristic, dystopian fear (as it was in Lyotard’s time). It’s really happening with targeted advertising and the rise of government propaganda and illegitimate news sources propagating through our social media feeds. We believe what the people with money want us to believe, and it’s impossible to free ourselves from it until we understand the situation with the same level of clarity that Lyotard did.