A Mind for Madness

Musings on art, philosophy, mathematics, and physics


Verbs of Being and the Past Progressive Tense

Everyone gets told to “show, don’t tell” at some point in their education. Today I want to talk about an equally important (and related) piece of advice that rarely gets taught. In fact, many people go through their whole lives and don’t even notice this. Until you’re told to look for it, you might miss it entirely, because in speech it sounds so natural.

If you write something in past tense, then you will undoubtedly use the word “was.”

Example: I was in my room reading a book, when there was a knock on my door.

Take a moment and answer the question: does this sound fine to you? Before we begin, I need to point out that sometimes “was” is the best option. There is no need to blindly expunge all verbs of being from your writing. The thing I look for is whether something is happening. “To be” is static. If someone is doing something in the sentence, why not make them do it with the appropriate verb? In other words, make the action happen!

First off, replacing “was” with the active verb brings the action of the sentence more vividly to the reader’s imagination (i.e. it shows the action rather than telling the action). Also, it naturally varies the sentences. If most of your sentences read “blah was blah,” then it gets boring.

A related issue is the past progressive tense. This takes the form “was verb-ing.” I was sitting in my room. I was reading a book. Then I heard that someone was knocking on my door. If you are already on the lookout for “was,” this should be no problem to detect. Sometimes, though, it is subtle. In the example, “I was in my room reading” uses the past progressive, but I separated “was” from “reading” to make it harder to catch. Rearranging makes this clearer: I was reading in my room.

The past progressive suffers from the same types of problems as before. It gets repetitive, and it implies inaction.

I use two main techniques to get rid of these constructs. First, I identify the active verb. Second, I add details. Strangely, these static constructions almost always appear when you haven’t sufficiently described the scene. In the first part of the example sentence, the active verb was “read.” In the second part, the active verb was “knock.” One way to fix the sentence is to add some detail and use the right verbs.

Edited example: I sat down in my room, excited to crack open Annie Dillard’s The Living, when someone knocked on my door.

Part of producing clear, vivid writing involves a long, tedious process of weeding these constructs out wherever possible. Unprofessional writing (like this blog post) may go through only one or two revisions, but most professional-quality writing is rewriting and editing. It is easy to forget how dedicated you have to be to get everything just right. Your first draft will have these problems, and that’s fine. You can’t worry about that up front or else you will never write anything.

I want to show an example to illustrate that great writers are constantly aware of the problems listed in this post. I love Annie Dillard and I am reading The Living right now. I’ll use a random number generator to pick a page (to emphasize I didn’t tailor the example) and give you the first full paragraph.

Eustace and Clare worked together often, and hunted the hills and fished the river together, and smoked their pipes, resting their eyes on the wide river or the drowsing fields. Resolution burned in Eustace and made him grave and sincere; gaiety and hopefulness animated Clare and blew him about. Clare’s view that a man could enjoy this life eased Eustace’s urgency to succeed, and moderated his mental habit of measuring himself, his material gains and losses, against the doubt and dread of his parents, and Minta’s parents …

(OK, I ended it early because it ran long.) Notice the verbs: worked, hunted, fished, smoked, resting, burned, made, animated, blew, enjoy, eased, moderated, … There isn’t one verb of being or past progressive in there. Oddly, a mark of great writing is often that you don’t notice it. Now that I’ve taken the time to examine a paragraph like this, I can’t even imagine the effort that went into it. Yet most people, including myself, will normally read through it without a second thought.

I thought hard about what to contrast this with, because I don’t have much on my bookshelves that I could be confident made the errors I listed above. I chose Ayn Rand’s Atlas Shrugged.

There were glaring lights inside, a few tired salesgirls among a spread of deserted counters, and the screaming of a phonograph record being played for a lone, listless customer in a corner. The music swallowed the sharp edges of Taggart’s voice: he asked for paper tissues in a tone which implied that the salesgirl was responsible for his cold. The girl turned to the counter behind her, but turned back once to glance swiftly at his face. She took a package, but stopped, hesitating, studying him with peculiar curiosity…She gasped like a child at a burst of firecrackers; she was looking at him with a glance …

I have to say, it isn’t terrible. The first sentence is built on a verb of being, and we get one “was” in the middle. I couldn’t resist adding the sentence after the dialogue as well, since it contains a “was looking.” The first sentence could be altered so that the shift to the simple past is less awkward:

The lights glared inside. He saw a few tired salesgirls among a spread of deserted counters and heard the screaming of a phonograph record being played for a lone, listless customer in a corner.

I admit it isn’t much better, but that’s part of the point of this post. Sometimes it takes a great deal of effort to get even one sentence to be clear and active and to fit stylistically with everything else. The difference between a good writer and a great writer is often that the great writer has the stamina and dedication to question the strength of every word and sentence. They don’t settle for fixing most of them. Anyway, I have no idea how this turned into a pep talk about being great, so I’ll end here.


Thoughts on Permadeath

I hate to return to this topic so soon, but it has been a while since I’ve blogged and I’ve been reading a bit about it. Back here I blogged about why playing roguelike games can be a gratifying and important experience if you’ve never tried it before. I want to step back from the genre in general and focus on just one feature.

Recall that permadeath (shorthand for “permanent death”) is a game mechanic where once you die you must start the whole game over again. Even among hardcore roguelike gamers and game developers there has been a lot of controversy surrounding this mechanic. Isn’t it unfair if someone puts in 12 hours of work, and then a random event outside of their control makes them start all over again? Right at the very end. It seems punishingly unfair.

Well, let’s consider a thought experiment which I think exemplifies the purpose of permadeath. Imagine you are going to a job interview, but you live in an alternate universe where you can set a clock and time travel back to that point in time. You set the clock right before the interview. You go into the interview. You are way too passive and modest and some other person that exhibited more ambition gets the job.

No big deal. You travel back in time, and you take the opposite approach. You go all out with your risky self-promotion. You seem like a jerk that won’t fit the team, so you don’t get the job. You travel back in time. Each time you try new things based on the feedback you got from your last attempt. You never feel any pressure to get it right, because you can keep trying until something works. There’s no real penalty for doing poorly. Finally, you get the job.

This is exactly how save points work in a game. Maybe in real life you think this type of thing would be great, but notice what it does to a game. All of a sudden, a challenge presented in the game that you were supposed to think about and develop the skills to get past no longer functions in that way. You lose all motivation to try to get it right the first time. It is no longer challenging, because you can keep repeating it and seeing what went wrong until you calibrate a success. You are virtually guaranteed to be successful.

This type of success can feel mildly rewarding, because you still made progress and got better until you were good enough to get through that part. But you have no sense of real danger or excitement or real accomplishment when you play this way. What does success even mean if there is no risk of failure? You could try to act as if you didn’t save, but it won’t create the same effect. The point of the permadeath mechanic is to get your blood pumping with excitement that if you make one wrong move you could lose everything. It is far more exciting to be living on the edge like that.

Permadeath makes you take your time and plan your strategies carefully. You can’t just blindly spam a bunch of attempts until something works. When you get good enough at the game to succeed, it is a real success. Puzzles and challenges are actually puzzles and challenges in the sense that you need to solve them to get through them. There’s no guess and check.

There are of course varying degrees of save points, but in the extreme scenario above, I think the case is clear. It is hard to get the gamer to experience any sense of danger or reality with excessive save points. On the other extreme, permadeath tends to elicit anxiety and fear just walking around an empty corridor. In some cases, this may not be desirable for your game.

I’m not saying one way or another is better or worse. I just wanted to explain what I think the game mechanic’s purpose is. Sometimes it is quite inappropriate. There have been tough platformers that I never would have gotten very far in if they had permadeath. This is to say that even though I find permadeath to be a very rewarding way to play a game, it doesn’t serve its purpose in some genres.

To tie this back to roguelikes, I think this is really the perfect genre for permadeath. The reason is that roguelike games tend to have a massive amount of randomly generated content. Your starting stats, items, character, etc are all random. The rooms and level layouts are random. The enemies you fight are random.

This means that when you die and start all over again, you aren’t just repeating the exact same thing over and over again. Each playthrough gives you a totally new game. Permadeath would be quite tedious and obnoxious if you had to keep playing the same content over and over again. I think that would be a misuse of it. Save points serve a good purpose in that case. If you’ve demonstrated you can get through a certain part, why make the player do it again if it is exactly the same?

If you want to see other opinions, here is a 203-comment discussion on the topic.


Pushing Children to Early Career Choices

I’m staying with my in-laws right now, and they watch America’s Got Talent. As the show went on, I started to get more and more disturbed by the things some of these 11- and 12-year-olds were saying. They were saying things like, “I’ve been working my whole life for this moment.” Or they would say, “This is important for my career.” I don’t remember the exact phrases, but they were acting like the entire rest of their life was planned for them already.

I don’t like the pressure these children are feeling. They think that they will let everyone in their lives down if they don’t use their talent to become super famous. They feel like they will somehow be a failure at anything less than winning this thing. I know because I’ve been there.

This means that I also know that it might not be the parent’s fault at all. Sometimes it doesn’t matter how many times a parent says that it is okay to do whatever you want, or that they will support you no matter what career you want. I can say from experience that words can’t easily overcome all the subtle external environmental factors.

When I was young I started gymnastics and excelled quickly at it. In high school I competed nationally and even sometimes internationally. I wasn’t the best at that level, but I was still fairly young and I won the New York state championship one year. Here were the types of pressure I felt that were not parental, but my parents probably had no idea they existed. If I were to quit, then I would let my coach down, my teammates down, the family members that kept telling me I would be in the Olympics someday, random people in my hometown that thought I would bring the town fame, and on and on.

This type of pressure that no one even realized they were exerting (they were expressing how proud they were of me and how excited they were for my future) kept me doing it for much, much longer than I wanted to. Luckily, I got up the courage to quit at some point. It was a total shock to everyone, because they couldn’t see that I had grown to hate it. They confused me being good at it with me being passionate about it. To this day I still don’t “miss it” like everyone assumes.

I understand that there is a fine line that must be walked which is probably very difficult for parents of talented children. If your child shows promise in singing, how far should you let it take over their life? You don’t want to discourage them from practicing if they actually do love doing it. You don’t want to discourage them from working hard. You don’t want them to quit if they are merely having a bad week or month. Let’s face it. Sometimes it is tedious, discouraging, hard work to get better at something. It is easy to confuse this with disliking it.

You also don’t want to force them into a career in it before they are ready to make that decision (which honestly I don’t see how you could be ready for that decision until at least college age). All these things are very difficult to tell without being inside the person’s head. Even asking them bluntly how they feel about it might not get an honest response if they are feeling a lot of pressure. I know. I used to lie about it.

There is also the issue that people think that if you aren’t a famous musician (or mathematician, etc.) by a certain age, then you will never be famous. First, that is false. More importantly, I’m not sure why being famous is thought of as a measure of success. Heck, I’m not even sure being able to make a living at music should be the sole measure of success. There are lots of people who enrich their lives with their talents as hobbies, and this is a far better path for them. Making it a career would have made them hate it.

I don’t really have any proposed solution to this (or reason for writing about it other than that it was on my mind today), and I know it is a tough issue. Still, I felt quite disturbed to hear these children express these concerns that I used to have. I wish I could tell them that they can still do whatever they want, and what they want might be something they haven’t even heard of yet. You might disappoint some people when they find out you stopped pursuing it as a career, but you can’t let that dictate your life.


Preventing Fragmentation with Paging

Since I’ve been on a bit of a computer science kick lately (I even made a new “computer science” category to tag posts with), I thought I would do a follow-up to my post on fragmentation. I’ll summarize it so you don’t have to go reread it. If you try to put stuff in memory in the most naive way by just looking for the first place it fits, then you end up with huge amounts of fragmentation (wasting around 33% of your available space). I also said that no modern operating system would do it this way.

To appreciate how great this idea I’m about to describe is, just take a moment and try to think of a different way of doing it. When I learned about this, I actually tried and was stumped. Any algorithm I thought of was very similar to the first-fit one and left lots of external fragmentation. My hint is that the solution has no external fragmentation at all, and it requires a radically different way of thinking about the situation.

As far as I know, all modern operating systems solve the problem using something called paging. The idea is actually really simple, even though the implementation can be quite confusing at first. You use something called a page table, which keeps track of where everything is located. This allows you to physically break everything up so that the pieces all fit right next to each other, and you’ll have no external fragmentation.

Even though one chunk could then be spread all over the place physically, the page table allows you to logically think of it as one chunk, because you just look at the page table and read the addresses contiguously. Let’s just look at an example. A page is just some fixed small size. Let’s suppose we want to put something that takes three pages of space into memory.

Maybe there aren’t 3 consecutive pages left in memory, but there is still room if you don’t look for consecutive space. The first 3 pages are taken, but then there are two available places, then another is taken, but then there is room for the last page.

In some sense this is still the first-fit algorithm, but now the chunks are all the same size and we break everything into those chunks first. We then just start sliding the pages in wherever they fit without ever having to worry about fitting the whole thing together.

You can use the page table to look up where the pages were placed in physical memory. It is really clever, because as a user you can still think of things as single units, but you never (externally) waste any space. There will still be a small amount of internal fragmentation, which comes from things not being perfect multiples of the page size, but this will be minuscule by comparison.
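The scheme above can be sketched in a few lines. This is a toy model, not how any real kernel stores its page tables: physical memory is an array of fixed-size frames, and the page table is just a mapping from a process’s logical page numbers to whichever free frames we slide them into (all names and sizes here are invented for illustration).

```python
FRAME_COUNT = 8

# Physical memory as a list of frames; None marks a free frame.
physical = [None] * FRAME_COUNT

def allocate(name, num_pages):
    """Place each logical page in the first free frame and
    return the resulting page table (logical page -> frame)."""
    free = [i for i, owner in enumerate(physical) if owner is None]
    if len(free) < num_pages:
        raise MemoryError("not enough free frames")
    page_table = {}
    for page in range(num_pages):
        frame = free[page]
        physical[frame] = (name, page)
        page_table[page] = frame
    return page_table

# Recreate the example from the text: frames 0-2 and 5 are taken,
# so a 3-page allocation has no 3 consecutive frames available.
for taken in (0, 1, 2, 5):
    physical[taken] = ("other", taken)

table = allocate("A", 3)
print(table)  # {0: 3, 1: 4, 2: 6} -- scattered physically, contiguous logically
```

Looking up `table[2]` tells you that logical page 2 of “A” lives in physical frame 6, which is exactly the indirection that lets the process pretend its memory is one contiguous chunk.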

Anyway, that’s all for today. I didn’t want to leave you all hanging. I’m sure you were all dying to know how your computer got around the problem I raised in that post.


Elements of Writing that Annoy Me

I’ve been reading a lot lately and critically thinking about what makes good or bad writing. Maybe I shouldn’t use those terms. I’ve been thinking a lot about things that I like and would like to imitate and things that I don’t like that I would like to avoid. In general, I’ve been making a concerted effort to become a better writer. One useful exercise is to formulate your thoughts precisely on certain elements of writing. That is what this post will be.

I recently read Q by Evan Mandery, so I’m going to use it to illustrate my points. I am only picking on it because it is the most recent thing I’ve read. I see these issues to some degree or another in most things I read.

1. Meandering. Maybe this annoys me so much in other people’s writing because it is the hardest thing for me to overcome. When thoughts are flowing from your brain to the page it all makes logical sense. It is all connected. These connections let you explain all sorts of interesting things. Unfortunately, good, clear writing stays on topic. It stays focused. During revision, most of what you write should be deleted. You think the extraneous topics are somewhat related, bring color, bring humor, or whatever post-justification you come up with, but almost always they don’t.

In Q, the narrator constantly lets his thoughts go off on tangents. A few of them are humorous critiques of modern culture. Most are not. Here’s an example. A character refers to a cemetery: “I recommend Cypress Hills in Queens.” The next segment begins, “The cemeteries of New York are hidden treasures. In 1852, …” It starts a brief history of cemeteries in New York. A few pages later we get a brief history of the Cayman Islands as a tax shelter. A few pages later we get something else.

There seem to be certain keywords that come up in the narration which signal a page or two long digression on those words. It gets tiresome. I think there are at least two ways to meander in an effective way. The first is à la Don DeLillo. In particular, White Noise has these same types of digressions, but they are always very focused on the main theme of the book. This makes the book read like a well-chosen collection of essays tied together by a narrative arc. The digressions are very short and humorous. It is stylistically a similar idea, but can hardly be called meandering because of how directed it is.

The other possible way to make these meandering cultural references in a way that doesn’t annoy me is to incorporate them directly into your writing style. Pynchon was a master at this in Gravity’s Rainbow. He used a dense, complicated style in order to seamlessly incorporate the digressions right into the sentences. What I’m trying to say is that meandering to other topics can be done if you are extremely careful with how you do it. It can be an effective technique when handled properly. It is almost never handled properly.

2. Knowing you’ve done something wrong or questionable and justifying it in the work itself. It is one thing if you have a plot hole or you are meandering and don’t notice it. These things happen. It is another if you try to justify these things through your writing, because having characters comment on it tells the reader that you were aware of the problem. If you were aware of the problem, then you should just fix it! Don’t justify it to me.

This happens twice in Q. I won’t give the plot away, but I will say that the main character’s future self comes back and tries to convince him to not marry the love of his life for certain reasons. When you read this, you will immediately think of several less extreme ways around the problem. It doesn’t make sense why he must resort to not marrying her.

Immediately after this, the two main characters discuss an episode of The Twilight Zone in which the main drama could have been avoided by much simpler means. The one character is annoyed at the simple solutions and points this out. The other one tries to justify why the episode was made with these plot holes. The author has used the characters to tell the reader: I know you’ve realized that this is an unreasonable scenario, but this is why I’ve decided to leave it. It annoys me. If you realize it is unreasonable, then change it.

Later on a similar thing happens. The main character writes a story. It gets criticized, “These random tangents come across as quite flippant in your writing and, I must say, they are similarly off-putting on an interpersonal level…You should work to curtail it. Being random is utterly unbecoming.” Sound familiar? The main character then tries to justify why they are there. It is the author explaining that he has a bunch of random tangents in the book, and he realizes they don’t really work. But here’s why he kept them. Don’t justify the mistake. Fix it.

3. Pacing. There may be a few extraordinarily talented writers who can do this on their own, but for the most part pacing must be determined by readers. Even still, it is probably subjective beyond all hope. If someone reads what you wrote and thinks a part dragged on too long or it ended too quickly or they wanted to hear more about something, then they are probably onto something. If you wrote it, then you are probably too close to the material to understand what the reader is experiencing.

It is very difficult to do, but you need some readers who you trust and then actually trust their judgment. To me, Q’s pacing is all wrong. The first visitation and major decision is the first half of the book. Then something like 10 more visitations happen in the second half. Some of them are only a few pages long. I actually really, really like the second half of the book, because it develops the major theme. Proper pacing could have saved this book for me. It would go from fair to excellent if the first half were cut down and the visitations afterwards were expanded to be roughly the same size.

To point out that I understand what the author was doing and am not an idiot, I’ll explain the theory behind why he did what he did. There was no reason to drag out the later visitations any longer than a few pages to get to the main theme of the book. By making each visitation shorter and shorter, it brought about a sense of acceleration in pacing to the end. In theory this idea is exactly what I would have done. An example of a masterful execution of this technique would be Tolstoy’s The Death of Ivan Ilyich.

In the actual finished product, I felt like the flawed (for the reasons mentioned above) first visitation being the longest made the first half drag a lot. The second half went by too quickly for me, because those stories were less flawed and more interesting. They were also how the main theme was developed. You could still have some decrease in length of the visitations as you get later in the book to create that sense of driving towards the end.

The sudden pacing shift made the first and second half seem like totally disjoint and unrelated works. My proposal wouldn’t change any content, but would make the whole thing feel more balanced and would have prevented me from almost giving up several times in the first half.

I’m sorry to be down on a book I actually ended up enjoying. It was fun and had me laughing out loud at times, and the lesson to take away was excellent. My wrath would have come down on basically whatever book I had most recently read. I hope to make a habit of these posts, because it is quite the clarifying exercise to write my thoughts on these issues down. I can already think of 3 more without any effort.


How Hard is Adding Integers for a Computer?

In our modern world, we often use high-level programming languages (Python, Ruby, etc.) without much thought about what is happening. Even if we use a low-level language like C, we still probably think of operations like 1 + 1 yielding 2 or 3 - 2 yielding 1 as extremely basic. We have no appreciation for how subtle and clever people had to be to first get those types of things to work.

I don’t want to go into detail about how those actually work at the machine level, because that would be a pretty boring post. I do want to do a thought experiment that should bring up some of the issues. Suppose you want to make a new programming language to see how one does such a thing. You think to yourself that it will at least be able to add and subtract integers. How hard could that be?

To play around a little, you decide you will first make an integer take up 4 bits of memory. This means when you declare an integer x = 1, it gets put into a space of size 4 bits: 0001. Each bit can be either a 0 or a 1. Things seem to be going great, because you are comfortable with binary notation. You think that you’ll just take an integer and write its binary representation to memory.

Just for a quick refresher: with 4 bits, your integer type can only encode the numbers from 0 to 15. Recall that you can go back to base 10 by taking each digit and using it as a coefficient on the appropriate power of 2. Thus 0101 would be 0·2^3 + 1·2^2 + 0·2^1 + 1·2^0 = 5.
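That positional weighting is easy to check mechanically. Here is a minimal sketch (the function name is mine, not anything standard) that decodes a bit string as an unsigned integer, exactly as described above:

```python
def to_decimal(bits):
    """Interpret a bit string as an unsigned integer by weighting
    each digit with the appropriate power of 2."""
    total = 0
    for i, bit in enumerate(reversed(bits)):
        total += int(bit) * 2**i
    return total

print(to_decimal("0101"))  # 0*8 + 1*4 + 0*2 + 1*1 = 5
print(to_decimal("1111"))  # the largest 4-bit value: 15
```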

Things are going well. You cleverly come up with an adding function merely by manipulating bits with allowable machine-level operations coming directly from the operating system. Then you test 15 + 1. Whoops. You overflow. This is the first problem, but it isn’t the interesting one I want to focus on. Even with a well-defined integer type and a working addition function, the sum of two integers won’t always fit in the type! There is an easy rule to determine when this will happen, so you just throw an error message for now.

Now you move on to subtraction. Oops. You then realize that you have no way of representing negative numbers with your integer type. If you haven’t seen this before, then you should really take a few moments to think about how you would do this. The “most obvious” solution takes some thought, and turns out to be terrible to use. The one that people actually use is quite clever.

The first thing you might try is to just reserve the first or last bit to indicate whether the number is positive or negative. Maybe you’ll take 1xxx to be negative and 0xxx to be positive. For example, 0001 is 1 and 1001 is -1. First, notice that this cuts the number of positive integers you can represent in half, but there isn’t a way around this. Second, there is both a positive and a negative “0,” because 1000 is supposedly -0. This will almost certainly cause a bigger headache than it solves.

Lastly, that adding function you wrote is meaningless now. Fortunately, people came up with a much better solution. It is called two’s complement notation. We just weight the most significant bit with a negative sign. This means that 1010 would convert to -2^3 + 0·2^2 + 2^1 + 0·2^0 = -6. This makes all the numbers that start with 1 negative, like our earlier scheme, except there is only a single 0 now, because 1000 is -8 (the most negative integer we can represent).
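The negative weight on the top bit is the entire trick, so a decoder is only one line longer than the unsigned version. A sketch (again, the name is my own):

```python
def twos_complement(bits):
    """Decode a bit string in two's complement: the most
    significant bit carries a negative weight."""
    n = len(bits)
    value = -int(bits[0]) * 2**(n - 1)  # top bit counts as negative
    for i, bit in enumerate(reversed(bits[1:])):
        value += int(bit) * 2**i        # remaining bits are positive
    return value

print(twos_complement("1010"))  # -8 + 2 = -6
print(twos_complement("1000"))  # the most negative 4-bit value: -8
print(twos_complement("0111"))  # the largest positive 4-bit value: 7
```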

Moreover, 3 - 2 = 3 + (-2) = 0011 + 1110 = 0001 = 1 (if we chop off the overflow bit). So plain old addition gives us subtraction for free. Except sometimes it doesn’t. For example, take 0111 + 0001 = 1000. This says that 7 + 1 = -8. This is basically the same overflow error from before, because 8 is not an integer that can be represented by our 4-bit type. It just means we have to be careful about some edge cases. It is doable, and in fact, this is essentially what C does (but with 32-bit integers).
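Both the happy case and the overflow case can be reproduced by simulating the 4-bit adder directly. This sketch masks everything to 4 bits and then reinterprets the result in two’s complement, which is the “chop off overflow” behavior described above:

```python
def add4(a, b):
    """Add two integers the way a 4-bit machine would: sum the raw
    bit patterns and discard any carry out of the top bit."""
    raw = (a & 0b1111) + (b & 0b1111)  # work on the 4-bit patterns
    result = raw & 0b1111              # chop off the overflow bit
    # Reinterpret in two's complement: bit 3 carries weight -8.
    return result - 16 if result & 0b1000 else result

print(add4(3, -2))  # 0011 + 1110 = (1)0001 -> 1, so subtraction works
print(add4(7, 1))   # 0111 + 0001 = 1000 -> -8, the overflow edge case
```

(Note that Python’s `&` on a negative integer already produces the two’s complement bit pattern, which is why `-2 & 0b1111` comes out as 1110.)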

Just to wrap up: to make this cobbled-together scheme for representing and adding integers work, we want our language to be strongly typed (i.e. we know exactly how big an integer is, so that we know where that leading 1 indicating a negative goes, and the type isn’t going to change on us).

Just consider if we tried to prevent overflow issues by making a “big integer” type that is 8 bits instead of 4. We try 3 - 2 again, and upon overflow we switch the type to a big int. Keeping the carry bit instead of chopping it, we would get 3 - 2 = 0001 0001, which is 17. This means we have to be really careful when dealing with multiple types and recasting between them. It seems a minor miracle that in a language like Ruby you can throw around all sorts of different-looking types (without declaring any of them) with plus signs between them and get the answer you would expect.
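The recasting pitfall shows up even for a single operand: widening a negative 4-bit pattern to 8 bits only preserves its value if you copy the sign bit into the new high bits (sign extension). A hypothetical illustration using the -2 from the example:

```python
pattern = 0b1110  # -2 in 4-bit two's complement

# Correct widening sign-extends: copy the sign bit into the new high bits.
sign_extended = pattern | 0b11110000  # 1111 1110, which is -2 in 8 bits
print(sign_extended - 256)            # -2 (bit 7 carries weight -128)

# Naive widening zero-pads, silently changing the value.
zero_padded = pattern                 # 0000 1110, which is 14 in 8 bits
print(zero_padded)                    # 14
```

This is why the cast between the 4-bit type and the hypothetical big-int type can’t just copy bits; it has to know the source type’s width and signedness.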

That brings us to the main point of this post. It is a really, really good thing we don’t have to worry about these technicalities when writing programs. The whole point of a good high level language is to not be aware of all the tedious machine level computations going on. But this also means that most people have no appreciation for just how complicated something as simple as adding two integers can be (of course, this is all standardized now, so you probably wouldn’t even worry about it if you were writing your own language from scratch).


Thoughts on In the Beginning was the Command Line

I’ve been meaning to read Neal Stephenson’s In the Beginning was the Command Line for quite some time. It pops up here and there, and I’ve never been able to tell what was in it from these brief encounters. Somehow (OK, I was searching for NetHack stuff) I ran into Garote’s page (Garrett Birkel). He is a programmer, and in 2004 he interleaved his own comments with the full original 1999 essay here. This gave a nice 5-year update to Stephenson’s original. It has been 10 years since that update. I don’t plan on doing much commentary, but I did want to record thoughts as they came up.

In the first half of the essay there is a long diatribe on American culture. My first large-scale thought is that it should be cut. There are some really nice clarifying analogies throughout the essay, but this is not one of them. It adds probably 1000 words and layers of unnecessary confusion. A good analogy is the Morlocks and Eloi from The Time Machine as the two types of people using computers: it doesn’t take long to describe, and it illustrates the point. A huge political discussion about TV ruining people’s brains and people being easily distracted by shiny objects is an essay in its own right, not a useful digression from the main points.

Speaking of main points, I should probably try to distill them. One is that graphical user interfaces (GUIs), as opposed to the original command line prompt, were created for the wrong reasons, and a lot of bad followed from that. It is unclear to me from the essay whether this is inherent in the GUI concept or just a product of the circumstances under which GUIs were originally made. Another main thread is the history of Linux. It is well done, but you can find that type of thing in a lot of places. The more interesting historical description was of BeOS, because it is far less known. The last main argument is about the monopoly that proprietary OSes have in the market. We’ll get to that later.

Notes (I don’t distinguish whether the quotes are of Garote or Stephenson, sorry):

“All this hoopla over GUI elements has led some people to predict the death of the keyboard. Then came the Tablet PC from Microsoft — and people have been complaining ever since, because most things that you would use a computer for, still involve letters and numbers.”

This was without question the right thing to say in 2004. Ten years later, our move to tablets and phones as our primary computers is so close to being the majority case that Microsoft revamped Windows as if no one uses a laptop or desktop anymore. That was widely criticized as a mistake, but I think it says a lot about how far we’ve come since 2004. It might not have been a mistake had they waited two more years.

“Even before my Powerbook crashed and obliterated my big file in July 1995, there had been danger signs.”

It is interesting to me how much emphasis is put on “losing files” throughout this essay. It seems a point the 2004 comments still agree with. I certainly remember those days as well. I’m not saying it doesn’t happen now, but “cloud computing” (which I now just call “using a computer”) is so pervasive that no one should lose work anymore. I could format my hard drive and not lose anything important, because my work is stored all over the world on various servers. It would take a major, organized, terrorist-level catastrophe to lose work if you take reasonable precautions. I have a 2 TB external drive for regular backups, but it just feels like a waste now.

“Likewise, commercial OS companies like Apple and Microsoft can’t go around admitting that their software has bugs and that it crashes all the time, any more than Disney can issue press releases stating that Mickey Mouse is an actor in a suit.”

It is interesting that even back in 1999 this was clear. The proprietary OSes had to keep up appearances that they were better than the free alternatives. Despite the marketing claim that you were buying quality, the OSes you paid for were bigger and slower, had fragmentation issues, were more likely to crash, and got viruses. The Windows bloat is so big now (over 20 GB!) that older laptops will waste half their space just on the OS. In effect, the OS you paid for was worse than Linux in every way except the intuitive GUI and a few select professional-grade programs.

In 2014, the GUI issue is fixed. The switch from Windows 7 to Ubuntu is less jarring and more intuitive than the switch from Windows 7 to Windows 8. I claim even the most computer-illiterate could handle some of the modern Linux distributions. Now you basically pay for the ability to pay for a few high-quality programs. There are certain professions where this is worth it (mostly in the arts; for work in STEM areas, certainly use Linux), but for the average person it is not. Now that WINE is better, you can even run those specialized Windows programs easily in Linux.

The next section is an anecdote about how difficult it was to fix a bug in the Windows NT install on one of Neal Stephenson’s computers versus the simplicity of getting help with Debian. This whole section is basically making the argument that for-profit software or OSes must maintain an illusion of superiority to get people to buy them. This means the vendors hide their bugs, which in turn makes the bugs hard to fix. Open source encourages bugs to get posted all over the internet. Thousands of people see the bug and have an opportunity to try to fix it (as opposed to the one, possibly incompetent, customer service rep you tell). The solution, usually found very quickly, is then posted for all to see and incorporated into the next release.

I’m not sure the proposed cause and effect is really the explanation (he admits later that there are now bug reports for Microsoft and Apple, but they are very difficult to find and use), but it matches my own experience as well. I only note this here because I often hear that you are “paying for customer service” or “support” when you choose Windows over Linux. For the above reasons, I just don’t believe it. If the Linux community somehow stopped being so darn helpful so quickly, then maybe this would be a point in favor of proprietary software. I don’t see that happening any time soon.

The last part of the essay, on the monopoly, might be a little outdated. When I buy a computer and Windows comes as part of the package, I feel a little cheated, because the first thing I do is delete it. Why did I just pay for something I was going to immediately delete? The reason this is outdated is that in 2014 you can find “Linux boxes” that come with Linux installed in place of Windows. They are harder to find, so you don’t have as many choices, but this is a big improvement over 2004, when 100% of Dell (or whatever) computers came with Windows.

