A Mind for Madness

Musings on art, philosophy, mathematics, and physics



Preventing Fragmentation with Paging

Since I’ve been on a bit of a computer science kick lately (I even made a new “computer science” category to tag posts with), I thought I would do a follow-up to my post on fragmentation. I’ll summarize it so you don’t have to go reread it. If you try to put stuff in memory in the most naive way, by just looking for the first place it fits, then you end up with huge amounts of fragmentation (wasting around 33% of your available space). I also said that no modern operating system would do it this way.

To appreciate how great this idea I’m about to describe is, just take a moment and try to think of a different way of doing it. When I learned about this, I actually tried and was stumped. Any algorithm I thought of was very similar to the first-fit one and left lots of external fragmentation. My hint is that the solution has no external fragmentation at all, and it requires a radically different way of thinking about the situation.

As far as I know, all modern operating systems solve the problem using something called paging. The idea is actually really simple, even though the implementation can be quite confusing at first. You use something called a page table, which keeps track of where everything is located. This lets you physically break everything up so that the pieces all fit right next to each other, and you’ll have no external fragmentation.

Even though one chunk could then be spread all over the place physically, the page table allows you to logically think of it as one chunk, because you just look at the page table and read the addresses contiguously. Let’s just look at an example. A page is just a chunk of some fixed, small size. Let’s suppose we want to put something that takes three pages of space into memory.

Maybe there aren’t 3 consecutive pages left in memory, but there’s still room if you don’t look consecutively. The first 3 pages are taken, but then there are two available places, then another is taken, but then there is room for the last page:

In some sense this is still the first-fit algorithm, but now the chunks are all the same size and we break everything into those chunks first. We then just start sliding the pages in wherever they fit without ever having to worry about fitting the whole thing together.

You can use the page table to look up where the pages were placed in physical memory. It is really clever, because as a user you can still think of things as single units, but you never (externally) waste any space. There will still be a small amount of internal fragmentation, which comes from things not being perfect multiples of the page size, but this will be minuscule by comparison.
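To make that bookkeeping concrete, here is a minimal sketch in Python of a page table doing its lookup. The page size, the table, and the frame numbers are all made up for illustration; a real operating system keeps a structure like this per process and does the translation in hardware.

PAGE_SIZE = 4  # tiny page size, just for illustration

# Hypothetical page table for one process: logical page number -> physical frame number.
# The process sees pages 0, 1, 2 as one contiguous chunk even though the frames are scattered.
page_table = {0: 3, 1: 0, 2: 5}

def translate(logical_address):
	# Split the logical address into (page, offset), then look up which frame holds that page.
	page, offset = divmod(logical_address, PAGE_SIZE)
	return page_table[page] * PAGE_SIZE + offset

# Logical addresses 0..11 look contiguous to the process...
print([translate(a) for a in range(12)])
# ...but they land in physical frames 3, 0, and 5:
# [12, 13, 14, 15, 0, 1, 2, 3, 20, 21, 22, 23]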

Anyway, that’s all for today. I didn’t want to leave you all hanging. I’m sure you were all dying to know how your computer got around the problem I raised in that post.



Elements of Writing that Annoy Me

I’ve been reading a lot lately and critically thinking about what makes good or bad writing. Maybe I shouldn’t use those terms. I’ve been thinking a lot about things that I like and would like to imitate and things that I don’t like that I would like to avoid. In general, I’ve been making a concerted effort to become a better writer. One useful exercise is to formulate your thoughts precisely on certain elements of writing. That is what this post will be.

I recently read Q by Evan Mandery, so I’m going to use it to illustrate my points. I am only picking on it because it is the most recent thing I’ve read. I see these issues to some degree or another in most things I read.

1. Meandering. Maybe this annoys me so much in other people’s writing because it is the hardest thing for me to overcome. When thoughts are flowing from your brain to the page, it all makes logical sense. It is all connected. These connections let you explain all sorts of interesting things. Unfortunately, good, clear writing stays on topic. It stays focused. During revision, most of what you write should be deleted. You think the extraneous topics are somewhat related, bring color, bring humor, or whatever post-justification you come up with, but they almost never do.

In Q, the narrator constantly lets his thoughts go off on tangents. A few of them are humorous critiques of modern culture. Most are not. Here’s an example. A character refers to a cemetery: “I recommend Cypress Hills in Queens.” The next segment begins, “The cemeteries of New York are hidden treasures. In 1852, …” It starts a brief history of cemeteries in New York. A few pages later we get a brief history of the Cayman Islands as a tax shelter. A few pages later we get something else.

There seem to be certain keywords in the narration that signal a digression of a page or two on those words. It gets tiresome. I think there are at least two ways to meander in an effective way. The first is à la Don DeLillo. In particular, White Noise has these same types of digressions, but they are always tightly focused on the main theme of the book. This makes the book read like a well-chosen collection of essays tied together by a narrative arc. The digressions are very short and humorous. It is stylistically a similar idea, but can hardly be called meandering because of how directed it is.

The other possible way to make these meandering cultural references in a way that doesn’t annoy me is to incorporate them directly into your writing style. Pynchon was a master at this in Gravity’s Rainbow. He used a dense, complicated style in order to seamlessly incorporate the digressions right into the sentences. What I’m trying to say is that meandering to other topics can be done if you are extremely careful with how you do it. It can be an effective technique when handled properly. It is almost never handled properly.

2. Knowing you’ve done something wrong or questionable and justifying it in the work itself. It is one thing if you have a plot hole or you are meandering and don’t notice it. These things happen. It is another if you try to justify these things through your writing, because having characters comment on it tells the reader that you were aware of the problem. If you were aware of the problem, then you should just fix it! Don’t justify it to me.

This happens twice in Q. I won’t give the plot away, but I will say that the main character’s future self comes back and tries to convince him to not marry the love of his life for certain reasons. When you read this, you will immediately think of several less extreme ways around the problem. It doesn’t make sense why he must resort to not marrying her.

Immediately after this, the two main characters discuss an episode of The Twilight Zone in which the main drama could have been avoided by much simpler means. One character is annoyed at the simple solutions and points this out. The other tries to justify why the episode was made with these plot holes. The author has used the characters to tell the reader: I know you’ve realized that this is an unreasonable scenario, but this is why I’ve decided to leave it. It annoys me. If you realize it is unreasonable, then change it.

Later on a similar thing happens. The main character writes a story. It gets criticized, “These random tangents come across as quite flippant in your writing and, I must say, they are similarly off-putting on an interpersonal level…You should work to curtail it. Being random is utterly unbecoming.” Sound familiar? The main character then tries to justify why they are there. It is the author explaining that he has a bunch of random tangents in the book, and he realizes they don’t really work. But here’s why he kept them. Don’t justify the mistake. Fix it.

3. Pacing. There may be a few extraordinarily talented writers who can do this on their own, but for the most part pacing must be determined by readers. Even still, it is probably subjective beyond all hope. If someone reads what you wrote and thinks a part dragged on too long or it ended too quickly or they wanted to hear more about something, then they are probably onto something. If you wrote it, then you are probably too close to the material to understand what the reader is experiencing.

It is very difficult to do, but you need some readers who you trust, and then you have to actually trust their judgment. To me, Q’s pacing is all wrong. The first visitation and major decision take up the first half of the book. Then something like 10 more visitations happen in the second half. Some of them are only a few pages long. I actually really, really like the second half of the book, because it develops the major theme. Proper pacing could have saved this book for me. It would go from fair to excellent if the first half were cut down and the later visitations were expanded to roughly the same size.

To point out that I understand what the author was doing and am not an idiot, I’ll explain the theory behind why he did what he did. There was no reason to drag out the later visitations any longer than a few pages to get to the main theme of the book. By making each visitation shorter and shorter, it brought about a sense of acceleration in pacing to the end. In theory this idea is exactly what I would have done. An example of a masterful execution of this technique would be Tolstoy’s The Death of Ivan Ilyich.

In the actual finished product, I felt that making the flawed (for the reasons mentioned above) first visitation the longest made the first half drag a lot. The second half went by too quickly for me, because those stories were less flawed and more interesting. They were also where the main theme was developed. You could still shorten the visitations as the book goes on to create that sense of driving towards the end.

The sudden pacing shift made the first and second half seem like totally disjoint and unrelated works. My proposal wouldn’t change any content, but would make the whole thing feel more balanced and would have prevented me from almost giving up several times in the first half.

I’m sorry to be down on a book I actually ended up enjoying. It was fun and had me laughing out loud at times, and the lesson to take away was excellent. My wrath would have come down on basically whatever book I had most recently read. I hope to make a habit of these posts, because it is quite the clarifying exercise to write my thoughts on these issues down. I can already think of 3 more without any effort.



How Hard is Adding Integers for a Computer?

In our modern world, we often use high level programming languages (Python, Ruby, etc) without much thought about what is happening. Even if we use a low level language like C, we still probably think of operations like {1+1} yielding {2} or {3-2} yielding {1} as extremely basic. We have no appreciation for how subtle and clever people had to be to first get those types of things to work.

I don’t want to go into detail about how those actually work at the machine level, because that would be a pretty boring post. I do want to do a thought experiment that should bring up some of the issues. Suppose you want to make a new programming language to see how one does such a thing. You think to yourself that it will at least be able to add and subtract integers. How hard could that be?

To play around a little you decide you will first make an integer take up 4 bits of memory. This means when you declare an integer {x = 1}, it gets put into a space of size {4} bits: {0001}. Each bit can be either a {0} or a {1}. Things seem to be going great, because you are comfortable with binary notation. You think that you’ll just take an integer and write its binary representation to memory.

Just for a quick refresher, for 4 bits this means that your integer type can only encode the numbers from {0} to {15}. Recall that you can go back to base {10} by taking each digit and using it as a coefficient on the appropriate power of {2}. Thus {0101} would be {0\cdot 2^3 + 1\cdot 2^2 + 0\cdot 2^1 + 1\cdot 2^0 = 5}.
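(As a quick aside, not part of our hypothetical language: you can check these conversions with Python’s built-ins.)

print(int('0101', 2))    # 5: binary string to integer
print(format(5, '04b'))  # '0101': integer back to a 4-bit binary string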

Things are going well. You cleverly come up with an adding function merely by manipulating bits with allowable machine-level operations coming directly from the operating system. Then you test {15 + 1}. Whoops. You overflow. This is the first problem, but it isn’t the interesting one I want to focus on. Even if you have a well-defined integer type and a working addition function, this doesn’t mean that adding two integers will always result in a representable integer! There is an easy rule you think up to determine when this will happen, and you just throw an error message for now.

Now you move on to subtraction. Oops. You then realize that you have no way of representing negative numbers with your integer type. If you haven’t seen this before, then you should really take a few moments to think about how you would do this. The “most obvious” solution takes some thought, and turns out to be terrible to use. The one that people actually use is quite clever.

The first thing you might try is to just reserve either the first or last bit as a way to indicate whether you are positive or negative. Maybe you’ll take {1xxx} to be negative and {0xxx} to be positive. For example, {0001} is {1} and {1001} is {-1}. First, notice that this cuts the number of positive integers you can represent in half, but there isn’t a way around this. Second, there is a positive and a negative “0”, because {1000} is supposedly {-0}. This will almost certainly cause a bigger headache than it solves.

Lastly, that adding function you wrote is meaningless now. Fortunately, people came up with a much better solution. It is called two’s complement notation. We just weight the most significant bit with a negative. This means that {1010} would convert to {-2^3 + 0\cdot 2^2 +2^1 + 0\cdot 2^0 = -6}. This makes all the numbers that start with 1 negative like our earlier example, except there is only a single 0 now because {1000} is {-8} (our most negative integer we can represent).

Moreover {3-2 = 3 + (-2) = 0011 + 1110 = 0001 = 1} (if we chop off overflow, yikes). So plain old addition works and gives us a subtraction. Except, sometimes it doesn’t. For example, take {0111 + 0001 = 1000}. This says that {7+1= -8}. This is basically the same overflow error from before, because {8} is not an integer that can be represented by our 4 bit type. This just means we have to be careful about some edge cases. It is doable, and in fact, this is exactly what C does (but with 32 bit integers).
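Here’s a small sketch of that 4-bit arithmetic in Python. The helper names are mine, and “chopping off overflow” shows up as masking down to the low 4 bits:

BITS = 4
MASK = (1 << BITS) - 1  # 0b1111: keep only the low 4 bits (the "chop off overflow" step)

def to_signed(pattern):
	# Interpret a 4-bit pattern as two's complement: the top bit is weighted by -2^3.
	return pattern - (1 << BITS) if pattern & (1 << (BITS - 1)) else pattern

def add4(a, b):
	# Add two numbers as 4-bit two's complement values, wrapping around on overflow.
	return to_signed(((a & MASK) + (b & MASK)) & MASK)

print(add4(3, -2))  # 1   (0011 + 1110 = 1 0001, chop the carry -> 0001)
print(add4(7, 1))   # -8  (0111 + 0001 = 1000, the overflow case from above)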

Just to wrap up, it seems that to make this cobbled-together solution of merely representing and adding integers work, we want to make sure that our language is strongly typed (i.e. we know exactly how big an integer is, so we know where to place that leading 1 indicating a negative, and the type isn’t going to change on us).

Just consider if we tried to prevent overflow issues by making a “big integer” class that is {8} bits instead of {4}. We try to do {3-2} again, and upon overflow we switch the type to a big int. If we just copy the 4-bit patterns over without sign-extending them, we get {3-2 = 0001\ 0001}, which is {17}. This means we have to be really careful when dealing with multiple types and recasting between them. It seems a minor miracle that in a language like Ruby you can throw around all sorts of different-looking types (without declaring any of them) with plus signs between them and get the answer you would expect.
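To see that pitfall concretely, here’s a tiny sketch of widening the 4-bit pattern for -2 into a bigger type. Copying the bits as-is turns 1110 into 0000 1110, which is +14, and the subtraction silently goes wrong; copying the sign bit into the new upper bits (sign extension) fixes it. The helper name is mine, just for illustration.

neg_two = -2 & 0b1111            # 1110, the 4-bit two's complement pattern for -2

# Naive widening: treat those 4 bits as an unsigned value in the bigger type.
print(3 + neg_two)               # 17, not the 1 we wanted

# Correct widening: sign-extend, i.e. subtract 2^4 when the 4-bit sign bit is set.
def sign_extend(pattern):
	return pattern - 16 if pattern & 0b1000 else pattern

print(3 + sign_extend(neg_two))  # 1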

That brings us to the main point of this post. It is a really, really good thing we don’t have to worry about these technicalities when writing programs. The whole point of a good high level language is to not be aware of all the tedious machine level computations going on. But this also means that most people have no appreciation for just how complicated something as simple as adding two integers can be (of course, this is all standardized now, so you probably wouldn’t even worry about it if you were writing your own language from scratch).



Thoughts on In the Beginning was the Command Line

I’ve been meaning to read Neal Stephenson’s In the Beginning was the Command Line for quite some time. It pops up here and there, and I’ve never been able to tell what was in it from these brief encounters. Somehow (OK, I was searching for NetHack stuff) I ran into Garote’s page (Garrett Birkel). He is a programmer, and in 2004 he wrote comments interleaved with the full original 1999 essay here. This gave a nice 5-year update to Stephenson’s original. It has been 10 years since that update. I don’t plan on doing much commentary, but I did want to record thoughts I had as they came up.

In the first half of the essay there is a long diatribe on American culture. My first large-scale thought is that this should be removed. There are some really nice clarifying analogies throughout the essay, but this is not one of them. It adds probably 1000 words and layers of unnecessary confusion. A good analogy is the Morlocks and Eloi from The Time Machine as the types of people using computers. It doesn’t take long to describe and illustrates the point. A huge political discussion about TV ruining people’s brains and people being easily distracted by shiny objects is an essay in and of itself, not a useful discussion of the main points.

Speaking of main points, I should probably try to distill them. One is that graphical user interfaces (GUIs) as opposed to the original command line prompt were created for the wrong reason. This led to a lot of bad. It is unclear to me from the essay whether or not this is supposedly inherent in the GUI concept or just because of the original circumstances under which they were made. Another main idea is the history of Linux. It is well-done, but you can find this type of thing in a lot of places. The more interesting historical description was of BeOS, because this is far less known. The last main argument is about the monopoly that proprietary OSes have in the market. We’ll get to that later.

Notes (I don’t distinguish whether the quotes are from Garote or Stephenson, sorry):

“All this hoopla over GUI elements has led some people to predict the death of the keyboard. Then came the Tablet PC from Microsoft — and people have been complaining ever since, because most things that you would use a computer for, still involve letters and numbers.”

This was without question the right thing to say in 2004. Ten years later, our move to tablets and phones as our primary computers is so close to being a majority that Microsoft revamped Windows as if no one uses a laptop or desktop anymore. It was widely criticized as a mistake, but I think it says a lot about how far we’ve come since 2004. It may not have been a mistake if they had waited 2 more years.

“Even before my Powerbook crashed and obliterated my big file in July 1995, there had been danger signs.”

It is interesting to me to see how much emphasis is put on “losing files” throughout this essay. It seems to be a point that the 2004 commentary still agrees with. I certainly remember those days as well. I’m not saying it doesn’t happen now, but “cloud computing” (which I now just call “using a computer”) is so pervasive that no one should lose work anymore. I could format my hard drive and not lose anything important, because my work is stored all over the world on various servers. It would take a major, organized, terrorist-level catastrophe to lose work if you take reasonable precautions. I have a 2 TB external storage device to do regular backups on, but it just feels like a waste now.

“Likewise, commercial OS companies like Apple and Microsoft can’t go around admitting that their software has bugs and that it crashes all the time, any more than Disney can issue press releases stating that Mickey Mouse is an actor in a suit.”

It is interesting that even back in 1999 this was clear. The proprietary OSes had to keep up appearances that they were better than the free alternatives. Despite the marketing that you were actually buying quality, the OSes you paid for were bigger, slower, had fragmentation issues, were more likely to crash, and got viruses. The Windows bloat is so big now (over 20 GB!) that older laptops will waste half their space just for the OS. In effect, the OS you paid for was worse than Linux in every way except for the intuitive GUI and a few select professional-grade programs.

In 2014, the GUI issue is fixed. The switch from Windows 7 to Ubuntu is less jarring and more intuitive than the switch from Windows 7 to Windows 8. I claim even the most computer illiterate could handle some of the modern Linux distributions. Now you basically pay for the ability to pay for a few high quality programs. There are certain professions where this is worth it (mostly in the arts, but certainly use Linux for work in STEM areas), but for the average person it is not. Now that WINE is better, you can even run those specialized Windows programs easily in Linux.

The next section is an anecdote about how difficult it was to fix a bug in the Windows NT install on one of Neal Stephenson’s computers versus the simplicity of getting help with Debian. This whole section is basically making the argument that a for-profit OS or piece of software must maintain the illusion of superiority to get people to buy it. This means the companies hide their bugs, which in turn makes them hard to fix. Open source encourages bugs to get posted all over the internet. Thousands of people see the bug and have an opportunity to try to fix it (as opposed to the one, possibly incompetent, customer service rep you tell). The solution, usually found very quickly, will be posted for all to see and incorporated into the next release.

I’m not sure if the cause/effect that is proposed is really the reason for this (he admits later that there are now bug reports for Microsoft and Apple, but they are very difficult to find/use), but it matches my own experiences as well. I only note this here, because I often hear that you are “paying for customer service” or “support” when you choose Windows over Linux. For the above reasons I just don’t believe it. If the Linux community somehow stopped being so darn helpful so quickly, then maybe this would be a point in favor of proprietary software. I don’t see that happening any time soon.

The last part of the essay on the monopoly might be a little outdated. When I buy a computer and Windows comes as part of the package, I feel a little cheated, because the first thing I do is delete it. Why did I just pay for something that I was going to immediately delete? The reason this is outdated is that in 2014 you can find “Linux boxes” that come with Linux installed in place of Windows. They are harder to find, so you don’t have as many choices, but this is a big improvement over 2004, when 100% of Dell (or whatever) computers came with Windows.



Why Play Roguelikes?

In the past I’ve written about video games as an important experience for people who value art. In honor of junethack, a month-long NetHack tournament, I want to defend roguelike games in general, and NetHack in particular, as a means of providing an experience that is difficult to get from most art.

I should first tell you what a roguelike game is. Roughly speaking, it is a game that reproduces a few of the key innovations of Rogue, a game released in 1980! There’s a lot of debate about what constitutes a roguelike game, but that is beside the point of this post. For us, there are two main ideas.

The first is that the levels are randomly generated. This means that every time you play you will have no idea what the levels will look like. This makes the process of discovery fun even after playing it 100 times. In fact, your character is often randomly generated along with items, weapons, and so on. This means that no matter how many times you play, you will have to think on your feet for how best to deal with the situations you are given (this will come up later).

The other main, and arguably the most important, feature is so-called “permadeath,” short for permanent death. This means that if your character dies, then the game is over. You must start all over again from scratch. You can’t save and restart from where you made that mistake. You don’t have multiple lives. This feature is probably what turns most modern gamers off of the style. It is brutal and unforgiving. One small mistake made while you zoned out for a few seconds can cost you hours or even days of work.

Despite the seemingly unfair nature of these games (randomness + permadeath can mean something totally out of your control ends hours of work), they still seem to thrive. People in my circles enjoy modern indie roguelikes such as The Binding of Isaac, Spelunky, and FTL (Faster than Light). Every year there is the 7 Day Roguelike Challenge where you create a roguelike in 7 days. There are even cash prize tournaments such as this recent one for Isaac.

This brings me to the current NetHack tournament (which is just for fun). NetHack is one of the earliest roguelikes. It was released in 1987, yet its difficulty and complexity keep it widely played to this day. I wouldn’t put its artistic merit in the same camp as those earlier posts, which focus on traditional aspects like story, music, and visuals. Don’t get me wrong. You’d better be familiar with classic literature and mythology if you play this. This ranges from basic things like don’t look at Medusa or you’ll be turned to stone to more obscure things like don’t touch that cockatrice or you will turn to stone. Overall, though, the internal story is not its strong point.

I think the reason to play roguelikes in general and NetHack in particular is what it teaches you about the impermanence of all things. This is such an important life lesson that many Eastern religions make this a focal point. For example, take Tibetan Buddhists. They have a ritual where they painstakingly craft an incredibly detailed sand mandala. The work takes days. Then they destroy it.

Many modern roguelikes downplay the pain of permadeath by making the game fairly short. If you die, then you lose 20-40 minutes of work in a worst-case scenario. NetHack embraces the philosophy of permadeath. You can put in 12 hours or more carefully crafting a character. You are extremely careful to be fully mindful at all moments. You’ve thought about all 40,000 keystrokes that you’ve made to this point.

Then maybe you get impatient and press left one too many times and die. Maybe you stop paying careful attention for just a tiny moment. Maybe you just didn’t realize that a sea creature could grab you and pull you in. Maybe it is completely out of your control and a black dragon spawns due to pure randomness and blasts you with disintegration. All your work is gone forever.

Maybe I just think about stuff too much, but when this happens to me it really forces me to confront the idea of impermanence. All of the scenarios I just listed have corresponding counterparts in real life. Maybe you are a really careful driver, but in a moment of impatience you don’t look both ways and you are in a serious accident. It only took that tiny moment. Maybe you didn’t realize that the intersection gave the other person the right of way. Maybe you were careful, and randomness put you in a position where the other person was drunk.

The point is that your actions and choices have consequences that are sometimes irreversible. Randomness has consequences that are sometimes irreversible. Just as the Buddhist ritual teaches you this lesson and lets you think about it before the consequences are real, NetHack also teaches you this lesson. This may seem silly to people who haven’t experienced putting a whole day of effort into something that gets lost forever, but it really makes you think about these issues and which of your choices led you there.

I know. It’s just a game. But that is my case for why you should play roguelikes, especially long and involved ones like NetHack. They force you to confront the consequences of risky decisions. They force you to encounter impermanence. They teach you about the regret that comes when you are careful for hours and then impatiently slip up. They teach you what type of person you are when these things happen. This is their artistic value, and it is a type of experience that is hard to find in more traditional art forms.



Critical Theory through If on a winter’s night a traveler

I recently read Italo Calvino’s If on a winter’s night a traveler. Before trying to explain what made this book so entertaining for me to read, I’ll try to sketch an outline of the book if you haven’t heard about it. The overall form consists of alternating chapters. Half the chapters are in second person and refer to “you” the reader. It tells you how you are reading the other half of the chapters and what you are doing between reading those.

The other half of the chapters consists of “short stories,” which are fragments of novels. Thus the whole book is in a sense a novel, because it has one overarching story in which you are the protagonist. But it is also a book of short stories that runs the gamut of style and genre. The frustrating thing is that you keep getting stopped right when each story starts to get interesting. There is no closure. The reasons for being interrupted keep getting weirder and sillier (and we’ll see there is good reason for that).

It starts with a bad binding. You go to the store to replace it. Every time, you get what you think is the full version of the book, only to find out that it is actually a different book. One time you are in a college seminar, and the seminar only needs part of the book for its analysis, so no one has the full thing. By the end, the reasons become much stranger as you enter a Kafka-esque prison situation. The absurdity of the reasons, and even the conspiracy behind them, should keep a smile on your face. As you approach the end of the book, it reads like Pynchon.

Let’s answer an easy question first. What’s up with the title? Part of what is nice about the form of the book is that it tells you what to think sometimes. The book as a whole is a commentary on the falseness of novels. Classical novelists try to give you the sense that what they write is a neat and tidy story. There is a beginning, a middle, and an end. In reality, you are just getting a snippet of the character’s lives.

Calvino writes this explicitly near the middle of the book, “Everything has already begun before, the first line of the first page of every novel refers to something that has already happened outside the book.” The book could almost be read as satire in how it comically exaggerates this point by giving you a bunch of fragments of books that never amount to anything. This is the point of the title. All of the books get cut off with no sense of closure, so why not emphasize the point by making a title that feels cut off?

I think basically everyone who reads this book will have gotten that far. They will be aware of how the literary devices fit right in with other postmodern works of that time (late ’70s and early ’80s). It is subtly self-referential and comments on what you are reading as you read it. People will probably pick up on the fact that the book is filled with imitation. Allusions to Borges, with infinite regressions, labyrinths, and huge libraries, are all over the place.

I can tell this will be a long post, because at this point I haven’t even started commenting on what I think most people will miss. I now want to argue that the book takes you on a historical tour of critical theory by example. By this I mean that each segment presents a different mode of reading a text and a different theory of the relationship between writer and reader. As you move through the book, you see the evolution of these ideas.

The book starts with a very simplistic and intuitive approach which can be linked back to Aristotle’s Poetics. The writer writes a book, and the reader reads it. Novels consist of mythos, ethos, etc. Good books make you feel something, and this is catharsis. The book doesn’t use these terms, but “you” the reader essentially describe the reading process with another character in classical pre-modern critical terms (plot, character, etc).

Soon you go to a place where books are made and your simple philosophy of reading starts to become confused. “Now you understand Ludmilla’s refusal to come with you; you are gripped by the fear of having also passed over to ‘the other side’ and of having lost that privileged relationship with books which is peculiar to the reader: the ability to consider what is written as something finished and definitive, to which there is nothing to be added, from which there is nothing to be removed.” This is the start of the problem of hermeneutics maybe as seen by Heidegger and Gadamer. The book starts introducing these early problems of getting at meaning and whether authorial intent is important in interpretation.

We then start moving on to the “New Criticism.” We get to something along the lines of Wimsatt and Beardsley’s famous essay “The Intentional Fallacy.” The main character starts to believe that meaning comes from the reader. Calvino writes, “If you think about it, reading is a necessarily individual act, far more than writing. If we assume that writing manages to go beyond the limitations of the author, it will continue to have a meaning only when it is read by a single person and passes through his mental circuits.”

We then start moving on to the structuralism of Levi-Strauss. In “The Structural Study of Myth” he shows that you can put texts into categories based on which mythological structure it follows. Calvino writes, “What is the reading of a text, in fact, except the recording of certain thematic recurrences, certain insistences of forms and meanings?” This is a succinct way of summarizing that essay.

Then we get a parody of the Derrida school and the deconstructionist response. This comes in the form of giving such a close reading that the text gets pulled apart into just a list of the words that appear most frequently. This part of the book is pretty interesting, because as is noted, you feel that you do have a good sense of what the book is about based on merely a close, fragmented study of the words it uses.

Then we move on to the school of Deleuze and postmodernism. This is where foundations were ripped apart. In what I imagine to be a parody of the dense, confusing style of these writers, Calvino writes, “Perhaps my true vocation was that of author of apocrypha, in the several meanings of the term: because writing always means hiding something in such a way that it then is discovered; because the truth that can come from my pen is like a shard that has been chipped from a great boulder by a violent impact, then flung far away; because there is no certitude outside falsification.”

By the end, Calvino starts to backpedal a bit. Despite it being a book without conclusions, I think he wants to take this quick tour through the critical tradition and pull out of the endless trap it sets up. His conclusion is interesting, because it seems to foreshadow the New Historicism, which wasn’t a movement at the time he wrote this. He writes, “The conclusion I have reached is that reading is an operation without object; or that its true object is itself. The book is an accessory aid, or even a pretext.”

It would be interesting for someone to take the time and make a more convincing argument that this is what he is doing. I think a much stronger case can be made, and even a finer tuning of the trends in thought can be found. Since this is merely a blog post, I didn’t have the space or energy to do that. Examples that I think fit would be to add in Lacan/Freud, Marx, and Adorno/Horkheimer.



How Bad is External Fragmentation under a First-Fit Algorithm?

Sometimes I read about operating systems, because I want to know how my computer works. I’ve run Linux distributions as my primary OS for about 10 years, so having a familiarity with these concepts allows me to greatly customize and optimize my system. One topic that the general public is probably interested in is “What is fragmentation, and why am I supposed to defrag my computer sometimes?”

What I’m going to describe is not how memory management is actually done on any modern operating system, but it will give a feel for why fragmentation happens (and what this even is). I realized this really simple algorithm would be easy to program a simulation for, so we can see just how bad it can get.

The first-fit algorithm is exactly what it sounds like. You visualize memory as a bar that fills up. Each program has a size, and you look for the first hole big enough for it to fit. Here’s a picture I made to illustrate:

What’s going on? You download the blue program, and it goes into the first area no problem. You download the red, and the first available hole is right next to the blue. Same for the green. Then you realize you don’t want the red anymore, so you delete it. This leaves a hole. When you download the purple program, your memory management notices the hole and tries to fit it there. Whoops. The hole was too small, so it moves on to find the first hole that it fits in (hence the name “first-fit algorithm”). Eventually you may delete the green as well, and something might go in the big hole left over from the red and green, but it probably won’t fill the whole thing.

Once these holes start appearing, they are hard to get rid of. This is external fragmentation. I wrote a simulation of this to see just how bad it can get. Although the program does nothing more than the simple thing I just showed, it is kind of long and confusing, so let’s take it one step at a time.

from matplotlib import pyplot as pp
import numpy as np
import random

def randomSize():
	return random.randint(1,10)

def randomTime():
	return random.randint(1,20)

class Program(object):
	def __init__(self):
		self.size = randomSize()
		self.position = -1
		self.time = randomTime()
		
	def getSize(self):
		return self.size
	def getPosition(self):
		return self.position
	def getTime(self):
		return self.time

	def decrement(self):
		self.time -= 1

I made two classes. The first one allowed me to pretend that I was downloading some program. Each program came with a random size from 1 to 10, the position where it was placed into memory, and the amount of time until I deleted it (a random number from 1 to 20). I’m still really confused about whether or not it is proper Python style to use getter methods rather than just directly accessing attributes…

class Memory(object):
	def __init__(self, size):
		self.size = size
		self.contents = []

	def getContents(self):
		return self.contents
	def numPrograms(self):
		return len(self.contents)
	def getSize(self):
		return self.size

	def checkHole(self, i, p):
		# Does p fit in the gap between the i-th program and the (i+1)-th one?
		return p.getSize() <= self.contents[i+1].getPosition() - (self.contents[i].getPosition() + self.contents[i].getSize())

	def addProgram(self, p):
		n = self.numPrograms()
		tmp = 0
		if n == 0:
			self.contents.append(p)
			p.position = 0
		elif n == 1:
			# Either p fits in the hole before the single resident program...
			if self.contents[0].getPosition() >= p.getSize():
				p.position = 0
				self.contents.append(p)
			# ...or it goes right after that program, provided it fits before the end of memory.
			elif p.getSize() <= self.getSize() - (self.contents[0].getPosition() + self.contents[0].getSize()):
				self.contents.append(p)
				p.position = self.contents[0].getPosition() + self.contents[0].getSize()
		else:
			if self.contents[0].getPosition() >= p.getSize():
				p.position = 0
				self.contents.append(p)
				tmp = 1
			if tmp == 0:
				for i in range(n-1):  # check every gap between consecutive programs
					if self.checkHole(i,p):
						self.contents.append(p)
						p.position = self.contents[i].getPosition() + self.contents[i].getSize()
						tmp = 1
						break
			if tmp == 0:
				if p.getSize() <= self.getSize() - (self.contents[n-1].getPosition() + self.contents[n-1].getSize()):
					self.contents.append(p)
					p.position = self.contents[n-1].getPosition() + self.contents[n-1].getSize()

	def sort(self):
		self.contents = sorted(self.getContents(), key=lambda p: p.getPosition())

	def removeProgram(self):
		# Drop every program whose time has run out (filtering avoids
		# mutating the list while iterating over it).
		self.contents = [p for p in self.getContents() if p.getTime() > 0]

I’m sorry that this is probably the worst Python code you’ve ever seen. This class is just the stick of memory. It has a total size, which we will take to be 100 in our simulations, and a list of the programs that are currently in memory (the programs keep track of their own positions). I made a helper function that checks whether or not a given program fits into the hole between the i-th and (i+1)-th programs.

Originally I thought this helper function would make the code much simpler, but it is almost entirely useless which is why this looks so bad. Then I made the function that adds a program into the memory. This just looks for the first available hole to stick it into. Unfortunately, I ended up not seeing a slick uniform way to do it and cobbled together something by cases. I check the hole before the first program separately and the hole after the last program separately.

I made a sort method, because when I appended a new program to the list it was just stuck at the end, despite having a position that could be in the middle. This just orders them by position. Lastly, I have a method that removes a program once its time has run out.

def simulate(memSize, numTimeSteps):
	m = Memory(memSize)
	tmp = 0
	for i in xrange(numTimeSteps):
		m.addProgram(Program())
		for p in m.getContents():
			p.decrement()
		m.removeProgram()
		m.sort()
	for p in m.getContents():
		tmp += p.getSize()
	print float(tmp)/memSize  # fraction of memory actually in use at the end
	# Build a 0/1 array over the whole memory: 1 where a program sits, 0 where there is a hole.
	memArray = [0 for j in range(m.getContents()[0].getPosition())]
	for i in xrange(len(m.getContents())-1):
		memArray.extend([1 for j in range(m.getContents()[i].getSize())])
		memArray.extend([0 for j in range(m.getContents()[i+1].getPosition()-(m.contents[i].getPosition() + m.contents[i].getSize()))])
	memArray.extend([1 for j in range(m.getContents()[m.numPrograms()-1].getSize())])
	memArray.extend([0 for j in range(memSize - len(memArray))])
	x = [i for i in range(memSize)]
	ymin = [0 for i in range(memSize)]
	a = np.array(memArray)
	pp.vlines(x, ymin, a)
	pp.ylim(0,1)
	pp.show()
	
simulate(100, 100)

Lastly I just simulate for 100 time steps. At each time step I try to stick in one randomly generated program. I then make all the time-to-deletion parameters decrease. I check whether any are 0 and remove those. I sort the list, and then repeat.

Here are some of the final results:

[Figure: frag1 — final memory layout from one simulation run]

This one had 66% memory use, so external fragmentation led to about 34% waste.

[Figure: frag2 — final memory layout from another run]

This one was slightly better at 69% use.

[Figure: frag3 — final memory layout from another run]

This one was quite a bit worse at 43% use and hence 57% waste. The full code can be found here. It might be fun to play around with different sizes of memory and lengths of time to delete stuff. Fragmentation will obviously not be a huge problem if the amount of space you try to use is sufficiently less than your total memory.

On the other hand, as these simulations show, the answer to the title question is that it can be pretty bad. These simulations are actually accurate (if my textbook is correct), because you can mathematically prove that the expected space you waste with this algorithm is roughly 33% (I ran a lot more simulations than I showed you and this was what I usually saw).

There is no need to fear, though. Old operating systems did use some variant on this, but all modern operating systems use something called “paging” which completely avoids external fragmentation (there is something called internal fragmentation that I didn’t talk about). I hope you enjoyed this change of pace. I had fun doing it. It may look like it was a lot of work, but the program was done in one sitting. If anyone knows how to make an “animation” where you can watch the whole process unfold that might be cool. I googled a bit, and couldn’t find an obvious way to do it.

Update 5/17/14: I pushed a cleaned up version of this code to github. It is amazing what one night of sleeping on something will do. I got rid of most of the indices using itertools, and I realized that the proper helper function was to just make a method that returns the right endpoint of the position of a program in memory. Then finding a hole size was just subtracting and could be reused everywhere.
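For anyone curious, the helper described above amounts to something like this sketch (the method names are mine and may not match what’s in the repository):

class Memory(object):
	# ... same class as before ...

	def rightEndpoint(self, p):
		# One past the last memory unit that program p occupies.
		return p.getPosition() + p.getSize()

	def holeAfter(self, i):
		# Size of the gap between the i-th program and the next one (or the end of memory).
		nextStart = self.contents[i+1].getPosition() if i + 1 < self.numPrograms() else self.size
		return nextStart - self.rightEndpoint(self.contents[i])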

Just for fun, I decided I would “prove” to you that this algorithm is only bad when you try to use a sufficiently large fraction of your memory (i.e. probably never on a modern computer!). Here’s such a simulation result, where your programs are too small for the wasted space to ever matter:

[Figure: frag4 — simulation result with small total memory usage]
