Since I’ve been on a bit of a computer science kick lately (I even made a new “computer science” category to tag posts with) I thought I would do a follow up to my post on fragmentation.
I’ll summarize it so you don’t have to go reread it.
If you try to put stuff in memory in the most naive way, by just looking for the first place it fits, you end up with huge amounts of external fragmentation (wasting around 33% of your available space). I also said that no modern operating system would do it this way.
To appreciate how great the idea I’m about to describe is, take a moment and try to think of a different way of doing it.
When I learned about this, I actually tried and was stumped. Any algorithm I thought of was very similar to the first-fit one and left lots of external fragmentation. My hint is that the solution has no external fragmentation at all, and it requires a radically different way of thinking about the situation.
As far as I know, all modern operating systems solve the problem using something called paging.
The idea is actually really simple, even though the implementation can be quite confusing at first. You use something called a page table, which keeps track of where everything is located.
This allows you to physically break everything up so that the pieces all fit right next to each other, and you’ll have no external fragmentation.
Even though one chunk could then be spread all over the place physically, the page table allows you to logically think of it as one chunk, because you just look at the page table and read the addresses contiguously.
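Here's a toy sketch of that lookup (my own illustration, not real OS code — the page size and table entries are made up): a logical address splits into a page number and an offset, the page table maps the page number to a physical frame, and reading logical addresses in order walks scattered frames as if they were one contiguous chunk.

```python
# A toy page-table lookup. PAGE_SIZE and the table entries are invented
# for illustration; real pages are typically 4 KiB.
PAGE_SIZE = 4  # tiny pages to keep the numbers readable

# Hypothetical page table for one 3-page chunk: logical page -> physical frame
page_table = {0: 3, 1: 5, 2: 9}

def translate(logical_addr):
    """Split a logical address into (page, offset), then look up the frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# Logically contiguous addresses 0..11 land in physical frames 3, 5 and 9.
print([translate(a) for a in range(12)])
# → [12, 13, 14, 15, 20, 21, 22, 23, 36, 37, 38, 39]
```

From the program's point of view the chunk is addresses 0 through 11, one after another; only the translation knows the bytes are scattered.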
A page is just a chunk of some small, fixed size. Let’s suppose we want to put something that takes three pages of space into memory.
Maybe there aren’t 3 consecutive free pages left in memory, but there’s still room if you don’t look for them consecutively. Say the first 3 pages are taken, then two are available, then another is taken, and then there is room for the last page.
I’ve represented these used pages as red boxes on the physical side:
In some sense this is still the first-fit algorithm, but now the chunks are all the same size, and we break everything into those chunks first. We then just start sliding the pages in wherever they fit without ever having to worry about fitting the whole thing together.
You can use the page table to look up where the pages were placed in physical memory. It is really clever, because, as a user, you can still think of things as single units, but you never (externally) waste any space.
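The placement step can be sketched in a few lines (again, a made-up illustration — the frame layout matches the example above, with frames 0–2 and 5 already taken): the allocator just grabs the first free frames, wherever they happen to be, and records the mapping in a page table.

```python
# Toy paged allocator. The set of used frames mirrors the "red boxes"
# from the example: frames 0-2 taken, 3-4 free, 5 taken, the rest free.
NUM_FRAMES = 8
used = {0, 1, 2, 5}

def allocate(num_pages):
    """Place num_pages pages into the first free frames; return the page table."""
    free = [f for f in range(NUM_FRAMES) if f not in used]
    if len(free) < num_pages:
        raise MemoryError("not enough free frames")
    table = {page: frame for page, frame in enumerate(free[:num_pages])}
    used.update(table.values())
    return table

# The 3-page chunk from the example lands in frames 3, 4 and 6.
table = allocate(3)
print(table)  # → {0: 3, 1: 4, 2: 6}
```

Notice there's no searching for a hole "big enough" — every free frame is big enough by construction, which is exactly why external fragmentation disappears.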
There will still be a small amount of internal fragmentation, which comes from things not being perfect multiples of the page size (on average, about half a page wasted per allocation), but this is comparatively minuscule.
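To put a number on that, here's a quick back-of-the-envelope calculation with 4 KiB pages (a common real-world page size): the only waste is whatever's left over in the last, partially filled page.

```python
import math

PAGE_SIZE = 4096  # 4 KiB, a common page size

def internal_waste(size_bytes):
    """Bytes wasted in the final, partially filled page of an allocation."""
    pages = math.ceil(size_bytes / PAGE_SIZE)
    return pages * PAGE_SIZE - size_bytes

# A 10,000-byte allocation needs 3 pages (12,288 bytes), wasting 2,288 bytes.
print(internal_waste(10_000))  # → 2288
# An allocation that's an exact multiple of the page size wastes nothing.
print(internal_waste(8192))    # → 0
```

A couple of kilobytes per allocation is nothing next to the roughly one-third of all memory that first-fit could waste to external fragmentation.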
Anyway, that’s all for today. I didn’t want to leave you all hanging. I’m sure you were all dying to know how your computer got around the problem I raised in that post.