The memory system, which holds everything (data and instructions) the machine needs to run correctly, plays an important role in an operating system. In this article, I'll cover some basics of the memory system and memory management from an operating system's perspective.
Segmentation and Swapping
Segmentation is a mechanism that divides physical memory into chunks of different sizes according to how many bytes of memory a process needs. A segment is described by two values, base and limit: the base specifies the starting address of the segment, and the limit specifies its maximum size. Since segments vary in size, segmentation is quite flexible when allocating memory and causes no internal fragmentation. However, its disadvantages are also obvious, such as causing more and more external fragmentation as time goes by. (One solution is to periodically move segments together and squeeze out the unused holes between them, which is called memory compaction.) The following picture shows the segmentation principles.
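To make this concrete, here is a minimal sketch in C of how a segment's base and limit are checked during address translation. The struct and function names are mine, not from any real OS:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical segment descriptor: base and limit, as described above. */
typedef struct {
    uint32_t base;   /* starting physical address of the segment */
    uint32_t limit;  /* maximum size of the segment in bytes */
} segment_t;

/* Translate an offset within a segment to a physical address.
 * Returns false (a protection fault in real hardware) if the offset
 * exceeds the segment limit. */
bool seg_translate(const segment_t *seg, uint32_t offset, uint32_t *phys)
{
    if (offset >= seg->limit)
        return false;        /* out of bounds: raise a fault */
    *phys = seg->base + offset;
    return true;
}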
Swapping can be implemented on top of segmentation. A process can be temporarily swapped out of memory to the hard drive, and later swapped back into memory for continued execution; this is called swapping. Deciding which process should be swapped out can be based on its running time and priority. There are three frequently used placement (fit) algorithms for segments:
First Fit: Always choose the first available hole in memory that is large enough.
Best Fit: Always choose the smallest hole whose size best fits the segment.
Worst Fit: Always choose the largest available hole in memory.
The picture above shows how these three methods work. First Fit keeps search overhead low; Best Fit and Worst Fit are trade-offs. It may seem that Best Fit would largely reduce external fragmentation, but that's not always true: it tends to leave behind tiny leftover holes that are too small to be useful.
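Here is a hedged sketch of how the three policies might scan a free list of holes; the hole_t structure and pick_hole function are hypothetical, just for illustration:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical free-list node describing one hole in memory. */
typedef struct hole {
    uint32_t start;
    uint32_t size;
    struct hole *next;
} hole_t;

typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT } fit_policy_t;

/* Scan the free list and pick a hole of at least `need` bytes
 * according to the chosen policy. Returns NULL if none fits. */
hole_t *pick_hole(hole_t *head, uint32_t need, fit_policy_t policy)
{
    hole_t *chosen = NULL;
    for (hole_t *h = head; h != NULL; h = h->next) {
        if (h->size < need)
            continue;
        if (policy == FIRST_FIT)
            return h;                       /* first hole that fits */
        if (chosen == NULL ||
            (policy == BEST_FIT  && h->size < chosen->size) ||
            (policy == WORST_FIT && h->size > chosen->size))
            chosen = h;                     /* smallest or largest so far */
    }
    return chosen;
}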
Paging and Virtual Memory
In segmentation, the memory of a process must be contiguous, while paging is a totally different mechanism. In paging, physical memory is divided into fixed-size blocks called frames, while logical memory is divided into blocks of the same size called pages. Each process has its own address space, which can be as large as the capacity of physical memory; this gives the programmer the illusion that the process can use the whole physical memory. It is the hardware and the operating system that are responsible for mapping virtual pages to physical frames, which is the key part of virtual memory. The following picture shows how paging works.
Page Table, TLB and MMU
Since both physical and logical memory are divided into fixed-size chunks (frames and pages respectively), and each page maps to a frame, we need a data structure to store these mappings, which is called the page table. The TLB (Translation Lookaside Buffer) is a hardware cache holding part of the page table to reduce page-table lookup time, just as the i-cache and d-cache do for instructions and data in memory. The MMU (Memory Management Unit) is the hardware that performs the translation and has special access to the page table in main memory. Using the virtual page number as an index, the MMU looks in the TLB first; on a TLB miss, it looks in the page table. Having obtained the physical frame number, it appends the in-page offset and finally gets the physical address corresponding to the virtual address. For more detailed knowledge on this part, please see another two posts.
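The following sketch shows this lookup order for a 32-bit address with 4KB pages. The direct-mapped TLB and the flat page table here are simplified assumptions of mine, not how any particular MMU is built:

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                        /* 4KB pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
#define TLB_SIZE   64                        /* hypothetical TLB capacity */

/* A direct-mapped TLB entry: tag = virtual page number, data = frame. */
typedef struct { uint32_t vpn, pfn; bool valid; } tlb_entry_t;
static tlb_entry_t tlb[TLB_SIZE];

/* Flat single-level page table: index = VPN, value = PFN (0 = unmapped).
 * 2^20 entries * 4B = the 4MB table discussed in the next section. */
static uint32_t page_table[1u << 20];

/* Translate a 32-bit virtual address: try the TLB first; on a miss,
 * walk the page table and refill the TLB with the result. */
bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    uint32_t off = vaddr & PAGE_MASK;
    tlb_entry_t *e = &tlb[vpn % TLB_SIZE];

    if (!(e->valid && e->vpn == vpn)) {      /* TLB miss */
        uint32_t pfn = page_table[vpn];
        if (pfn == 0)
            return false;                    /* unmapped: page fault */
        *e = (tlb_entry_t){ .vpn = vpn, .pfn = pfn, .valid = true };
    }
    *paddr = (e->pfn << PAGE_SHIFT) | off;
    return true;
}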
Single Page Table
A single-level page table is easy to implement and manage; however, its storage overhead is pretty high. Let's say a machine has 32-bit address and data buses, and the page size and frame size are both 4KB. Since each process can address the whole 4GB memory space and the in-page (or in-frame) offset takes 12 bits, the virtual page number takes 32 - 12 = 20 bits, so we need 2^20 entries in the page table of a process. Each PTE (Page Table Entry) takes 4 bytes (32 bits), so the page table takes up 2^20 * 4B = 4MB. That means we must create a 4MB page table the moment the process starts; because the entries are contiguous, we cannot create some parts at the beginning and add the rest later. Most of the page table entries are never used, which is a total waste of memory.
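To make the arithmetic concrete, this tiny program recomputes the numbers above (assuming 32-bit addresses, 4KB pages, and 4-byte PTEs, as in the text):

#include <stdio.h>

int main(void) {
    unsigned addr_bits = 32, page_shift = 12, pte_bytes = 4;
    unsigned long entries = 1UL << (addr_bits - page_shift);  /* 2^20 pages */
    unsigned long bytes   = entries * pte_bytes;              /* 4MB table */
    printf("entries = %lu, page table = %lu KB\n", entries, bytes / 1024);
    return 0;
}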
Multi-level Page Table
A multi-level page table uses an indexing method. Looking back at the single-level page table above, the page size is 4KB (12 bits for the in-page offset). We split the remaining 20 bits into two 10-bit fields that index the page directory and the page table respectively. Both PDEs (Page Directory Entries) and PTEs (Page Table Entries) are 4 bytes. The picture below shows the structure of a two-level page table.
When I first learned paging in my OS class, I had a hard time understanding why a multi-level page table could save space compared with a single-level one. In fact, a fully populated multi-level page table does NOT save space and may even take 4KB more than a single-level page table in the worst case. However, most programs only use a small fraction of the 4GB address space; thus, we can create only the 4KB page directory plus the few 4KB page tables we currently need. If the process needs more in the future, the only thing we have to do is add the 4KB-aligned physical address of a new page table into the corresponding entry of the page directory. It is this indexing mechanism that makes the size of a multi-level page table more flexible than that of a one-level page table.
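Here is a minimal sketch of the 10/10/12 split and the lazy allocation of second-level tables; the names and the NULL-means-absent convention are my own simplifications:

#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

#define PDE_SHIFT 22        /* top 10 bits index the page directory */
#define PTE_SHIFT 12        /* next 10 bits index a second-level page table */
#define IDX_MASK  0x3FFu    /* 10-bit index */
#define OFF_MASK  0xFFFu    /* 12-bit in-page offset */

/* Page directory: 1024 pointers to second-level tables. A NULL entry
 * means no table exists yet for that 4MB region of the address space. */
static uint32_t *page_dir[1024];

bool translate2(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t *pt = page_dir[(vaddr >> PDE_SHIFT) & IDX_MASK];
    uint32_t pte = (vaddr >> PTE_SHIFT) & IDX_MASK;
    if (pt == NULL || pt[pte] == 0)
        return false;                       /* page fault: no mapping */
    *paddr = (pt[pte] << PTE_SHIFT) | (vaddr & OFF_MASK);
    return true;
}

/* Map one page, allocating the 4KB second-level table lazily; this lazy
 * allocation is exactly where the two-level scheme saves space. */
void map_page(uint32_t vaddr, uint32_t pfn)
{
    uint32_t i = (vaddr >> PDE_SHIFT) & IDX_MASK;
    if (page_dir[i] == NULL)
        page_dir[i] = calloc(1024, sizeof(uint32_t));  /* one 4KB table */
    page_dir[i][(vaddr >> PTE_SHIFT) & IDX_MASK] = pfn;
}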
Now I'm gonna use a simple example to show how a multi-level page table can save more space than a single-level page table. Look at the following recursive function calculating the Fibonacci sequence (no tail recursion):
/* test.c */
#include <stdio.h>

int fib(int n) {
    int a[20]; // a[20] Just for fun :)
    if (n == 1 || n == 2)
        return 1;
    else
        return fib(n-1) + fib(n-2);
}

int main() {
    int n, m;
    scanf("%d", &n);
    m = fib(n);
    return 0;
}
When this source code is compiled, the compiler doesn't know how many activation records of fib() will be on the stack until the user's input is passed into the first fib() instance. In a single-level page table system, no matter whether n is small or large, the 4MB page table must be created up front. In a two-level page table system, when n is large enough that the process takes up the whole 4GB memory, the page table is only several KB at first, grows gradually, and ends up at 4MB + 4KB; however, if n is small, the process only needs a few KB for its page tables.
Inverted Page Table
In the previous sections we've discussed multi-level page tables and their advantages on 32-bit machines. However, on 64-bit machines there are several problems, whether we use a single-level or a multi-level page table. If we use a single-level page table and the page size is still 4KB, we need 2^52 * 8B = 32PB per process; if we use a multi-level page table, we need 5-6 memory accesses per translation, which is terrible for performance.
One possible solution is to use an inverted page table. In a traditional page table, the key is the virtual page and the value is the physical frame, while in an inverted page table, the key is the physical frame and the value is the virtual page. The figure below shows how an inverted page table works.
The advantage of an inverted page table is obvious: its size is proportional to physical memory rather than to each process's address space, which greatly reduces the space overhead of the page table. However, its disadvantages are also noticeable. First, the lookup overhead may be very high if we search all entries in a fully associative way. To avoid this, we can use a hash table to restrict the mapping rules. Second, if we use a hash table, entry conflicts exist, and it is difficult to share memory between different processes.
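Below is a rough sketch of a hashed inverted page table lookup; the entry layout, hash function, and chaining scheme are assumptions for illustration, not a real design:

#include <stdint.h>
#include <stdbool.h>

#define NFRAMES 4096        /* hypothetical number of physical frames */

/* One entry per physical frame: which (process, virtual page) holds it. */
typedef struct {
    uint32_t pid;
    uint64_t vpn;
    int32_t  next;          /* next frame in the same hash chain, -1 = end */
    bool     used;
} ipt_entry_t;

static ipt_entry_t ipt[NFRAMES];
static int32_t hash_anchor[NFRAMES];   /* hash bucket -> first frame in chain */

void ipt_init(void)
{
    for (int i = 0; i < NFRAMES; i++)
        hash_anchor[i] = -1;
}

static uint32_t hash(uint32_t pid, uint64_t vpn)
{
    return (uint32_t)((vpn ^ pid) * 2654435761u) % NFRAMES;
}

/* Look up (pid, vpn) by following one hash chain instead of scanning
 * all frames; returns the frame number, or -1 on a miss (page fault). */
int32_t ipt_lookup(uint32_t pid, uint64_t vpn)
{
    for (int32_t f = hash_anchor[hash(pid, vpn)]; f != -1; f = ipt[f].next)
        if (ipt[f].used && ipt[f].pid == pid && ipt[f].vpn == vpn)
            return f;
    return -1;
}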
Page Replacement Strategies
The virtual memory space is designed to be larger than the physical memory space. Pages of different processes are swapped in and out between RAM and disk, and the swapping overhead is very high, so it's very important to design an efficient page replacement algorithm. I'll introduce several frequently used page replacement strategies.
First In First Out (FIFO)
This is one of the simplest strategies, as its name suggests. The page that came in first is swapped out first when physical memory is full. This strategy is easy to implement, but sometimes its performance sucks, because frequently used pages that happened to come in first will still be swapped out by FIFO.
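A minimal sketch of FIFO replacement (the names are hypothetical); note that a hit doesn't change the queue order, which is exactly why FIFO can evict hot pages:

#define NFRAMES 4

/* Assume frames[] is initialized to -1 (meaning "empty") before use. */
static int frames[NFRAMES];
static int next_victim = 0;   /* index of the oldest frame */

/* Insert a page on a fault; returns the evicted page, or -1 if the
 * chosen frame was still empty. A hit needs no action here: FIFO
 * ignores how recently or how often a resident page is used. */
int fifo_insert(int page)
{
    int evicted = frames[next_victim];
    frames[next_victim] = page;
    next_victim = (next_victim + 1) % NFRAMES;
    return evicted;
}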
Not Recently Used (NRU)
This strategy classifies pages by their recent usage and discards a page that has not been recently used. To implement NRU, we keep two bits per page, R and M: the R bit is set whenever the page is accessed (read or write) and is cleared periodically by the OS, while the M bit is set when the page is written, i.e., the page is dirty. Pages then fall into four classes according to (R, M), and NRU evicts a page from the lowest non-empty class.
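A small sketch of how the (R, M) bits map to the four NRU classes and how a victim might be picked (the helper names are mine):

#include <stdbool.h>

/* Per-page status bits, maintained as described above. */
typedef struct {
    bool r;   /* referenced: set on any access, cleared periodically by the OS */
    bool m;   /* modified: set on a write; the page is dirty */
} page_bits_t;

/* NRU class 0 (R=0,M=0) is the best victim, class 3 (R=1,M=1) the worst. */
int nru_class(const page_bits_t *p)
{
    return (p->r ? 2 : 0) + (p->m ? 1 : 0);
}

/* Pick a victim: any page from the lowest non-empty class; here we
 * simply take the first one found. Returns -1 if there are no pages. */
int nru_victim(const page_bits_t pages[], int n)
{
    int victim = -1, best = 4;
    for (int i = 0; i < n; i++) {
        int c = nru_class(&pages[i]);
        if (c < best) { best = c; victim = i; }
    }
    return victim;
}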
Least Recently Used (LRU)
This strategy keeps track of the usage of each page and always discards the least recently used one. It is complicated to implement in software, for we need to maintain a linked list of pages: frequently used pages stay near the head of the list while the least recently used pages sink to the tail. Every memory access triggers a move within this linked list, so the overhead is quite high. We could use hardware to reduce the overhead.
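A common software approximation replaces the linked list with a logical timestamp per frame. This sketch (hypothetical names, assuming frames[] starts at -1) evicts the frame with the oldest timestamp:

#include <stdint.h>

#define NFRAMES 4

/* Assume frames[] is initialized to -1 (empty) before use. */
static int      frames[NFRAMES];
static uint64_t last_used[NFRAMES];  /* logical time of the most recent access */
static uint64_t now = 0;             /* bumped on every memory access */

/* On a hit, refresh the frame's timestamp and return -1; on a miss,
 * evict the frame with the oldest timestamp and return the evicted page. */
int lru_access(int page)
{
    int victim = 0;
    now++;
    for (int i = 0; i < NFRAMES; i++) {
        if (frames[i] == page) {
            last_used[i] = now;
            return -1;
        }
        if (last_used[i] < last_used[victim])
            victim = i;
    }
    int evicted = frames[victim];
    frames[victim] = page;
    last_used[victim] = now;
    return evicted;
}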
LRU seems to be a reasonable and effective method; however, it still suffers performance loss in the following situation. Assume we have 4 physical frames and 5 pages A, B, C, D, E, and the access pattern is ABCDEABCDE...ABCDE. Under LRU, this system will take a page fault and swap on every single access.
Least Frequently Used (LFU)
This strategy keeps a count of how often each page is used and always discards the least frequently used one. It is quite complex to implement efficiently.
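For comparison, a naive LFU sketch just keeps a counter per frame and evicts the smallest one (hypothetical names again; a real implementation must also age the counters, or pages that were hot long ago never leave):

#include <stdint.h>

#define NFRAMES 4

/* Assume frames[] is initialized to -1 (empty) before use. */
static int      frames[NFRAMES];
static uint64_t use_count[NFRAMES];  /* access counter per frame */

/* On a hit, bump the counter and return -1; on a miss, evict the frame
 * with the smallest counter and return the evicted page. */
int lfu_access(int page)
{
    int victim = 0;
    for (int i = 0; i < NFRAMES; i++) {
        if (frames[i] == page) {
            use_count[i]++;
            return -1;
        }
        if (use_count[i] < use_count[victim])
            victim = i;
    }
    int evicted = frames[victim];
    frames[victim] = page;
    use_count[victim] = 1;
    return evicted;
}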
Second Chance and Nth Chance
Working Set and Resident Set