The operating system, which we use every day, is the "lowest" level most people ever touch, even programmers: it hides the machine's details and presents a high-level abstraction of a cold machine executing bit streams. In this article, I'll cover the basic concepts of operating systems: how the OS originated, the different kinds of OSes, how a modern OS is organized, and the OS kernel.
How the Operating System Originated
As I illustrated in another article of mine about the Von Neumann computer architecture, data are stored in binary, and the CPU works as a fetch-decode-execute finite state machine. Facing such an obscure machine, early computer professionals wrote binary streams by hand and punched them onto paper cards, because a computer at that time was just a big calculator. However, as the scale and complexity of computers and software grew, problems such as the Software Crisis started occurring. Computer scientists felt obliged to develop a human-readable language that could be translated into binary machine instructions, and to develop software to manage tasks and jobs in order to reduce the human workload. The former became the compiler, and the latter the operating system. Here's an interesting philosophical dilemma: "which came first, the chicken or the egg?" Looking back at our topic, you may wonder: "which came first, the compiler or the operating system?" Modern OSes such as Unix and Windows are largely written in C, and a C compiler runs as an application inside an OS. Sounds interesting, doesn't it? I'm not going to say more on this issue in this article; I believe that after you build a solid knowledge hierarchy in computer architecture, operating systems, and compilers, you'll get the answer by yourself ;)
What Is an Operating System
It's hard to give a very accurate definition of an operating system. In general, an operating system is software running in the kernel mode of a processor; it hides the obscure details of the hardware and provides a high-level abstraction of it. Data in a computer are not human-readable, and part of what an operating system does is present those data in a human-readable form. Modern operating systems have the following functionalities:
(1) Presenting a "virtual machine": a user-friendly abstraction of the complicated machine in hardware.
(2) Managing the resources of a computer, such as processor time, memory, devices, and files. Without an OS, we would need to manage those resources ourselves: decide the execution order of programs, calculate how much memory a program needs and which chunks of memory are available when writing it, and arbitrate which program gets to use a peripheral device first.
Operating System Organization
A modern general-purpose operating system can be roughly divided into the following parts: kernel, system calls, runtime libraries, and shell. Different OSes vary in structure, as I'll illustrate in the next section; the picture below shows the rough structure of a modern operating system.
The kernel, as a set of programs, is the core of the whole OS and has the privilege to run all instructions of a given CPU. The kernel provides important functionality such as CPU resource management, memory management, and device management, and it exposes system calls for the upper layers to interact with the machine.
A system call is a function provided by the kernel that the shell or a programmer can use to "access" the machine. For example, in UNIX, read is a system call that fetches data from a file or device such as a hard disk drive; the Unix shell can use it, and programmers can also use it in their programs.
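As a minimal sketch (assuming a POSIX system; the path under /tmp is purely illustrative), the snippet below writes a message to a file and reads it back using raw system calls, with no C library buffering in between:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a message to a file and read it back using raw POSIX
   system calls (open/write/read/close). Returns the number of
   bytes read back, or -1 on error. */
int demo_rw(const char *path) {
    const char msg[] = "hello from a system call";
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0) return -1;
    write(fd, msg, strlen(msg));   /* write() traps into the kernel */
    close(fd);

    char buf[64];
    fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    ssize_t n = read(fd, buf, sizeof buf);  /* read() is a system call too */
    close(fd);
    return (int)n;
}
```

Each of these calls crosses the user/kernel boundary once, which is exactly the cost the runtime library tries to amortize with buffering.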
A runtime library, also a set of low-level routines, provides functions for the shell and programmers to use. The difference between a runtime library and the system calls is that the runtime library is built entirely on top of system calls. For example, the well-known C runtime library function malloc performs dynamic memory allocation, but malloc is implemented differently on different systems. On Linux, when a program requests less than 128 KB of dynamic memory, malloc invokes the Linux system call brk; when more than 128 KB is requested, the system call mmap is invoked instead. On Windows, things are entirely different. Thus, a runtime library is a layer of routines above the system calls. To implement some functionality, a programmer can use either library functions or system calls; for example, to read a file in a program, we can use either the Linux system call read or fread from glibc (an implementation of the C standard library). Needless to say, using library functions usually makes it easier to port a program between different platforms.
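As a small illustration of the library route, the sketch below counts the bytes in a file using only portable stdio calls (fopen/fread/fclose); on Linux these are ultimately implemented on top of open and read, but the same code compiles unchanged on Windows:

```c
#include <stdio.h>

/* Count the bytes in a file using the portable C library call
   fread(), whose buffering hides the underlying system calls. */
long file_size_stdio(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    char buf[4096];
    long total = 0;
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)  /* buffered reads */
        total += (long)n;
    fclose(f);
    return total;
}
```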
The shell is a user interface for accessing the services of an operating system. A shell can be graphical (GUI) or command-line (CLI).
What's inside the kernel
The kernel is the core of the whole operating system, as just illustrated, and it interacts directly with the hardware. It implements policies on top of the mechanisms provided by the processor, such as process and thread scheduling, memory management, and device management. The picture below illustrates the components inside an operating system.
From the picture above, we can get a more detailed idea of how the kernel works:
Interacting with the CPU
Every program on an operating system runs as one or more processes. A process is an abstraction of a running program, containing basic information about that program such as its code, state, and priority. Each process has its own virtual memory space, and the memory-management module in the kernel is responsible for translating virtual addresses to physical addresses; that is why, when we compile a C program in a 32-bit Linux environment, the code addresses always start around 0x8048... There is also a finer-grained abstraction of a running program called the thread. I'll talk about the relationship and differences between processes and threads in the next several blogs.
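A minimal sketch of the process abstraction (assuming a POSIX system): fork duplicates the calling process, and the parent can wait for the child and observe its exit status. The exit code 42 here is arbitrary, chosen just to make the round trip visible:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child process; the child exits with a known status,
   and the parent blocks until the child terminates and then
   reports that status (-1 on any failure). */
int spawn_and_wait(void) {
    pid_t pid = fork();          /* one process becomes two */
    if (pid < 0)
        return -1;               /* fork failed */
    if (pid == 0)
        _exit(42);               /* child: terminate immediately */
    int status = 0;
    waitpid(pid, &status, 0);    /* parent: wait for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```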
Interacting with Memory
As just said, each process has its own virtual address space, so it is the operating system that is responsible for memory management. The advantage of virtual address spaces is that programmers can write programs without worrying about memory-management issues; in addition, separate memory spaces improve the security of the system. To manage memory and translate virtual addresses to physical addresses, there are two major approaches: paging and segmentation. I'll discuss them in detail in the following blogs. In addition, single-address-space designs have been put forward by some scholars; I read a paper about this in my advanced operating systems class.
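To make the paging idea concrete, here is a toy sketch, assuming 32-bit addresses and 4 KB pages (the tiny page table in the usage below is invented purely for illustration): a virtual address splits into a page number and an offset, and translation swaps the page number for a physical frame number:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                       /* 4 KB pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* Split a 32-bit virtual address into page number and offset. */
uint32_t page_number(uint32_t vaddr) { return vaddr >> PAGE_SHIFT; }
uint32_t page_offset(uint32_t vaddr) { return vaddr & PAGE_MASK; }

/* Toy translation: look the page up in a page table that maps
   page numbers to physical frame numbers, keep the offset. */
uint32_t translate(uint32_t vaddr, const uint32_t *frame_of_page) {
    return (frame_of_page[page_number(vaddr)] << PAGE_SHIFT)
         | page_offset(vaddr);
}
```

For instance, the familiar 32-bit load address 0x08048123 is offset 0x123 within virtual page 0x08048; a real MMU does this lookup in hardware, with the kernel maintaining the tables.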
Interacting with Disks
Disks, as peripheral devices, are slow components in a computer system. Hard disk drives and solid state drives have their own disk or SSD controllers, which hide mechanical details and present a fairly abstract interface to the chipset. What the block device driver in an operating system does is interact with the chipset connected to the drive and provide an even more abstract interface to the upper levels. For example, when we need to access a file on a hard disk drive, we invoke an I/O system call (typically read() or write()); then, in kernel mode, the file's path is translated into an LBA (Logical Block Address). When the command reaches the hard disk, the LBA is translated into a CHS address (Cylinder-Head-Sector), so the disk can choose the cylinder, move its head, and select the data (file) that the program wants.
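The LBA-to-CHS step can be sketched with the classic geometry arithmetic; the heads-per-cylinder and sectors-per-track values are properties of the drive and are taken as parameters here (the 16/63 geometry in the usage note is a common but illustrative choice):

```c
#include <stdint.h>

struct chs { uint32_t cylinder, head, sector; };

/* Classic LBA -> CHS translation given the drive geometry:
   heads per cylinder and sectors per track. By convention,
   sector numbering starts at 1, not 0. */
struct chs lba_to_chs(uint32_t lba, uint32_t heads, uint32_t spt) {
    struct chs r;
    r.cylinder = lba / (heads * spt);
    r.head     = (lba / spt) % heads;
    r.sector   = lba % spt + 1;
    return r;
}
```

With 16 heads and 63 sectors per track, LBA 0 maps to cylinder 0, head 0, sector 1, and LBA 63 starts the next track: cylinder 0, head 1, sector 1.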
Interacting with Input and Output Devices
Input devices are usually the keyboard and mouse, while output devices are typically the printer and screen. The details of these devices are quite difficult to describe; if possible, I'll write another article on character device drivers and GPU-related topics.
Interacting with Network
Since the Internet emerged in the 1990s, the world has become connected, and a computer without a network connection has limited functionality. Connecting to the Internet is the responsibility of the operating system. For example, when we type www.google.com into a web browser, the browser invokes system calls to tell the operating system that it wants to connect to the given host, on port 80, with HTTP as the application-level protocol. The network stack and NIC driver then encapsulate the data, layer by layer, down to the physical level (bit streams), ready to be sent out through the Network Interface Card (NIC). The procedure of network access is extremely complicated, and this is only an extreme abstraction of how it works :)
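As an extremely simplified sketch of the application's side of that story, the snippet below only formats the HTTP request bytes; a real browser would additionally resolve the host via DNS, create a TCP socket with socket(AF_INET, SOCK_STREAM, 0), and connect() to port 80 before handing these bytes to the kernel with a send or write system call:

```c
#include <stdio.h>
#include <string.h>

/* Format a minimal HTTP/1.1 GET request for the given host into
   buf. Returns the request length, or -1 if buf is too small. */
int build_get_request(char *buf, size_t cap, const char *host) {
    int n = snprintf(buf, cap,
                     "GET / HTTP/1.1\r\n"
                     "Host: %s\r\n"
                     "Connection: close\r\n"
                     "\r\n",                 /* blank line ends the headers */
                     host);
    return (n > 0 && (size_t)n < cap) ? n : -1;
}
```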
Monolithic-Kernel vs. Micro-Kernel
A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. Everything that most programs need but that cannot be put in a library lives in kernel space: device drivers, the scheduler, memory handling, file systems, network stacks. Most Unix-like operating systems have a monolithic kernel. The advantage of a monolithic kernel is that it is comparatively easy to implement; the original UNIX kernel by Ken Thompson was monolithic. However, this design has several flaws and limitations. Debugging is difficult, since a bug in kernel code can bring down the whole machine, and you don't want to be running as root just to debug what amounts to a simple program. Components in a monolithic kernel are tightly coupled, so a bug in one component can easily affect the others. Finally, monolithic kernels are often very large and difficult to maintain.
A microkernel provides only very basic mechanisms, with as little policy in the kernel as possible; the bulk of the OS is then implemented as user-level processes. The minimal mechanisms a microkernel typically provides are processes and threads, memory address spaces, and IPC. The advantage of a microkernel is obvious: components are loosely coupled and highly cohesive, so debugging and maintaining a microkernel is much easier than doing so for a monolithic kernel. However, the cost of a microkernel is the overhead of invoking system services. Suppose the file system is implemented in user space: every time we need to access the disk, we trap into the kernel, and the kernel then interacts with the file-system service in user space through IPC.
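That IPC round trip can be modeled in miniature with ordinary processes and pipes; this is a toy stand-in for microkernel message passing (the one-byte "protocol" is invented for illustration), not how any real microkernel implements it:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Toy model of microkernel-style IPC: the "service" runs in a
   separate process, and the client reaches it only via messages
   (here, one byte each way over a pair of pipes). The service
   "handles" a request by incrementing the byte it received. */
int ipc_roundtrip(void) {
    int req[2], rep[2];
    if (pipe(req) < 0 || pipe(rep) < 0) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                  /* server process */
        char c;
        read(req[0], &c, 1);         /* receive the request */
        c++;                         /* do the "work" */
        write(rep[1], &c, 1);        /* send the reply */
        _exit(0);
    }
    char c = 5;
    write(req[1], &c, 1);            /* client: send request */
    read(rep[0], &c, 1);             /* client: block for reply */
    waitpid(pid, NULL, 0);
    return c;
}
```

Every such exchange costs context switches and data copies that a monolithic kernel avoids with a plain function call, which is exactly the overhead discussed above.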
Modern operating systems, such as macOS and Windows NT, combine the advantages of monolithic and micro-kernels; they are therefore called hybrid kernels or modular kernels. A hybrid kernel is essentially a microkernel with some "non-essential" code moved into kernel space so that it runs faster than it would in user space. These designs represent a compromise adopted by some developers before it was demonstrated that pure microkernels can provide high performance: some services (such as the network stack or the file system) run in kernel space to reduce the performance overhead of a traditional microkernel, while other kernel code (such as some device drivers) still runs as servers in user space.
One thing that needs to be emphasized is that traditionally monolithic kernels now at least support (if not actively exploit) loadable modules; the best-known of these kernels is the Linux kernel. Interestingly, the design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous debate between Linus Torvalds and Andrew Tanenbaum. A modular kernel can have parts built into the core kernel binary and parts loaded into memory on demand. It is important to note that a buggy or tainted module can destabilize a running kernel, and many people get confused on this point when discussing microkernels. With a microkernel, it is possible to write a driver in a completely separate memory space and test it before it "goes" live; with a modular monolithic kernel, a loaded module joins the kernel's own memory space, opening the door to possible pollution.