Operating System Fundamentals: Memory, Scheduling, and Concurrency
Linux Memory Management Components
There are two primary components to memory management under Linux:
- The physical memory management system, which deals with allocating and freeing pages, groups of pages, and small blocks of memory.
- The virtual memory system, which handles memory mapped into the address space of running processes.
Physical Memory Management
The primary physical memory manager in the Linux kernel is the page allocator. This allocator is responsible for allocating and freeing all physical pages and is capable of allocating ranges of physically contiguous pages on request.
The Buddy Heap Algorithm
The allocator uses the buddy heap algorithm to keep track of available physical pages. A buddy-heap allocator pairs adjacent units of allocatable memory together, hence its name. Each allocatable memory region has an adjacent partner, or “buddy.” Whenever two allocated partner regions are both freed up, they are combined to form a larger region.
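The splitting and coalescing behavior can be sketched with a toy simulation (this is illustrative only, not the kernel's actual implementation). It manages a 16-unit region with one free list per block order; a block's buddy is found by flipping a single bit of its offset.

```python
# Minimal buddy-allocator simulation. All sizes are powers of two.
MAX_ORDER = 4                      # total memory = 2**4 = 16 units
free_lists = {k: [] for k in range(MAX_ORDER + 1)}
free_lists[MAX_ORDER].append(0)    # start with one free 16-unit block at offset 0

def alloc(order):
    """Allocate a block of 2**order units, splitting larger blocks as needed."""
    for k in range(order, MAX_ORDER + 1):
        if free_lists[k]:
            offset = free_lists[k].pop()
            while k > order:                         # split until the right size
                k -= 1
                free_lists[k].append(offset + 2**k)  # free the upper half (the buddy)
            return offset
    raise MemoryError("no block large enough")

def free(offset, order):
    """Free a block, coalescing with its buddy whenever both are free."""
    while order < MAX_ORDER:
        buddy = offset ^ (1 << order)      # buddy address differs in exactly one bit
        if buddy in free_lists[order]:
            free_lists[order].remove(buddy)
            offset = min(offset, buddy)    # merged block starts at the lower address
            order += 1
        else:
            break
    free_lists[order].append(offset)

a = alloc(2)   # 4-unit block at offset 0 (splits the 16-unit block twice)
b = alloc(2)   # another 4-unit block, at offset 4
free(a, 2)
free(b, 2)     # buddies coalesce back up into the original 16-unit block
print(free_lists[MAX_ORDER])   # -> [0]
```

When both 4-unit partners are freed, they merge into an 8-unit block, which then merges with its own buddy, restoring the single 16-unit region.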
Virtual Memory Management
The Linux virtual memory system is responsible for maintaining the address space visible to each process. It creates pages of virtual memory on demand and manages the loading of those pages from disk or their swapping back out to disk as required.
Under Linux, the virtual memory manager maintains two separate views of a process’s address space:
- As a set of separate regions.
- As a set of pages.
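The relationship between the two views can be sketched as follows: assuming a 4 KB page size, this hypothetical helper lists the page numbers that a single contiguous region spans.

```python
PAGE_SIZE = 4096   # assumed page size (4 KB, common on x86 Linux)

def pages_of_region(start, length):
    """Page numbers covered by a virtual memory region [start, start + length)."""
    first = start // PAGE_SIZE
    last = (start + length - 1) // PAGE_SIZE
    return list(range(first, last + 1))

# A region starting at 0x1000 spanning three pages maps to pages 1, 2, and 3.
print(pages_of_region(0x1000, 3 * PAGE_SIZE))   # -> [1, 2, 3]
```

One region (a mapped file, the stack, the heap) thus corresponds to a run of pages, each of which may independently be resident in memory or swapped out to disk.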
Operating System Scheduling Objectives
Many objectives must be considered in the design of an effective scheduling discipline. A robust scheduling discipline should aim to achieve the following goals:
- Be Fair: A scheduling discipline is fair if all processes are treated equally, and no process suffers indefinite postponement.
- Maximize Throughput: The discipline should attempt to service the largest possible number of processes per unit time.
- Maximize Responsiveness: Maximize the number of interactive users receiving acceptable response times (i.e., at most a few seconds).
- Be Predictable: A given job should run in about the same amount of time and at about the same cost, regardless of the load on the system.
- Minimize Overhead: While overhead is commonly viewed as wasted resources, a certain portion of system resources invested as overhead can greatly improve overall system performance.
- Balance Resource Use: The scheduling mechanisms should keep the resources of the system utilized efficiently.
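To make objectives such as throughput and fairness concrete, here is a toy calculation (the burst times are made up) for non-preemptive first-come-first-served scheduling, showing how the order of service alone changes average waiting time.

```python
def fcfs_metrics(bursts):
    """Throughput (jobs per ms) and average waiting time (ms) under FCFS.
    All processes are assumed to arrive at time 0."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)     # each process waits for every earlier burst
        t += b
    return len(bursts) / t, sum(waits) / len(waits)

print(fcfs_metrics([24, 3, 3]))   # long job first: average wait is 17 ms
print(fcfs_metrics([3, 3, 24]))   # short jobs first: average wait is 3 ms
```

Throughput over the whole run is identical in both cases, yet responsiveness for the short jobs differs sharply, which is why a scheduler must balance these objectives rather than optimize any one of them.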
UNIX File System Structure and Path Names
Absolute and Relative Path Names
The UNIX file system utilizes both absolute path names and relative path names.
- Absolute Path Names: These start at the root of the file system and are distinguished by a leading slash (/). Example: /usr/local/font.
- Relative Path Names: These start at the current directory, which is an attribute of the process accessing the path name. Example: local/font indicates a file or directory named font in the directory local within the current directory.
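The distinction can be illustrated with a short sketch using the standard-library posixpath module; the current directory /home/alice is a made-up example.

```python
import posixpath

cwd = "/home/alice"                 # hypothetical current directory of the process

def resolve(path, cwd):
    """Resolve a UNIX path name: absolute paths ignore cwd, relative ones use it."""
    if path.startswith("/"):        # a leading slash marks an absolute path name
        return posixpath.normpath(path)
    return posixpath.normpath(posixpath.join(cwd, path))

print(resolve("/usr/local/font", cwd))  # -> /usr/local/font
print(resolve("local/font", cwd))       # -> /home/alice/local/font
```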
Typical UNIX Directory Structure
A typical UNIX file system structure includes several standard directories:
- The root (/) normally contains a small number of directories as well as /vmunix, the binary boot image of the operating system.
- /dev contains the device special files, such as /dev/console, /dev/lp0, and /dev/mt0.
- /bin contains the binaries of the essential UNIX system programs. Other binaries may be located in /usr/bin or /usr/local/bin (for systems programs written at the local site).
- Library files (such as the C, Pascal, and FORTRAN subroutine libraries) are kept in /lib (or /usr/lib or /usr/local/lib).
The Critical Section Problem (Concurrency)
A Critical Section (CS) is a segment of code where a process may be changing common variables, updating a table, or writing a file. The task of ensuring that these segments behave atomically (allowing only one process access at a time) is known as the Critical Section Problem.
Solutions to the Critical Section Problem generally fall into two groups:
Hardware Solutions
Hardware solutions typically disable interrupts, preventing any other process from gaining the CPU while a Critical Section is executing.
Advantages of Hardware Solutions
- Does not require programming effort and is fast.
Disadvantages of Hardware Solutions
- Does not work reliably in a multiprocessor environment.
- Needs special hardware.
- Turning off interrupts creates danger for other interrupt-driven routines.
Software Solutions
Software solutions allow interrupts to occur but coordinate access to shared resources through shared variables: a software routine manages entry to the critical section, ensuring that the processes execute it one at a time.
Advantages of Software Solutions
- Special hardware is not required.
- Does not affect processes that are not concurrent.
- Works for multiple processors.
Disadvantages of Software Solutions
- Slow and complex.
Two-Process Solutions Example
Consider a system consisting of n processes {P0, P1, …, Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.
Algorithm for Two Processes (P0 and P1)
For convenience, when presenting Pi, we use Pj to denote the other process (where j = 1 - i).
Our first approach is to let the two processes share a common integer variable, turn, initialized to 0 or 1. If turn == i, then process Pi is allowed to execute in its critical section. This solution ensures that only one process at a time can be in its critical section, but it forces the processes to alternate strictly: if it is Pi's turn and Pi does not wish to enter, Pj must still wait.
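The turn-based protocol can be sketched with two Python threads standing in for P0 and P1 (the iteration count and the log list are illustrative additions). Because each thread busy-waits until turn holds its own index, the entries into the critical section alternate strictly.

```python
import threading

turn = 0            # shared variable: whose turn it is to enter the critical section
log = []            # records the order of critical-section entries

def process(i, iterations):
    global turn
    j = 1 - i                       # index of the other process
    for _ in range(iterations):
        while turn != i:            # entry section: busy-wait until it is our turn
            pass                    # (CPython preempts threads, so this loop yields)
        log.append(i)               # critical section: touch shared state
        turn = j                    # exit section: hand the turn to the other process

t0 = threading.Thread(target=process, args=(0, 5))
t1 = threading.Thread(target=process, args=(1, 5))
t0.start(); t1.start()
t0.join(); t1.join()
print(log)   # strict alternation: [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

Mutual exclusion holds, but so does the alternation requirement: neither process can enter twice in a row, even if the other never wants the critical section again.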
