Operating System Memory & Deadlock Concepts Explained

Operating System Memory Management

Key Memory Management Concepts

Memory Management: How the OS handles allocation, tracking, and protection of memory used by processes.

Memory Abstraction: A way for the OS to give each process its own view of memory, isolating them and enabling multitasking.

Base Register: Stores the starting physical address of a process’s memory.

Limit Register: Specifies the range (size) of memory a process can access from its base.

Swapping: The act of moving processes in and out of main memory to free up space.

Fragmentation: Wasted memory space due to inefficient allocation (external = holes between blocks, internal = unused space inside blocks).

Paging: Dividing memory into fixed-size blocks (pages) that can be loaded into any physical frame.

Page Table: Data structure used by the OS to map virtual pages to physical frames.

MMU (Memory Management Unit): Hardware that translates virtual addresses to physical addresses.

TLB (Translation Lookaside Buffer): Small cache storing recent translations from virtual to physical addresses.

Virtual Memory: Technique allowing processes to use more memory than physically available by swapping pages between RAM and disk.

Page Fault: Occurs when a process accesses a page not currently in memory.

Segmentation: Dividing memory into variable-size logical segments (code, stack, heap) that grow independently.

Compaction: Rearranging memory to eliminate fragmentation by moving processes.

Page Replacement Algorithm: Strategy used to decide which page to evict when loading a new one into memory.

Working Set: Set of pages a process has referenced in its most recent k memory accesses.

Memory Abstraction and Protection

Programs need memory to run, and they tend to expand to fill available space (Parkinson’s Law). Without abstraction, processes would overwrite each other. Memory abstraction solves this by isolating memory per process.

The Problem: No Memory Abstraction

If multiple programs are loaded without memory abstraction, they can clash and overwrite each other. For example, two 16 KB programs loaded back to back both assume they start at address 0, so the second program’s jumps and data references land in the wrong places. The OS must therefore relocate each program’s addresses based on where it is actually loaded.

Base and Limit Registers

The Base Register stores the start address of a program, and the Limit Register stores its size. The CPU checks every memory access to ensure base ≤ address < base + limit (the upper bound is exclusive, since limit is a size). This mechanism prevents a process from touching another’s memory.
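The check the hardware performs on every access can be sketched in a few lines (a minimal Python model; `check_access` and the sample base/limit values are illustrative, not a real MMU interface):

```python
def check_access(address, base, limit):
    """Return the physical address if the program-relative (virtual)
    address falls inside [0, limit); otherwise raise a protection fault."""
    if 0 <= address < limit:
        return base + address  # relocation: physical = base + virtual
    raise MemoryError("protection fault: address out of bounds")

# A process loaded at base=0x4000 with limit=0x1000 (4 KB):
print(hex(check_access(0x0FFF, base=0x4000, limit=0x1000)))  # -> 0x4fff
```

Accessing address 0x1000 in this process would raise a fault, since the valid range is 0x0000 through 0x0FFF.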

Memory Allocation Strategies

Swapping: Swapping moves whole processes in and out of memory to manage space. Allocation must leave room for dynamic growth: if the data segment (heap) and the stack grow toward each other, they may collide, so extra space is typically reserved between them.

Allocation Algorithms

  • First Fit: Allocates the first hole large enough. Fast but can leave small, unusable gaps.
  • Next Fit: Like First Fit, but continues the search from the last allocation spot. Slightly better performance in some cases.
  • Best Fit: Finds the smallest hole that fits the request, leaving the smallest leftover space. Can lead to lots of tiny, unusable holes (fragmentation).
  • Worst Fit: Chooses the largest available hole, trying to avoid tiny fragments. Not very efficient in practice.
  • Quick Fit: Maintains a list of commonly requested sizes. Fast, but harder to keep lists updated after deallocation.
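The core strategies can be sketched in a toy Python model where `holes` is just a list of free-block sizes (`first_fit`, `best_fit`, and `worst_fit` are illustrative names, not a standard API):

```python
def first_fit(holes, size):
    """Return the index of the first hole >= size, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole >= size, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, if it fits."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # -> 1 (the 500-unit hole)
print(best_fit(holes, 212))   # -> 3 (the 300-unit hole, least leftover)
print(worst_fit(holes, 212))  # -> 4 (the 600-unit hole)
```

The same request illustrates the trade-offs: Best Fit leaves an 88-unit sliver behind, exactly the tiny-hole fragmentation described above.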

Free Space Management

  • Bitmaps: Each bit represents one memory block. 0 = free, 1 = used.
    • Pros: Simple, easy to find contiguous space.
    • Cons: Searching for a run of k consecutive free blocks is a slow linear scan, especially when memory is large.
  • Linked List: Each node represents a block of memory (start, size, and status). On process exit, adjacent free blocks can be merged.
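The bitmap’s weakness, scanning for a run of free blocks, is easy to see in a sketch (hypothetical Python; 0 = free, 1 = used, as above):

```python
def find_free_run(bitmap, k):
    """Scan for k consecutive 0 bits (free blocks); return the start
    index or None. This linear scan is why bitmaps get slow for large
    memories."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return None

bitmap = [1, 1, 0, 1, 0, 0, 0, 1]
print(find_free_run(bitmap, 3))  # -> 4 (three free blocks start at index 4)
```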

Virtual Memory and Paging

Virtual Memory: Lets programs run without needing all memory at once. Each process gets an independent virtual address space. Only needed pages are loaded. Pages are swapped in/out from disk (backing store).

Paging: Breaks memory into fixed-size pages (usually 4KB). Each virtual page maps to a physical frame. The OS maintains a page table per process. Paging prevents external fragmentation but needs address translation.

MMU (Memory Management Unit): Hardware that converts virtual to physical addresses. Handles base + offset calculation, valid/invalid checks, and access control (read/write/execute).
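A single-level translation can be modeled in a few lines (hypothetical Python; the `(frame, valid)` tuple stands in for a real page table entry):

```python
PAGE_SIZE = 4096  # 4 KB pages

def translate(page_table, vaddr):
    """Translate a virtual address via a single-level page table.
    page_table maps virtual page number (vpn) -> (frame_number, valid)."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)  # split address into page + offset
    frame, valid = page_table.get(vpn, (None, False))
    if not valid:
        # In a real OS this traps to the kernel's page fault handler.
        raise LookupError(f"page fault at vpn {vpn}")
    return frame * PAGE_SIZE + offset

table = {0: (5, True), 1: (9, True)}      # vpn 0 -> frame 5, vpn 1 -> frame 9
print(hex(translate(table, 0x1234)))      # vpn 1, offset 0x234 -> 0x9234
```

Note that only the page number is translated; the offset within the page passes through unchanged.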

Page Table Entry (PTE) Structure

Each entry in a page table holds:

  • Frame Number: The physical frame where the page resides.
  • Valid Bit: Indicates if the page is currently loaded in memory.
  • Dirty Bit: Indicates if the page has been modified since being loaded.
  • Referenced Bit: Indicates if the page has been used recently.
  • Protection Bits: Define access permissions (Read/Write/Execute).

Speeding Up Paging

  • Translation Lookaside Buffer (TLB): A cache that holds recent address translations. A TLB hit means fast access; a TLB miss requires checking the full page table.
  • Multilevel Page Tables: Splits large page tables into smaller levels (e.g., 2-level). Helps with sparse address spaces, saving memory.
  • Inverted Page Tables: One entry per physical frame, not virtual page. Saves memory but results in slower lookups (needs hashing).
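For a concrete multilevel example, a 32-bit address with 4 KB pages is commonly split 10 + 10 + 12 bits for a two-level table (an illustrative sketch; the exact split varies by architecture):

```python
def split_vaddr(vaddr):
    """Split a 32-bit virtual address into indices for a two-level
    page table with 4 KB pages: 10 (top) + 10 (second) + 12 (offset)."""
    offset = vaddr & 0xFFF           # low 12 bits: offset within the page
    pt2    = (vaddr >> 12) & 0x3FF   # next 10 bits: second-level index
    pt1    = (vaddr >> 22) & 0x3FF   # top 10 bits: top-level index
    return pt1, pt2, offset

print(split_vaddr(0x00403004))  # -> (1, 3, 4)
```

The memory saving comes from sparsity: second-level tables are only allocated for the top-level slots a process actually uses.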

Page Replacement Algorithms

When a page must be loaded and memory is full, one must be removed. These algorithms decide which page to evict:

  • Optimal (Not Implementable): Evicts the page whose next use lies furthest in the future. Requires knowing future references, so it serves only as a benchmark for other algorithms.
  • Not Recently Used (NRU): Uses R (Referenced) and M (Modified) bits to group pages. Replaces the page from the lowest class.
  • FIFO (First-In, First-Out): Removes the oldest loaded page. Simple but may evict frequently-used pages.
  • Second Chance: Like FIFO, but gives a page a second chance if its R bit is 1. If R=1, clears R bit and moves it to the end of the queue.
  • Clock Algorithm: Pages are arranged in a circular buffer. A pointer cycles through: if R=0, replace; if R=1, clear the bit and move on.
  • Least Recently Used (LRU): Replaces the least recently accessed page. Hard to implement exactly (needs timestamping).
  • Aging (LRU Approximation): Each page has an 8-bit counter. Shift right periodically, set the Most Significant Bit (MSB) to the R bit.
  • Working Set: Tracks all pages used in the last k accesses. Removes pages not in the working set.
  • WSClock: Combines Clock + Working Set, using the R bit, time of last use, and dirty bit.
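The clock algorithm’s pointer sweep can be sketched as follows (toy Python; `r_bits` models the per-frame Referenced bits):

```python
def clock_replace(frames, r_bits, hand):
    """One eviction round of the clock algorithm: advance the hand until a
    page with R=0 is found, clearing R=1 bits along the way (the "second
    chance"). Returns (victim frame index, new hand position)."""
    n = len(frames)
    while True:
        if r_bits[hand] == 0:
            return hand, (hand + 1) % n   # victim found
        r_bits[hand] = 0                  # clear R and move on
        hand = (hand + 1) % n

frames = ['A', 'B', 'C', 'D']
r_bits = [1, 0, 1, 1]
victim, hand = clock_replace(frames, r_bits, hand=0)
print(frames[victim])  # -> B (the first page with R=0; A's bit was cleared)
```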

Advanced Memory Management Topics

  • Local vs. Global Page Replacement:
    • Local: A process can only replace its own pages.
    • Global: Any page in memory can be evicted. Global replacement can cause process starvation.
  • Shared Pages & Libraries: Processes can share the same read-only code (like libraries). Each has its own data/stack, but shared text saves memory.

Page Fault Handling – 10 Steps

  1. Trap to Kernel.
  2. Save program state (registers, Program Counter).
  3. Identify the faulting virtual address.
  4. Check if it’s valid and access is allowed.
  5. If the victim frame is dirty, write it to disk.
  6. Look up the disk address of the needed page.
  7. Load the page from disk into memory.
  8. Update the page table entry (mark valid).
  9. Resume the process.
  10. Retry the instruction that caused the fault.
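Steps 5–8 are the heart of the handler; a toy sketch with FIFO standing in for the replacement policy, hypothetical dict-based structures, and the actual disk I/O elided:

```python
def handle_page_fault(vpn, page_table, resident, disk):
    """Sketch of steps 5-8 of the fault path: evict a victim (writing it
    back if dirty), reuse its frame for the needed page, and update the
    page table. Victim choice here is plain FIFO for brevity."""
    victim = resident.pop(0)                  # oldest resident page
    if page_table[victim]['dirty']:
        disk[victim] = 'written-back'         # step 5: flush dirty victim
    frame = page_table[victim]['frame']
    page_table[victim] = {'frame': None, 'valid': False, 'dirty': False}
    # steps 6-8: (disk lookup and read elided) install the new mapping
    page_table[vpn] = {'frame': frame, 'valid': True, 'dirty': False}
    resident.append(vpn)
```

After the handler returns, the faulting instruction is restarted and this time the translation succeeds.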

Segmentation and Paging Combinations

Segmentation: Memory is divided into logical units (code, stack, heap). Each segment has its own base and limit. This makes it easier for individual segments to grow or shrink.

  • MULTICS: Uses a 34-bit virtual address = 18-bit segment number + 16-bit address within the segment (itself split into a 6-bit page number and a 10-bit offset). Employs a segment descriptor table whose entries point to per-segment page tables.
  • x86 (Intel): Uses a segment selector + offset to generate a linear address, which then goes through the paging system. This allows fine-grained protection and memory control.

Understanding Deadlock in Operating Systems

Key Deadlock Concepts

Deadlock: A situation where a group of processes are all waiting for each other to release resources, but none can proceed.

Preemptable Resource: A resource that can be forcibly taken away from a process (e.g., CPU, Memory).

Nonpreemptable Resource: A resource that cannot be taken until voluntarily released (e.g., Printer, File Lock).

Resource Allocation: The process of giving resources (memory, devices, files) to processes upon request.

Resource Allocation Graph (RAG): A directed graph showing relationships between processes and resources. An edge from process → resource indicates a request. An edge from resource → process indicates assignment.

Mutual Exclusion: At least one resource is non-shareable – only one process can use it at a time.

Hold and Wait: Processes holding resources can also request new ones and wait for them.

No Preemption: Resources already held cannot be forcibly taken; they must be released voluntarily.

Circular Wait: A closed loop of processes where each one waits on a resource held by the next.

Safe State: A state in which the system can allocate resources to each process in some order without leading to deadlock.

Unsafe State: A state where deadlock is not guaranteed but is possible if processes make certain requests.

Deadlock Avoidance: Preventing deadlock by making sure the system never enters an unsafe state (e.g., Banker’s Algorithm).

Deadlock Prevention: Structurally breaking one or more of the four necessary deadlock conditions.

Deadlock Detection: Allowing deadlocks to occur, then detecting them through algorithms and resolving afterward.

Deadlock Recovery: Steps taken to recover from a deadlock, like killing processes or rolling back.

Banker’s Algorithm: A deadlock avoidance algorithm that checks if a resource request can be safely granted by simulating allocations.

Available Vector: A list showing how many instances of each resource are available.

Request Matrix: A matrix that shows how many resources each process is currently requesting.

Allocation Matrix: A matrix that shows how many resources each process currently holds.

Need Matrix: Shows how many more resources each process may still request.

Livelock: Processes continuously change state in response to each other but make no progress (they’re alive but stuck).

Resource Usage and Deadlock Conditions

Resource Usage Pattern

  1. Process requests a resource.
  2. If available, the OS allocates it.
  3. Process uses it.
  4. Once done, the process releases it.

What is a Deadlock?

A set of processes are stuck waiting for each other in a circular fashion – nobody can proceed because everyone’s waiting for someone else to release a resource.

4 Necessary Conditions for Deadlock (All 4 Must Hold)

  1. Mutual Exclusion: At least one resource must be non-shareable.
  2. Hold and Wait: A process holds at least one resource and is waiting for others.
  3. No Preemption: Resources can’t be forcibly taken away; they must be released voluntarily.
  4. Circular Wait: A closed loop exists where each process waits for a resource held by the next.

Resource Allocation Graph (RAG)

A visual way to represent resource assignments and requests.

  • Components: Circle = Process (P1, P2, …), Square = Resource (R1, R2, …)
  • Arrows:
    • P → R: Process is requesting resource.
    • R → P: Resource is assigned to process.
  • Deadlock in RAG: A cycle in the graph is a necessary condition for deadlock; with only one instance per resource type, a cycle is also sufficient (it guarantees deadlock).

Strategies to Handle Deadlocks

  1. Ignore the Problem (Ostrich Algorithm): Hope it doesn’t happen. Used by most general-purpose systems (e.g., UNIX, Windows) because deadlocks are rare and the other strategies are costly.
  2. Detection and Recovery: Let deadlock happen, then detect it and fix it.
  3. Deadlock Avoidance: Carefully allocate resources to avoid unsafe states. Requires future knowledge of maximum needs.
  4. Deadlock Prevention: Break one or more of the four deadlock conditions structurally.

Deadlock Detection

One Resource per Type

Uses cycle detection in the Resource Allocation Graph. A Depth-First Search (DFS) approach:

  1. Start at a process node.
  2. Follow Request/Assignment edges.
  3. If you reach a node that is already on the current DFS path, a cycle exists, and a deadlock is detected.
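With one instance per resource type, the steps above amount to ordinary cycle detection on a directed graph (illustrative Python; the RAG is a plain adjacency dict mixing process and resource nodes):

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph (dict: node -> successors).
    A node revisited while still on the current DFS path (GRAY) means a
    back edge, i.e., a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY                       # on the current path
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK                      # fully explored, no cycle here
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1:
rag = {'P1': ['R2'], 'R2': ['P2'], 'P2': ['R1'], 'R1': ['P1']}
print(has_cycle(rag))  # -> True (deadlock)
```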

Multiple Resource Instances

Uses four data structures:

  • Available Vector (A): Available resources.
  • Request Matrix (R): What each process wants.
  • Allocation Matrix (C): What each process currently has.
  • Marked List: Shows which processes can finish.

Algorithm Steps:

  1. Look for a process Pi such that: Request[i] ≤ Available.
  2. If found: Assume that it finishes and releases its resources: Available += Allocation[i], and then mark it as finished.
  3. Repeat until: all processes are marked (no deadlock), or if some cannot be marked (deadlock detected).
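The steps above translate directly into code over the matrices (hypothetical numbers; rows are processes, columns are resource types):

```python
def detect_deadlock(available, allocation, request):
    """Matrix-based detection: repeatedly mark any process whose
    outstanding request fits within Available, then release its
    allocation back. Returns the list of deadlocked (unmarked) processes."""
    n = len(allocation)
    avail = list(available)
    marked = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not marked[i] and all(r <= a for r, a in zip(request[i], avail)):
                # Assume Pi finishes: Available += Allocation[i]
                avail = [a + c for a, c in zip(avail, allocation[i])]
                marked[i] = True
                progress = True
    return [i for i in range(n) if not marked[i]]

available  = [2, 1, 0, 0]
allocation = [[0, 0, 1, 0], [2, 0, 0, 1], [0, 1, 2, 0]]
request    = [[2, 0, 0, 1], [1, 0, 1, 0], [2, 1, 0, 0]]
print(detect_deadlock(available, allocation, request))  # -> [] (no deadlock)
```

In this example P2 can run first, its released resources then let P1 run, and finally P0, so every process gets marked.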

Deadlock Recovery Methods

  1. Preemption: Temporarily take a resource away from a process. Risky, as it can corrupt the process state.
  2. Rollback: Roll the process back to a safe checkpoint and re-execute from that point.
  3. Killing Processes: Terminate one or more processes to break the cycle. Choose the least costly to kill (e.g., shortest time, lowest priority).

Deadlock Avoidance: Banker’s Algorithm

Used when maximum resource needs are known in advance. Basic Idea: Before granting a request, simulate if the system would stay in a safe state. If yes, grant it; if not, block the request.

Banker’s Algorithm Steps

  1. Check if request ≤ need.
  2. Check if request ≤ available.
  3. Temporarily allocate and simulate finish: Update Available, Allocation, and Need Matrices. Then, try to find a sequence where all processes can finish.
  4. If a sequence exists (safe state) → grant request.
  5. Else → deny request.
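The safety check at the core of steps 3–5 can be sketched as follows (toy single-resource numbers; `is_safe` returns a safe sequence or None):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: try to find an order in which every process
    can finish given its remaining Need. Returns a safe sequence of
    process indices, or None if the state is unsafe."""
    n = len(allocation)
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pi can run to completion and return what it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None  # no runnable process remains -> unsafe state
    return sequence

available  = [3]                # 3 free units of the single resource
allocation = [[3], [2], [2]]    # units each process currently holds
need       = [[6], [2], [5]]    # units each may still request
print(is_safe(available, allocation, need))  # -> [1, 2, 0] (safe)
```

To vet an incoming request, the algorithm tentatively applies it to Available, Allocation, and Need, runs this check, and grants the request only if a sequence is found.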

Safe vs. Unsafe States

  • Safe State: The OS can find an execution order where every process finishes.
  • Unsafe State: No guarantee of safe execution. May cause deadlock. Unsafe does not equal deadlock, but it is a risk.

Deadlock Prevention – Breaking Conditions

To prevent deadlock, one or more of the four necessary conditions must be denied:

  • Deny Mutual Exclusion: Rarely feasible, as some resources are inherently exclusive (spooling can sidestep it for devices like printers).
  • Deny Hold and Wait: Force processes to request all resources at once. Downsides: Low resource utilization.
  • Deny No Preemption: Allow the OS to forcibly take resources away. Complex to implement for some resources.
  • Deny Circular Wait: Impose a global numerical order on resources. Request only in increasing order to avoid cycles.

Attacking Circular Wait – Resource Ordering

Number all resources. Processes must request resources in ascending order only. This prevents the formation of cycles.
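With ordinary locks, the rule is simply that every code path acquires in rank order (an illustrative Python sketch using two hypothetical locks):

```python
import threading

# Assign every lock a global rank; always acquire in ascending rank order.
lock_a = threading.Lock()   # rank 1
lock_b = threading.Lock()   # rank 2

def transfer():
    with lock_a:            # rank 1 first...
        with lock_b:        # ...then rank 2: no cycle can form
            pass            # critical section touching both resources

def audit():
    with lock_a:            # same order, even if lock_b is the one this
        with lock_b:        # function mainly cares about
            pass

transfer()
audit()
```

Because no thread ever waits for a lower-ranked lock while holding a higher-ranked one, the wait-for graph can contain no cycle, which defeats the circular-wait condition.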

Related Deadlock Concepts

  • Communication Deadlocks: Can occur in message-passing systems. Example: Two processes waiting to receive messages from each other results in a deadlock.
  • Livelock (Not Deadlock): Processes constantly changing state but making no progress. They aren’t blocked, just endlessly retrying (e.g., spinning in a loop). Example: Two people keep stepping side-to-side to avoid each other but never walk past.