Operating System Synchronization and Scheduling Concepts

CPU Scheduling and Timing Metrics

CPU Scheduling is the process of deciding which of the processes in the ready queue should be allocated to the CPU for execution. A scheduling decision is required when the running process switches to the waiting state or terminates, and it may also occur when a process switches from running to ready (preemption) or when a newly created process enters the ready queue.

Scheduling Criteria (Times)

The performance of a CPU scheduler is measured using several metrics, often expressed in terms of time:

  • CPU Utilization: The fraction of time the CPU is busy executing processes. (Goal: Maximize)
  • Throughput: The number of processes completed per unit of time. (Goal: Maximize)
  • Turnaround Time: The total time from the submission of a process until its completion (Completion Time – Arrival Time). (Goal: Minimize)
  • Waiting Time: The total amount of time a process spends waiting in the ready queue (Turnaround Time – CPU Burst Time for a process that does no I/O; see the worked example after this list). (Goal: Minimize)
  • Response Time: The time from the submission of a request until the first response is produced (not final output). (Goal: Minimize)
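
As a quick worked example of these formulas (with made-up numbers): a process that arrives at t = 0 ms and completes at t = 10 ms after using the CPU for 6 ms has a turnaround time of 10 - 0 = 10 ms and, assuming it does no I/O, a waiting time of 10 - 6 = 4 ms.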

Scheduling Algorithms

A scheduling algorithm determines which ready process is selected next. Algorithms can be Preemptive (the scheduler may interrupt a running process, as in Round Robin) or Non-Preemptive (the process runs until it completes or blocks, as in First-Come, First-Served; a minimal FCFS sketch follows).
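
As a small, hedged illustration of a non-preemptive policy, the C sketch below simulates FCFS for a handful of hypothetical processes (the names, arrival times, and burst times are made up) and prints each one's waiting and turnaround time using the formulas above.

    #include <stdio.h>

    /* Hypothetical workload, listed in arrival order (FCFS runs them in this order). */
    struct proc { const char *name; int arrival; int burst; };

    int main(void) {
        struct proc p[] = { {"P1", 0, 6}, {"P2", 1, 3}, {"P3", 2, 8} };
        int n = sizeof p / sizeof p[0];
        int clock = 0;

        for (int i = 0; i < n; i++) {
            if (clock < p[i].arrival)        /* CPU idles until the process arrives */
                clock = p[i].arrival;
            int completion = clock + p[i].burst;          /* non-preemptive: runs to completion */
            int turnaround = completion - p[i].arrival;   /* Completion Time - Arrival Time     */
            int waiting    = turnaround - p[i].burst;     /* time spent ready but not running   */
            printf("%s: waiting = %d ms, turnaround = %d ms\n",
                   p[i].name, waiting, turnaround);
            clock = completion;
        }
        return 0;
    }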

Semaphore Mechanism (Sleeping)

A Semaphore is a synchronization tool used to solve the critical section problem and coordinate concurrent processes. It is an integer variable that is accessed only through two standard atomic (indivisible) operations: wait() and signal().

  • wait(S) (or P): Decrements the semaphore value S. If the resulting value is negative, the process is blocked and placed in a waiting queue associated with the semaphore (the process goes to sleep).
  • signal(S) (or V): Increments the semaphore value S. If there are processes blocked on the semaphore, one of them is woken up (a brief usage sketch follows this list).
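
The snippet below is a minimal usage sketch, assuming POSIX unnamed semaphores (<semaphore.h>) and pthreads are available: a binary semaphore with initial value 1 plays the role of wait()/signal() (here sem_wait()/sem_post()) around a critical section shared by two threads.

    /* Compile with: cc semaphore_demo.c -pthread   (file name is illustrative) */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;                       /* binary semaphore guarding the critical section */
    int shared_counter = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);          /* wait()/P: may put the thread to sleep          */
            shared_counter++;          /* critical section                               */
            sem_post(&mutex);          /* signal()/V: wakes one blocked thread, if any   */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);        /* initial value 1 -> mutual exclusion */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);   /* expected: 200000 */
        sem_destroy(&mutex);
        return 0;
    }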

Implementation with Sleeping (Blocking Semaphores)

This is the standard, efficient implementation:

When a process executes wait(S) and finds that the semaphore has no resource available (the decremented value is negative), instead of busy-waiting (wasting CPU cycles), the process is placed into a waiting queue for that semaphore and its state is changed to waiting (it sleeps).

When another process executes signal(S), it checks the waiting queue. If the queue is not empty, it takes a process out of the queue and changes its state to ready (it wakes up), allowing it to contend for the CPU.
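
The code below is a user-space sketch of this sleeping behaviour, built on a pthread mutex and condition variable (the my_sem_* names are made up for illustration). A real kernel blocks the process and links it onto the semaphore's waiting queue directly, but the shape is the same: wait() sleeps when nothing is available, and signal() wakes one sleeper. Note that this variant keeps the count non-negative (it sleeps before decrementing), whereas the textbook version lets the count go negative and uses its magnitude as the number of waiters.

    #include <pthread.h>

    typedef struct {
        int value;                     /* semaphore count                  */
        pthread_mutex_t lock;          /* protects value                   */
        pthread_cond_t  nonzero;       /* waiters sleep on this condition  */
    } my_sem_t;

    void my_sem_init(my_sem_t *s, int initial) {
        s->value = initial;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->nonzero, NULL);
    }

    void my_sem_wait(my_sem_t *s) {            /* wait()/P */
        pthread_mutex_lock(&s->lock);
        while (s->value == 0)                  /* nothing available: go to sleep */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void my_sem_signal(my_sem_t *s) {          /* signal()/V */
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->nonzero);      /* wake one sleeping waiter, if any */
        pthread_mutex_unlock(&s->lock);
    }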

Process and Thread Comparison

Process

A Process is a program in execution. It is the unit of work in a modern time-sharing system.

Characteristics

  • Heavyweight: Each process is an independent entity with its own dedicated resources (address space, open files, etc.; see the fork() sketch below).
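
A small hedged illustration of this separation, assuming a Unix-like system with fork(): after the fork, parent and child each have their own copy of the address space, so a change made by the child is not visible to the parent.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int counter = 0;                   /* lives in the data section */

    int main(void) {
        pid_t pid = fork();            /* duplicate this process */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {
            counter = 42;              /* modifies only the child's copy */
            printf("child : counter = %d\n", counter);
            exit(EXIT_SUCCESS);
        }
        wait(NULL);                    /* parent waits for the child */
        printf("parent: counter = %d\n", counter);   /* still 0 */
        return 0;
    }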

Components

  • Text Section: The program code.
  • Data Section: Global variables.
  • Heap: Dynamically allocated memory.
  • Stack: Local variables, function parameters, return addresses.
  • Process Control Block (PCB): Stores all information about the process (state, program counter, registers, memory limits, etc.); a simplified struct sketch follows this list.
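
A deliberately simplified sketch of what a PCB might contain; the field names are illustrative only and do not come from any particular kernel (Linux's real equivalent, task_struct, is far larger).

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int              pid;              /* process identifier                */
        enum proc_state  state;            /* current scheduling state          */
        unsigned long    program_counter;  /* saved program counter             */
        unsigned long    registers[16];    /* saved general-purpose registers   */
        void            *page_table;       /* memory-management information     */
        int              open_files[16];   /* open-file descriptors             */
        int              priority;         /* scheduling information            */
        struct pcb      *next;             /* link for the ready/waiting queues */
    };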

Context Switching

Switching between processes is relatively slow because the OS must save the state of the current process (CPU registers, program counter, memory-management information, etc.) into its PCB and reload the state of the next process from its PCB.

Thread

A Thread (or Lightweight Process) is a basic unit of CPU utilization. It is contained within a process.

Characteristics

  • Lightweight: Threads are cheaper to create and to switch between than processes; multiple threads can exist within the same process.
  • Sharing: Threads within the same process share the text, data, and heap sections, and most OS resources (like open files).

Components (Private)

  • Thread ID
  • Program Counter (PC)
  • Register Set
  • Stack (each thread has its own; see the sketch after this list)
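
A short pthread sketch of this split: the global variable lives in the shared data section and is visible to every thread, while each thread's local variable lives on its own private stack (note the different addresses), and pthread_self() returns each thread's own ID.

    #include <pthread.h>
    #include <stdio.h>

    int shared = 100;                      /* data section: one copy, seen by all threads */

    void *worker(void *arg) {
        int local = *(int *)arg;           /* lives on this thread's private stack */
        printf("thread %lu: shared = %d, private local = %d at %p\n",
               (unsigned long)pthread_self(),     /* per-thread ID             */
               shared, local, (void *)&local);    /* &local differs per thread */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int a = 1, b = 2;
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }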

Context Switching

Switching between threads of the same process is much faster than switching between processes because only the thread-specific state (PC, registers, stack pointer) needs to be saved and restored.