Operating System Concepts: Processes, Threads, and Scheduling
CH 3 – Processes
Definition:
Process = program in execution (active); program = passive file.
Process layout:
Text (code) | Data (globals) | Heap (dynamic alloc.) | Stack (function frames).
States:
new → ready → running → terminated; running → waiting (I/O) → ready; running → ready (preempted).
PCB (Process Control Block):
state, PC, registers, priority, memory info, I/O status, accounting.
Schedulers:
Long-term (job) → admit to memory (degree of multiprogramming); Short-term (CPU) → choose next ready process; Medium-term → swap in/out (memory management).
Context switch:
save/restore PCB (overhead time).
Creation & termination:
fork() duplicates the parent (returns 0 in the child, the child's PID in the parent); exec() replaces the process image with a new program; wait() makes the parent block until a child exits; exit() normal end; abort() forced end.
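A minimal sketch of the fork/exec/wait pattern (assuming a POSIX system; running "ls" is just an illustration):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                    /* duplicate the calling process */
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {                        /* child: fork() returned 0 */
        execlp("ls", "ls", (char *)NULL);  /* replace child's image with a new program */
        perror("exec"); exit(1);           /* reached only if exec fails */
    }
    wait(NULL);                            /* parent blocks until the child exits (prevents a zombie) */
    printf("child %d done\n", (int)pid);
    return 0;
}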
Zombie:
child has exited but the parent has not yet called wait() (entry kept so the parent can collect the exit status);
Orphan:
parent exited before the child (child is re-parented, typically to init/systemd).
IPC (Inter-Process Communication):
Shared memory (fast, sync needed) vs Message passing (send/receive, blocking or non-blocking).
Mechanisms:
pipes (uni), named pipes (bi), sockets (IP/port), RPC (remote call). Rendezvous = blocking send/receive.
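A minimal sketch of a unidirectional pipe between parent and child (assuming POSIX; the "hello" message is arbitrary):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                         /* fd[0] = read end, fd[1] = write end */
    char buf[16];
    pipe(fd);
    if (fork() == 0) {                 /* child reads */
        close(fd[1]);
        read(fd[0], buf, sizeof buf);
        printf("child got: %s\n", buf);
        return 0;
    }
    close(fd[0]);                      /* parent writes */
    write(fd[1], "hello", 6);          /* 6 bytes includes the terminating '\0' */
    close(fd[1]);
    wait(NULL);
    return 0;
}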
CH 4 – Threads
Thread:
unit of CPU execution (ID + PC + regs + stack). Threads in same process share code, data, and files.
Benefits:
Responsiveness (UI active during I/O); Resource sharing (easy communication); Economy (less overhead than process); Scalability (multi-core use).
Concurrency vs Parallelism:
Concurrency = overlap (logical simultaneity); Parallelism = true simultaneity (multi-core).
Amdahl’s Law:
Speedup = 1 / ((1–P)+P/N); serial portion limits gain.
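Worked example (assuming P = 0.75 of the work is parallelizable and N = 4 cores): Speedup = 1 / (0.25 + 0.75/4) = 1 / 0.4375 ≈ 2.29; even as N → ∞ the speedup is capped at 1/(1−P) = 4.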
Thread types:
User threads (in library) vs Kernel threads (OS managed).
Models:
Many-to-One (many user→1 kernel, blocks together); One-to-One (each user→kernel, true parallelism); Many-to-Many (flexible mapping); Two-Level (variant with binding).
Libraries:
Pthreads (pthread_create, pthread_join, pthread_exit); Windows (CreateThread); Java (extends Thread or Runnable).
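A minimal Pthreads sketch (compile with -pthread; the worker function and its argument are placeholders):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {                            /* runs in the new thread */
    printf("hello from thread %ld\n", (long)arg);
    pthread_exit(NULL);                              /* or simply return NULL */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, (void *)1L);  /* spawn */
    pthread_join(tid, NULL);                         /* wait for it to finish */
    return 0;
}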
Implicit threading:
Thread pools (reuse threads); OpenMP (#pragma omp parallel); Grand Central Dispatch (queues → threads).
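A minimal OpenMP sketch (assuming gcc/clang with -fopenmp; the printf body is arbitrary):

#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel               /* the runtime's thread pool executes this block once per thread */
    {
        printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}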
Issues:
fork/exec behavior, signal delivery, thread cancellation (async/deferred), thread-local storage (TLS), scheduler activations (kernel↔user coordination).
CH 5 – CPU Scheduling
Goal:
↑CPU use, ↑throughput, ↓turnaround, ↓waiting, ↓response.
CPU–I/O burst cycle:
alternating computation and I/O wait.
Preemptive:
CPU can be taken mid-process;
Non-preemptive:
runs until block/exit.
Schedulers:
Short-term (ready queue); Long-term (admission); Medium-term (swapping).
Dispatcher:
context switch + mode switch + jump to user code; dispatch latency = time to stop one process and start another.
Criteria:
CPU util ↑; throughput ↑; turnaround/wait/response ↓.
Algorithms:
• FCFS:
Non-pre; simple; convoy effect (long job delays short).
• SJF:
Non-pre; provably optimal AWT; requires predicting the next CPU burst (e.g., exponential averaging of previous bursts).
• SRTF:
Pre; shortest remaining time; reacts to new arrivals.
• Priority:
Either;
starvation possible → aging prevents.
• RR:
Pre; quantum (q = 10–100 ms); fair time-sharing.
• MLQ:
Multiple ready queues (foreground RR, background FCFS).
• MLFQ:
Feedback promotion/demotion → no starvation.
Formulas:
AWT = Σ(wait)/n; Little’s Law n = λ × W.
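Worked example (assuming three jobs arrive together with bursts 24, 3, 3 ms): FCFS in that order → waits 0, 24, 27 → AWT = 51/3 = 17 ms; SJF runs the two short jobs first → waits 6, 0, 3 → AWT = 9/3 = 3 ms.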
Thread scheduling:
PCS = process-contention scope (user level: thread library schedules threads onto LWPs) vs SCS = system-contention scope (kernel level: kernel schedules threads onto CPUs).
Multiprocessor:
SMP (self-sched); load balancing (push/pull); affinity (keep cache local).
Real-Time:
Hard (guaranteed) vs Soft (best effort). Rate Monotonic (short period = high priority); Earliest Deadline First (dynamic); Proportional Share (divide CPU).
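Worked example (assuming two periodic tasks: T1 with period 50 ms / burst 20 ms, T2 with period 100 ms / burst 35 ms): utilization = 20/50 + 35/100 = 0.75, under the Rate-Monotonic bound for n = 2 of 2(2^(1/2) − 1) ≈ 0.83, so RM can schedule both; EDF schedules any periodic set with utilization ≤ 1.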
OS Examples:
Linux CFS (fair runtime); Windows (32 priority levels); Solaris (6 classes TS/IA/RT/SYS/FSS/FP).
CH 6 – Synchronization
Concurrency:
overlapping execution of processes.
Race condition:
result depends on timing of shared access.
Critical section:
code accessing shared data.
Requirements:
(1) Mutual exclusion (only one process in its critical section at a time); (2) Progress (choice of the next entrant cannot be postponed indefinitely by processes outside their critical sections); (3) Bounded waiting (bound on how many times others may enter first after a process requests entry).
Peterson’s Algorithm (2 proc):
flag[i] = true; turn = j;
while (flag[j] && turn == j) ;   /* busy wait */
/* critical section */
flag[i] = false;
→ satisfies all three criteria.
Hardware support (atomic inst.):
test_and_set(target) (return old val, set TRUE); compare_and_swap(value,expected,new) → swap if match.
Spinlocks:
busy-wait (lock held briefly).
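A minimal spinlock sketch built on test-and-set semantics, using C11 atomics (atomic_flag_test_and_set is the library analogue of the hardware instruction):

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_flag))  /* returns the old value and sets it to true */
        ;                                         /* spin (busy-wait) while it was already held */
}

void release(void) {
    atomic_flag_clear(&lock_flag);                /* back to false: lock is free again */
}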
Mutex lock:
acquire() { while (!available) ; available = false; }   /* busy wait */
release() { available = true; }
Ensures mutual exclusion; busy-waits if contended (acquire/release must execute atomically).
Semaphores:
Integer S with atomic wait/signal:
wait(S) { S--; if (S < 0) block(); }
signal(S) { S++; if (S <= 0) wakeup(P); }
• Binary (0/1) = mutex; Counting (≥0) = resource count.
No busy-wait impl:
uses block/wakeup queues.
Producer–Consumer (using semaphores; init empty = BUF_SIZE, full = 0, mutex = 1):
Producer: wait(empty); wait(mutex); add_item(); signal(mutex); signal(full);
Consumer: wait(full); wait(mutex); remove_item(); signal(mutex); signal(empty);
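A minimal sketch with POSIX unnamed semaphores and a pthread mutex (BUF_SIZE and the int items are placeholders; call sem_init before starting the threads):

#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8
int buffer[BUF_SIZE];
int in = 0, out = 0;

sem_t empty_slots, full_slots;            /* init: sem_init(&empty_slots, 0, BUF_SIZE); sem_init(&full_slots, 0, 0); */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void produce(int item) {
    sem_wait(&empty_slots);               /* wait(empty): block if no free slot */
    pthread_mutex_lock(&mutex);           /* wait(mutex) */
    buffer[in] = item; in = (in + 1) % BUF_SIZE;
    pthread_mutex_unlock(&mutex);         /* signal(mutex) */
    sem_post(&full_slots);                /* signal(full) */
}

int consume(void) {
    sem_wait(&full_slots);                /* wait(full): block if nothing to take */
    pthread_mutex_lock(&mutex);           /* wait(mutex) */
    int item = buffer[out]; out = (out + 1) % BUF_SIZE;
    pthread_mutex_unlock(&mutex);         /* signal(mutex) */
    sem_post(&empty_slots);               /* signal(empty) */
    return item;
}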
Deadlock:
each process waits for a signal() only the other can issue (e.g., P0: wait(S); wait(Q); … P1: wait(Q); wait(S); …) → circular wait.
Starvation:
a process waits indefinitely (never removed from the semaphore queue).
Monitors:
high-level sync object; only 1 thread active inside.
Condition variables: x.wait() (block the caller) | x.signal() (wake one waiter).
monitor BoundedBuffer {
  condition notFull, notEmpty;
  insert(item) { if (full) notFull.wait(); add(item); notEmpty.signal(); }
  remove() { if (empty) notEmpty.wait(); item = take(); notFull.signal(); }
}
Signal semantics:
signal-and-wait (signaled thread runs immediately, signaler waits) vs signal-and-continue (signaler keeps running, so a woken waiter should re-check its condition with while rather than if).
