Operating Systems: Core Concepts and Scheduling Explained

Q1. What is an Operating System? What are its two main roles?

An Operating System (OS) is system software that acts as an intermediary between the user and computer hardware. Its two main roles are:

  • Resource Allocator: Manages CPU, memory, I/O, and disk among multiple programs fairly and efficiently.
  • Control Program: Prevents errors and misuse by controlling program execution (e.g., stops one process from accessing another’s memory).

Q2. Differentiate between a Program and a Process

Program: A passive entity—an executable file sitting on disk. It has no state and no allocated resources.
Process: An active entity—a program currently loaded in memory and being executed. It has a state (New, Ready, Running, Waiting, Terminated), allocated CPU, memory, and open files.

Key point: One program can spawn multiple processes (e.g., two Chrome windows = two processes).

Q3. What is a PCB? List any three fields stored in it

A Process Control Block (PCB) is a kernel data structure that stores all information about a process.

Three key fields:

  1. Process State: Current state (New, Ready, Running, Waiting, Terminated).
  2. Program Counter: Address of the next instruction to execute.
  3. CPU Registers: All register values; saved/restored on every context switch.
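As a rough illustration, a PCB-like record can be sketched as a Python dataclass. The field names here are illustrative only, not taken from any real kernel:

```python
from dataclasses import dataclass, field

# Minimal sketch of a PCB-like record (field names are illustrative).
@dataclass
class PCB:
    pid: int
    state: str = "NEW"             # NEW, READY, RUNNING, WAITING, TERMINATED
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved/restored on context switch
    open_files: list = field(default_factory=list)

pcb = PCB(pid=42)
pcb.state = "READY"                # NEW -> READY after admission
print(pcb.pid, pcb.state, pcb.program_counter)
```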

Q4. What is Context Switching? Why is it overhead?

Context Switch: The act of saving the complete state of the currently running process (into its PCB) and loading the state of the next process.

Overhead: No useful user work is done during a context switch; the CPU only saves and restores PCB state. Saving and restoring dozens of registers, plus the indirect cost of cache and TLB pollution, can waste thousands of CPU cycles per switch.

Q5. Define Deadlock and the Four Necessary Conditions

Deadlock: A set of processes permanently blocked because each process is waiting for a resource held by another process in the same set.

Four conditions (ALL must hold simultaneously):

  1. Mutual Exclusion: Resource used by only one process at a time.
  2. Hold and Wait: Process holds one resource while waiting for another.
  3. No Preemption: Resources cannot be forcibly taken away.
  4. Circular Wait: A circular chain exists (e.g., P0 waits for P1, P1 waits for P2, …, Pn waits for P0).

Q6. What is a Thread? How does it differ from a Process?

Thread: The basic unit of CPU utilization. It has its own thread ID, program counter, register set, and stack—but shares the process’s code, data, heap, and open files.

Key differences: A thread is cheaper to create (no new address space), faster to context-switch, and communicates via shared memory (no IPC needed). One thread crash can kill the whole process; a process crash does not affect others.

Q7. Hard Real-Time vs. Soft Real-Time OS

Hard Real-Time: Missing a deadline results in total system failure. Used in life-critical systems: aircraft flight control, ABS brakes, pacemakers.
Soft Real-Time: Missing a deadline results in degraded performance, which is acceptable occasionally. Used in video streaming, audio playback, mobile gaming.

Q8. Turnaround Time and Waiting Time Formulas

Turnaround Time (TAT): Total time from process submission to completion.
Formula: TAT = Completion Time − Arrival Time

Waiting Time (WT): Total time a process spends waiting in the ready queue.
Formula: WT = TAT − Burst Time (or CT − AT − BT)
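A quick worked example of both formulas (arrival, burst, and completion times are illustrative, taken from an assumed schedule):

```python
# (name, arrival, burst, completion) under some schedule
procs = [
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
    ("P3", 2, 1, 9),
]
stats = {}
for name, at, bt, ct in procs:
    tat = ct - at          # TAT = Completion Time - Arrival Time
    wt = tat - bt          # WT  = TAT - Burst Time
    stats[name] = (tat, wt)
    print(f"{name}: TAT={tat}, WT={wt}")
```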

Q9. Can a system with a single process experience deadlock?

NO, in the classical sense. Circular Wait requires at least two processes forming a cycle. A single process cannot wait for itself.

Exception: A single process can cause a self-deadlock by trying to re-acquire a non-recursive mutex it already holds—but this is a programming error, not an OS-level deadlock.
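Python's `threading` module illustrates exactly this distinction: a plain `Lock` is non-recursive, while `RLock` permits re-entry by the thread that already owns it. The timeout below is used only so the demo terminates instead of self-deadlocking:

```python
import threading

lock = threading.Lock()            # non-recursive mutex
lock.acquire()
# A second blocking acquire by the same thread would hang forever
# (self-deadlock); a timeout makes the failure observable instead.
reacquired = lock.acquire(timeout=0.2)
print("re-acquire plain Lock:", reacquired)   # False

lock.release()

rlock = threading.RLock()          # recursive mutex: owner may re-enter
rlock.acquire()
reentered = rlock.acquire(blocking=False)
print("re-enter RLock:", reentered)           # True
rlock.release(); rlock.release()
```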

Q10. What is the Convoy Effect in FCFS scheduling?

The Convoy Effect occurs when a long CPU-bound process arrives first and all shorter processes queue behind it.

Impact: Short I/O-bound processes wait a long time, I/O devices sit idle, and when the long job finishes, the CPU becomes idle too. Result: Both CPU and I/O utilization drop well below optimal. SJF eliminates this by running the shortest job first.
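The effect on average waiting time can be sketched in a few lines. Burst times are illustrative, with all processes assumed to arrive at time 0; sorting the queue by burst length is equivalent to SJF here:

```python
# Average waiting time for a queue of CPU bursts served in order.
def avg_wait(bursts):
    wait, t = [], 0
    for b in bursts:
        wait.append(t)   # each process waits for everything ahead of it
        t += b
    return sum(wait) / len(wait)

bursts = [24, 3, 3]                        # long CPU-bound job arrives first
fcfs = avg_wait(bursts)                    # convoy: short jobs stuck behind it
sjf = avg_wait(sorted(bursts))             # shortest job first
print("FCFS avg wait:", fcfs)              # 17.0
print("SJF  avg wait:", sjf)               # 3.0
```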

Q11. Monolithic Kernel vs. Microkernel

Monolithic Kernel: All OS services run in a single large executable in kernel mode.
Performance: Fast (direct function calls).
Reliability: Low (a bug in any component crashes the entire OS).
Extensibility: Harder (requires recompiling the kernel).

Microkernel: Only basic IPC, memory protection, and minimal scheduling stay in the kernel. All other services run as user-space server processes.
Performance: Slower (message-passing overhead).
Reliability: High (crashed drivers are restarted without affecting the kernel).
Extensibility: Easier (add services without kernel recompilation).

Q12. Process State Transition Diagram

The five primary states are:

  1. NEW → READY: OS allocates PCB and loads program.
  2. READY → RUNNING: Scheduler dispatches the process.
  3. RUNNING → WAITING: Process requests I/O or event.
  4. WAITING → READY: I/O or event completes.
  5. RUNNING → TERMINATED: Process finishes or is killed.

Note: Only one process can be in the RUNNING state on a single-core CPU at any time.

Q13. Benefits of Multithreading

  1. Responsiveness: If one thread blocks, others continue.
  2. Resource Sharing: Threads share address space by default.
  3. Economy: Thread creation is cheaper than process creation.
  4. Scalability: Threads can run in parallel on multi-core systems.

Q14. Inter-Process Communication (IPC) Models

  • Shared Memory: A common region of memory is mapped into the address spaces of multiple processes. Faster (no kernel involvement after setup), but requires synchronization (mutexes/semaphores).
  • Message Passing: Uses system calls to send/receive messages. Slower due to kernel copying, but simpler and works across networks.

Q15. Priority Scheduling, Starvation, and Aging

Priority Scheduling: CPU is assigned to the process with the highest priority. Starvation occurs when low-priority processes wait indefinitely. Aging solves this by gradually increasing the priority of processes that have been waiting for a long time.
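A toy sketch of aging, with illustrative numbers (higher value = higher priority; each tick, every waiting process gets a small priority boost):

```python
def schedule_with_aging(procs, ticks, boost=1):
    """procs: {name: priority}. Returns the order in which processes run."""
    procs = dict(procs)
    order = []
    for _ in range(ticks):
        chosen = max(procs, key=procs.get)   # highest priority runs this tick
        order.append(chosen)
        for name in procs:
            if name != chosen:
                procs[name] += boost         # aging: waiting processes climb
    return order

# Without aging, "lo" would starve forever behind "hi"; with aging it
# eventually reaches the CPU.
order = schedule_with_aging({"hi": 10, "lo": 1}, ticks=12)
print(order)
```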

Q16. Round Robin Scheduling

Round Robin (RR): Each process runs for at most a fixed time quantum (typically 10–100 ms). If a process doesn't finish within its quantum, it is preempted and moved to the back of the ready queue.

Quantum Size Impact:
Too large: Becomes FCFS (poor response time).
Too small: Excessive context switching overhead (wasted CPU time).
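A minimal RR simulation, assuming all processes arrive at time 0 (quantum and burst values are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst time}. Returns each process's completion time."""
    queue = deque(bursts.items())
    t, done = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)       # run for a quantum or until finished
        t += run
        if left - run > 0:
            queue.append((name, left - run))   # preempted: back of the queue
        else:
            done[name] = t
    return done

done = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(done)  # {'P3': 5, 'P2': 8, 'P1': 9}
```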

Q17. Banker’s Algorithm and Deadlock Avoidance

The Banker’s Algorithm determines whether a system is in a Safe State by simulating resource allocation: it checks that some ordering of the processes exists in which each can obtain its maximum resource need and then release everything it holds. Limitations: Requires knowing maximum needs in advance and is computationally expensive.
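The safety check at the heart of the algorithm can be sketched as follows. The matrices below are an illustrative textbook-style instance with 5 processes and 3 resource types:

```python
def is_safe(available, max_need, alloc):
    """Banker's safety check: returns (safe?, a safe completion order)."""
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, alloc)]
    work = list(available)             # resources currently free
    finished = [False] * len(alloc)
    order = []
    progress = True
    while progress:
        progress = False
        for i, row in enumerate(need):
            if not finished[i] and all(n <= w for n, w in zip(row, work)):
                # Pi can run to completion; reclaim everything it holds.
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

ok, order = is_safe(
    [3, 3, 2],                                             # Available
    [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],  # Max
    [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],  # Allocation
)
print(ok, order)  # True [1, 3, 4, 0, 2] — a safe sequence exists
```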

Q18. Command Interpreter (Shell) Role

The shell translates user commands into system calls. It provides abstraction, enables scripting/automation, supports piping, and allows I/O redirection.

Q19. Symmetric Multiprocessing (SMP)

In SMP, multiple processors share memory and run an identical copy of the OS. All CPUs are peers, increasing throughput and providing graceful degradation if one CPU fails.

Q20. Batch OS vs. Time-Sharing OS

Feature     | Batch OS   | Time-Sharing OS
------------|------------|------------------
Interaction | None       | Real-time
Metric      | Throughput | Response time
Switching   | Sequential | 10–100 ms intervals