CPU Scheduling Algorithms and Inter-Process Communication
CPU Scheduling Algorithms
1. Determining Length of Next CPU Burst
The length of the next CPU burst cannot be known in advance; it is typically estimated by exponential averaging over the lengths of previous CPU bursts.
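The exponential average can be sketched as follows (a minimal illustration; the function name and the weight alpha = 0.5 in the example are assumptions, not part of the notes):

```c
/* Exponential averaging of CPU burst lengths:
 *   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
 * where t_n is the most recent observed burst and tau_n the previous
 * estimate. alpha in [0,1] controls how much weight recent history gets. */
double next_burst_estimate(double alpha, double last_burst, double prev_estimate)
{
    return alpha * last_burst + (1.0 - alpha) * prev_estimate;
}
```

With alpha = 0.5 and an initial estimate of 10, observing a burst of 6 yields a new estimate of 8; a following burst of 4 lowers it to 6.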
2. Priority Scheduling
Each process is assigned a priority number, and the CPU is allocated to the process with the highest priority (smallest integer). Shortest Job First (SJF) is a priority scheduling algorithm in which priority is the predicted length of the next CPU burst. Priority scheduling can lead to starvation: low-priority processes may never execute. Aging solves this by gradually raising the priority of processes that have waited a long time.
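Aging can be sketched like this (illustrative only; the struct and function names are invented for the example, and smaller numbers mean higher priority, as in the text):

```c
typedef struct {
    int pid;
    int priority;   /* smaller value = higher priority */
} proc_t;

/* Return the index of the highest-priority (smallest number) process. */
int pick_next(const proc_t p[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (p[i].priority < p[best].priority)
            best = i;
    return best;
}

/* Aging: on every clock tick, raise (numerically lower) the priority of
 * every waiting process, so low-priority processes cannot starve. */
void age(proc_t p[], int n, int running)
{
    for (int i = 0; i < n; i++)
        if (i != running && p[i].priority > 0)
            p[i].priority--;
}
```

After enough ticks, even a process that started with the worst priority value reaches the top and can no longer be passed over indefinitely.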
3. Round Robin (RR)
In RR scheduling, each process receives a small unit of CPU time (time quantum). After the time quantum expires, the process is preempted and added to the end of the ready queue. This ensures that no process waits more than (n-1) * q time units, where n is the number of processes in the ready queue and q is the time quantum. RR typically has a higher average turnaround time than SJF but provides better response time.
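A small simulation of RR for processes that all arrive at time 0 (a sketch; the function name and the fixed array bound are assumptions, and cycling over the indices in order matches a FIFO ready queue only in this special case):

```c
#define MAXPROC 16

/* Simulate round robin for n processes that all arrive at t = 0 and
 * return the average waiting time (completion time minus burst time). */
double rr_avg_wait(const int burst[], int n, int q)
{
    int rem[MAXPROC], done = 0, t = 0, wait_sum = 0;
    if (n <= 0)
        return 0.0;
    for (int i = 0; i < n; i++)
        rem[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0)
                continue;
            int slice = rem[i] < q ? rem[i] : q;   /* run one quantum */
            t += slice;
            rem[i] -= slice;
            if (rem[i] == 0) {
                wait_sum += t - burst[i];          /* process finished */
                done++;
            }
        }
    }
    return (double)wait_sum / n;
}
```

For bursts of 24, 3, and 3 with q = 4, this gives waiting times of 6, 4, and 7, i.e., an average of 17/3.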
4. Multilevel Queue
Multilevel queue scheduling uses separate queues for different types of processes, such as foreground (interactive) and background (batch). Each queue has its own scheduling algorithm, and a second level of scheduling decides between the queues. In the multilevel feedback queue variant, processes can move between queues; this is one way to implement aging.
5. Multiprocessor Scheduling
CPU scheduling becomes more complex when multiple CPUs are available. Homogeneous multiprocessors have identical CPUs, while asymmetric multiprocessors have only one CPU accessing system data structures. Symmetric multiprocessors (SMPs) have self-scheduling CPUs with a common ready queue or private ready queues. Processor affinity refers to a process’s preference for running on a specific CPU.
6. Linux Scheduling
Linux (in older kernels) uses two classes of scheduling algorithm: timesharing and real-time. Timesharing is a prioritized, credit-based algorithm in which the process with the most credits is scheduled next. Real-time scheduling always runs the highest-priority runnable process and offers both FCFS and RR policies.
7. Real-Time Systems
Real-time systems require results to be correct and delivered within a specified time period. Embedded systems are computing devices within larger systems, while safety-critical systems have catastrophic consequences in case of failure. Hard real-time systems guarantee that real-time tasks are completed within their deadlines, while soft real-time systems prioritize real-time tasks over non-real-time tasks.
8. Real-Time CPU Scheduling
Periodic processes require the CPU at specific intervals (periods).
9. Rate Monotonic Scheduling (RMS)
RMS assigns priority based on the inverse of the period. Shorter periods have higher priority, while longer periods have lower priority.
10. Earliest Deadline First (EDF)
EDF assigns priority by deadline: the earlier the absolute deadline, the higher the priority. EDF can schedule task sets that RMS cannot (it is optimal for preemptive scheduling on a single processor), and a running task is preempted when a task with an earlier deadline becomes ready.
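The EDF dispatch rule is simply "run the ready task with the earliest absolute deadline"; a sketch (names are illustrative):

```c
/* Return the index of the ready task with the earliest absolute
 * deadline, or -1 if no task is ready. A running task is preempted
 * whenever a newly ready task would win this comparison. */
int edf_pick(const long deadline[], const int ready[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (ready[i] && (best < 0 || deadline[i] < deadline[best]))
            best = i;
    return best;
}
```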
11. RMS and EDF Comparison
RMS is a thoroughly analyzed algorithm used in many real-time operating systems (RTOSs); every task must satisfy R <= D, i.e., its response time must not exceed its deadline. EDF meets periodic process deadlines even at 100% CPU load, but the consequences of overload are unknown and unpredictable, as is its behavior when deadlines and periods are not equal.
Inter-Process Communication
12. Cooperating Processes
Cooperating processes can affect or be affected by the execution of other processes, unlike independent processes.
13. Inter-Process Communication (IPC)
IPC mechanisms allow processes to communicate and synchronize their actions. Implementations include message systems, shared memory, and communication links.
14. Direct or Indirect Communication
Direct communication establishes links automatically between specific pairs of processes, while indirect communication uses mailboxes (each with a unique ID) to send and receive messages. Mailboxes can be shared by multiple processes.
15. Synchronization
Message passing can be blocking or non-blocking. Blocking is synchronous, meaning the sender blocks until the message is received, and the receiver blocks until a message is available. Non-blocking is asynchronous, meaning the sender continues executing after sending the message, and the receiver may receive a valid or null message. Combinations of blocking and non-blocking are often used.
16. Critical Section
A critical section is a part of code in which a process accesses a resource shared with other processes. The critical-section problem can be solved purely in software at the application level (e.g., Peterson's algorithm), with hardware support (atomic instructions such as test-and-set), or with OS-supported primitives such as semaphores.
17. Semaphores
Semaphores are synchronization tools that solve critical section problems without requiring busy waiting.
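How a blocking (non-busy-waiting) counting semaphore might be built on a mutex and condition variable — a minimal sketch with invented names, not a production implementation:

```c
#include <pthread.h>

/* A counting semaphore: wait() blocks the caller (no busy waiting)
 * until the count becomes positive; signal() wakes one waiter. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int count;
} csem_t;

void csem_init(csem_t *s, int initial)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = initial;
}

void csem_wait(csem_t *s)               /* the classic P() operation */
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);  /* sleeps, no spinning */
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s)             /* the classic V() operation */
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}
```

The key point is `pthread_cond_wait`: a blocked process is put to sleep and consumes no CPU until signaled, unlike the spin-lock below.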
18. Spin-Lock
A spin-lock is a semaphore implementation that uses busy waiting instead of blocking. Spin-locks are used in multiprocessor operating systems for short critical sections, where spinning briefly is cheaper than a context switch.
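A test-and-set spin-lock using C11 atomics (a sketch; names are illustrative):

```c
#include <stdatomic.h>

/* Test-and-set spin-lock: lock() busy-waits until the flag is clear.
 * Reasonable only for very short critical sections on a multiprocessor,
 * where spinning is cheaper than putting the caller to sleep. */
typedef struct {
    atomic_flag held;
} spinlock_t;

void spin_init(spinlock_t *l)
{
    atomic_flag_clear(&l->held);
}

void spin_lock(spinlock_t *l)
{
    while (atomic_flag_test_and_set(&l->held))
        ;   /* busy wait: burn CPU cycles until the lock is released */
}

void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear(&l->held);
}
```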
19. Deadlock
A deadlock occurs when two or more processes wait indefinitely for an event that can only be caused by one of the waiting processes.
20. Starvation
Starvation occurs when a process is indefinitely blocked and may never be removed from the semaphore queue in which it is suspended.
21. Monitors
Monitors provide a convenient and effective mechanism for process synchronization. At most one process can be active within a monitor at a time; a process inside a monitor can suspend itself with wait and resume another with signal on a condition variable.
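A monitor can be sketched in C as one mutex (enforcing "one process active at a time") plus condition variables for wait/signal; here a one-slot mailbox, with names invented for the example:

```c
#include <pthread.h>

/* Monitor sketch: the mutex guards all monitor state, so at most one
 * thread is ever "inside"; wait/signal use a condition variable. */
typedef struct {
    pthread_mutex_t lock;      /* monitor entry lock */
    pthread_cond_t  changed;   /* condition variable for wait/signal */
    int full;                  /* does the slot hold a value? */
    int value;
} mailbox_t;

void mb_init(mailbox_t *m)
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->changed, NULL);
    m->full = 0;
}

void mb_put(mailbox_t *m, int v)
{
    pthread_mutex_lock(&m->lock);                    /* enter monitor */
    while (m->full)
        pthread_cond_wait(&m->changed, &m->lock);    /* wait: slot busy */
    m->value = v;
    m->full = 1;
    pthread_cond_signal(&m->changed);                /* signal a waiter */
    pthread_mutex_unlock(&m->lock);                  /* leave monitor */
}

int mb_get(mailbox_t *m)
{
    pthread_mutex_lock(&m->lock);
    while (!m->full)
        pthread_cond_wait(&m->changed, &m->lock);
    int v = m->value;
    m->full = 0;
    pthread_cond_signal(&m->changed);
    pthread_mutex_unlock(&m->lock);
    return v;
}
```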
