Synchronization Problems in Concurrent Systems

Synchronization Problems

1. What is the difference between the Reader-Writer problem and the Bounded Buffer problem?

The Reader-Writer Problem and the Bounded Buffer Problem are both synchronization problems, but they differ in the following ways:

  • Problem Focus:
    • Reader-Writer Problem: Deals with multiple readers accessing a shared resource concurrently while ensuring that writers get exclusive access to modify the resource.
    • Bounded Buffer Problem: Involves managing a fixed-size buffer where producers add data and consumers remove data, ensuring the buffer doesn’t overflow or underflow.
  • Type of Shared Resource:
    • Reader-Writer Problem: A shared resource that can be read concurrently but written exclusively.
    • Bounded Buffer Problem: A buffer or queue with limited capacity where data is produced and consumed.
  • Synchronization Requirement:
    • Reader-Writer Problem: Allows multiple readers to access the resource simultaneously but ensures that writers have exclusive access.
    • Bounded Buffer Problem: Ensures that producers wait if the buffer is full and consumers wait if the buffer is empty, preventing race conditions.
  • Use Cases:
    • Reader-Writer Problem: Used in scenarios like file systems or databases where multiple processes may need to read data concurrently.
    • Bounded Buffer Problem: Used in message queues or task scheduling where producers generate tasks and consumers process them from a finite buffer.
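To make the contrast concrete, the Bounded Buffer side of this comparison can be sketched with condition variables: producers block when the buffer is full, consumers block when it is empty. This is a minimal illustrative sketch (the `BoundedBuffer` class and the single producer/consumer pair are assumptions for the example, not a standard API):

```python
import threading

class BoundedBuffer:
    """Fixed-size buffer: producers block when full, consumers block when empty."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        lock = threading.Lock()
        self.not_full = threading.Condition(lock)   # signalled when space frees up
        self.not_empty = threading.Condition(lock)  # signalled when an item arrives

    def put(self, item):
        with self.not_full:
            while len(self.items) == self.capacity:  # producer waits if buffer is full
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            while not self.items:                    # consumer waits if buffer is empty
                self.not_empty.wait()
            item = self.items.pop(0)                 # FIFO removal
            self.not_full.notify()
            return item

buf = BoundedBuffer(capacity=3)
results = []
consumer = threading.Thread(target=lambda: results.extend(buf.get() for _ in range(5)))
consumer.start()
for i in range(5):       # producer adds more items than the buffer can hold at once
    buf.put(i)
consumer.join()
print(results)           # items arrive in production order: [0, 1, 2, 3, 4]
```

Both conditions share one lock, so `put` and `get` never run concurrently on the same buffer, while the two separate conditions let the buffer wake exactly the side (producer or consumer) that can make progress.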

2. What is the Dining Philosophers problem?

The Dining Philosophers Problem is a classic synchronization problem where five philosophers sit at a round table, each thinking and occasionally eating. To eat, a philosopher needs two forks, one to their left and one to their right. However, the forks are shared between neighboring philosophers.

The main challenges are:

  • Deadlock: Philosophers may get stuck waiting for forks, preventing anyone from eating.
  • Starvation: Some philosophers may never get a chance to eat if others keep taking the forks.

The problem requires designing a solution that ensures philosophers can eat without causing deadlock or starvation, typically using synchronization mechanisms like semaphores or mutexes to control access to the shared forks.

Basic Setup:

  • Each philosopher has:
    • A left fork and a right fork.
  • The philosopher must pick up both forks before eating and then put them down after eating.
  • Philosophers alternate between thinking and eating.

Challenges:

  • Deadlock: This happens when all philosophers pick up their left fork at the same time and are waiting for the right fork, resulting in a cycle where no philosopher can proceed.
  • Starvation: This happens when a philosopher is perpetually denied the opportunity to eat because other philosophers keep grabbing the forks first.
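One standard way to avoid the deadlock described above is to break the circular wait: number the forks and have every philosopher pick up the lower-numbered fork first. A minimal Python sketch of that solution (the round count and `meals` tally are illustrative):

```python
import threading

NUM = 5
forks = [threading.Lock() for _ in range(NUM)]  # one shared fork between neighbours
meals = [0] * NUM

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % NUM
    # Deadlock prevention: always acquire the lower-numbered fork first,
    # so no cycle of "holding left, waiting for right" can ever form.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # "eating" with both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(NUM)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher finishes all 10 rounds
```

Note that philosopher 4 is the only one who reaches for the right fork (fork 0) first; that single asymmetry is enough to make a full cycle of waiting impossible.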

Deadlock

1. Define deadlock. What are the four conditions necessary for deadlock to exist? Explain four methods of deadlock prevention. State the time complexity of deadlock avoidance and deadlock recovery.

Deadlock is a situation in a concurrent system where a set of processes is blocked because each process is holding a resource and waiting for another resource held by another process, creating a circular wait. As a result, none of the processes can proceed, and the system enters a state of indefinite blocking.

Four Conditions Necessary for Deadlock to Exist:

  1. Mutual Exclusion: At least one resource must be held in a non-shareable mode, meaning that only one process can access a resource at any given time. If another process requests the same resource, it must wait until the resource becomes available.
  2. Hold and Wait: A process that is holding at least one resource is waiting for additional resources that are currently being held by other processes.
  3. No Preemption: Resources cannot be forcibly taken away from a process holding them. They must be released voluntarily by the process after completing its task.
  4. Circular Wait: A set of processes must exist such that each process in the set is waiting for a resource held by the next process in the set, creating a circular chain of waiting processes.

Four Methods of Deadlock Prevention:

  1. Eliminate Mutual Exclusion: If possible, resources should be made shareable. For example, read-only resources can be shared among multiple processes. However, not all resources (e.g., printers, disk drives) can be shared, so this method is not always feasible.
  2. Eliminate Hold and Wait: Processes must request all the resources they need at once, before starting execution. This prevents a process from holding one resource while waiting for others. However, this could lead to resource inefficiency if processes request more resources than needed.
  3. Eliminate No Preemption: If a process is holding some resources and requests additional resources that cannot be immediately allocated, the system can preempt resources from the process and allocate them to other processes. This requires that resources can be safely taken away from processes.
  4. Eliminate Circular Wait: A strict ordering of resources can be enforced, such that processes request resources in a predefined order (e.g., always request resources in ascending numerical order). This prevents circular waiting because no cycle can form.
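Method 2 above (eliminating hold and wait) can be sketched as an all-or-nothing acquisition: a process either obtains every lock it needs or releases what it holds and retries, so it never waits while holding a resource. This is an illustrative sketch only; the busy retry loop wastes CPU under contention and a real system would add backoff:

```python
import threading

def acquire_all(locks):
    """Eliminate hold-and-wait: take every lock or none.
    Try each lock without blocking; on any failure, release what we
    hold and start over, so we never wait while holding a resource."""
    while True:
        held = []
        for lock in locks:
            if lock.acquire(blocking=False):
                held.append(lock)
            else:
                for h in held:   # back off: release everything and retry
                    h.release()
                break
        else:
            return               # all locks acquired

a, b = threading.Lock(), threading.Lock()
counter = [0]

def worker():
    for _ in range(1000):
        acquire_all([a, b])
        counter[0] += 1          # critical section needing both resources
        b.release()
        a.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])  # 4 threads x 1000 increments, no lost updates
```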

Time Complexity of Deadlock Avoidance and Recovery:

  1. Deadlock Avoidance:
  • Time Complexity: Approximately O(m · n²) per allocation request, where n is the number of processes and m is the number of resource types. In deadlock avoidance (e.g., the Banker’s Algorithm), the system must verify that granting a request leaves the system in a safe state. The safety check repeatedly scans the processes, comparing each process’s remaining need against the currently available resources.
  2. Deadlock Recovery:
  • Time Complexity: Depends on the recovery method used:
    • Process Termination: If the deadlock is resolved by terminating processes, the complexity is O(n), where n is the number of processes, since the system must identify which processes to terminate.
    • Resource Preemption: If resources are preempted and reassigned to break the deadlock, the complexity can be O(n · m), where n is the number of processes and m is the number of resources involved in the deadlock.
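The safety check at the heart of the Banker’s Algorithm can be sketched as follows. The matrices below are a classic textbook state with 5 processes and 3 resource types (the specific numbers are illustrative):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: can all processes finish in some order?
    Repeatedly finds a process whose remaining need fits in 'work' and
    reclaims its allocation. Runs in O(m * n^2) for n processes, m types."""
    n, m = len(allocation), len(available)
    work = available[:]          # resources currently free
    finished = [False] * n
    order = []                   # a safe sequence, if one exists
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):              # process i can run to completion;
                    work[j] += allocation[i][j]  # reclaim everything it holds
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

# Example state: 5 processes, 3 resource types
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, order = is_safe(available, allocation, need)
print(safe, order)  # a safe sequence exists for this state
```

The nested scan (an outer loop that may visit all n processes, an inner check over all n processes, each comparing m resource types) is what gives the avoidance check its per-request cost.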

Scheduling

1. Illustrate the difference between preemptive and non-preemptive scheduling. Explain in detail how long-term, medium-term, and short-term scheduling take place with the help of a neat diagram.

Preemptive Scheduling

  • CPU can be taken away from a process before completion.
  • Generally more fair as it allows other processes to run.
  • Better response time for interactive tasks.
  • More frequent context switching due to preemption.
  • Examples: Round Robin, preemptive Shortest Job First (also called Shortest Remaining Time First).

Non-Preemptive Scheduling

  • CPU cannot be taken away until the process finishes.
  • Can lead to unfairness if a long process runs first.
  • Poorer response time if a process monopolizes the CPU.
  • Less frequent context switching.
  • Examples: First-Come, First-Served (FCFS), SJF without preemption.
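The fairness difference between the two styles can be made concrete by computing waiting times for the same workload under non-preemptive FCFS and preemptive Round Robin. A small sketch (all processes are assumed to arrive at time 0, and the quantum of 4 is illustrative):

```python
def fcfs_waiting(bursts):
    """Non-preemptive FCFS: each process waits for all earlier ones to finish.
    Processes run in list order, all arriving at time 0."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

def round_robin_waiting(bursts, quantum):
    """Preemptive Round Robin: waiting time = completion time - burst time."""
    remaining = bursts[:]
    completion = [0] * len(bursts)
    t = 0
    queue = list(range(len(bursts)))   # ready queue of process indices
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)            # preempted: back to the end of the queue
        else:
            completion[i] = t
    return [completion[i] - bursts[i] for i in range(len(bursts))]

bursts = [24, 3, 3]                    # one long job arrives first
print(fcfs_waiting(bursts))             # [0, 24, 27] -> the short jobs wait behind it
print(round_robin_waiting(bursts, 4))   # [6, 4, 7]   -> short jobs finish quickly
```

With the long job first, FCFS averages 17 time units of waiting, while Round Robin averages about 5.7, which illustrates the response-time advantage of preemption for short, interactive tasks.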

Scheduling Levels: Long-Term, Medium-Term, and Short-Term Scheduling

In multi-level scheduling, processes are moved between different queues based on their state or priority. Each level of scheduling serves a different purpose, and together they determine how processes are executed in a system.

  1. Long-Term Scheduling (Job Scheduling):
  • Purpose: Decides which processes are admitted into the system for execution.
  • Frequency: Infrequent.
  • Process Flow: It controls the degree of multiprogramming in the system, i.e., how many processes should be in memory. It loads processes into the system from the disk into the ready queue (main memory).
  • Example: If a system has a limited number of resources, the long-term scheduler may admit processes based on their priority or resource requirements.
  2. Medium-Term Scheduling (Swapping Scheduling):
  • Purpose: Decides which processes should be swapped in and out of memory (e.g., paging or swapping).
  • Frequency: Occasional, depending on memory availability.
  • Process Flow: It temporarily removes processes from memory (swaps them out) and later reintroduces them into memory (swaps them in) when there is space or when the process needs to continue.
  • Example: When the system is running low on memory, it may swap a process out to the disk and later bring it back into memory when needed.
  3. Short-Term Scheduling (CPU Scheduling):
  • Purpose: Decides which process in the ready queue should be allocated the CPU next.
  • Frequency: Very frequent (milliseconds to microseconds).
  • Process Flow: It directly controls the execution of processes on the CPU. The short-term scheduler selects the next process to run based on a scheduling algorithm (e.g., Round Robin, FCFS, SJF).
  • Example: When multiple processes are in the ready queue, the short-term scheduler selects the next process to execute based on its scheduling policy.