Operating Systems: Memory Management, File Systems, and Process Control

Q) Explain Why Logging Metadata Updates Ensures Recovery of a File System After a Crash.

Logging metadata updates ensures recovery of a file system after a crash because it provides a record of all the changes that were in progress at the time of the crash. Here’s a more detailed explanation:

  1. Consistency & Atomicity

    When metadata updates are logged, the system ensures that all operations related to these updates are completed as an atomic transaction. This means either all changes are made, or none are, preventing partial updates which can lead to inconsistency.

  2. Journal Replay

    After a crash, the file system can look at the log (journal) to determine what operations were in progress. It can then replay these operations to ensure they are fully completed, or it can roll back incomplete operations to maintain a consistent state.

  3. Crash Resilience

    By logging before making actual changes, the system ensures that even if a crash occurs, the log contains enough information to either complete the interrupted updates or revert them safely. This avoids corruption & data loss.

  4. Faster Recovery

    Instead of checking the entire file system, the recovery process only needs to consult the log. This significantly speeds up the recovery time because only the logged operations since the last consistent state need to be processed.

Q) Define the Terms Dispatch Latency & Thrashing. What Are the Reasons for the Occurrence of Thrashing? Explain in Detail.

Dispatch Latency

Dispatch Latency refers to the time it takes for the operating system to stop one process and start or resume another. This latency includes the following components:

  • Context Switching Time: The time required to save the state of the old process & load the state of the new process.

  • Scheduling Time: The time taken by the scheduler to select the next process to run.

  • Process Setup Time: Additional overheads such as updating the process control blocks (PCBs) & other related data structures before the new process can run.

Thrashing

Thrashing occurs when a computer’s virtual memory subsystem is overused to the point where it spends more time swapping data between RAM & disk than executing instructions. This leads to a significant drop in system performance.

Reasons for Thrashing

Thrashing occurs due to the following reasons:

  1. Insufficient Memory: When the system does not have enough RAM to accommodate all the active processes & their working sets.

  2. High Degree of Multiprogramming: If too many processes are loaded into memory simultaneously, the system may not be able to allocate enough frames to each process.

  3. Improper Page Replacement Policies: Inefficient page replacement algorithms may lead to frequent page faults.

  4. Frequent Page Faults: Processes repeatedly access pages that are not resident in memory, so the system spends most of its time servicing faults rather than doing useful work.

  5. Lack of Locality: Programs that do not exhibit good locality of reference (i.e., accessing data or instructions that are not near each other in memory) can cause excessive page faults.

  6. Large Program Sizes: Very large programs may not fit well into available memory, leading to excessive paging.

Q) Why Can Suspending a Process From the Running State Be Dangerous? Give Proper Reasons.

When a process transitions from the running state to a suspended (or blocked) state, several issues can arise that make this dangerous, depending on the context & the system:

  1. Resource Inconsistency: If the process holds resources (e.g., files, memory, locks) when it is suspended, other processes might not be able to access these resources, leading to deadlocks or resource starvation.

  2. State Corruption: If a process is suspended in the middle of a critical operation, it could leave shared data in an inconsistent state. This could lead to corrupted data or unpredictable behavior when the process resumes.

  3. Timing Issues: Real-time or time-sensitive applications might miss deadlines if they are suspended, leading to performance degradation or failures in critical systems.

  4. Priority Inversion: In systems with priority scheduling, suspending a high-priority process could lead to lower-priority processes running instead, potentially causing important tasks to be delayed.

  5. Inter-process Communication (IPC) Delays: If a process involved in IPC is suspended, the communication can be delayed, causing other processes that depend on this communication to be blocked or waiting, which can reduce the overall system performance.

Q) Write Short Notes on the Following: (a) Memory Protection & Recovery Management (b) Structure of Page Tables with Appropriate Example (c) Counting & Binary Semaphore (d) Different Mechanisms Used to Protect File.

(a) Memory Protection & Recovery Management

Memory Protection:

Memory protection is a mechanism that prevents processes from accessing memory areas that they are not authorized to access. This is crucial for system stability & security. It ensures that a faulty process does not corrupt the memory space of another process or the operating system itself. Key techniques include:

  • Segmentation: Divides memory into different segments (code, data, stack) & uses segment descriptors to define the access rights.

  • Paging: Divides memory into fixed-size pages & uses page tables to manage access permissions.

Recovery Management:

Recovery management involves mechanisms to recover from hardware or software failures, ensuring system stability & data integrity. Techniques include:

  • Checkpointing: Periodically saving the state of a process so it can be restarted from a known state after a failure.

  • Rollback: Reverting the system or data to a previous stable state in case of failure.

  • Transaction Logs: Keeping logs of all changes so that the system can recover to a consistent state.

(b) Structure of Page Tables with Appropriate Example

Page Tables:

Page tables are data structures used in virtual memory systems to map virtual addresses to physical addresses. Each process has its own page table, which contains entries (page table entries, or PTEs) that map virtual page numbers (VPNs) to physical frame numbers (PFNs).

Example:

Consider a system with a virtual address space divided into pages of 4 KB each & a physical memory divided into frames of 4 KB each. If a process accesses virtual address 0x12345, the system uses the higher bits to find the VPN & then uses the page table to translate the VPN to a PFN.

  • Virtual Address: 0x12345

  • VPN Extraction (the top 20 bits of a 32-bit address, i.e., the bits above the 12-bit offset): 0x12

  • PTE Lookup: The VPN 0x12 maps to PFN 0x5 in the page table.

  • Physical Address: 0x5000 (PFN 0x5) + 0x345 (offset) = 0x5345

(c) Counting & Binary Semaphore

Counting Semaphore:

A counting semaphore is a synchronization mechanism used to manage access to a resource with a limited number of instances. It can take non-negative integer values, where the value indicates the number of available resources.

  • Operations:

    • Wait (P): Decreases the semaphore value if it is greater than zero, or waits if it is zero.

    • Signal (V): Increases the semaphore value, indicating a resource has been released.

Binary Semaphore:

A binary semaphore is a simpler form of semaphore that only takes the values 0 & 1. It is often used like a mutex to manage mutual exclusion, ensuring that only one process accesses a critical section at a time (strictly speaking, a mutex additionally has the notion of an owner, which a plain binary semaphore lacks).

  • Operations:

    • Wait (P): Sets the semaphore to 0 if it is currently 1; otherwise, the process waits.

    • Signal (V): Sets the semaphore to 1, allowing another process to enter the critical section.

(d) Different Mechanisms Used to Protect File

File protection mechanisms are crucial for ensuring data integrity, confidentiality, & proper access control. Common mechanisms include:

  1. Access Control Lists (ACLs): Specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.

  2. File Permissions: Typically include read, write, & execute permissions, which can be set for the owner, group, & others.

  3. Encryption: Protects files by converting them into an unreadable format, which can only be decrypted with the correct key.

  4. Backup & Recovery: Regularly backing up files ensures that data can be recovered in case of corruption or loss.

  5. User Authentication & Authorization: Ensures that only authorized users can access certain files, usually through passwords or biometric verification.

Q) Why is Disk Scheduling Necessary?

Disk scheduling is necessary to optimize the performance & efficiency of the disk in a computer system. Here are some key reasons why it’s important:

  1. Minimize Seek Time: The time it takes for the read/write head to move to the correct track is known as seek time. Disk scheduling algorithms aim to reduce the total seek time by deciding the order in which read/write requests are serviced.

  2. Improve Throughput: By optimizing the order of disk access requests, disk scheduling can increase the number of requests that are serviced in a given time period, thereby improving the overall throughput of the system.

  3. Reduce Latency: Disk scheduling helps minimize the latency or delay experienced by processes waiting for I/O operations to complete.

  4. Fairness: Disk scheduling algorithms can ensure that all processes get a fair share of disk access time, preventing any single process from monopolizing the disk.

  5. Efficiency: By organizing data accesses more efficiently, disk scheduling can reduce the energy consumption & wear & tear on the disk hardware, which can extend the lifespan of the disk.

Q) What is the Purpose of Fork System Call?

The ‘fork()’ system call is used in Unix-like operating systems to create a new process. The new process created by ‘fork()’ is called the child process. The child process is a duplicate of the current (parent) process, except for a few details like different process IDs. A single ‘fork()’ call returns in two processes: in the parent process, it returns the child’s PID, & in the child process, it returns 0. On failure, it returns -1 & no child is created.

Q) What is Thrashing? How Can it be Controlled?

Thrashing is a condition in computing where a system spends more time swapping pages in & out of memory than executing actual processes. This occurs when there is insufficient physical memory (RAM), causing excessive paging or swapping to & from disk, which significantly degrades system performance.

Causes of Thrashing:

  1. High degree of multiprogramming: Running too many processes simultaneously can lead to overuse of available memory.

  2. Insufficient RAM: Limited physical memory can force frequent swapping.

  3. Large processes: Processes that require more memory than available can cause constant paging.

  4. Poor locality of reference: Processes that frequently access a wide range of memory locations can increase page faults.

Control Measures for Thrashing:

  1. Increase RAM: Adding more physical memory can reduce the need for swapping.

  2. Adjusting the degree of multiprogramming: Reducing the number of concurrently running processes can help alleviate memory pressure.

  3. Improving process scheduling: Prioritizing processes with better locality of reference or those that require fewer resources can minimize thrashing.

  4. Using better page replacement algorithms: Algorithms like Least Recently Used (LRU) can help keep frequently used pages in memory.

  5. Load Control: Implementing load control mechanisms that temporarily reduce the load when thrashing is detected can stabilize the system.