Operating Systems Concepts: Processes, Threads, and Synchronization
Part I – Multiple Choice Questions
1. Bootstrap Program
Which of the following describes the function of a bootstrap program?
A) Is loaded at power-up or reboot
B) Initializes all aspects of the system
C) Loads operating system kernel and starts execution
D) All of the above
2. Preventing Infinite Loops
What can be used to prevent a user program from monopolizing system resources due to an infinite loop and never returning control to the operating system?
A) Portal
B) Program Counter
C) Firewall
D) Timer
3. Symmetric Multi-Processors (SMP) System
In a Symmetric Multi-processors (SMP) system:
A) Each processor is assigned a specific task.
B) There is a boss-worker relationship between the processors.
C) Each processor performs all tasks within the operating system.
D) None of the above
4. Clustered System
A clustered system:
A) Gathers together multiple CPUs to accomplish computational work.
B) Is an operating system that provides file sharing across a network.
C) Is used when rigid time requirements are present.
D) Can only operate one application at a time.
Part II – True/False Questions
5. A process is a passive entity. (F)
6. Each device controller has a local buffer, and the CPU moves data between main memory and these local buffers. (T)
7. System calls can be run in either user mode or kernel mode. (F)
8. The OS indexes into the I/O device table to determine device status and modifies the table entry to record the interrupt. (T)
9. Interrupts may be triggered by either hardware or software. (T)
10. With DMA, only one interrupt is generated per block, rather than one interrupt per byte. (T)
Part III – Short Answer Question
11. Function of Interrupts and Interrupt Handling
An interrupt is a hardware-generated asynchronous event that causes the CPU to suspend its current execution flow and respond to the event.
An interrupt transfers control to the appropriate interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.
Interrupt handlers are configurable routines (functions) that are executed in response to a particular interrupt event.
There are many types of interrupts, ranging from I/O completion to keyboard presses to timers. To quickly service these interrupts, systems often have an interrupt table, with each kind of interrupt corresponding to a particular index in this table. Each entry of the table contains the memory address of a particular interrupt handler to execute.
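The dispatch mechanism described above can be sketched as follows. This is an illustrative model, not real hardware: the interrupt vector is represented as a Python dict mapping interrupt numbers to handler functions, and all names (`timer_handler`, `dispatch`, and so on) are invented for the example.

```python
# Illustrative sketch of an interrupt vector: each entry maps an interrupt
# number to the handler routine that services it. Names are hypothetical.

def timer_handler():
    return "timer tick serviced"

def keyboard_handler():
    return "key press serviced"

# The "interrupt vector": one entry per kind of interrupt.
interrupt_vector = {
    0: timer_handler,
    1: keyboard_handler,
}

def dispatch(interrupt_number):
    """Transfer control to the service routine for this interrupt."""
    handler = interrupt_vector[interrupt_number]
    return handler()

print(dispatch(0))  # timer tick serviced
```

A real system performs this lookup in hardware, using the interrupt number as an index into a table of handler addresses; the dict lookup plays that role here.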
Part I – Multiple Choice Questions (Continued)
1. Operating Modes
The two separate modes of operation in a system are:
A) Supervisor mode and system mode
B) Kernel mode and privileged mode
C) Physical mode and logical mode
D) User mode and kernel mode
2. Message-Passing Model
A message-passing model is:
A) Easier to implement than a shared memory model for intercomputer communication.
B) Faster than the shared memory model.
C) A network protocol and does not apply to operating systems.
D) Only useful for small, simple operating systems.
3. Microkernels and Communication
Microkernels use _____ for communication.
A) Message passing
B) Shared memory
C) System calls
D) Virtualization
4. Advantages of Microkernel Approach
Which of the following is true about the advantage(s) of using a microkernel approach?
A) Easier to extend a microkernel
B) Easier to port the operating system to new architectures
C) More reliable
D) More secure
E) All of the above
Part II – True/False Questions (Continued)
5. The two modes of operation in a system are distinguished by a mode bit: 0 for user and 1 for kernel. (F)
6. Application programmers typically use an API rather than directly invoking system calls. (T)
7. The shared-memory model of process communication involves the kernel. (F)
8. The three general methods used to pass parameters to the operating system during system calls are: a) passing parameters in registers, b) storing the parameters in a block or table in memory and passing the address of the block in a register, and c) pushing parameters onto the stack and having the operating system pop them off. (T)
9. Direct Memory Access (DMA) is considered an efficient mechanism for performing I/O because it removes the CPU from being responsible for transferring data. (T)
Part III – Short Answer Question (Continued)
10. Why a Modular Kernel May Be the Best Design
The modular approach combines the benefits of both the layered and microkernel designs. In a modular design, the core kernel provides only essential services and a well-defined interface for communication between modules. If more functionality is needed, modules can be dynamically loaded into the kernel at run time. This gives the protection of well-defined interfaces (as in layered systems) together with greater flexibility, since any module can call any other module directly.
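As a toy analogy of this design, the sketch below models a core kernel that knows only how to register and invoke modules, with functionality added at run time (much like `insmod` adds a loadable module on Linux). The class and module names are invented for illustration.

```python
# Toy analogy of a modular kernel: the core only registers modules and
# dispatches calls to them through one well-defined interface.

class Kernel:
    def __init__(self):
        self.modules = {}          # loaded modules, keyed by name

    def load_module(self, name, module):
        """Dynamically add functionality at run time."""
        self.modules[name] = module

    def call(self, name, *args):
        """The core's only job: dispatch a request to a loaded module."""
        return self.modules[name](*args)

kernel = Kernel()
kernel.load_module("fs_read", lambda path: f"reading {path}")
print(kernel.call("fs_read", "/etc/hosts"))  # reading /etc/hosts
```

The point of the analogy is that the core never needs to change when new functionality is added; only the set of loaded modules grows.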
Part I – Multiple Choice Questions (Continued)
1. Process Control Block (PCB)
A Process Control Block (PCB):
A) Includes information on the process’s state.
B) Stores the address of the next instruction to be processed by a different process.
C) Determines which process is to be executed next.
D) Is an example of a process queue.
2. Degree of Multiprogramming
The _____ refers to the number of processes in memory.
A) Process count
B) Long-term scheduler
C) Degree of multiprogramming
D) CPU scheduler
3. Shared Memory vs. Message Passing
Which of the following statements is true?
A) Shared memory is typically faster than message passing.
B) Message passing is typically faster than shared memory.
C) Message passing is most useful for exchanging large amounts of data.
D) Shared memory is far more common in operating systems than message passing.
4. Non-Blocking Send and Receive
A non-blocking send() combined with a non-blocking receive() is known as a(n):
A) Synchronized message
B) Rendezvous
C) Blocked message
D) Asynchronous message
5. Zombie Process
A process that has terminated, but whose parent has not yet called wait(), is known as a _____ process.
A) Zombie
B) Orphan
C) Terminated
D) Init
Part II – True/False Questions (Continued)
6. The long-term scheduler selects a process that is ready to execute and allocates the CPU to it. (F)
7. Cascading termination refers to terminating all child processes by the operating system after the parent process has terminated. (T)
8. Sockets are identified by an IP address concatenated with a port number and use a client-server architecture. (T)
9. A mailbox is used in direct interprocess communication. (F)
10. In a Remote Procedure Call (RPC), a separate stub exists for each separate remote procedure. (T)
Part III – Short Answer Questions (Continued)
11. Context Switch
Switching the CPU from one process to another requires saving the state of the current process and restoring the saved state of a different process. This is called a context switch. Context-switch time is pure overhead, because the system does no useful work while switching.
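The save-and-restore step can be sketched with a simplified Process Control Block. This is a conceptual model only: the PCB fields and the `cpu` dict are invented stand-ins for hardware registers.

```python
# Sketch of a context switch: save the running process's CPU state into its
# PCB, then restore another process's previously saved state.

class PCB:
    """Simplified Process Control Block holding saved CPU state."""
    def __init__(self, pid):
        self.pid = pid
        self.program_counter = 0
        self.registers = {}

def context_switch(cpu, old, new):
    # Save the state of the currently running process into its PCB...
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    # ...then restore the state of the process being switched in.
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)

cpu = {"pc": 42, "regs": {"r0": 7}}   # p1 is currently running
p1, p2 = PCB(1), PCB(2)
p2.program_counter = 100              # p2 was previously suspended here
context_switch(cpu, p1, p2)
print(cpu["pc"])  # 100
```

Note that nothing useful to either process happens inside `context_switch` itself, which is why the text calls this time pure overhead.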
12. Short-Term vs. Long-Term Scheduling
- Short-term schedulers select which process, among those loaded in main memory, will execute on the CPU.
- Long-term schedulers move jobs from a list of processes in secondary storage into main memory. They are most prevalent in batch systems.
- Medium-term schedulers swap processes into and out of secondary storage to create a mix of processes that can more efficiently use I/O and CPU resources. They are most prevalent in time-sharing systems.
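The relationship between the long-term and short-term schedulers can be sketched as two queues. This is a toy model: the queue contents, function names, and the `degree_of_multiprogramming` parameter are illustrative assumptions, not a real scheduler.

```python
from collections import deque

# Toy model of the scheduler hierarchy: the long-term scheduler admits jobs
# from secondary storage into memory (the ready queue); the short-term
# scheduler picks the next ready process to dispatch to the CPU.

job_queue = deque(["job1", "job2", "job3"])   # on disk, awaiting admission
ready_queue = deque()                          # in main memory

def long_term_schedule(degree_of_multiprogramming):
    # Admit jobs until the desired number of processes are in memory.
    while job_queue and len(ready_queue) < degree_of_multiprogramming:
        ready_queue.append(job_queue.popleft())

def short_term_schedule():
    # Dispatch the next ready process to the CPU.
    return ready_queue.popleft() if ready_queue else None

long_term_schedule(degree_of_multiprogramming=2)
print(short_term_schedule())  # job1
```

The long-term scheduler runs rarely and controls the degree of multiprogramming; the short-term scheduler runs very frequently, which is why it must be fast.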
Part I – Multiple Choice Questions (Continued)
1. Thread Pools
What uses an existing thread, rather than creating a new one, to complete a task?
A) Lightweight process
B) Thread pool
C) Scheduler activation
D) Asynchronous procedure call
2. Thread Library
What provides an API for creating and managing threads?
A) Set of system calls
B) Multicore system
C) Thread library
D) Multithreading model
3. Many-to-Many Model
Which multithreading model multiplexes many user-level threads to a smaller or equal number of kernel threads?
A) Many-to-many model
B) Many-to-one model
C) One-to-one model
D) Many-to-some model
4. Upcalls
In multithreaded programs, how does the kernel inform an application about certain events concerning communication between the kernel and the user-thread library?
A) Signal
B) Upcall
C) Event handler
D) Pool
5. Signal Handling in Multithreaded Programs
Which of the following is an acceptable signal handling scheme for a multithreaded program?
A) Deliver the signal to the thread to which the signal applies.
B) Deliver the signal to every thread in the process.
C) Deliver the signal to only certain threads in the process.
D) All of the above
Part II – True/False Questions (Continued)
6. A thread is a unit of CPU utilization. (T)
7. Each thread has its own register set, stack, and data section. (F)
8. Process creation is heavyweight, while thread creation is lightweight. (T)
9. Task parallelism involves distributing data across multiple computing cores. (F)
10. User thread management is done by the user-level threads library. (T)
Part III – Short Answer Question (Continued)
11. Benefits of Multithreaded Programming
The four major categories of benefits of multithreaded programming are:
- Responsiveness: A program can continue running even if part of it is blocked, which is especially important for user interfaces.
- Resource Sharing: Threads share the memory and resources of their process, which is simpler than explicit shared memory or message passing between processes.
- Economy: Creating a thread is cheaper than creating a process, and switching between threads has lower overhead than a full context switch between processes.
- Scalability: Threads can run in parallel on the cores of a multiprocessor architecture.
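The resource-sharing benefit is easy to demonstrate: threads of one process share its address space, so the workers below all update the same list with no IPC mechanism at all. The variable and function names are chosen for the example.

```python
import threading

# Demo of resource sharing among threads: all workers append to the same
# shared list because they live in one process's address space.

results = []                 # data shared by every thread in the process
lock = threading.Lock()

def worker(n):
    with lock:               # synchronize access to the shared resource
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```

Achieving the same thing between separate processes would require setting up a shared-memory segment or exchanging messages, exactly the contrast the Resource Sharing bullet draws.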
Part I – Multiple Choice Questions (Continued)
1. Race Condition
A race condition:
A) Results when several threads try to access the same data concurrently.
B) Results when several threads try to access and modify the same data concurrently.
C) Will result only if the outcome of execution does not depend on the order in which instructions are executed.
D) None of the above
2. Peterson’s Solution
In Peterson’s solution, which variable indicates if a process is ready to enter its critical section?
A) turn
B) lock
C) flag[i]
D) turn[i]
3. Preventing Busy Waiting
What can be used to prevent busy waiting when implementing a semaphore?
A) Spinlocks
B) Waiting queues
C) Mutex lock
D) Allowing the wait() operation to succeed
4. Semaphore
A semaphore:
A) Is essentially an integer variable.
B) Is accessed through only one standard operation.
C) Can be modified simultaneously by multiple threads.
D) Cannot be used to control access to a thread’s critical sections.
Part II – True/False Questions (Continued)
5. The purpose of process synchronization is to prevent data inconsistency. (T)
6. Race conditions are prevented by requiring that critical regions be protected by locks. (T)
7. Three philosophers may eat simultaneously in the Dining Philosophers problem. (F)
8. Busy waiting is associated with mutex locks. (T)
9. A nonpreemptive kernel is safe from race conditions on kernel data structures. (T)
Part III – Short Answer Questions (Continued)
10. Three Conditions for Solving the Critical Section Problem
The three conditions that must be satisfied to solve the critical section problem are:
- Mutual Exclusion: No two processes can be in their critical sections at the same time.
- Progress: If no process is in its critical section, and some processes want to enter their critical sections, only the processes not in their remainder sections can participate in deciding which will enter its critical section next. No process should be postponed indefinitely.
- Bounded Waiting: There exists a limit on the number of times other processes can enter their critical sections after a process has made a request to enter its critical section, and before that request is granted.
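A minimal sketch of mutual exclusion, the first condition above: a lock guarantees that at most one thread executes the critical section (the counter update) at a time, so no increments are lost. The names and iteration counts are arbitrary choices for the demo.

```python
import threading

# Demo of mutual exclusion: the lock ensures only one thread at a time is
# inside the critical section, so the final count is exact.

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:             # entry section: acquire the lock
            counter += 1       # critical section
        # exit section: lock released automatically by the with-block

t1 = threading.Thread(target=increment, args=(100_000,))
t2 = threading.Thread(target=increment, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 200000
```

Without the lock, the read-modify-write on `counter` could interleave between threads, producing a race condition and a total below 200000.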
11. Dining Philosophers Problem and Operating Systems
The Dining Philosophers problem involves five philosophers sitting at a round table with five chopsticks and a bowl of food. Each philosopher needs two chopsticks to eat. The problem is to design a protocol to prevent deadlock and starvation, ensuring that each philosopher can eventually eat. This problem is analogous to resource allocation in operating systems, where processes (philosophers) need resources (chopsticks) to complete their tasks. The solution involves techniques like resource ordering and avoiding circular dependencies to prevent deadlocks and ensure fair resource allocation.
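The resource-ordering technique mentioned above can be sketched directly: if every philosopher always acquires the lower-numbered chopstick first, no circular wait can form, so deadlock is impossible. This is one of several textbook solutions; the thread counts and names below are illustrative.

```python
import threading

# Dining Philosophers with resource ordering: always lock the lower-numbered
# chopstick first, which breaks the circular-wait condition for deadlock.

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # global lock order
    for _ in range(rounds):
        with chopsticks[first]:          # pick up lower-numbered chopstick
            with chopsticks[second]:     # then the higher-numbered one
                meals[i] += 1            # eating

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # [10, 10, 10, 10, 10]
```

Every philosopher finishes all ten meals, showing the protocol avoids deadlock; the same lock-ordering discipline is a standard way to prevent deadlock on kernel resources.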