Concurrency and Memory Management in Operating Systems

Producer-Consumer Problem (Condition Variable)

#include <pthread.h>

#define MAXCOUNT 10   /* buffer capacity (example value) */

pthread_mutex_t countmutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_empty_cv = PTHREAD_COND_INITIALIZER;
pthread_cond_t not_full_cv = PTHREAD_COND_INITIALIZER;
static volatile int count = 0;   /* number of items currently in the buffer */

void *producer(void *arg) {
    for (;;) {
        pthread_mutex_lock(&countmutex);
        while (count == MAXCOUNT) {                      /* buffer full: wait */
            pthread_cond_wait(&not_full_cv, &countmutex);
        }
        count = count + 1;
        // Create Item
        pthread_cond_signal(&not_empty_cv);              /* wake a waiting consumer */
        pthread_mutex_unlock(&countmutex);
    }
    return NULL;
}

void *consumer(void *arg) {
    for (;;) {
        pthread_mutex_lock(&countmutex);
        while (count == 0) {                             /* buffer empty: wait */
            pthread_cond_wait(&not_empty_cv, &countmutex);
        }
        count = count - 1;
        // Consume Item
        pthread_cond_signal(&not_full_cv);               /* wake a waiting producer */
        pthread_mutex_unlock(&countmutex);
    }
    return NULL;
}
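
A minimal driver sketch for the code above; it just starts one producer and one consumer thread (the loops above run forever, so the joins never return and shutdown handling is omitted):

int main(void) {
    pthread_t prod, cons;
    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);
    pthread_join(prod, NULL);   /* never returns: producer/consumer loop forever */
    pthread_join(cons, NULL);
    return 0;
}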

Producer-Consumer Problem (Semaphore)

// Semaphore initial values: mutex = 1, full = 0, empty = N (buffer size)

Producer:
while (true) {
    // produce an item
    wait(empty);          // wait for a free buffer slot
    wait(mutex);
    // add item to buffer
    signal(mutex);
    signal(full);         // one more filled slot
}

Consumer:
while (true) {
    wait(full);           // wait for a filled slot
    wait(mutex);
    // remove an item
    signal(mutex);
    signal(empty);        // one more free slot
    // consume the item
}
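
A compilable sketch of the same scheme with POSIX semaphores; BUFSIZE and the integer buffer are assumptions for illustration (build with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFSIZE 10

static int buffer[BUFSIZE];
static int in = 0, out = 0;

static sem_t empty_slots;   /* free slots, initialized to BUFSIZE */
static sem_t full_slots;    /* filled slots, initialized to 0 */
static sem_t mutex;         /* binary semaphore guarding the buffer, initialized to 1 */

void *producer(void *arg) {
    for (int item = 0; ; item++) {
        sem_wait(&empty_slots);            /* wait(empty) */
        sem_wait(&mutex);                  /* wait(mutex) */
        buffer[in] = item;                 /* add item to buffer */
        in = (in + 1) % BUFSIZE;
        sem_post(&mutex);                  /* signal(mutex) */
        sem_post(&full_slots);             /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (;;) {
        sem_wait(&full_slots);             /* wait(full) */
        sem_wait(&mutex);                  /* wait(mutex) */
        int item = buffer[out];            /* remove an item */
        out = (out + 1) % BUFSIZE;
        sem_post(&mutex);                  /* signal(mutex) */
        sem_post(&empty_slots);            /* signal(empty) */
        printf("consumed %d\n", item);     /* consume */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFSIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);                 /* loops run forever in this sketch */
    pthread_join(c, NULL);
    return 0;
}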

Dining Philosophers Problem (Semaphore)

semaphore chopstick[5];   // each initialized to 1

Philosopher i:

do {
    wait(chopstick[i]);            // pick up left chopstick
    wait(chopstick[(i + 1) % 5]);  // pick up right chopstick
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (1);
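
A minimal POSIX-semaphore sketch of philosopher i (each chopstick semaphore initialized to 1). Note that this naive version can deadlock if every philosopher picks up their left chopstick at the same time:

#include <pthread.h>
#include <semaphore.h>

#define N 5

static sem_t chopstick[N];   /* initialize each to 1 with sem_init(&chopstick[i], 0, 1) */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (;;) {
        sem_wait(&chopstick[i]);             /* wait(chopstick[i]) */
        sem_wait(&chopstick[(i + 1) % N]);   /* wait(chopstick[(i+1) % 5]) */
        /* eat */
        sem_post(&chopstick[i]);             /* signal(chopstick[i]) */
        sem_post(&chopstick[(i + 1) % N]);   /* signal(chopstick[(i+1) % 5]) */
        /* think */
    }
    return NULL;
}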

Dining Philosophers Problem (Monitor)

monitor DP {
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();   // block until a neighbor signals
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test both neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();   // no effect if nobody is blocked on self[i]
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
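
A hedged sketch of how the monitor above maps onto pthreads: one mutex plays the role of the monitor lock and one condition variable per philosopher plays the role of self[i] (function and variable names here are assumptions for illustration):

#include <pthread.h>

enum pstate { THINKING, HUNGRY, EATING };

static enum pstate state[5];                 /* zero-initialized, i.e. THINKING */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t self[5] = {
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER
};

static void test(int i) {                    /* caller must hold monitor_lock */
    if (state[(i + 4) % 5] != EATING && state[i] == HUNGRY &&
        state[(i + 1) % 5] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);       /* no effect if nobody is waiting */
    }
}

void pickup(int i) {
    pthread_mutex_lock(&monitor_lock);       /* enter the monitor */
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)
        pthread_cond_wait(&self[i], &monitor_lock);   /* self[i].wait() */
    pthread_mutex_unlock(&monitor_lock);     /* leave the monitor */
}

void putdown(int i) {
    pthread_mutex_lock(&monitor_lock);
    state[i] = THINKING;
    test((i + 4) % 5);                       /* left neighbor */
    test((i + 1) % 5);                       /* right neighbor */
    pthread_mutex_unlock(&monitor_lock);
}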

Reader-Writer Problem (Mutex)

Same structure as the semaphore version below, but each wait() becomes a mutex LOCK and each signal() becomes a mutex UNLOCK.

Reader-Writer Problem (Semaphore)

// Semaphore initial values: wrt = 1, mutex = 1; readcount = 0

Writer:
while (true) {
    wait(wrt);
    // write
    signal(wrt);
}

Reader:
while (true) {
    wait(mutex);
    readcount++;
    if (readcount == 1) wait(wrt);    // first reader locks out writers
    signal(mutex);
    // read
    wait(mutex);
    readcount--;
    if (readcount == 0) signal(wrt);  // last reader lets writers back in
    signal(mutex);
}
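
A hedged C sketch of the semaphore solution above (readers-preference); wrt and mutex are POSIX semaphores initialized to 1, and readcount starts at 0:

#include <pthread.h>
#include <semaphore.h>

static sem_t wrt;            /* init to 1: held by a writer, or by the group of readers */
static sem_t mutex;          /* init to 1: protects readcount */
static int readcount = 0;

void *writer(void *arg) {
    for (;;) {
        sem_wait(&wrt);
        /* write */
        sem_post(&wrt);
    }
    return NULL;
}

void *reader(void *arg) {
    for (;;) {
        sem_wait(&mutex);
        readcount++;
        if (readcount == 1)
            sem_wait(&wrt);          /* first reader locks out writers */
        sem_post(&mutex);
        /* read */
        sem_wait(&mutex);
        readcount--;
        if (readcount == 0)
            sem_post(&wrt);          /* last reader lets writers back in */
        sem_post(&mutex);
    }
    return NULL;
}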


Effective Memory Access Time (EMAT)

EMAT = h(c+m) + (1-h)(c+2m)

  • h: hit ratio of TLB
  • c: TLB access time
  • m: memory access time
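
Worked example with assumed values h = 0.9, c = 20 ns, m = 100 ns:

EMAT = 0.9(20 + 100) + 0.1(20 + 2(100)) = 108 + 22 = 130 ns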

Physical memory address = Frame number * frame size + offset.

  • kilobyte = 2^10
  • megabyte = 2^20
  • gigabyte = 2^30
  • terabyte = 2^40

Number of frames in a machine = total memory size / frame size
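
Worked example with assumed values: 4 GB (2^32 bytes) of physical memory and 4 KB (2^12-byte) frames give 2^32 / 2^12 = 2^20 frames. A byte at offset 100 in frame 3 sits at physical address 3 * 4096 + 100 = 12388.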

Q: How many bits are needed for the displacement into a page, i.e. the page offset?

A: log2(page size): the exponent when the page size is written as a power of 2.

Q: Suppose it takes 36 bits to index into our virtual memory in terms of bytes. How large is the virtual address space?

A: 2^36 bytes; divide by the page size (keeping everything in powers of 2) to get the number of pages.
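
Worked example with an assumed 4 KB (2^12-byte) page size: the offset needs 12 bits, and a 36-bit virtual address leaves 36 - 12 = 24 bits for the page number, i.e. 2^24 pages (page-table entries).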

Swap Space

Swap space size: a common guideline is the difference (delta) between logical (virtual) memory and physical memory. Swap space is generally managed to optimize speed rather than storage efficiency.

Demand Paging

Keep the pages of a process we don't currently need in the backing store, and bring them into main memory only when they are referenced (on demand).

Valid/invalid bit: valid means the page is legal and in memory; invalid means the page is either not part of the process's logical address space or not currently in memory.

Page Fault: occurs when a referenced page is not in memory and must be fetched from the backing store.

Effective Access Time (EAT) = (1 - p)m + p(p_t); p: page fault rate, m: memory access time, p_t: time to service a page fault.

Page Fault Rate: 0 <= p <= 1 (p = 0: no faults; p = 1: every reference faults).
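
Worked example with assumed values p = 0.001, m = 100 ns, p_t = 8 ms (8,000,000 ns):

EAT = 0.999(100) + 0.001(8,000,000) = 99.9 + 8,000 ≈ 8,100 ns, so a 0.1% fault rate makes memory access roughly 80 times slower.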

Fork vs Vfork

Both create a new child process from the calling (parent) process (see the fork() sketch below).

  • fork: Once the child process is created, both the parent and the child continue execution from the statement after fork(), and they run concurrently, each in its own copy of the address space.
  • vfork: The parent is suspended until the child terminates or calls exec(); until then the two processes share the same address space, so the child must not modify it.
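
A minimal fork() sketch (vfork() is called the same way, but the child should then only call an exec*() function or _exit()):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                            /* both processes continue from here */
    if (pid < 0) {
        perror("fork");
        return 1;
    } else if (pid == 0) {
        printf("child: pid %d\n", (int)getpid());  /* runs only in the child */
    } else {
        waitpid(pid, NULL, 0);                     /* parent waits for the child */
        printf("parent: child %d done\n", (int)pid);
    }
    return 0;
}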

Thrashing

Thrashing occurs when a computer's virtual memory is overused, leading to a constant state of paging and page faults. It is caused by programs or workloads with insufficient locality of reference: the working set of the program or workload cannot be held in physical memory, so pages are continually evicted and then immediately needed again.

Page Replacement Algorithms

  • FIFO
  • LRU
  • LRU Approximations

Given a page replacement algorithm, you should be able to calculate the number of page faults (if any) and the sequence of pages loaded into main memory, as in the worked example below.
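
Worked example (FIFO, 3 frames, assumed reference string 1 2 3 4 1 2 5 1 2 3 4 5):

1(F) 2(F) 3(F) 4(F, evict 1) 1(F, evict 2) 2(F, evict 3) 5(F, evict 4) 1(hit) 2(hit) 3(F, evict 1) 4(F, evict 2) 5(hit)

Total: 9 page faults; the frames end up holding 5, 3, 4.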