Last Updated: 2017-11-18 Sat 11:21

CSCI 4061: Exam 2 Review Problems

Table of Contents

1 Concurrent Printer
  1.1 Printer Manager Process
  1.2 Printer Cooperation
2 Graceful Shutdown
3 Critical Section
4 Concurrency and Signals
  4.1 Code Outline
  4.2 Implementation Flaws
5 Read vs Select
6 Busy Polling versus Interrupt Driven Mutex
7 Processes vs Threads
  7.1 Similarities and Differences
  7.2 Thread Pitfalls

1 Concurrent Printer

1.1 Printer Manager Process

A printer is a shared resource in that there is usually only one attached to a computing system that many users/processes can access.

Consider managing the printer using a "daemon" (background process). Programs that want to print do not do so directly but instead contact the daemon to request that data be printed. The printer daemon must get data from client processes, feed it to the printer, notify the client process of the completion of the printing, and then pick up the next request if any.

Describe a simple protocol that could accomplish this by specifying the following (a sketch of one possible approach appears after the list):

  • What mechanism will you use to allow the printer daemon process to communicate with a client?
  • How will client processes communicate data to the daemon? Keep in mind that print files have a variety of types and sizes.
  • How will the daemon process notify client processes of the completion of the print job or errors that occurred?
  • Keep in mind that the printer daemon is a system process started when the computer boots.
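
As a very rough sketch of one possible answer: the daemon could create a well-known FIFO at boot, clients could send small fixed-size requests naming the file to print, and each client could receive its completion status on a per-client reply FIFO. All names below (REQUEST_FIFO, print_request_t, request_print) are illustrative assumptions, not part of the problem statement.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define REQUEST_FIFO "/var/spool/print_requests"   // assumed: created by the daemon at boot

typedef struct {
  pid_t pid;                  // identifies the client and its reply FIFO
  char  filename[256];        // file to print; avoids pushing bulk data through the FIFO
} print_request_t;

int request_print(const char *path_to_print){
  // per-client reply FIFO named after the PID, e.g. /tmp/print_reply_1234
  char reply_fifo[64];
  snprintf(reply_fifo, sizeof(reply_fifo), "/tmp/print_reply_%d", (int) getpid());
  mkfifo(reply_fifo, 0600);

  // send a fixed-size request record; a write of this size is atomic on a
  // FIFO as long as sizeof(print_request_t) <= PIPE_BUF
  print_request_t req = { .pid = getpid() };
  strncpy(req.filename, path_to_print, sizeof(req.filename)-1);
  int reqfd = open(REQUEST_FIFO, O_WRONLY);
  write(reqfd, &req, sizeof(req));
  close(reqfd);

  // block until the daemon reports completion (0) or an error code
  char status = -1;
  int repfd = open(reply_fifo, O_RDONLY);
  read(repfd, &status, 1);
  close(repfd);
  unlink(reply_fifo);
  return status;
}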

1.2 Printer Cooperation

As an alternative to the above daemon approach to managing a printer, consider a protocol that DOES NOT involve a manager. In this setting, the printer is usable directly by any process. However, processes that print must obey a protocol you will design to avoid problems. The printer is associated with a known special file /dev/printer. When data is written to this file, it is sent directly to the printer. However, if multiple processes write data simultaneously, the printing results are likely to become corrupted.

Design a protocol that would allow the shared printer resource to be managed without a manager process (a sketch of one option follows the questions below).

  • What mechanism would you use for processes that want to print to coordinate their use of the /dev/printer file?
  • Is it possible in your scheme to guarantee fairness: do processes that are ready to print earlier actually get to print before later print attempts?
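
One possible manager-free sketch is to have every printing process take an advisory lock on /dev/printer before writing, e.g. with flock(); treating flock() as the intended answer is an assumption, and a lock file or a semaphore would work similarly.

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int print_data(const char *data, size_t len){
  int fd = open("/dev/printer", O_WRONLY);
  if(fd < 0){ return -1; }

  flock(fd, LOCK_EX);       // block until no other process holds the lock
  write(fd, data, len);     // only one writer at a time reaches this line
  flock(fd, LOCK_UN);       // release so other waiting processes may print

  close(fd);
  return 0;
}

Note that flock() does not promise FIFO ordering among waiters, so this particular sketch does not by itself guarantee fairness.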

2 Graceful Shutdown

Many system programs run for a long time accumulating data from files that should be written out before shutting down. Occasionally these "daemon" processes need to be closed down or restarted but should exit "gracefully", meaning they are given an opportunity to write essential data to files before exiting.

Describe a common mechanism in Unix to notify a program that it should terminate but give it an opportunity to write essential data prior to exiting. Describe the specific program control structures in the program and system calls involved in such a graceful shutdown.
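
A minimal sketch of the usual pattern, assuming SIGTERM is the notification and that save_state() and the daemon's normal work are hypothetical placeholders: the handler only sets a flag, and the main loop notices the flag, writes its data, and exits.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t shutdown_requested = 0;

void handle_term(int signum){
  (void) signum;
  shutdown_requested = 1;            // async-signal-safe: just record the request
}

int main(void){
  struct sigaction sa = {0};
  sa.sa_handler = handle_term;
  sigemptyset(&sa.sa_mask);
  sigaction(SIGTERM, &sa, NULL);     // "kill <pid>" now asks for a graceful exit

  while(!shutdown_requested){
    sleep(1);                        // stand-in for the daemon's normal work
  }

  // save_state();                   // hypothetical: write essential data to files
  printf("state saved, exiting cleanly\n");
  return 0;
}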

3 Critical Section

A database manager process is responsible for occasionally writing database information in main memory to a file so that it is backed up in case of a crash. A system administrator can request an immediate backup by sending the SIGUSR1 signal to the manager process. Backup requests are considered fairly low priority and can be honored later.

The manager process is also responsible for handling certain operations that are critical to the integrity of the database and should not be interrupted by backup requests.

Describe whether it is possible for the manager process to be written to fulfill both of these requirements. Mention how it might be done using Unix programming facilities.
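
One way this might be done (a sketch, with critical_update() as a hypothetical placeholder) is to block SIGUSR1 with sigprocmask() around the critical operation so that a backup request is deferred until the operation finishes rather than interrupting it.

#include <signal.h>

void do_critical_operation(void){
  sigset_t block, old;
  sigemptyset(&block);
  sigaddset(&block, SIGUSR1);

  sigprocmask(SIG_BLOCK, &block, &old);   // backup requests are now held pending
  // critical_update();                   // cannot be interrupted by SIGUSR1
  sigprocmask(SIG_SETMASK, &old, NULL);   // any pending SIGUSR1 is delivered here
}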

4 Concurrency and Signals

A simple protocol for communication between Process A and Process B involves the following steps.

Process A

  • A1: Process A writes its PID to Pipe X
  • A2: Process A sleeps waiting for a signal
  • A3: Process A wakes up from a signal
  • A4: Process A reads an integer from Pipe Y

Process B

  • B1: Process B reads a PID from Pipe X
  • B2: Process B writes an integer to Pipe Y
  • B3: Process B signals Process A to wake up and continue

4.1 Code Outline

Give an outline of C code for Process A and Process B which would (roughly) implement the above protocol. Skeletons for each process appear below.

// PROCESS A
int pipeX[2];  
int pipeY[2];
int receive;  // put data "received" from Process B in this integer

// assume both pipes opened appropriately 

// A1: Process A writes its PID to Pipe X

// A2: Process A sleeps waiting for a signal

// A3: Process A wakes up from a signal 

// A4: Process A reads an integer from Pipe Y

// PROCESS B
int pipeX[2];  
int pipeY[2];
int send = 42;  // integer to "send" to Process A

// assume both pipes opened appropriately 

// B1: Process B reads a PID from Pipe X

// B2: Process B writes an integer to Pipe Y

// B3: Process B signals Process A to wake up and continue
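
A naive sketch of how the blanks might be filled in follows (this is the flawed version examined in 4.2 below). It assumes PREAD/PWRITE index macros for the pipe arrays and that Process A has installed a SIGUSR1 handler; error checking is omitted.

// PROCESS A (naive)
pid_t pid = getpid();
write(pipeX[PWRITE], &pid, sizeof(pid));        // A1: send PID through Pipe X
pause();                                        // A2/A3: sleep until a signal arrives
read(pipeY[PREAD], &receive, sizeof(receive));  // A4: read the integer from Pipe Y

// PROCESS B (naive)
pid_t target;
read(pipeX[PREAD], &target, sizeof(target));    // B1: read A's PID from Pipe X
write(pipeY[PWRITE], &send, sizeof(send));      // B2: send the integer through Pipe Y
kill(target, SIGUSR1);                          // B3: signal A to wake up and continue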

4.2 Implementation Flaws

A naive implementation of the basic protocol outlined is likely to contain flaws. For example, if Process A merely calls

pause();

to await a signal, it may result in Process A never waking up.

Describe a sequence of events using the steps A1, A2, ..., B1, B2, ... that could give rise to this situation. Also describe how one might fix this with a slightly more complex sleeping/waiting behavior and signal handling.
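
One possible fix, sketched under the assumption that a SIGUSR1 handler sets a got_signal flag and that PWRITE indexes the write end of each pipe: block the signal before step A1 so an early signal stays pending, then wait with sigsuspend(), which unblocks and sleeps atomically.

// PROCESS A, more robust waiting (sketch)
sigset_t block, old;
sigemptyset(&block);
sigaddset(&block, SIGUSR1);
sigprocmask(SIG_BLOCK, &block, &old);           // before A1: an early signal stays pending

write(pipeX[PWRITE], &pid, sizeof(pid));        // A1

while(!got_signal){                             // got_signal set by the SIGUSR1 handler
  sigsuspend(&old);                             // A2/A3: unblock and wait atomically
}
sigprocmask(SIG_SETMASK, &old, NULL);           // restore the original signal mask

read(pipeY[PREAD], &receive, sizeof(receive));  // A4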

5 Read vs Select

The code below appeared in a lab in files that compared a read()-only approach on two separate files (pipes in this case) with an approach using the select() system call. Describe the difference between these blocks.

  • What pattern of reading do you expect in each? E.g., do you expect fixed patterns like ABAB... or ABBABB... or arbitrary intermingling like ABBABAABBA...?
  • Which approach (read()-only or select()/read()) is appropriate if input is coming at very different rates in pipeA versus pipeB? Justify your answer.

// file read_AB.c
  while(!signaled){
    char buf[1024];
    int n;

    n = read(pipeA[PREAD], buf, 1023);    // leave room for the '\0' below
    buf[n] = '\0';
    fprintf(stdout,"A had: |%s|\n",buf);

    n = read(pipeB[PREAD], buf, 1023);
    buf[n] = '\0';
    fprintf(stdout,"B had: |%s|\n",buf);
  }

// file select_AB.c
  while(!signaled){
    fd_set read_set;
    FD_ZERO(&read_set);
    FD_SET(pipeA[PREAD], &read_set);
    FD_SET(pipeB[PREAD], &read_set);
    int maxfd = pipeA[PREAD];
    maxfd = (maxfd < pipeB[PREAD]) ? pipeB[PREAD] : maxfd;

    select(maxfd+1, &read_set, NULL, NULL, NULL);

    char buf[1024];
    int n;
    if(FD_ISSET(pipeA[PREAD], &read_set)){
      n = read(pipeA[PREAD], buf, 1023);  // leave room for the '\0' below
      buf[n] = '\0';
      fprintf(stdout,"A had: |%s|\n",buf);
    }
    if(FD_ISSET(pipeB[PREAD], &read_set)){
      n = read(pipeB[PREAD], buf, 1023);
      buf[n] = '\0';
      fprintf(stdout,"B had: |%s|\n",buf);
    }
  }

6 Busy Polling versus Interrupt Driven Mutex

Cee Lohacker is working on a drone with a small embedded processor running a Unix variant that provides POSIX threads and mutexes. Unfortunately, the drone is manufactured in Easternopia and Cee cannot understand the Easternopian dialect of the documentation. She needs to know whether mutexes on this system use

  • Busy polling to acquire a lock
  • Interrupt-driven waiting to acquire a lock

She writes the following short C program to test the mutexes and runs it with the time command.

> cat mut_test.c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int glob = 1;
pthread_mutex_t glob_lock;

void *doit(void *param){
  pthread_mutex_lock(&glob_lock);
  glob = glob*2;
  sleep(2);
  pthread_mutex_unlock(&glob_lock);
  return NULL;
}

int main(){
  pthread_mutex_init(&glob_lock, NULL);
  printf("BEFORE glob: %d\n",glob);
  pthread_t threads[3];
  for(int i=0; i<3; i++){
    pthread_create(&threads[i], NULL, doit, NULL);
  }
  for(int i=0; i<3; i++){
    pthread_join(threads[i], (void **) NULL);
  }
  printf("AFTER glob: %d\n",glob);
  pthread_mutex_destroy(&glob_lock);
  return 0;
}

> gcc mut_test.c -lpthread

> time ./a.out
BEFORE glob: 1
AFTER glob: ???   # glob final value

real	???       # Wall time
user	???       # User CPU time
sys	???       # System CPU time

Answer the following questions:

  1. What is the expected value for glob after the code is run (marked # glob final value)?
  2. Assuming the mutexes use BUSY polling, what times are expected for
    • Real/Wall time?
    • User CPU time?

    Explain your reasoning.

  3. Assuming the mutexes use INTERRUPT-DRIVEN waiting, what times are expected for
    • Real/Wall time?
    • User CPU time?

    Explain your reasoning.

7 Processes vs Threads

7.1 Similarities and Differences

Outline the similarities and differences between processes and threads. Describe the assumptions each makes about which memory is shared versus private. Describe the tradeoffs associated with using one versus the other with respect to issues such as system tools, security, and efficiency.
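
A tiny illustration of the central memory difference (not part of the original question): a forked child gets a private copy of a global variable, while a thread shares it with main, so only the thread's increment is visible afterward. Compile with -lpthread.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int counter = 0;                         // global: shared by threads, copied by fork()

void *thread_body(void *arg){ counter++; return NULL; }

int main(void){
  if(fork() == 0){                       // child process: gets its own copy of counter
    counter++;                           // invisible to the parent
    return 0;
  }
  wait(NULL);

  pthread_t t;                           // thread: same address space as main
  pthread_create(&t, NULL, thread_body, NULL);
  pthread_join(t, NULL);

  printf("counter = %d\n", counter);     // prints 1: only the thread's increment is seen
  return 0;
}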

7.2 Thread Pitfalls

Recall the Pi calculation program discussed in class. One version of this program spawned multiple threads to run the following worker function.

int total_hits = 0;
int points_per_thread = -1;

void *compute_pi(void *arg){
  long thread_id = (long) arg;
  unsigned int rstate = 123456789 * thread_id;
  for (int i = 0; i < points_per_thread; i++) { 
    double x = ((double) rand_r(&rstate)) / ((double) RAND_MAX);
    double y = ((double) rand_r(&rstate)) / ((double) RAND_MAX);
    if (x*x + y*y <= 1.0){
      total_hits++;
    }
  } 
  return NULL;
}

  1. Why was it necessary to use the rand_r() function rather than the standard rand() function to generate random numbers?
  2. Do all the worker threads generate identical random sequences of numbers or are they different? Explain your reasoning.
  3. It was determined in discussion that this version always underestimates the value of Pi. Explain why and how one might fix this while also maintaining the efficiency of using multiple threads to get speedup (one possible fix is sketched below).
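
One possible fix for question 3, sketched below (hits_lock and the local counter are illustrative, not the course's official solution): each thread tallies hits privately and adds to the shared total exactly once under a mutex, so no updates are lost and the threads still run independently.

#include <pthread.h>
#include <stdlib.h>

int total_hits = 0;
int points_per_thread = -1;
pthread_mutex_t hits_lock = PTHREAD_MUTEX_INITIALIZER;   // assumed name

void *compute_pi(void *arg){
  long thread_id = (long) arg;
  unsigned int rstate = 123456789 * thread_id;
  int local_hits = 0;                     // private to this thread: no lost updates
  for (int i = 0; i < points_per_thread; i++) {
    double x = ((double) rand_r(&rstate)) / ((double) RAND_MAX);
    double y = ((double) rand_r(&rstate)) / ((double) RAND_MAX);
    if (x*x + y*y <= 1.0){
      local_hits++;
    }
  }
  pthread_mutex_lock(&hits_lock);         // one short critical section per thread
  total_hits += local_hits;
  pthread_mutex_unlock(&hits_lock);
  return NULL;
}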

Author: Chris Kauffman (kauffman@phaedrus)
Date: 2017-11-18 Sat 11:21