(1) Do this exam on your own. On the first page of your paper, include the honor pledge and sign it: "I have neither given nor received aid on this exam." Any evidence of copying or other forms of collusion will be grounds for failure of this exam.
(2) Be brief in your responses, but justify your answers. Without justification I cannot see what you have in mind, and I cannot award partial credit.
(3) Prepare your responses with a word processing program.
(4) Use these resources (and cite your sources):

A program running on a virtual memory system has a reference string (address trace) as follows:
20 repetitions of the pattern 1 1 1 1 1, then 40 repetitions of the pattern 1 2 3 4 5 1 2 3 4 5, then 20 repetitions of the pattern 1 1 1 1 1, then 40 repetitions of the pattern 1 2 3 4 5 1 2 3 4 5
(1) Interpret this reference string in terms of the locality model for program reference behavior.
(2) Using the stack analysis, calculate the fault function for LRU for memory sizes m = 1,2,3,4,5. (This means to find the stack distance string and use its histogram to calculate the fault counts.)
(3) Using the stack analysis, calculate the fault function for MIN for memory sizes m = 1,2,3,4,5.
(4) Using the stack analysis, calculate the fault function for WS for window sizes T = 18 and 110. Because WS is a variable-space policy, you will need to calculate its fault count F(T) and mean memory requirement m(T) separately. You can calculate the fault counts from the histogram of inter-reference distances and the mean space from the space-time computation of working sets.
(5) Plot the three fault curves on the same graph. How is it possible for WS to achieve better performance than MIN?
(X1) (Bonus: 1 extra credit point). Find and plot the fault curve for VMIN, the optimal variable-space policy, and compare with the other policies.
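If you want to check your hand-worked stack analysis for part (2), the LRU stack-distance computation is easy to mechanize. The following is a minimal sketch (not a substitute for showing your work): it maintains the LRU stack, records each reference's stack distance (infinity on a first reference), and counts faults as references whose distance exceeds the memory size m.

```python
# Minimal sketch of LRU stack-distance analysis for the exam's trace.
def lru_stack_distances(trace):
    """Return the stack distance of each reference under LRU."""
    stack, dists = [], []
    for page in trace:
        if page in stack:
            d = stack.index(page) + 1   # 1-indexed depth in the LRU stack
            stack.remove(page)
        else:
            d = float('inf')            # first reference: infinite distance
        stack.insert(0, page)           # referenced page moves to the top
        dists.append(d)
    return dists

# Build the reference string as stated in the problem:
# 20 x (1 1 1 1 1), then 40 x (1 2 3 4 5 1 2 3 4 5), then repeat both phases.
phase_a = [1, 1, 1, 1, 1] * 20
phase_b = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5] * 40
trace = (phase_a + phase_b) * 2

dists = lru_stack_distances(trace)
for m in range(1, 6):
    faults = sum(1 for d in dists if d > m)
    print(f"F({m}) = {faults}")
```

The fault function falls out of the histogram of distances, exactly as the problem describes: F(m) is the number of references whose stack distance exceeds m.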
We will consider a simple time sharing system with paged virtual memory. It consists of a set of N users with think time Z and three servers: (1) the CPU, (2) the paging DISK, and (3) the I/O server. These servers are in the central server configuration shown in the "Queueing" slides. (New jobs go to the CPU; after a CPU burst they then go to either of the DISK or I/O; and then they return to the CPU.) You may find it helpful to draw a diagram of this system before going further.
Each user gets a fixed, equal partition of the main memory in which to run jobs. Virtual memory maps job address spaces into these partitions and fields the page faults resulting when the CPU refers to a missing page. We are given these parameters of the system:
D1 = 5 sec
S2 = 20 msec (per page swap)
S3 = 10 msec (per I/O task)
V3 = 10 I/O tasks per job
Z = 30 sec (think time)
M = 100 pages (main memory size)
Experimental measurements have shown that a typical user job has lifetime curve (mean time between page faults) of
L(m) = min(0.5, 0.005*m^2), i.e., a curve that increases with the square of m up to a maximum value of 0.5 seconds between page faults.
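For part (1), one standard definition of the knee is the point where the tangent from the origin touches the lifetime curve, i.e., where L(m)/m is maximized. A minimal numerical sketch under that assumption:

```python
# Sketch: locate the knee of the lifetime curve L(m) = min(0.5, 0.005*m^2),
# taking the knee to be the point maximizing L(m)/m (tangent from the origin).
def L(m):
    return min(0.5, 0.005 * m**2)

# Search over partition sizes up to the full memory M = 100 pages.
knee = max(range(1, 101), key=lambda m: L(m) / m)
print(f"knee at m = {knee} pages, L(knee) = {L(knee)} sec")
```

Plotting L(m) over the same range will show the curve rising quadratically and then flattening; the knee is where the origin tangent touches it.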
(1) Plot the lifetime curve and find its knee. What do you expect of system throughput if the number of users (N) is controlled so that each user's partition is at the knee memory size?
(2) Find the fault-count function F(N) = number of page faults in a job when N users are allowed in the system. Show how to use this function to determine the visit ratio V2 for the DISK and the DISK-imposed upper bound on throughput.
(3) Set up the iterative MVA equations for this system. (These are the equations that take given values of the parameters and iteratively settle into a solution for a given value of N. If you want solutions for multiple values of N, you run the iterative MVA for each value. Do not confuse this with the recursive MVA, which computes a table of solutions in which the solution for N is derived from that for N-1.) With a spreadsheet, compute system throughput X(N) and response time R(N) for N = 1, 2, ..., 30. Show your computed data for X(N) and R(N).
(4) Plot the curve X(N), the CPU-imposed throughput upper bound, and the DISK-imposed throughput upper bound on the same graph. To make this graph readable, discard any DISK upper bound points that exceed CPU upper bound points by more than 50%. Show that the I/O-server-imposed upper bound is not a factor and can be ignored in this plot.
(5) What is the optimal level of multiprogramming? How close is the system throughput to its upper bound at this point? What would you now say thrashing is?
(X2) (Bonus: 1 extra credit point). What is the relation of the optimum throughput you observed above to the throughput that would be generated if all user partitions were at the knee size? What is the relation of this optimum to the balance point at which D1=D2? Are the knee criterion and the balance criterion equivalent?
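If a spreadsheet is awkward, the iterative MVA for part (3) can also be checked with a short script. The sketch below makes one modeling assumption you should verify against your own answer to part (2): each user's partition is M/N pages, and a job of D1 CPU-seconds incurs V2 = D1/L(M/N) page faults. It then iterates the fixed-point (Schweitzer-style approximate) MVA equations R_i = D_i(1 + ((N-1)/N)Q_i), X = N/(Z + sum R_i), Q_i = X R_i.

```python
# Hedged sketch of the iterative (approximate) MVA for the three-server
# central model with terminals. The fault model V2 = D1/L(M/N) is an
# assumption; substitute your own F(N) from part (2) if it differs.
def L(m):
    return min(0.5, 0.005 * m**2)   # lifetime curve from the problem

def mva(N, D1=5.0, S2=0.020, S3=0.010, V3=10, Z=30.0, M=100):
    V2 = D1 / L(M / N)              # assumed page faults per job
    D = [D1, V2 * S2, V3 * S3]      # demands at CPU, DISK, I/O server
    Q = [N / 3.0] * 3               # initial queue-length guess
    for _ in range(200):            # iterate to a fixed point
        R = [D[i] * (1 + (N - 1) / N * Q[i]) for i in range(3)]
        X = N / (Z + sum(R))
        Q = [X * R[i] for i in range(3)]
    return X, sum(R)                # throughput, response time

for N in range(1, 31):
    X, R = mva(N)
    print(f"N={N:2d}  X={X:.4f} jobs/sec  R={R:.2f} sec")
```

Running this for N = 1..30 gives the X(N) and R(N) columns asked for in part (3); the CPU and DISK bounds for part (4) are 1/D1 and 1/(V2(N)*S2) respectively.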