WORKING SETS AND MEMORY ALLOCATION
Memory Allocation in Multiprogrammed Computers. 1966. This Memo-24 of the Computation Structures Group at MIT introduced the working set model and statistical models analyzing its fault rate and size as functions of the window size. It also analyzed several fixed-space replacement policies in use at the time.
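The working set idea itself is simple to state: the working set at time t with window size tau is the set of distinct pages touched by the last tau references. A minimal sketch in Python (the reference string and parameter names are illustrative, not from the memo):

```python
def working_set(refs, t, tau):
    """Distinct pages referenced in the window (t - tau, t] of the reference string."""
    start = max(0, t - tau + 1)
    return set(refs[start:t + 1])

# Example reference string of page numbers.
refs = [1, 2, 1, 3, 2, 4, 1, 2]
ws = working_set(refs, t=5, tau=3)  # window covers refs[3..5] = [3, 2, 4]
```

The working set size |W(t, tau)| grows with tau, and its time average is the memory demand statistic the memo analyzes against the fault rate.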
Resource Allocation in a Computer Utility. 1967. This Memo-31 of the Computation Structures Group at MIT introduced the notion of system balance, in which the demand for processors and memory is dynamically balanced against supply. It used the working set to measure memory demand. It offered analyses of scheduling algorithms that seek balance.
Square Root Theorem. 1968 and 2020. In my 1968 PhD thesis I proved a theorem that if a memory is equipartitioned n ways (fixed partition multiprogramming), the total memory requirement is sqrt(n) larger than if the same memory is dynamically shared (working set governed multiprogramming). This was not a major result in my thesis and I had nearly forgotten it over the years. But given that someone else was interested in 2020, I wondered if it might be a useful addition to the "working set analytics" paper I was working on. I went back and reviewed the theorem. I discovered that the theorem rests on an unjustifiable assumption and is weaker than I suggested in my thesis. This short essay presents a correct theorem and proof.
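One standard way a sqrt(n) factor arises in such fixed-versus-shared comparisons is a statistical multiplexing argument. This is offered only as an illustrative sketch under assumed independence, not as the corrected theorem of the essay. Suppose each of n programs has working set size with mean mu and standard deviation sigma, independent across programs, and memory is sized k standard deviations above the mean:

```latex
% Fixed equipartition: each of the n partitions must absorb its own fluctuation.
M_{\text{fixed}} = n(\mu + k\sigma) = n\mu + k\sigma\,n
% Dynamic sharing: only the total demand matters, and its standard deviation
% is \sigma\sqrt{n} by independence.
M_{\text{shared}} = n\mu + k\sigma\sqrt{n}
% Ratio of the overage beyond the mean:
\frac{k\sigma\,n}{k\sigma\sqrt{n}} = \sqrt{n}
```

The sqrt(n) factor thus applies to the fluctuation allowance above the mean, not to total memory outright; the hidden independence assumption is exactly the kind of premise the essay revisits.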
Protected Service Routines and Intersphere Communication. 1966. This Memo-20 of the Computation Structures Group at MIT proposed that the notion of protected service routine, which interfaced users to shared resources, should be considered an instance of message exchange between processes in different spheres of protection.
A Statistical Model for Console Behavior in Multiuser Computers. 1966. This Memo-21 of the Computation Structures Group at MIT introduced a model for terminals connected to a centralized computer system. It defined a "virtual terminal" by statistically aggregating real terminals and showed that this approximation did not introduce much error into aggregate statistics such as throughput and buffer use.
Effects of Scheduling on File Memory Operations. 1967. This Memo-26 of the Computation Structures Group at MIT analyzed disk arm and rotating drum scheduling strategies for their throughput and their effect on the variance of waiting times. It proved that shortest-seek-time-first and shortest-rotation-time-first gave the best performance.
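Shortest-seek-time-first is a greedy policy: at each step the arm serves the pending request closest to the current head position. A small simulation sketch (the cylinder numbers are illustrative, not from the memo):

```python
def sstf_order(head, requests):
    """Serve pending cylinder requests greedily by shortest seek from the head."""
    pending = list(requests)
    order, total_seek = [], 0
    while pending:
        nxt = min(pending, key=lambda cyl: abs(cyl - head))  # nearest request
        total_seek += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
        order.append(nxt)
    return order, total_seek

order, seek = sstf_order(50, [95, 180, 34, 119, 11, 123, 62, 64])
# order == [62, 64, 34, 11, 95, 119, 123, 180], seek == 236
```

The greedy choice maximizes throughput but can starve far-away requests, which is why the memo also examines the variance of waiting times, not just the mean.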
Low Contention Semaphores and Ready Lists (with Don Dennis and Jeff Brumfield). 1981. Semaphores and ready lists are implemented in the low-level kernel of an operating system. The efficiency of operations on them, notably semaphore signaling and context switching, is critical for efficient operating systems. This paper discusses how to implement those operations efficiently when there are multiple CPUs (cores) available to pick up ready tasks. It also sketches a correctness proof for this part of the kernel.
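The kernel objects involved can be pictured with a textbook counting semaphore that moves tasks between a blocked queue and a shared ready list. This is a generic single-threaded illustration of the data flow only, not the paper's low-contention multi-CPU implementation:

```python
from collections import deque

class Semaphore:
    """Counting semaphore over explicit queues (illustrative sketch)."""
    def __init__(self, count, ready):
        self.count = count
        self.blocked = deque()  # tasks waiting on this semaphore
        self.ready = ready      # shared ready list of runnable tasks

    def wait(self, task):
        if self.count > 0:
            self.count -= 1
            self.ready.append(task)    # resource available: task stays runnable
        else:
            self.blocked.append(task)  # task blocks until a signal arrives

    def signal(self):
        if self.blocked:
            self.ready.append(self.blocked.popleft())  # wake one waiter
        else:
            self.count += 1
```

On a real multiprocessor both queues are touched by every core, so the interesting problem, and the subject of the paper, is performing these hand-offs without a single heavily contended lock.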
Protection: Principles and Practice (with G. Scott Graham). 1972. This paper used Lampson's newly developed access matrix model to discuss the operating system architecture needed to implement that model. It introduced the idea of the reference monitor, which became very important in secure computing systems. It also discussed ways to structure permissions in this architecture so that mutually suspicious subsystems could be implemented with minimal risk that either could disrupt the other.
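In Lampson's model, the matrix records which rights each subject holds over each object, and a reference monitor mediates every access against it. A minimal sketch (the subjects, objects, and rights are hypothetical examples):

```python
# Access matrix stored sparsely: (subject, object) -> set of rights.
matrix = {
    ("alice", "file1"): {"read", "write"},
    ("bob",   "file1"): {"read"},
}

def reference_monitor(subject, obj, right):
    """Mediate every access attempt: allow only rights recorded in the matrix."""
    return right in matrix.get((subject, obj), set())

reference_monitor("alice", "file1", "write")  # allowed
reference_monitor("bob", "file1", "write")    # denied
```

The essential property is completeness: no access path bypasses the monitor, which is what made the idea central to later secure systems.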
A Simple Proof of the Correspondence Theorem. 1967. Denning and Dennis developed a simple proof of the Post Correspondence Theorem for their course on computational models. When they submitted it to the Journal of the ACM, they learned that Bob Floyd had already discovered the same idea and had a paper awaiting publication. They did not publish a separate paper. They included this proof in their book Machines, Languages, and Computation (Prentice-Hall, 1978).
An Undergraduate Course on Operating Systems Principles. 1971. PJD chaired Task Force 8 of the NRC Commission on Education's Committee for Computer Science in Engineering (COSINE). The task force consisted of Peter Denning (chair), Jack Dennis, Nico Habermann, Butler Lampson, Richard Muntz, and Dennis Tsichritzis. They designed a principles-based course on operating systems, which became the blueprint for the OS course now ubiquitously included in the CS core curriculum. Their map of OS principles covered procedures, processes, memory management, name management, protection, resource allocation, and pragmatic design aspects. These topics can be seen in every OS textbook table of contents. Modern readers may be amazed at how much agreement there was in 1971 on the core principles of operating systems and how durable those principles have been.
Operating Systems Principles and Undergraduate Computer Science Curricula. 1972. This paper received the best paper award at the 1972 SJCC. PJD argued that operating systems had developed a set of first principles of sufficient depth to merit a core course in the undergraduate curriculum. He challenged the orthodoxy of the day, which held that the first principles of computing were in mathematics and programming. The systems argument was successful. Operating Systems Principles courses soon began to show up in every curriculum, and OS has been treated as a core subject ever since.
Third Generation Computer Systems. 1971. ACM Computing Surveys (December), 175-216. This paper surveys the common features of third generation operating systems from a general view, with emphasis on the common abstractions that constitute at least the basis for a "theory" of operating systems. Properties of specific systems are not discussed except where examples are useful. The technical aspects of issues and concepts are stressed; the nontechnical aspects are mentioned only briefly. A passing knowledge of third generation systems is presumed. This paper was widely used in OS courses inspired by the COSINE report (above) until superseded by the first textbooks based on the report (by Nico Habermann and by Per Brinch Hansen).