Great Principles:
Frequently Asked Questions


Peter J. Denning, Investigator

6/18/07



Principles
Meaning of principles?
Stories, statements, or both?
How to start a principles list?
Origin of term computing mechanics?
Completeness of the seven windows?
Relation to CC2001 Body of Knowledge?

Practices
Why bring up practices?
Too much engineering emphasis for a science?
Teaching computing practices?
Too much focus on practice already?
Ladder of competence?
Scientific method?

Programming
Programming not the central core practice?
Programs as executable notations?
Eliminate programming!

General
Great principles library?
Historical sensibility?
Limits of a new definition?
Media perception?
Too retrospective?

Principles

What do you mean by principles?  What are examples?

We are appealing to the two main definitions of principles: one is a basic truth, law, recurrence, or assumption; the other is a rule or standard of conduct.  Newton's second law in physics, F=ma, and Ohm's law in electrical engineering, E=IR, are examples of the first meaning.  In ethics, the Golden Rule is an example of the second meaning.  So it is with computing.  An example of a computing law is that every computable function can be expressed as a Turing Machine program.  An example of a computing standard of conduct is the convention of decomposing large programs into simple modules.

You said that principle-stories are more important than lists of principle-statements.  What's wrong with statements?

Clusters of principles are frequently grouped under a single, larger, more abstract principle.  The Turing Machine is an excellent example.  Alan Turing introduced it in 1936 as an abstract model of computation; it was not a model of any real computer but of the fundamental operations in every computer.  Many individual statements of principle have been studied under the heading of Turing Machines: for example, every algorithm can be represented as a Turing Machine program; a universal Turing Machine can simulate any other; some algorithms require on the order of 2^n steps for an input of size n; some problems have no algorithm that runs faster than 2^n steps; and some problems have no algorithm at all.  Sometimes we are interested in these lower-level, more concrete statements of principle.  But sometimes we are also interested in the story that connects them all under a single heading.  Thus the framework offers both principle-statements and principle-stories.
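
To make the first of these statements concrete, here is a minimal sketch (in Python, purely illustrative and not part of the framework) of a one-tape Turing machine simulator.  The transition table is a hypothetical example that complements the bits of a binary string:

    # Minimal sketch of a Turing machine simulator (illustrative only).
    def run_turing_machine(tape, rules, state="start", blank="_"):
        """Run a one-tape Turing machine until it enters the 'halt' state."""
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            # Each rule maps (state, symbol) -> (next state, write, move).
            state, write, move = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Hypothetical rule set: complement every bit of a binary string.
    flip_rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }

    print(run_turing_machine("10110", flip_rules))  # prints 01001

Any algorithm can, in principle, be encoded as such a rule table, and a universal machine is simply one whose input tape contains another machine's rule table along with that machine's input.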

Let's consider a computing example outside the formal computation area. When we build computer systems, we create many computational agents that perform computation tasks for us. We can observe what these agents are paying attention to by monitoring the memory locations addressed by the agent over time. The principle of locality says that these memory accesses will cluster in small subsets of memory objects over extended periods of time. We can take advantage of locality behavior by keeping copies of those subsets in a small, fast memory called a cache. Locality is why virtual memories work, why systems and networks avoid thrashing, why buffers speed performance, why search engines are so fast, and why edge servers improve Internet performance so remarkably. Locality derives from human cognitive behaviors such as attention focus and divide-and-conquer problem solving. Locality cannot be explained in a single sentence, but it can be understood from a story.
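
As a toy illustration of this story (a sketch with assumed access patterns, not a real memory trace), the following fragment generates references that cluster in a small, slowly drifting working set and measures the hit rate of a small LRU cache:

    # Sketch: why locality makes caches work (toy experiment, not a benchmark).
    import random
    from collections import OrderedDict

    def lru_hit_rate(trace, cache_size):
        cache, hits = OrderedDict(), 0
        for addr in trace:
            if addr in cache:
                hits += 1
                cache.move_to_end(addr)        # mark as most recently used
            else:
                cache[addr] = True
                if len(cache) > cache_size:
                    cache.popitem(last=False)  # evict least recently used
        return hits / len(trace)

    random.seed(1)
    base, trace = 0, []
    for _ in range(100_000):
        if random.random() < 0.001:                    # working set drifts rarely
            base = random.randrange(1_000_000)
        if random.random() < 0.1:                      # occasional stray reference
            trace.append(random.randrange(1_000_000))
        else:
            trace.append(base + random.randrange(50))  # reference the working set

    print(f"hit rate with a 64-entry cache: {lru_hit_rate(trace, 64):.1%}")

A cache of 64 entries, tiny compared with the million-address memory, absorbs roughly nine of every ten references, precisely because the references cluster.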

An example outside computing is the black hole studied in astronomy.  Most astronomy textbooks have a chapter on black holes.  That chapter has subheadings for the event horizon, radiation, and gravitational lensing.  The black hole story includes principle-statements about these aspects.

Principle-stories are powerful aids to memory and visualization.  They make complex areas seem simple.  They tell history, showing how the principle evolved and grew in acceptance over time.  They name the main contributors.  They chronicle feats of heroes and failures of knaves.  They lay out obstructions and the struggles to overcome them.  They explain how the principle works and how it affects everything else.  The game is to define many terms from a few basic terms and to derive many statements logically from a few basic statements.

A list of principle-statements has to refer to a set of principle-stories before it can be fully understood.

I'm having trouble constructing a list of principles.  What do you have in mind?

The Great Principles web site contains a preliminary list of principles. The short, top-level version simply lists the principles in each of the seven categories (view it). The longer version offers in each category the top-level principles plus more detailed, clarifying explanations (view it). The site also offers a set of narrative stories overviewing each of the seven categories (view it).

What is the origin of the term "computing mechanics" that you use occasionally?

Astronomy, thermodynamics, and physics use the term mechanics for the part of their fields dealing with the laws of behavior and structure of components.  For example, Celestial Mechanics deals with the motions of heavenly bodies; Statistical Mechanics with the macro behavior of physical systems comprising large numbers of small particles; Quantum Mechanics with wave behaviors of subatomic particles; and Rigid-Body Mechanics with the balance of forces within and between connected objects.  The same idea applies to computing.

Computing mechanics deals with the fundamental laws, recurrences, invariances, and cause-and-effect relationships in computing.  Computing mechanics is concerned with how and why things work.

The term "mechanics" emphasizes the parallel between the part of computing that deals with fundamental laws and the corresponding parts of other fields.  Just as astronomy, thermodynamics, and physics have components called mechanics, so does computing.

It is interesting that this use of "mechanics" comes from science, not engineering.  Its applicability lends credence to the word "science" in our title.

That said, there is nothing sacred about the term mechanics.  We could just as well have labeled this part of computing principles "fundamental laws".

How do you know that the seven categories of computing principles are complete?

We subdivided computing mechanics into seven areas: computation, communication, coordination, recollection, automation, evaluation, and design.  These categories are not mutually exclusive.  They are like seven windows into the same room.  Each window sees the contents of the room in a distinctive way.  Some elements of the room are visible through multiple windows.  However, the windows do not partition the contents of the room into seven disjoint subsets.  For example, a network protocol at times appears as a way of coordinating, at times as a way of communicating, and at times as a way of recollecting.

We believe these categories are complete, not as a formal proof but as a hypothesis about the field. Our main reason for believing the hypothesis is functional.  Imagine the block diagram of a typical computer.  It consists of a CPU (central processing unit), a memory subsystem, and an I/O subsystem.  The CPU corresponds to the computation function; the memory to the recollection function; and the I/O to the communication function.  Now observe that computers are almost always interconnected in some way; the network corresponds to the coordination function.  Deciding what tasks can be delegated to the network corresponds to the automation function.  Figuring out whether the network delivers its responses in a timely way corresponds to the evaluation function.  And organizing the system so that it is both correct and performs well corresponds to the design function.  Thus it appears that the principal functions of computing systems are the same as the seven windows.

We also have an empirical reason to believe the hypothesis.  We took the list of 30 core technology areas and examined each for the role that the seven windows play.  We found that all seven play a role in each and every core technology.  We found no aspect of any technology that was not covered in this way.

Why didn't you use the subdivisions of the field proposed in the Curriculum 2001 Body of Knowledge?

The CC2001 report, Appendix A (view), is a summary of the Body of Knowledge for the curriculum recommendations.  It lists 14 main topic areas covering 130 subareas:

Discrete structures
Programming fundamentals
Algorithms and complexity
Architecture and organization
Operating systems
Net-centric computing
Programming languages
Human-computer interaction
Graphics and visual computing
Intelligent systems
Information management
Social and professional issues
Software engineering
Computational science and numerical methods

The 1989 report Computing as a Discipline (view) listed 9 main areas:

Algorithms and data structures
Programming languages
Architecture
Numerical and symbolic computation
Operating systems
Software methodology and engineering
Databases and information retrieval
Artificial intelligence and robotics
Human-computer interaction

Note that the 2001 list is a refinement of the 1989 list.  The 1989 database topic becomes information management in 2001; the 1989 operating systems topic splits into separate operating systems and networking topics in 2001; the 1989 algorithms topic splits into separate algorithms and discrete structures topics in 2001; and social issues becomes an explicit topic in 2001.

These subdivisions are mostly technology centered.  That means they pick major areas of computing technology and set forth principles and practices in each one.

The Great Principles framework goes in a different direction. We chose instead a smaller set of subdivisions that can be defended on their own merits as fundamental functional areas of computing.  We want a framework for the computing field that does not seem to depend on the existence of certain technologies. Over time, the set of principles may change, but likely not as fast as the technologies.

The Great Principles framework and the technology-topics list are actually alternative views of the same computing field.  Imagine a matrix with rows corresponding to the CC2001 topic areas and columns corresponding to the seven categories.  All the principles behind the topics listed in 2001 can be distributed into the boxes of this matrix -- for example, the coordination principles of security or the design principles of virtual memory.  In this sense, the "technology oriented" view of the field is a horizontal view, and the "principles oriented" view is a vertical view.  They both see the same field but interpret it differently.  We wrote a companion document discussing this at greater length (view).
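
A small sketch may help picture the matrix.  The entries below are our own informal examples, not an official CC2001 mapping:

    # Toy sketch of the topics-by-principles matrix; entries are informal
    # examples only.  Rows are CC2001 topic areas, columns the seven categories.
    matrix = {
        ("operating systems", "recollection"):      ["virtual memory", "caching"],
        ("operating systems", "coordination"):      ["semaphores", "deadlock avoidance"],
        ("net-centric computing", "communication"): ["protocol layering"],
        ("information management", "recollection"): ["relational model"],
    }

    # Horizontal (technology-oriented) view: one row of the matrix.
    os_row = {k: v for k, v in matrix.items() if k[0] == "operating systems"}

    # Vertical (principles-oriented) view: one column of the matrix.
    recollection_column = {k: v for k, v in matrix.items() if k[1] == "recollection"}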

Practices

Why bring up practices in a principles framework?

Three reasons.  First and foremost, a substantial part of computing knowledge is embodied in the skills of computing people.  We call it know-how, in contrast to the descriptive "know-what" of principles.  This knowledge is passed on by apprenticeship, practice, and engagement with fellow professionals and with customers.

Second, everyone recognizes that practices and principles are not the same.  This is most visible in the common phrase, "principles versus practice."  A great principles framework needs to say what the principles are and what they are not; in particular, they are not practices.  However, we do not hold principles and practices in opposition as suggested by the word "versus"; we hold them as two distinct dimensions of the knowledge space of computing.

Third, notwithstanding the statements above, many computing people do not see practices as a form of knowledge.  They see practices as the dynamic application of principles whose representations are stored in the brain.  We hold that practices are a form of knowledge distinct from principles.  They are learned by embodiment -- from doing, not from studying descriptions; from experience in action, not from reflective thought.

In sum, we believe that no picture of the computing field can be complete without a place for this kind of knowledge.

If practices are so important, then isn't computing an engineering field?  Which is it, science or engineering?

Both.  The science side emphasizes computing principles; the engineering side emphasizes the practice of producing useful computing artifacts.  But in reality the distinction is more blurred than this.

Our definition of practices -- embodied knowledge -- is not specific to engineering.  Mathematics and science have their distinctive practices as well.  Mathematicians and scientists form their own communities of practice.  They have their own understandings of what is competent and incompetent performance.  By giving practices a place in the framework, we are not overemphasizing engineering.  Quite the opposite: we are expanding the space of understanding of computing practice to include science.

In the 1989 ACM report, Computing as a Discipline, we noted that the three processes of theory, abstraction, and design are intricately interwoven into computing.  These three processes are inheritances respectively from mathematics, science, and engineering.  Although people are less concerned today about these historical roots, the roots are real.  It is not our intention to emphasize any one of the three over the other two.

Are you proposing to organize a "computing practices" curriculum?

We have proposed to recognize practices in the framework because they are a distinct form of knowledge and the framework would be incomplete without them.

In most current curricula, little distinction is made between a fundamental principle and a practice; students don't appreciate the difference.  Many people think that practices are the applications of principles, and therefore a grounding in principles is necessary for effective practice. In reality, there are competent practitioners who cannot say what principles they use, and there are competent intellectuals who can't build software well. Most of our curricula do not offer a good balance of principles and practice.

Our recommendation for a "computing practices" curriculum is a tactic to create a place for learning practices, a place where students can learn to be competent at programming, systems, evaluating, and innovating.

However, it is not a requirement of the framework that curricula include a Computing Practices track.  The framework simply distinguishes the two kinds of knowledge, principles and practice, as two equally important dimensions of the computing knowledge space.

A common complaint is that the typical computer science curriculum is already organized around practices, especially programming.  Does not your proposal to distinguish practices worsen the problem?

As noted earlier, the standard curriculum does not clearly distinguish between principles and practice.  We hope to sharpen the distinction and enable a balance to be achieved.

However, a bigger problem is that opinion varies considerably among computer scientists themselves as to the amount of practices already covered in a typical curriculum.  Some believe that the curriculum is so heavily practice oriented that students get inadequate opportunities to learn the principles.  Others believe that the curriculum does not contain enough emphasis on practice and hence graduates are not ready to join the workforce as fully productive members.

At the very least, this project will allow for a debate on the question of what we mean by principles and practices and how a department can balance the two in its curriculum, according to its educational objectives.

Why is the ladder of competence part of the discussion?  This puts too much emphasis on practice and invites us to consider certification.

A ladder of competence arises in every community of practice.  It means that practitioners exhibit different levels of skill that are recognized within the community.  Most communities recognize at least eight distinct levels -- beginner, advanced beginner, competent, proficient, expert, virtuoso, master, and legend.  Most professionals can name people in their communities at each of the levels.

The notion of a ladder is important because it affects how we teach computing.  As they become more competent, we give our students greater challenges, always with the intention of helping them achieve the next higher level.  Therefore, part of our work in designing curricula must be to specify the criteria for each rung of the ladder in a specialty and to fold these criteria into our learning objectives for students.  In undergraduate curricula, we will focus primarily on beginners and advanced beginners; and in graduate curricula, competent and proficient professionals.

It is possible that professional societies could offer independent certifications of different levels of skill in some computing specialties.  The IEEE Computer Society already does this for software engineering, and the British Computer Society does it in many areas.  Such services offer individual professionals an authoritative credential testifying to their level of skill and attainment.  Many professionals cite such credentials in their resumes.

Some jurisdictions offer state recognition of a professional skill; the PE (professional engineer) license is a common example.

Although professional societies may offer certification services and states may offer professional engineer certifications, these activities are independent of a Great Principles framework.  A ladder of competence appears in the teaching of computing because it is a fundamental reality of all communities of practice and it affects our teaching and learning objectives.

Where is the scientific method of investigation covered?

The 1989 ACM report, Computing as a Discipline (view), noted that the computing field is built around three important processes, inherited respectively from mathematics, science, and engineering.  The processes were called theory, abstraction, and design.  The theory process reflects the practice of mathematicians to define mathematical objects and their relationships, and then prove propositions about them.  The abstraction process reflects the practice of scientists who define models of physical processes and validate their predictions.  The design process reflects the practice of engineers who specify, implement, and test systems.  Computing, of course, has evolved its own blends of these three processes.

The process of scientific investigation, in the opinion of many, has received short shrift in many computing departments.  In some parts of the field the process is widely followed -- for example, in computational science, bioinformatics, simulations of physical processes, and performance analysis.  In other parts of the field there is very little use made of experimental methods as ways to understand complex systems.  There has been an ongoing debate about the extent to which "experimental computer science" is, or should be, part of the study of the field.

In the great principles framework proposed here, we have identified the scientific method as an important process and held a place for it under the heading of "modeling practices".  We believe that every practicing computing professional should understand how to design experiments, present and visualize data, predict performance, and set up simulations.
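
As a small example of this modeling practice (a sketch, assuming Poisson arrivals and exponential service), one can simulate a single-server queue and check the measurement against the textbook M/M/1 prediction R = S/(1-U), where S is the mean service time and U the utilization:

    # Sketch: predict performance analytically, then validate by simulation.
    import random

    random.seed(42)
    arrival_rate, service_time = 0.8, 1.0   # utilization U = 0.8

    def simulate(n_jobs):
        clock = server_free = total_response = 0.0
        for _ in range(n_jobs):
            clock += random.expovariate(arrival_rate)         # next arrival
            start = max(clock, server_free)                   # wait if server busy
            server_free = start + random.expovariate(1.0 / service_time)
            total_response += server_free - clock             # response time
        return total_response / n_jobs

    predicted = service_time / (1 - arrival_rate * service_time)  # R = S/(1-U) = 5.0
    print(f"predicted R = {predicted:.2f}, simulated R = {simulate(200_000):.2f}")

The agreement between the predicted and measured values is exactly the kind of model validation the scientific method calls for.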

Programming

How can programming not be the central practice of computing?  Computers need programs to run.

This issue is discussed at some length in The Field of Programmers Myth (view) and again in Computing is a Natural Science (view).

The tradition of computer science holds that computation is the behavior of a computer under the control of a program. It follows from this that creating programs must be the central practice of computing.

But if you accept the idea that computation is the principle and computers are the tool, you see that there are many important questions that do not involve programming.  For example, no one "programmed" the information processes that read DNA and build new cells; yet understanding and influencing these processes is a central question in biology.

Long before we proposed the Great Principles framework, many leading computer scientists challenged the traditional view as too narrow and incomplete.  They cited numerous examples of computer scientists doing their jobs without "programming".  For example, real systems have requirements so complex that it is impossible to tell whether the specifications of inputs and outputs are complete or accurate, or to give any formal proof of correctness.  Many software developers don't program at all; they assemble and link parts from software libraries created by others.  Many computing practitioners design systems and architectures and hire programmers to do the coding.

These real problems are the motivation for software engineering.  Software engineers believe that the traditional notion behind programming -- that programs implement mathematical functions -- cannot cope with the complexity and fuzziness of requirements in real, large, interactive, and safety critical applications.  They believe that software development relies on engineering processes to translate complex requirements into working systems, to deal with fuzzy and shifting requirements, to assess and manage risk, to systematize the process of locating and eradicating errors, to organize and manage teams of programmers, and to satisfy customers.

In the Great Principles framework, we argue that computing professionals need skills at four core practices -- programming, systems, modeling, and innovating.  Programming is not enough.  If they lack skill in any one, they are not likely to be seen as full-fledged professionals.  Programming is important, but it is a peer of the other three practices.

Isn't it true that programming is to computing as equations are to mathematics?  Programs are executable notation for describing algorithms and producing the products of computing.  How can you say that programming is not THE single most important, defining practice of computing?

In 1989, Edsger Dijkstra debated with several contemporaries, defending exactly this claim: that programming is the core of computing (view). His critics rejected his argument.  They thought this approach was too limiting.  They advocated a systems approach.

I certainly agree that we can try to define programming in a broader sense so that it encompasses all we do.  This is what Dijkstra sought to do.  The problem is, the harder we've tried this, the more ingrained the outside perception that computing = programming has become.  Outsiders understand programming in the narrow sense we would call coding and not in the wider sense that we have in mind.  That outside perception is now proving to be costly and we really do need to try something different.

What about the importance of programming as notation?  I absolutely agree: a programming language is notation for algorithms, just as algebra is notation for mathematical equations.  But, as we know from Kurt Gödel as well as from experience, there is no single universal notation.  There are many programming languages, each representing a different way of thinking about design and problem-solving.  A competent computing professional needs to be fluent in multiple languages and to be able to select the right one for the design problem at hand.

Indeed, each area of computing -- computation, communication, coordination, recollection, automation, evaluation, design -- has developed notations suitable for what it does.  When I talk about coordination, I use notations for action loops and flow dependencies, notations that are probably not used elsewhere in computer science.  When I talk about operating systems, I use virtual machine hierarchy notations.  When I talk about performance prediction, I use queueing notations.  When I talk about databases, I use relational notations.  When I talk about distributed computation, I use communicating state-machine notations.  And so on it goes.  We are creatures of language.  We have language and notations for every area of action and practice.  Many of these notations are purely descriptive and not executable.  It's when we delegate actions to computing machines that we become interested in "executable" notations.

Thus many of the notations we use are not executable and don't have to be.
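
To show how one of these notations can be made executable when we do want to delegate the actions, here is a sketch of two communicating state machines, each written as a transition table.  The protocol is a hypothetical stop-and-wait exchange, not any standard one:

    # Sketch: a communicating-state-machine notation made executable.
    # Each machine is a table: (state, event) -> (next state, output).
    SENDER = {
        ("ready",   "tick"): ("waiting", "DATA"),   # send a frame, await the ack
        ("waiting", "ACK"):  ("ready",   None),     # ack received, ready for next
    }
    RECEIVER = {
        ("idle", "DATA"): ("idle", "ACK"),          # deliver the frame, return an ack
    }

    def step(machine, state, event):
        return machine[(state, event)]

    # Drive the two machines against each other for two frames.
    s_state, r_state = "ready", "idle"
    for _ in range(2):
        s_state, frame = step(SENDER, s_state, "tick")
        r_state, ack   = step(RECEIVER, r_state, frame)
        s_state, _     = step(SENDER, s_state, ack)
        print(f"frame={frame}, ack={ack}, sender back to {s_state}")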

You don't go far enough.  In addition to a "programming lab", let's have laboratories in each of the important areas of computing.

This is a design issue. Some departments might want to consider such an organization to achieve their learning objectives. The Great Principles framework is not intended to offer advice in the design of individual curricula. We think that individual departments should be as creative and innovative as possible in their approaches to teaching computing.

We definitely do not recommend eliminating programming (as lab practice) for beginners.  Computing people have become highly skilled at designing and manipulating abstractions; but few of us appreciate how abstract computing seems to outsiders.  Programming gives us the means to connect our private world of abstraction with the public world of actions.  Programs put abstractions to action.  It is important that even our beginners appreciate this.

General Issues

What would be in a great principles library?  Doesn't the Internet already serve that purpose?

The purpose of a Great Principles Library (GPL) would be to assemble and maintain a collection of high-quality materials, structured by a great principles framework, that document and teach the fundamental principles of computing. We have discussed the organization of a GPL in a companion document (view).

For example, the library would implement database views corresponding to the seven categories of principles, the four core practices, and the core technologies.  Each view of the library would offer tutorial materials for beginning, intermediate, and advanced practitioners; seminal papers; historical summaries of the evolution of principles and practices; stories of great innovations and inventions; and multimedia materials supporting the above.

Since the body of principles is not static, the library would be managed by a board of editors who would keep the library up to date by incorporating new principles and occasionally demoting old principles to lower levels of importance.  They would commission new items and cross-reference existing published items for those who want to branch out.  They would purposely keep the library small and selective, showing off the very best material in the field.

The GPL is not intended to support a course or a set of courses.  It is intended both as a dynamic representation of the body of knowledge of the field and as a tool for exploring the field.  Its ability to display links and connections among technologies and principles, from computing and from related fields, would support new discoveries and innovations.

Much of the material visible in the GPL already exists on the Internet.  For example, seminal papers are available from the ACM Digital Library, and many good introductory articles are in Wikipedia.  The GPL would link to all these materials.  New materials commissioned for the GPL would be linked from the ACM Digital Library.  The important new function of the GPL is its role as a portal and a tool for discovery.

Although one can certainly do Internet searches and find items about fundamental principles, the results of the searches are likely to be incomplete and to be overwhelmed with many low-quality items.  The GPL would provide a high-quality, authoritative source for Great Principle materials.

How does this approach develop an historical sensibility about computing?

To be taken as a great principle, a statement must be universal, invariant, unavoidable, and recurrent.  To establish that a principle meets these criteria, we need to trace the principle's history.  A sure sign of a strong pedigree is that a principle has been independently rediscovered by different groups.  Thus an historical sensibility is vital for appreciating and learning within a great principles framework.

Research papers seldom cite literature more than a decade old.  Their authors limit themselves to a recent time horizon and discount the contributions from the more distant past.  Many young people are openly skeptical that any result more than five or ten years old is still relevant.  Few appreciate the fundamental work done by the pioneers of the field, which they are only now rediscovering.

Lacking an historical sensibility, computing people are more likely to repeat past mistakes and less likely to appreciate what is fundamental and pervasive about the principles.

Simply offering a new definition of the field won't solve the big problems facing the field.

We agree completely.  The technology and programming view is so deeply ingrained, it will take a lot of work to open our thinking and practices to an alternative.  A great principles framework can be nothing more than a roadmap of the field.  But a map is vital.   Without a map, the journey is impossible.

As part of your motivation, you mention media perception of computing.  Why should we be driven by media perceptions?  We are a science!

It would be nice if the media portrayed our field in a better light.  My purpose in proposing a great principles framework is not to appease the media.  It is to help us organize our practices and our stories differently.  The media only say what they learn from talking with us and watching us.  If we don't like what they say, we should ask what we are doing to inspire their stories.  We can't change the media, but we can change our behavior.  Once we do that, the media will report us differently.

The framework seems to be backward looking.  Only old established principles can make it, even if they are not used today.  New people won't be attracted to the field unless they can discover new principles.

A great principles approach is a new way of thinking for our field.  It can enrich and extend our traditional ways.  Behind the complex arrangements of technology that so often confront us, we can see the guiding and constraining principles.  Many seemingly different technologies are connected by the same principles. Seeing connections with other technologies based on similar principles thus opens the door for new discoveries.

The principles we work with today were discovered as past generations of computing people grappled with complex, seemingly intractable problems -- for example, noncomputability, code breaking, ballistic and orbital calculation, thrashing, timing bugs in parallel systems, secret communication over an open Internet, fast-enough algorithms for common problems, information sharing, and data compression.  They discovered principles that led to solutions of these problems.  But the process of discovery is hardly over.  We grapple today with a contemporary array of seemingly intractable problems, and we will surely discover new principles that will enable their solutions -- for example, user interfaces, identity theft, securing networks against attack, spam, information overload, dependable software, hastily formed networks, distance learning, and discovering terrorist plots beforehand.  Over time, many of the older principles may fade in importance while the newer ones move into the spotlight.

Thus a great principles approach to the field does not foreclose the discovery of new principles; it encourages discovery.