American Scientist Publications


Peter J. Denning



Intelligence may not be computable (with Ted Lewis). 2019. A hierarchy of artificial intelligence machines ranked by their learning power shows their abilities -- and their limits.

Bitcoins Maybe; Blockchains Likely (with Ted Lewis). 2017. The innovative foundations of the cryptocurrency may outlive the currency itself, as its verification method finds applications everywhere.

Computers that can run backwards (with Ted Lewis). 2017. Reversible computation -- which can, in principle, be performed without giving off heat -- may be the future of computing.
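A minimal sketch of the idea behind reversible logic (not taken from the article): the Toffoli, or controlled-controlled-NOT, gate is universal for reversible computation and is its own inverse, so no input information is destroyed and, in principle, no heat need be dissipated.

```python
def toffoli(a, b, c):
    """Toffoli gate: flip bit c only when both control bits a and b are 1."""
    return a, b, c ^ (a & b)

# Applying the gate twice restores every input: the computation runs backwards.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits
```

Because the gate is a bijection on its inputs, any circuit built from it can be run in reverse to recover the original data.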

Computational Thinking in Science. 2017. The computer revolution has profoundly affected how we think about science, experimentation, and research.

Cybersecurity is Harder Than Building Bridges (with Dorothy Denning). 2016. Protecting the Internet and online computerized systems from attack is a difficult, messy problem. Here's why.

The Internet After 30 Years. 1997. From Internet Besieged, Addison-Wesley, 1997. An update of a 1989 American Scientist article, recording the perceptions at that time about the formation and future of the Internet.

Passwords. 1992. From American Scientist, March-April 1992. Reusable passwords are gradually giving way to one-time passwords implemented with smart cards.

Queueing networks. 1991. From American Scientist, May & September 1991. Queueing networks are a simple, powerful, and effective model for congestion, bottlenecks, response time, and throughput of many computing systems and networks.
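As an illustration of the kind of model the article describes (a standard M/M/1 result, not code from the article): the mean response time of a single server grows sharply as its utilization approaches saturation.

```python
def mm1_response_time(service_time, arrival_rate):
    """Mean response time of an M/M/1 queue: R = S / (1 - U),
    where utilization U = arrival_rate * service_time."""
    utilization = arrival_rate * service_time
    if utilization >= 1:
        raise ValueError("server is saturated (utilization >= 1)")
    return service_time / (1 - utilization)

print(mm1_response_time(0.5, 1.0))   # U = 0.5: response time doubles to 1.0
print(mm1_response_time(0.5, 1.8))   # U = 0.9: response time is about 5 service times
```

This single formula captures why bottleneck devices dominate the response time of a whole network of queues.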

Modeling Reality. 1990. From American Scientist, November-December 1990. A perspective

Saving All the Bits. 1990. From American Scientist, September-October 1990. In 1990 Big Data meant data sets in the terabytes, much smaller than today's Big Data sets. But the problems were the same: getting enough bandwidth to move data to their storage sites, finding enough storage, and retrieving the data for analysis and visualization. Here we question why we have to save all the bits. Maybe processing them at the time of collection could reduce the amount to be saved.

Bayesian Learning. 1989. From American Scientist, May-June 1989. Bayesian Learning is a form of inference that uses Bayes' Rule from statistics to find the most probable hypothesis given the available data. A program called Autoclass used Bayesian Learning to infer similarity classes of objects in data from the Infrared Scanning Telescope. Without knowing anything about the similarity classes astronomers had already assigned to those objects, Autoclass got them all right and found a new class the astronomers had missed. Bayesian Learning was new and controversial at the time this was written and has since become a cornerstone of machine learning.
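The core inference step can be sketched in a few lines (an illustrative toy, not Autoclass itself; the class names and probabilities are made up): by Bayes' Rule, the posterior P(h | data) is proportional to the prior P(h) times the likelihood P(data | h), so the most probable hypothesis maximizes that product.

```python
from math import prod

def most_probable_hypothesis(priors, likelihoods, data):
    """Return the hypothesis h maximizing P(h) * P(data | h),
    which by Bayes' Rule is proportional to P(h | data)."""
    def posterior(h):
        return priors[h] * prod(likelihoods[h][x] for x in data)
    return max(priors, key=posterior)

# Hypothetical two-class example:
priors = {"class_A": 0.5, "class_B": 0.5}
likelihoods = {
    "class_A": {"hot": 0.8, "cold": 0.2},
    "class_B": {"hot": 0.3, "cold": 0.7},
}
print(most_probable_hypothesis(priors, likelihoods, ["hot", "hot", "cold"]))
# class_A scores 0.5*0.8*0.8*0.2 = 0.064; class_B scores 0.5*0.3*0.3*0.7 = 0.0315
```

Autoclass applied this same principle at scale, searching over the number of classes as well as their parameters.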

Sparse Distributed Memory. 1989. From American Scientist, July-August 1989. Sparse Distributed Memory is a model invented by Pentti Kanerva for human long-term memory. The model, which can be formulated either as a neural network or an associative memory architecture, associates patterns (long bit vectors) with other patterns. It can remember sequences and associations, extract signals from noise, and generate abstractions of related cases. And it can forget.
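A minimal sketch of Kanerva's scheme (the dimensions, radius, and helper names here are illustrative choices, not taken from the article): a pattern is written into the counters of every "hard location" whose random address lies within a Hamming radius of the write address, and a read sums those counters and thresholds each bit, so even a noisy cue retrieves the clean pattern.

```python
import random

random.seed(0)
N, M, RADIUS = 256, 1000, 120        # pattern length, hard locations, activation radius

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

addresses = [random_bits(N) for _ in range(M)]   # fixed random hard locations
counters = [[0] * N for _ in range(M)]           # one counter vector per location

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def active(addr):
    """Indices of hard locations within the Hamming radius of addr."""
    return [i for i in range(M) if hamming(addresses[i], addr) <= RADIUS]

def write(addr, data):
    for i in active(addr):
        for j, bit in enumerate(data):
            counters[i][j] += 1 if bit else -1   # increment for 1-bits, decrement for 0-bits

def read(addr):
    rows = [counters[i] for i in active(addr)]
    return [1 if sum(col) > 0 else 0 for col in zip(*rows)]

pattern = random_bits(N)
write(pattern, pattern)                          # store the pattern autoassociatively
noisy = [b ^ 1 if j < 20 else b for j, b in enumerate(pattern)]   # corrupt 20 bits
recovered = read(noisy)                          # the noisy cue retrieves the clean pattern
```

Because each pattern is spread across many locations and each location holds contributions from many patterns, the memory degrades gracefully, abstracts over similar cases, and gradually forgets as counters are overwritten.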