The scaling of information technology has been an active area of inquiry from the outset of its commercial development. Pioneering studies, including those of [Moore, 79] and [Keyes, 87], identified the possibility of sustained exponential improvements in key semiconductor device parameters, as well as the significant obstacles that would need to be overcome to maintain such a pace. The enormous intellectual and financial investment in that effort has translated into steady improvements in overall system performance, so that more and more computer applications are now less and less resource-bound. But what has typically been missing from forecasts such as the Semiconductor Industry Association's influential roadmap is a projection of the relevance of the whole scaling effort. A narrow focus on improving device performance ignores the importance of the context in which computers are used, a neglect that is leading to very real scaling limits that are among the most serious obstacles to further progress. These include the economics of producing both chips and chip fabs, and the difficulty of designing and managing very large-scale systems. Beyond their practical significance, these issues present some of the most profound research questions in all of information technology, but, crucially, they are questions that cut across traditional disciplinary boundaries. Most importantly, it is no longer possible to maintain the fiction that developing hardware can be neatly separated from developing software.
The Center for Bits and Atoms is an ambitious attempt to close this historical divide by bringing together the resources required to simultaneously study the content of information and its physical properties, on length scales from atomic nuclei to global networks. It aims to develop architectures for scaling information technology appropriate to each of these levels of description, and, through a network of partnerships, to deploy these capabilities for the greatest global impact. Along the way, it seeks to fundamentally revisit the notion of what a computer is, and what a computation is. The CBA's program is based on the belief that the most significant of all the obstacles to progress has been the isolation of the investigation of each of these pieces from that of the larger whole that they promise to enable.
The research agenda is organized into three layers, in order of accessibility and importance. The first of these addresses system-level questions, asking how to extend networks of (relatively) conventional processors up to and beyond billions of interacting entities. This coming complexity is being driven by countless practical applications, but it will break the existing protocols used to operate the Internet as well as the techniques used to manage it. The approach taken here will be to "de-layer" the divisions between physical transport, logical connection, and application implementation, so that when devices are connected they simultaneously create a network, a distributed data structure, and the computer to manipulate it. Crucially, the algorithms for processing and routing information are assembled along with the components themselves, and adapt autonomously as nodes come and go, so that scalability is literally built in as the system grows. De-layering also exposes the capabilities of low-level devices to high-level applications (and vice versa), so that rich interfaces such as sensor networks can become the norm rather than the exception.
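To make the de-layering idea concrete, here is a minimal sketch, not any protocol proposed by the CBA: the names (Node, Network, RADIO_RANGE) and the hop-count-gradient scheme are illustrative assumptions. The point is that routing state is created as devices are added and is rebuilt as they leave, so the network, the data structure, and the machinery that uses them come into being together.

```python
import math, random

class Node:
    def __init__(self, name, x, y, is_sink=False):
        self.name, self.x, self.y = name, x, y
        self.is_sink = is_sink
        self.hops = 0 if is_sink else math.inf   # estimated hop count to the sink
        self.neighbors = set()

class Network:
    RADIO_RANGE = 1.5   # illustrative connectivity radius (an assumed parameter)

    def __init__(self):
        self.nodes = {}

    def add(self, node):
        # Connecting a device simultaneously extends the network and its routing state.
        for other in self.nodes.values():
            if math.dist((node.x, node.y), (other.x, other.y)) <= self.RADIO_RANGE:
                node.neighbors.add(other.name)
                other.neighbors.add(node.name)
        self.nodes[node.name] = node
        self.relax()

    def remove(self, name):
        # Losing a device triggers rebuilding of the gradient (a real protocol
        # would repair it locally rather than from scratch).
        node = self.nodes.pop(name)
        for other in node.neighbors:
            self.nodes[other].neighbors.discard(name)
        for n in self.nodes.values():
            if not n.is_sink:
                n.hops = math.inf
        self.relax()

    def relax(self):
        # Repeated local updates: each node's hop count settles to 1 + min over neighbors.
        changed = True
        while changed:
            changed = False
            for n in self.nodes.values():
                if n.is_sink:
                    continue
                best = min((self.nodes[m].hops for m in n.neighbors), default=math.inf)
                if best + 1 < n.hops:
                    n.hops = best + 1
                    changed = True

    def route(self, source):
        # Greedy descent of the hop-count gradient from source to sink.
        path, current = [source], self.nodes[source]
        while not current.is_sink:
            if math.isinf(current.hops):
                return None          # source is disconnected from the sink
            nxt = min(current.neighbors, key=lambda m: self.nodes[m].hops)
            path.append(nxt)
            current = self.nodes[nxt]
        return path

random.seed(0)
net = Network()
net.add(Node("sink", 0.0, 0.0, is_sink=True))
for i in range(10):
    net.add(Node(f"n{i}", random.uniform(0, 3), random.uniform(0, 3)))
print(net.route("n5"))
```

In this toy version no node holds global state: each device knows only its neighbors and its own hop estimate, which is why the structure can keep adapting as the system grows or shrinks.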
The second layer builds on this system-level insight to ask what technologies can meet the demand for embedding billions of computers into everyday objects. Even though the cost per transistor has fallen exponentially for decades, the minimum cost per packaged part has remained relatively unchanged over the whole VLSI scaling era. For such large-scale systems to be compatible with the global GDP, it is necessary to fundamentally rethink the nature of device fabrication. The CBA's approach will be to seek to eliminate central chip fabs entirely, using table-top printing technologies to move the production of computers to where and when they are needed. The fundamental enabling insight that makes this possible is the use of electronically active nanocrystalline inks. Not only does this promise to dramatically reduce the cost per part; it also offers a route from mass production to the customized design of computers, as well as a way to grow from 2D to 3D architectures.
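As a rough illustration of why the per-package cost floor matters, the following back-of-envelope sketch uses placeholder figures; the device count, package cost, GDP, and industry revenue below are assumptions for illustration, not numbers from this document.

```python
# Back-of-envelope illustration of the packaging-cost argument. Every number
# here is an assumed, order-of-magnitude placeholder, not a figure from the text.
TARGET_DEVICES   = 1e12     # "billions of computers in everyday objects", pushed to a trillion
COST_PER_PACKAGE = 0.30     # assumed floor on the cost of a packaged part, in dollars
GLOBAL_GDP       = 100e12   # rough world GDP, in dollars
CHIP_INDUSTRY    = 500e9    # rough annual semiconductor industry revenue, in dollars

total = TARGET_DEVICES * COST_PER_PACKAGE
print(f"total packaging-limited cost: ${total/1e9:,.0f} billion")
print(f"share of world GDP:           {total / GLOBAL_GDP:.2%}")
print(f"vs. today's chip industry:    {total / CHIP_INDUSTRY:.1f}x annual revenue")
# Even if the transistors themselves were free, a fixed per-package cost makes
# each generation of a trillion embedded computers comparable in scale to the
# entire present chip industry; that is the economic pressure behind moving
# fabrication out of central fabs.
```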
The third (and most speculative) layer asks about the fundamental mechanisms for manipulating information that will be enabled by this agenda. It seeks to apply the insights developed in the first two layers, into programming enormous, imperfect distributed systems and into the accessible fabrication of nanoscale structures, in order to harness the intrinsic computational capabilities of natural systems. Fundamental to this approach is the conviction that progress towards these long-standing goals has been limited more by a lack of insight into appropriate computational models than by a lack of experimental candidates; the research will build on encouraging early work on manipulating the dynamics of molecular systems.
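As a purely illustrative sketch of what harnessing intrinsic dynamics can mean computationally, the toy below (in no way the molecular approach alluded to above) lets the simulated thermal relaxation of a small spin system solve a tiny optimization problem; the answer is read off from the state the physics settles into.

```python
import math, random

# Toy stand-in: spins coupled antiferromagnetically relax toward low energy,
# and the final configuration encodes a good cut of the underlying graph
# (a small MAX-CUT instance). The graph and schedule are arbitrary choices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]        # arbitrary 4-node example graph
spins = [random.choice([-1, 1]) for _ in range(4)]      # one "particle" per graph node

def energy(s):
    # Energy is low when linked spins disagree, so the ground state maximizes the cut.
    return sum(s[i] * s[j] for i, j in edges)

temperature = 2.0
for step in range(5000):
    i = random.randrange(len(spins))
    trial = spins[:]
    trial[i] = -trial[i]
    delta = energy(trial) - energy(spins)
    # Metropolis dynamics: downhill moves are always accepted, uphill moves with
    # Boltzmann probability, while the temperature is slowly lowered.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        spins = trial
    temperature = max(0.01, temperature * 0.999)

cut = [(i, j) for i, j in edges if spins[i] != spins[j]]
print("final configuration:", spins, "-> edges cut:", len(cut))
```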
This program will be grounded in two ways. First, by working with partners to apply the results (starting with the expected early insights into deploying and managing networks of ultra-lightweight processors) to compelling applications of computing that have been beyond the reach of traditional computers. And second, by developing an instructional program to help educate a generation that can reason across the traditional hardware/software boundary, and can program systems whose complex behavior emerges from the interaction of many simple elements.