After decades of reliable improvement, processor speeds have flattened; for the foreseeable future, computers will add processing power by adding more processors, rather than faster ones. This is a tremendous challenge for software designers. It's far too easy for software using multiple processors to burn a growing fraction of available processing power on coordination overheads like locking, rather than on actual work. That is, it's far too easy for software not to scale: to get slower as processors are added. An important reason for this is simply that scalability is poorly understood. Some programs don't scale because they're badly written, but others don't scale because their goals are fundamentally impossible to accomplish in a scalable way. Programmers lack effective tools for high-level reasoning about software scalability limitations, and thus waste effort on both impossible and uninteresting tasks.
We will produce the first well-grounded and formal reasoning procedure for scalability that is flexible enough to apply to an entire operating system. Our scalability rule links commutativity and scalability. We characterize software interfaces as more or less inherently scalable depending on the contexts in which those interfaces commute: the more commutative an interface (that is, the more often the order of its function calls doesn't matter), the more scalable an implementation can be. We prove that a scalable implementation exists for any commutative context. This idea can already guide software designers in developing easily scalable interfaces, but we will also provide a set of automated tools for measuring interface commutativity and for finding implementation scalability bottlenecks, and evaluate our ideas in a highly scalable operating system. The resulting tools and ideas could make scalable software far easier to design and program, and thus help software designers provide the software performance on which so much of our economy depends.
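To make the rule concrete, the sketch below (in C, using hypothetical functions invented for illustration rather than anything from the proposed system) contrasts two specifications of an id allocator. alloc_lowest() must return the lowest unused id, roughly as POSIX requires open() to return the lowest-numbered unused file descriptor, so the results of two calls depend on which runs first: the calls do not commute, and any implementation must coordinate. alloc_any() may return any unused id, so each caller gets the same result in either order: the calls commute, and something like a per-core pool could serve them without shared state.

```c
#include <stdbool.h>
#include <stdio.h>

#define NIDS 16
static bool used[NIDS];

/* Non-commutative specification: must return the lowest unused id,
 * so the result observably depends on call order. */
static int alloc_lowest(void) {
    for (int i = 0; i < NIDS; i++)
        if (!used[i]) { used[i] = true; return i; }
    return -1;
}

/* Commutative specification: may return any unused id.  Here each
 * caller draws from its own half of the id space, as per-core pools
 * might, so neither call's result depends on the other. */
static int alloc_any(int caller) {
    int lo = caller * (NIDS / 2), hi = lo + NIDS / 2;
    for (int i = lo; i < hi; i++)
        if (!used[i]) { used[i] = true; return i; }
    return -1;
}

int main(void) {
    /* Under alloc_lowest, swapping the two calls swaps their results,
     * so the interface itself forces coordination. */
    int a = alloc_lowest(), b = alloc_lowest();
    printf("lowest-unused spec: caller 0 -> %d, caller 1 -> %d\n", a, b);

    for (int i = 0; i < NIDS; i++)
        used[i] = false;

    /* Under alloc_any, each caller gets the same id in either order:
     * the calls commute, so a scalable implementation can exist. */
    int c = alloc_any(0), d = alloc_any(1);
    printf("any-unused spec:    caller 0 -> %d, caller 1 -> %d\n", c, d);
    return 0;
}
```

The point is the specification, not this particular code: the lowest-unused requirement makes the calls non-commutative no matter how cleverly they are implemented, while the relaxed specification leaves room for an implementation that scales.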