Multi-core technology promises an inexorable increase in hardware-level parallelism and leaves software developers with two challenges: (1) how to utilize large numbers of processors to achieve higher performance, and (2) how to verify the correctness of scalable concurrent programs. This project explores these challenges through the development of a new programming model called "relativistic programming".
Relativistic programming targets shared-memory architectures, but rather than presenting a consistent global view of memory, it allows each processor to have its own relativistic view of memory, containing fresher or staler values than other views. By imposing minimal constraints on each processor's execution, relativistic programming is intended to achieve the scalability characteristics of message passing while embracing the shared-memory architecture of modern multi-core hardware. Relativistic programming primitives enable fine-grained control over the visibility of events in the execution of a concurrent program and eliminate the need to switch between two different programming models for the local and remote parts of a computation.
This research involves formally defining the relativistic programming model, exploring its support for formal verification and for static and run-time analysis, and developing tools and techniques to facilitate and promote its widespread use in operating system kernel development.