This project aims to improve the practice of shared-memory concurrent programming by exploring a fundamentally new way to specify, verify, test, and monitor how threads communicate via memory. Shared-memory concurrency has become an increasingly important style of programming because it is a common way to exploit multicore processors, and desktops, laptops, servers, and even mobile devices are increasingly multicore. Shared-memory concurrency is also widely recognized as difficult and error-prone, and much prior work has aimed to detect bugs in this style of programming automatically. This project complements that work by focusing on application-specific specifications of how different parts of a codebase use concurrency to communicate, rather than on how individual pieces of data are used. The approach aims to improve the quality of software used throughout society, to improve the productivity of software developers and testers, and to influence how students are taught concurrent programming.
At the heart of the approach is a communication graph in which the nodes are program points and the edges indicate communication via shared memory. That is, for each edge, the code that the source node represents performs a write in one thread that is subsequently read in another thread by the code that the target node represents. Such graphs can form the foundation for conceptual and intellectual tools useful throughout the development and maintenance of software, including specifications (declarations of what communication is allowed), static checking (program analysis to infer possible communication), dynamic checking (efficient run-time communication monitoring), testing (design and evaluation of a test suite in terms of communication coverage), and automatic anomaly detection and bug isolation (in terms of unexpected communication) for deployed software. This project is developing and evaluating tools inspired by this foundation, leveraging synergies across the execution stack, including work on computer architecture, run-time systems, compilers, programming languages, automatic testing, and static analysis.
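To make the communication-graph idea concrete, the following is a minimal sketch, not the project's actual infrastructure, of a dynamic monitor that records an edge from a writing program point to a reading program point whenever one thread reads a value last written by a different thread. All names (CommGraphMonitor, the "producer:store_x" and "consumer:load_x" program-point labels) are illustrative assumptions.

```python
import threading

class CommGraphMonitor:
    """Toy dynamic checker: records a communication-graph edge
    (writer_point -> reader_point) whenever a thread reads a
    shared location last written by a different thread."""
    def __init__(self):
        self._lock = threading.Lock()
        self._last_writer = {}  # location -> (thread name, program point)
        self.edges = set()      # set of (writer_point, reader_point)

    def write(self, loc, point):
        # Remember which thread and program point last wrote this location.
        with self._lock:
            self._last_writer[loc] = (threading.current_thread().name, point)

    def read(self, loc, point):
        # If the last writer was a different thread, record an edge.
        with self._lock:
            writer = self._last_writer.get(loc)
            if writer is not None and writer[0] != threading.current_thread().name:
                self.edges.add((writer[1], point))

mon = CommGraphMonitor()

def producer():
    mon.write("x", "producer:store_x")

def consumer():
    mon.read("x", "consumer:load_x")

# Run the writer to completion before the reader so the
# inter-thread write-then-read edge is observed deterministically.
t1 = threading.Thread(target=producer, name="T1")
t1.start(); t1.join()
t2 = threading.Thread(target=consumer, name="T2")
t2.start(); t2.join()

print(mon.edges)  # {('producer:store_x', 'consumer:load_x')}
```

A specification in this style would then be a set of allowed edges; the monitor flags any recorded edge outside that set as unexpected communication.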