This project seeks to codify best practices for machine-checking research papers in programming languages (PLs) by developing a machine-checked textbook of basic results in PL theory. It also aims to build better libraries, proof tactics, and other tools for such work, and to package them for the community. The resulting infrastructure assembles a comprehensive set of educational and technological resources that embody and advance current best practices in proof-assistant support for PL research. It includes:
- A carefully researched and clearly articulated set of best practices for machine-checking work in programming languages, including the choice of a particular proof assistant and some critical decisions about fundamental representation strategies.
- Tools, libraries of theorems, and proof-automation tactics for a range of structures commonly found in PL research, such as syntax with variable binding, typing contexts, and heaps.
- A corpus of web-accessible, community-extensible resources for researchers and students learning to mechanize their own work, in the form of both tutorials and large-scale, well-documented, polished examples illustrating a range of topics and techniques.
- An integrated textbook on machine-checking proofs about PLs.
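The "fundamental representation strategies" mentioned above refer chiefly to how syntax with variable binding is encoded. One standard candidate is the de Bruijn-index representation, where bound variables are numbers counting enclosing binders. The sketch below (in Python rather than a proof assistant, purely for illustration; the abstract does not commit to this strategy, and all names here are hypothetical) shows the two core operations, shifting and capture-avoiding substitution, together with a root beta-reduction step.

```python
# Illustrative sketch of de Bruijn-index lambda terms: a Var carries an
# index counting the binders between the occurrence and its binder.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    idx: int          # de Bruijn index of the variable

@dataclass(frozen=True)
class Lam:
    body: object      # body of the abstraction (one binder deep)

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def shift(t, d, cutoff=0):
    """Shift every index >= cutoff by d (indices below cutoff are bound)."""
    if isinstance(t, Var):
        return Var(t.idx + d) if t.idx >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Substitute s for index j in t; capture-avoiding by construction."""
    if isinstance(t, Var):
        return s if t.idx == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fn, j, s), subst(t.arg, j, s))

def beta(t):
    """One beta step at the root: (\\x. b) a  ->  b[a/x]."""
    assert isinstance(t, App) and isinstance(t.fn, Lam)
    return shift(subst(t.fn.body, 0, shift(t.arg, 1)), -1)

# Example: (\x. \y. x) z  ->  \y. z, where z is a free variable.
# Outside the redex z is Var(0); under the remaining binder it becomes Var(1).
result = beta(App(Lam(Lam(Var(1))), Var(0)))
```

Here `result` is `Lam(Var(1))`. The point of building libraries and tactics for such representations is that these bookkeeping lemmas (shifting commutes with substitution, etc.) recur in essentially every mechanized PL proof.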
This work lies at the intersection of PLs and theorem proving. It extends the POPLMark challenge, seeking to forge a community consensus on tools and representational choices for a range of problems in PL type systems.
Broader Impact: Programming languages form the foundation of the software systems on which science and society increasingly depend. This research promotes better and more secure programming languages; its potential impacts are correspondingly broad and deep, and the technical results may improve the trustworthiness of the nation's critical computing infrastructure. Members of underrepresented populations are encouraged to participate in project workshops, and funds are set aside for scholarships for students in need.