This research addresses a fundamental problem in systems security: how can a machine specify a computation to another machine and then, without executing the computation, check that the other machine carried it out correctly? Over the last several years, a new approach to this problem has emerged, based on refining cryptographic and theoretical tools and incorporating them into built systems. However, despite exciting advances, the resulting systems are still not practical in the normal sense. The goal of this project is truly practical performance for certain real-world problems at realistic input sizes. This research could have wide-ranging impact on third-party computing settings such as cloud computing, distributed systems, hardware built by untrusted manufacturers, and defense applications. More broadly, truly inexpensive verifiable computation would enable new ways of building systems.
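To make the setting concrete, the sketch below shows a classical instance of checking a result without redoing the work: Freivalds' randomized check for matrix multiplication, in which the verifier tests a claimed product C = A x B in roughly quadratic time rather than repeating the cubic-time multiplication. This is an illustrative textbook example, not the cryptographic machinery this project refines; the function names (e.g., untrusted_multiply) are hypothetical stand-ins for an untrusted remote machine.

```python
# Illustrative only: Freivalds' randomized check for matrix multiplication.
# The verifier asks an untrusted party to compute C = A @ B, then checks the
# claim with a few matrix-vector products instead of redoing the full product.
import numpy as np

def untrusted_multiply(A, B):
    """Hypothetical stand-in for the remote machine; it may or may not be honest."""
    return A @ B

def freivalds_check(A, B, C, rounds=20):
    """Accept C as A @ B; a wrong C is accepted with probability at most 2**-rounds."""
    n = A.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))   # random 0/1 vector
        # Compare A(Br) with Cr: three matrix-vector products, no full multiplication.
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                            # caught an incorrect answer
    return True

A = np.random.randint(0, 10, (300, 300))
B = np.random.randint(0, 10, (300, 300))
C = untrusted_multiply(A, B)
print(freivalds_check(A, B, C))                     # True for an honest prover
```

Such special-purpose checks exist only for particular problems; the systems this project targets aim to provide the same kind of cheap verification for general-purpose computations.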
The project applies ideas from the programming languages community to represent computations more efficiently; applies notions of program approximation to produce efficient partial verifiability; refines and implements protocols that promise reduced memory utilization; and develops new protocols that reduce overhead and are better suited to important classes of computations. The methods include refining and adapting existing theory, implementing systems based on the refinements, modeling performance and costs, and conducting rigorous experimental evaluations. The results will be disseminated through peer-reviewed publications, and through software and experimental configurations that will be made freely available.