Today's software systems suffer from poor reliability and security, with software errors costing the U.S. economy upwards of $60 billion annually. This situation is likely to get worse, as the complexity of software systems increases without a matching increase in the effectiveness of software quality tools and techniques. The situation is particularly acute for concurrent systems, which pose significantly greater challenges for software quality tools, yet are becoming more widespread with the growing adoption of multicore machines.
Static analysis software-quality tools are very precise but do not scale well to large code bases. Testing is easy to use, but achieving good coverage requires extensive test suites. We propose to bridge the gap between ad hoc testing and static analysis by combining them in a new scalable technique called predictive testing. Predictive testing uses static program analysis to maximize the effectiveness of a given test suite at finding, during the testing stage, bugs that could otherwise manifest only in real production runs. Although predictive testing tools use complex static analysis and automated theorem proving techniques internally, all of this complexity is hidden from the user behind a testing usage model. For this reason, we expect that such tools can be easily integrated into existing software engineering processes and will be usable even by non-expert developers.
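To make the motivation concrete, consider the following minimal sketch (an illustrative example of our own, in Java; the class and method names are hypothetical and not drawn from any particular tool). It shows the kind of concurrency bug predictive testing targets: a data race whose failing interleaving is so rare that an ordinary test almost always passes, even though every run exercises the racy code. A predictive tool could observe a single passing execution and, by analyzing the recorded trace, infer the alternative interleaving that produces the failure.

public class RacyCounter {
    private int balance = 100;

    // Unsynchronized check-then-act: two threads can both pass the balance
    // check before either performs the subtraction, driving the balance
    // negative. The window is so narrow that a plain test rarely hits it.
    void withdraw(int amount) {
        if (balance >= amount) {
            balance -= amount;  // data race: read-modify-write without a lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RacyCounter account = new RacyCounter();
        Thread t1 = new Thread(() -> account.withdraw(70));
        Thread t2 = new Thread(() -> account.withdraw(70));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Almost every run prints 30 (one withdrawal succeeds, one is
        // refused). A predictive tool analyzing one such passing run could
        // still report the interleaving in which the balance reaches -40.
        System.out.println("balance = " + account.balance);
    }
}

Note that the test itself needs no modification: the developer writes and runs an ordinary test, and the predictive machinery reasons about interleavings the test did not happen to execute.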