Because software failures can and do cause severe, even life-threatening, losses, effective quality assurance remains a constant concern for software developers. Over the past decades, numerous software analysis techniques have been developed to address this concern, and they represent a powerful means of detecting bugs or proving their absence. Despite this power in principle, static program analysis tools have seen relatively limited industry adoption. To remain practical, static analysis tools are forced to approximate, trading off precision (i.e., more faithful modeling of program behavior, yielding fewer false reports) against performance (i.e., faster analysis). Finding the right balance in this complex tradeoff when developing and using static analysis tools is extremely challenging. This project seeks to reduce the practical barriers to navigating the tradeoff. Successful outcomes are likely to improve static analysis tool adoption rates and thereby improve the safety, security, and functionality of the critical software that society depends upon.
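To make the precision/performance tradeoff concrete, consider a minimal illustrative sketch (the function and its branches are invented for this example, not taken from the project): a fast, path-insensitive analysis merges the two branches below without tracking that they are correlated, so it may warn that `path` can be `None` at the final call, even though that path is infeasible at runtime. Reporting it anyway is a false positive; ruling it out requires a more precise, and slower, path-sensitive analysis.

```python
def get_config(use_default):
    # The two `if use_default` tests are correlated: whenever path is None,
    # the function returns early. A path-insensitive analysis that merges
    # the branch facts sees "path may be None" at path.upper() and warns,
    # even though that combination of paths can never execute.
    if use_default:
        path = None
    else:
        path = "settings.ini"
    if use_default:
        return "default"
    return path.upper()  # would-be warning here is a false positive
```

At runtime, both calls succeed (`get_config(True)` returns `"default"`, `get_config(False)` returns `"SETTINGS.INI"`), which is exactly why an imprecise tool's warning here wastes a developer's triage effort.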

This project aims to achieve more effective static analysis design and usage through a cohesive development-and-usage lifecycle that is powerfully augmented with automated support. This support includes systematically evaluating and generating benchmarks for static analysis tools, localizing sources of imprecision and performance bottlenecks, configuring tool settings that are likely to produce correct and timely results, applying machine learning to identify and filter false positives, and integrating these improvements into a demonstration system that leverages information and experience from both tool developers and tool users. This augmented and automated lifecycle will identify frequently occurring code patterns that significantly affect performance/precision tradeoffs in specific tools, allowing tool developers to improve their tools quickly and enabling tools to customize their behavior and analysis approaches to specific target programs. At the same time, it will provide static analysis tool users with automated support for tuning tool configurations to obtain more effective results quickly, supported by automated classification of tool error reports that reduces effort wasted investigating false positives. Used in concert, these improvements will yield greatly improved static analysis tools and much wider use of such tools in analyzing real-world software.
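One way the automated classification of error reports could work is sketched below. This is not the project's actual method; it is a minimal, self-contained illustration in pure Python, with a hypothetical feature set (call-chain depth, whether the warning is in test code, the checker's historical precision) and hand-made training labels. A simple perceptron is trained on past triage decisions and then used to rank new warnings, so users investigate likely-true reports first.

```python
def perceptron_train(examples, epochs=20, lr=0.1):
    """Train a perceptron; examples is a list of (features, label), label in {0, 1}."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # standard perceptron update on misclassification
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def score(w, b, x):
    """Higher score = more likely a true positive under the learned weights."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Hypothetical triage history: features are [call-chain depth,
# in-test-code flag, checker's historical precision]; label 1 = confirmed bug.
history = [
    ([1.0, 0.0, 0.9], 1),
    ([6.0, 1.0, 0.2], 0),
    ([2.0, 0.0, 0.8], 1),
    ([7.0, 1.0, 0.1], 0),
]
w, b = perceptron_train(history)

# Rank two new (hypothetical) warnings so the likelier true positive comes first.
new_warnings = {"W1": [1.5, 0.0, 0.85], "W2": [6.5, 1.0, 0.15]}
ranked = sorted(new_warnings, key=lambda k: score(w, b, new_warnings[k]), reverse=True)
```

A production classifier would of course use richer features and a stronger model, but even this sketch shows the lifecycle idea: triage decisions made by tool users become training data that reduces the false-positive burden for the next user.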

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2020-10-01
Budget End: 2023-09-30
Support Year:
Fiscal Year: 2020
Total Cost: $249,944
Indirect Cost:
Name: University of Texas at Dallas
Department:
Type:
DUNS #:
City: Richardson
State: TX
Country: United States
Zip Code: 75080