Software testing is important for increasing software reliability, but it is expensive and can account for more than half of total software development cost. Automated testing can significantly help programmers develop and maintain reliable software. However, test automation is largely limited to test execution; test generation remains manual and mostly ad hoc, which makes tests hard not only to develop initially but also to maintain and reuse.

To reduce the cost of developing, maintaining, and reusing tests, this project investigates a novel approach to automated testing based on test abstractions. Conceptually, each test abstraction provides a high-level description of a desired test suite: programmers do not need to manually write large suites of individual tests but instead write only test abstractions, from which tools automatically generate individual tests. This project investigates five aspects of test abstractions: (1) what languages to use for writing test abstractions; (2) which tests to generate from test abstractions; (3) how to automatically generate tests from test abstractions; (4) how to determine whether the code under test passed or failed; and (5) how to determine which failing tests are caused by the same code error.
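The abstract does not fix a concrete notation for test abstractions, so the following Java/JUnit 5 sketch is only a hypothetical illustration: a single abstraction, consisting of a bounded-exhaustive input generator plus a correctness property, from which the test framework generates 40 individual tests. All identifiers (SortingAbstractionTest, insertionSort, smallLists) are invented for this example and do not come from the project.

    // Hypothetical sketch only: one "test abstraction" = an input generator
    // plus a property; JUnit 5 expands it into one concrete test per input.
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.MethodSource;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.stream.Stream;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class SortingAbstractionTest {

        // Stand-in code under test: a simple insertion sort.
        static List<Integer> insertionSort(List<Integer> input) {
            List<Integer> out = new ArrayList<>();
            for (int x : input) {
                int i = 0;
                while (i < out.size() && out.get(i) <= x) i++;
                out.add(i, x);
            }
            return out;
        }

        // Abstraction, part 1: bounded-exhaustive generator of every list
        // with length <= 3 and elements in 0..2 (40 inputs in total).
        static Stream<List<Integer>> smallLists() {
            List<List<Integer>> all = new ArrayList<>();
            enumerate(new ArrayList<>(), 3, 2, all);
            return all.stream();
        }

        static void enumerate(List<Integer> prefix, int maxLen, int maxVal,
                              List<List<Integer>> out) {
            out.add(List.copyOf(prefix));
            if (prefix.size() == maxLen) return;
            for (int v = 0; v <= maxVal; v++) {
                prefix.add(v);
                enumerate(prefix, maxLen, maxVal, out);
                prefix.remove(prefix.size() - 1);
            }
        }

        // Abstraction, part 2: the property that serves as the test oracle.
        // JUnit runs it once per generated input, so this one abstraction
        // replaces 40 hand-written tests.
        @ParameterizedTest
        @MethodSource("smallLists")
        void sortsLikeReference(List<Integer> input) {
            List<Integer> expected = new ArrayList<>(input);
            Collections.sort(expected); // trusted reference as the oracle
            assertEquals(expected, insertionSort(input));
        }
    }

Changing one bound in the generator regrows the entire suite, which is the maintenance benefit the abstract describes: the abstraction, not the individual tests, is the artifact programmers maintain.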

Project Report

Software testing is important for increasing software quality, but it is expensive and can account for more than half of total software development cost. Automated testing can significantly help programmers develop and maintain reliable software. However, test automation is largely limited to test execution; test generation remains manual and mostly ad hoc, which makes tests hard not only to develop initially but also to maintain and reuse. To reduce the cost of developing, maintaining, and reusing tests, this project investigated a novel approach to automated testing based on "test abstractions". Conceptually, each test abstraction provides a high-level description of a desired test suite: programmers do not need to manually write large suites of individual tests but instead write only test abstractions, from which tools automatically generate individual tests.

The project investigated five aspects of test abstractions: (1) what languages to use for writing test abstractions; (2) which tests to generate from test abstractions; (3) how to automatically generate tests from test abstractions; (4) how to determine whether the code under test passed or failed; and (5) how to determine which failing tests are caused by the same code error.

The grant partially supported 40 papers (including one award-winning paper, two conference papers invited for journal submission, and one more paper nominated for a best-paper award), the public release of 11 testing tools and datasets (available from the software and data page at http://mir.cs.illinois.edu), and the training of at least a dozen graduate students (including three PhD theses and four MS theses) and eight undergraduate students. The broader impacts include the use of test abstractions to find hundreds of bugs in various open-source software projects (linked from the above page). The research is a step toward better testing tools and frameworks that reduce bugs in software, thus helping to improve the quality of software used in our daily lives.
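The report lists aspect (5), grouping failing tests by underlying error, without describing the project's technique. As a hypothetical sketch of one common heuristic, failures can be clustered by a signature built from the exception type and the innermost stack frame; all names below are invented for illustration.

    // Hypothetical heuristic for grouping failures, not the project's method:
    // two failing tests are assumed to share one code error when they fail
    // with the same exception type at the same innermost stack frame.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class FailureGrouping {

        record Failure(String testName, Throwable error) {}

        // Failure signature: exception class plus the top stack frame.
        static String signature(Throwable t) {
            StackTraceElement[] frames = t.getStackTrace();
            String top = frames.length > 0 ? frames[0].toString() : "<no trace>";
            return t.getClass().getName() + " @ " + top;
        }

        // Maps each signature to the names of the tests that produced it.
        static Map<String, List<String>> group(List<Failure> failures) {
            Map<String, List<String>> groups = new HashMap<>();
            for (Failure f : failures) {
                groups.computeIfAbsent(signature(f.error()), k -> new ArrayList<>())
                      .add(f.testName());
            }
            return groups;
        }
    }

A tool that generates many tests from one abstraction can then report one representative failure per signature instead of thousands of near-duplicate failures.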

Agency: National Science Foundation (NSF)
Institute: Division of Computing and Communication Foundations (CCF)
Application #: 0746856
Program Officer: Sol J. Greenspan
Budget Start: 2008-06-01
Budget End: 2014-05-31
Fiscal Year: 2007
Total Cost: $406,000
Institution: University of Illinois Urbana-Champaign
City: Champaign
State: IL
Country: United States
Zip Code: 61820