Database application programs play a central role in our information-based society. Given the ubiquitous and, in many cases, critical nature of these systems, testing whether they behave correctly is of great importance. Until recently, there has been little research on how to do this effectively. This research project aims to enhance and evaluate a set of testing techniques for database applications.
Challenges in testing such systems include the large number of database states that must be considered and the need to check whether applications modify the database state correctly. This research builds on previous work, in which tools were developed to generate database states suitable for testing, generate inputs to the application, and check the results of executing the application on those inputs. The first phase of the research focuses on improving the input generation technique, further automating analysis of the application code, and developing new measures of test data adequacy. The second phase focuses on empirical evaluation of how effectively these techniques detect faults in database applications.
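The summary above does not describe the tools themselves, but the workflow it implies (create a database state, run the application on a generated input, and check that the resulting state is correct) can be illustrated with a minimal sketch. The following Python example is purely illustrative: the schema, the transfer() function, and the expected values are assumptions made for this sketch, not the project's actual techniques.

```python
# Illustrative sketch of the testing workflow: (1) build a database state,
# (2) execute the application under test on a chosen input, and
# (3) check that the database state was modified correctly.
import sqlite3

def transfer(conn, src, dst, amount):
    """Hypothetical application under test: moves funds between accounts."""
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                 (amount, src))
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                 (amount, dst))
    conn.commit()

def test_transfer():
    # Step 1: generate a database state suitable for this test case.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 25)])
    conn.commit()

    # Step 2: execute the application on a generated input.
    transfer(conn, src=1, dst=2, amount=40)

    # Step 3: check that the application modified the database state correctly.
    rows = conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
    assert rows == [(1, 60), (2, 65)]

if __name__ == "__main__":
    test_transfer()
    print("state check passed")
```

The challenge noted above is that, unlike ordinary unit tests, each such test depends on both the program input and the initial database state, and the oracle must examine the final database state rather than only the program's return value.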