Model checking is emerging as a popular technology for reasoning about behavioral properties of a wide variety of software artifacts, including requirements models, architectural descriptions, designs, implementations, and process models. The complexity of model checking is well-known, yet cost-effective analyses have been achieved by exploiting semantic properties of specific software artifacts. Adapting a model checking tool to exploit this kind of "domain knowledge" often requires in-depth knowledge of the tool's implementation. We believe that with appropriate tool support, domain experts will be able to develop efficient model checking-based analyses for a variety of software models.
To explore this hypothesis, our project is developing BOGOR, a model checking framework with an extensible input language for defining domain-specific constructs and a modular interface design that eases the optimization of domain-specific state-space encodings, reductions, and search algorithms. We will use BOGOR to investigate the degree to which customization of model checking algorithms can yield improved scalability. Specifically, we will adapt BOGOR to reason about event-driven component-based design models and about multi-threaded Java programs. We will evaluate the ease with which domain information can be incorporated into the framework and the space/time improvements that can be achieved by exploiting that information.
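To make the idea of a modular interface design concrete, the following is a minimal, hypothetical Java sketch of how a framework might expose search algorithms, successor generation, and property checks as interchangeable modules so that domain experts can plug in specialized encodings or reductions. The interface and class names (SearchStrategy, TransitionRelation, PropertyChecker, DepthFirstSearch) are illustrative assumptions and are not BOGOR's actual API.

```java
import java.util.List;

/** A pluggable search strategy over an explicit state space. */
interface SearchStrategy<S> {
    /** Explores states reachable from the initial state, reporting violations. */
    void search(S initialState, TransitionRelation<S> transitions, PropertyChecker<S> property);
}

/** Successor generation, which a domain-specific extension could specialize. */
interface TransitionRelation<S> {
    List<S> successors(S state);
}

/** Property evaluation over individual states (e.g., assertion checks). */
interface PropertyChecker<S> {
    boolean isViolated(S state);
}

/** A simple depth-first search as one interchangeable strategy. */
final class DepthFirstSearch<S> implements SearchStrategy<S> {
    @Override
    public void search(S init, TransitionRelation<S> trans, PropertyChecker<S> prop) {
        java.util.Deque<S> stack = new java.util.ArrayDeque<>();
        java.util.Set<S> seen = new java.util.HashSet<>(); // seen-state set; a domain-specific
        stack.push(init);                                   // state encoding could compress entries here
        while (!stack.isEmpty()) {
            S s = stack.pop();
            if (!seen.add(s)) continue;                     // skip already-visited states
            if (prop.isViolated(s)) {
                System.out.println("Violation found in state: " + s);
                return;
            }
            for (S next : trans.successors(s)) {
                stack.push(next);
            }
        }
    }
}
```

Under this kind of decomposition, a domain expert could, for example, swap the depth-first strategy for a heuristic search tuned to event-driven designs, or replace the seen-state set with a compressed encoding, without touching the rest of the checker.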