Quality of delivery is known to moderate the effectiveness of drug abuse prevention programs. As research-based programs become widely disseminated, tools for assessing quality of delivery become important for documenting and understanding the conditions under which programs succeed and fail. There are currently no standard methods for measuring the quality with which prevention programs are delivered. The goal of this project is to create a standardized tool for assessing quality of delivery for research-based drug abuse prevention programs. Commercially, adding this product to an existing product (Evaluation Lizard), which provides pretest and posttest surveys tailored to the evaluation of evidence-based programs, will significantly strengthen our niche in the prevention evaluation marketplace.

The broad aim of this SBIR project is to develop a tool for collecting standardized measures of quality of delivery across a wide range of programs: those being disseminated as a result of their inclusion on the NREPP model programs list, and newly developed programs that wish to qualify for future inclusion.

During Phase I, we interviewed prevention researchers, program administrators, program developers, and policy makers to identify and elaborate the essential criteria for defining dimensions of quality of implementation. Based on their input, we designed a prototype system that allowed us to accomplish the following tasks: (1) create session and program rating form templates for selected NREPP facilitator-delivered prevention programs; (2) link templates to a database that tracks form information (program, session, observer, teacher, and date of creation); and (3) create PDF forms, printed remotely, that can be linked by bar code back to the database. The prototype system measures dosage, adherence, engagement, and adaptation. We also developed a manual for using the system. While functional for the pilot test, the system is not yet fully automated and lacks numerous planned elements.

During Phase II, our first task will be to complete the quality-of-delivery data-gathering system, which will include tools for assessing adherence, adaptation, dosage, engagement, and overall quality. We will develop online applications that allow quality-of-delivery data to be linked and jointly analyzed in various ways with pretest-posttest outcome data from students, along the lines of the sketch below. To ensure that the application works as intended, we will conduct alpha and beta tests of all component applications. We will develop an online system for conducting basic analysis and preparing reports that meets the needs of five professional groups: program implementers, program administrators, program developers, prevention researchers, and policy makers. We will conduct a field trial with 10 prevention agencies and school districts to demonstrate that the data collection and reporting system provides program implementers, prevention agency administrators and supervisors, and government agency prevention administrators with information useful for documenting and understanding the process of prevention program delivery. Established prevention agencies will adopt the Evaluation Lizard system, including both pretest-posttest surveys and quality-of-delivery report forms.
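To make the planned linkage concrete, the following is a minimal sketch of the kind of schema and join the form-tracking database implies. All table and column names here are illustrative assumptions, not the actual Evaluation Lizard schema: each printed form carries a bar code that keys back to its program, session, observer, and teacher, and the four delivery measures can then be aggregated alongside pretest-posttest change.

```python
import sqlite3

# Hypothetical schema sketch; names are assumptions, not the real system.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rating_form (
    barcode    TEXT PRIMARY KEY,  -- printed on the PDF form, scanned on return
    program    TEXT NOT NULL,     -- prevention program the template was built for
    session    INTEGER NOT NULL,
    observer   TEXT,
    teacher    TEXT,
    created_on DATE
);
CREATE TABLE delivery_rating (
    barcode    TEXT REFERENCES rating_form(barcode),
    dosage     REAL,  -- proportion of planned material delivered
    adherence  REAL,  -- fidelity to the program manual
    engagement REAL,  -- observed participant engagement
    adaptation REAL   -- extent of modification from the manual
);
CREATE TABLE student_outcome (
    teacher  TEXT,
    program  TEXT,
    pretest  REAL,
    posttest REAL
);
""")

# Illustrative rows only.
conn.executemany("INSERT INTO rating_form VALUES (?,?,?,?,?,?)", [
    ("BC001", "Program A", 1, "Observer 1", "T1", "2010-01-15"),
    ("BC002", "Program A", 2, "Observer 1", "T1", "2010-01-22"),
])
conn.executemany("INSERT INTO delivery_rating VALUES (?,?,?,?,?)", [
    ("BC001", 0.9, 0.8, 0.7, 0.1),
    ("BC002", 1.0, 0.9, 0.8, 0.0),
])
conn.executemany("INSERT INTO student_outcome VALUES (?,?,?,?)", [
    ("T1", "Program A", 2.1, 3.0),
    ("T1", "Program A", 1.8, 2.5),
])

# Join delivery ratings to outcomes: mean adherence next to mean
# pre-post change, per teacher and program.
query = """
SELECT f.program, f.teacher,
       AVG(d.adherence)            AS mean_adherence,
       AVG(o.posttest - o.pretest) AS mean_change
FROM rating_form f
JOIN delivery_rating d ON d.barcode = f.barcode
JOIN student_outcome o ON o.teacher = f.teacher AND o.program = f.program
GROUP BY f.program, f.teacher;
"""
for row in conn.execute(query):
    print(row)
```

A report built on a join of this kind is what would let the five user groups see, for example, whether higher-adherence delivery tracks larger pretest-posttest gains.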
We hypothesize that participating groups will report satisfaction with the functioning of the system and increased awareness of how prevention is being implemented in their areas of purview. Further, we will conduct a reliability and validity sub-study in which each participating agency will be asked to select one teacher to video record the lessons he or she teaches, so that observation data from the recordings can be included in the analysis to demonstrate the reliability and validity of the measures.
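One natural analysis for such a sub-study, offered here only as an assumption about how the video data might be used, is chance-corrected agreement between a live observer's ratings and a second coder's ratings of the recorded lessons. The sketch below computes Cohen's kappa over hypothetical categorical adherence codes.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical adherence codes (1 = low ... 3 = high) for ten lessons:
# one set from the live observer, one from a coder rating the video.
live  = [3, 2, 3, 1, 2, 3, 3, 2, 1, 3]
video = [3, 2, 3, 2, 2, 3, 3, 2, 1, 3]
print(f"kappa = {cohens_kappa(live, video):.2f}")  # ~0.84 here
```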
The field of drug abuse prevention research has identified numerous interventions that can be effective; however, unless programs are implemented well in practice, they are unlikely to achieve their desired effects, and the effort and expense of these programs may be wasted. The need to assess quality of implementation is widely recognized, but to date there is no unified strategy for gathering or reporting such data. The value of the product we propose is that it would provide an easy-to-use system for documenting quality of program implementation that can be linked to outcome data.