Validation of computing algorithms has been a challenging topic over the last few years. Several international workshops in the medical imaging field have begun to engage the community through "grand challenges". A grand challenge involves selecting a driving biological or scientific problem and asking experts to submit their best methods and results for solving it. Grand challenges often use blind verification in order to provide an unbiased validation. Validation is critical to science because it imposes a rigorous protocol on scientists before they can claim the validity of their algorithms. Validation also ensures that algorithms are clinically viable and will perform with the same robustness and accuracy in the clinic. There is a clear consensus among the scientific community that careful validation is needed.

However, validation remains a challenge and can become a laborious task for several reasons. First, the overall design of the validation experiment must follow strict rules in order to be consistent with sound scientific reasoning. For instance, if a registration algorithm uses landmarks as the basis for registration, those same landmarks should not be used during validation. Second, the testing and training datasets should be clearly identified and separated: the testing datasets should be used only for testing, never for tuning the algorithm. Third, the metrics used to measure the error of the algorithm should be relevant to the scientific goal of the research. For instance, reporting only the mean segmentation error could have critical consequences in the clinic if the maximum error is very high.

Because validation remains a difficult task, several tools have emerged to help scientists with it. The open source Insight Toolkit and Visualization Toolkit provide off-the-shelf algorithms for medical imaging, making comparison with other methods easier. Grand challenges for segmentation and registration, such as those hosted at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference, invite researchers to test their algorithms against each other, thereby providing a level of validation. However, no complete infrastructure is currently offered to the research community for collecting and hosting validation tools.
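As a concrete illustration of the third point, the minimal Python sketch below (with hypothetical error values, not results from any study) shows how a low mean error can mask a clinically unacceptable worst case:

    import numpy as np

    # Hypothetical per-case segmentation errors (e.g., maximum surface
    # distance in mm); the values are illustrative only.
    per_case_error_mm = np.array([0.8, 1.1, 0.9, 1.0, 9.5])

    print(f"mean error: {per_case_error_mm.mean():.2f} mm")  # 2.66 mm -- looks acceptable
    print(f"max error:  {per_case_error_mm.max():.2f} mm")   # 9.50 mm -- one case fails badly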
The aim of this proposal is to develop an infrastructure that helps scientists perform validation tasks. While it is considered an important step toward full clinical validation, the system does not aim to perform a full clinical validation, but rather to help researchers choose the best tools for their clinical application. The proposed system, named COVALIC, provides an online repository of testing and training datasets, an open source framework for validation metrics, and an infrastructure for hosting grand challenges and publishing validation results. Through the online system, researchers can perform validation tasks from the convenience of a web browser. Furthermore, COVALIC is built on open access and open source principles, engaging the community in the effort and encouraging researchers to share their data, algorithms, metrics and results. We propose to develop and test the system with the help of six experts in the field, including clinical researchers, a surgeon, a computer scientist, and scientific researchers, thus creating a system designed by its end-user community.
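A hypothetical sketch of how such a web-based workflow might look from a researcher's side is shown below; the base URL, endpoint paths, and field names are illustrative assumptions, not a published COVALIC API:

    import requests

    # Hypothetical endpoints for illustration only; not a published COVALIC API.
    BASE = "https://covalic.example.org/api"
    CHALLENGE = "liver-segmentation"  # assumed challenge name

    # Fetch the public training data for a challenge; the hidden testing
    # data stays on the server so it cannot be used to tune the algorithm.
    resp = requests.get(f"{BASE}/challenges/{CHALLENGE}/training-data")
    resp.raise_for_status()

    # Submit segmentation results; validation metrics are computed
    # server-side against the withheld ground truth (blind verification).
    with open("results.zip", "rb") as f:
        submit = requests.post(
            f"{BASE}/challenges/{CHALLENGE}/submissions",
            files={"results": f},
        )
    submit.raise_for_status()
    print(submit.json())  # e.g., per-metric scores returned by the server

Keeping the scoring server-side is what would let such a system enforce the training/testing separation and blind verification described above.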

Public Health Relevance

Validation is a critical component of the development of computing methods and often presents major challenges. The main difficulty in comparing the performance of algorithms is defining a common reference for the training and testing datasets as well as the validation metrics. The other challenge is accessing other researchers' results and algorithms. We propose to develop an intuitive web-based system for collecting, distributing and processing validation algorithms. Additionally, we propose to develop an open-source framework for the validation of image processing algorithms.

Agency
National Institutes of Health (NIH)
Institute
National Institute of Biomedical Imaging and Bioengineering (NIBIB)
Type
Small Business Technology Transfer (STTR) Grants - Phase I (R41)
Project #
1R41EB011796-01A1
Application #
7999674
Study Section
Special Emphasis Panel (ZRG1-SBIB-Q (90))
Program Officer
Pai, Vinay Manjunath
Project Start
2010-08-01
Project End
2012-07-31
Budget Start
2010-08-01
Budget End
2012-07-31
Support Year
1
Fiscal Year
2010
Total Cost
$237,264
Indirect Cost
Name
Kitware, Inc.
Department
Type
DUNS #
010926207
City
Clifton Park
State
NY
Country
United States
Zip Code
12065