Context-Based Hand-Gesture Recognition for the Operating Room

Abstract

This project proposes the development of a hand-gesture-based interface for use in the operating theater. Computers are ubiquitous where surgery is performed and provide access to a repository of images for use before and during surgery. These images are accessed through traditional interfaces, such as the mouse and keyboard, which require a third-party intermediary during surgery. Currently, the keyboard and mouse represent the most contaminated area in the operating room [9]. The proposed research would provide a sterile means of accessing these images, increasing efficiency, lessening the risk of spreading infection, and thereby reducing healthcare costs.

This research is important in both practical and theoretical terms. Its long-term goal is to investigate how contextual information about surgical tasks can improve the robustness of hand gesture recognition systems, and how to assess the effectiveness of this interaction. This goal will be pursued through two specific aims: (1) improve the robustness of hand gesture recognition algorithms by incorporating contextual cues, such as the physical environment, the type of task, and user characteristics, into a gesture-based browsing and manipulation system for healthcare environments; and (2) validate the hand gesture recognition system in the operating room using a simulated surgical procedure.

The first activity tests the hypothesis that contextual information, integrated with visual hand gesture information, will significantly improve overall recognition performance. A hand gesture recognition system will be implemented using machine vision techniques, and a study involving a medical image browsing task will be performed. This task will be carried out using standard interfaces and hand gesture input, with and without context.
The second activity tests the hypothesis that integrating this hand gesture interface into the operating theater will improve the usability and effectiveness of the image browsing and manipulation task. To test this hypothesis, a surgical procedure will be simulated in which the hand gesture interface will be used and assessed. The results of this study will provide subjective and objective measures of incorporating this new modality into the OR, compared to traditional interfaces (keyboard and mouse), and will provide an empirical benchmark to assess its impact.
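One way the first aim's idea can be pictured is as a probabilistic fusion of the visual classifier's output with a prior derived from the task context. The sketch below is purely illustrative: the gesture vocabulary, the score values, and the naive Bayesian product rule are assumptions for the sake of the example, not the proposal's actual algorithm.

```python
import numpy as np

# Hypothetical gesture vocabulary for an image-browsing task (illustrative only).
GESTURES = ["next_image", "prev_image", "zoom_in", "zoom_out", "rotate"]

def fuse_context(visual_scores, context_prior):
    """Combine visual classifier scores with a context prior over gestures.

    Treats the visual scores as a likelihood p(observation | gesture) and the
    context prior as p(gesture | task context); the fused posterior is their
    normalized elementwise product (naive Bayesian fusion).
    """
    visual = np.asarray(visual_scores, dtype=float)
    prior = np.asarray(context_prior, dtype=float)
    posterior = visual * prior
    return posterior / posterior.sum()

# The visual classifier alone is ambiguous between zoom_in and zoom_out ...
visual_scores = [0.05, 0.05, 0.42, 0.40, 0.08]
# ... but the surgeon has just been zooming in, so the assumed task context
# places more prior mass on continuing to zoom in.
context_prior = [0.15, 0.15, 0.40, 0.15, 0.15]

fused = fuse_context(visual_scores, context_prior)
print(GESTURES[int(np.argmax(fused))])  # context resolves the ambiguity
```

In this toy case the near-tie between the two zoom gestures is broken by the context prior, which is the intuition behind the hypothesis that context improves recognition robustness.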

Public Health Relevance

Keyboards, mice, and touch screens are the main methods of accessing visual information (images) in the operating room. They are also among the main channels of contamination in the operating room. We plan to develop an effective, sterile surgeon-computer interface for medical image browsing and manipulation in the operating room. Deploying this interface has the potential to reduce healthcare-acquired infections (and thereby costs), while providing a more intuitive, fast, and reliable way for surgeons to access medical imaging.

National Institutes of Health (NIH)
Agency for Healthcare Research and Quality (AHRQ)
Small Research Grants (R03)
Study Section
Health Care Technology and Decision Science (HTDS)
Program Officer
Burgess, Denise
Purdue University
Engineering (All Types)
Schools of Engineering
West Lafayette
United States
Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A (2013) Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images. J Am Med Inform Assoc 20:e183-e186