While it is anticipated that computer-generated virtual reality (VR)-based surgical simulators with both visual and haptic (touch) feedback will significantly improve minimally invasive surgical (MIS) training, leading to substantial reductions in operating room (OR) errors and patient morbidity, existing simulators have not been widely accepted in the medical community due to three major drawbacks: (1) they lack realistic, physics-based algorithms for simulating surgical tool-soft tissue interactions in real time; (2) their computational organ models are not firmly grounded in experimental data; and (3) they provide primarily psycho-motor skill training, i.e., training in the hand-eye coordination and motor skills necessary for tasks such as tool movement, cutting, and suturing, with little training in the cognitive skills associated with higher-level mental functions such as workload management, planning, communication, decision-making, and problem-solving. In the previous grant period we focused on issues (1) and (2); in this renewal proposal we plan a logical extension of that work, concentrating primarily on the third issue. Based on adult learning theories [LaWe91] and the literature on flight simulation technology [Le05], we argue that the next generation ('Gen2') of surgical simulators must provide both cognitive and psycho-motor skill training. Cognitive skill training translates into two fundamental requirements: (1) cognitive fidelity, i.e., the simulator environment should replicate, as closely as possible, the high-stress environment of the OR; and (2) cognitive feedback, i.e., the simulator should provide real-time assessment of the quality of each task performed, suggesting corrective measures and alternative procedures. The goal of this project is to design, develop, and evaluate Gen2 surgical simulators that provide both cognitive and psycho-motor skill training.
To accomplish these goals, a multidisciplinary team has been assembled to pursue the following Specific Aims: SA1) design and develop the fundamental technology behind Gen2 surgical simulators; SA2) develop Gen1 and Gen2 simulators for the laparoscopic adjustable gastric banding (LAGB) procedure based on the emerging single incision laparoscopic surgery (SILS) approach; SA3) establish the validity of the Gen1 and Gen2 SILS LAGB simulators as training tools by conducting experiments at the Skills Lab at Beth Israel Deaconess Medical Center (BIDMC) in Boston, ensuring that the scores measured on the simulators reflect the technical skills they are intended to measure; and SA4) evaluate the effectiveness of cognitive fidelity and feedback on surgical skill training and learning by following subjects' learning curves over time and measuring their skill retention and quality of diagnostic judgment after training on the Gen1 and Gen2 SILS LAGB simulators. Success in this research will establish the technology as a potential standard for next-generation surgical simulators, and the technology developed will accrue benefits beyond surgical training, e.g., in the design of new surgical tools and techniques.
The goal of this research is to develop and validate a comprehensive computer-based technology that will allow surgical trainees to practice their surgical skills on computer-based models. Surgical procedures and techniques, learned and perfected in this risk-free manner before application to patients, will translate into fewer operating room errors, reduced patient morbidity, and improved patient outcomes, resulting in faster healing, shorter hospital stays, and reduced post-surgical complications and treatment costs.
Ye, Hanglin; De, Suvranu (2017) Thermal injury of skin and subcutaneous tissues: A review of experimental approaches and numerical models. Burns 43:909-932
Nemani, Arun; Ahn, Woojin; Cooper, Clairice et al. (2017) Convergent validation and transfer of learning studies of a virtual reality-based pattern cutting simulator. Surg Endosc :
Demirel, Doga; Yu, Alexander; Baer-Cooper, Seth et al. (2017) Generative Anatomy Modeling Language (GAML). Int J Med Robot 13:
Qi, Di; Panneerselvam, Karthikeyan; Ahn, Woojin et al. (2017) Virtual interactive suturing for the Fundamentals of Laparoscopic Surgery (FLS). J Biomed Inform 75:48-62
Rahul; De, Suvranu (2017) A multi-physics model for ultrasonically activated soft tissue. Comput Methods Appl Mech Eng 314:71-84
Sankaranarayanan, Ganesh; Li, Baichun; Miller, Amie et al. (2016) Face validation of the Virtual Electrosurgery Skill Trainer (VEST©). Surg Endosc 30:730-8
Nemani, Arun; Ahn, Woojin; Gee, Denise et al. (2016) Objective Surgical Skill Differentiation for Physical and Virtual Surgical Trainers via Functional Near-Infrared Spectroscopy. Stud Health Technol Inform 220:256-61
Gromski, Mark A; Ahn, Woojin; Matthes, Kai et al. (2016) Pre-clinical Training for New NOTES Procedures: From Ex-vivo Models to Virtual Reality Simulators. Gastrointest Endosc Clin N Am 26:401-12
Dargar, Saurabh; Akyildiz, Ali Cagdas; De, Suvranu (2016) Development of a Soft Tissue Elastography Robotic Arm (STiERA). Stud Health Technol Inform 220:77-83
Dorozhkin, Denis; Nemani, Arun; Roberts, Kurt et al. (2016) Face and content validation of a Virtual Translumenal Endoscopic Surgery Trainer (VTEST™). Surg Endosc 30:5529-5536
Showing the most recent 10 out of 104 publications