Significance
Omitted steps, incorrect sequencing, and excessive force in laparoscopic surgery lead to unnecessary patient suffering and costly litigation. Because techniques and technology advance rapidly and work-hour restrictions limit time in the operating room, only a fraction of the full spectrum of laparoscopic procedures and safety concerns can be addressed by current basic dexterity training or by 'see-one-do-one' in the OR. Computer-based training with force feedback promises to help fill this gap, but commercial products focus on standard cases and do not allow surgeon-educators themselves to select and fine-tune anatomy, pathology, and technique.

Innovation
The proposed authoring environment will broaden the range, deepen the specificity, and speed up innovation in computer-based training by enabling surgeon-educators (a) to define their focus and scope via a structured list of surgical steps from which module components are automatically initialized, and (b) to fine-tune the crucial details that convey surgical expertise, technique, and insight. This enables growing a rich collection of sharable training modules across a variety of surgical procedures. No similar approach currently exists. Building on state-of-the-art solutions to the tissue-physics, GPU-computing, and graphics challenges of interactive 3D surgery simulation with force feedback, this proposal additionally breaks new ground for simulation technology by (a) automatically instantiating modules (instruction pages, VR scenario, measurement) and (b) enabling surgeons to fine-tune scenarios.

Approach
(1) For each anatomic/pathological variant of appendectomy, cholecystectomy, and gastric bypass, a lead surgeon will draft, and at least three surgeons will critique, a web-based step-by-step list of task and safety issues. In the rare case when no consensus is reached, competing lists and cases are generated. Heads of residency, chief surgeons, and a dean for Simulation and Medical Education have signed up to lead these efforts at five institutions.
(2) The output, a highly structured list of task and safety issues, enables software to automatically initialize a training module's instructional pages and VR scenario (see the sketch below). This raw module is fine-tuned by the lead author, who also sets acceptable performance ranges by executing the VR simulation. The resulting modules undergo a peer-review cycle. The VR simulation leverages a simulation engine and an extensive database of anatomy and surgical tools developed under R21 funding, published and demonstrated at conferences [35, 37, 34, 14, 57, 58, 79, 56, 44, 36, 76].
(3) The impact of each released module on safety awareness in the OR is measured by a separate small randomized, blinded study at two of the five participating medical centers.
(4) The authoring environment and its results will be advertised at major medical conferences (AAMC, ACS CC, ACGME), and its structured code base will be disseminated on the web under the GNU LGPL to encourage broad sharing and to enable modification and continued distributed development supported by the stakeholders.
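To make the automatic instantiation described in Approach step (2) concrete, the following is a minimal sketch only, not the project's code base: every name in it (SurgicalStep, TrainingModule, initialize_module, max_force_newtons, the anatomy tags) is hypothetical. It simply illustrates the mapping stated above, one instructional page, one VR-scenario entry, and one measurement slot per step in the peer-reviewed list, with performance ranges left empty for the lead author to fill by executing the simulation.

```python
# Hypothetical sketch of step-list-driven module instantiation; names are illustrative only.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SurgicalStep:
    title: str                     # e.g. "Expose Calot's triangle"
    safety_issues: list[str]       # issues the trainee must acknowledge
    anatomy_tags: list[str]        # keys into an anatomy/tool database
    max_force_newtons: Optional[float] = None  # set later by the lead author

@dataclass
class TrainingModule:
    procedure: str
    instruction_pages: list[str] = field(default_factory=list)
    vr_scenario: dict = field(default_factory=dict)
    measurements: dict = field(default_factory=dict)

def initialize_module(procedure: str, steps: list[SurgicalStep]) -> TrainingModule:
    """Instantiate a raw training module from a structured step list."""
    module = TrainingModule(procedure=procedure)
    for i, step in enumerate(steps, start=1):
        # One instructional page per step, listing its safety issues.
        page = f"Step {i}: {step.title}\n" + "\n".join(f"  - {s}" for s in step.safety_issues)
        module.instruction_pages.append(page)
        # The VR-scenario stub records which anatomy assets to load for this step.
        module.vr_scenario[f"step_{i}"] = {"anatomy": step.anatomy_tags}
        # Measurement slots start empty; acceptable ranges are filled in later
        # by the lead author while executing the VR simulation.
        module.measurements[f"step_{i}"] = {"max_force_N": step.max_force_newtons}
    return module

if __name__ == "__main__":
    steps = [
        SurgicalStep(
            title="Expose Calot's triangle",
            safety_issues=["avoid excessive traction on the gallbladder"],
            anatomy_tags=["gallbladder", "cystic_duct"],
        )
    ]
    module = initialize_module("laparoscopic cholecystectomy", steps)
    print(module.instruction_pages[0])
```

The point of the sketch is the division of labor: the structured list carries all content needed to generate a usable raw module automatically, while expert judgment (force limits, acceptable performance ranges, scenario fine-tuning) is layered on afterwards by the surgeon-author rather than by software engineers.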
To improve the competency and safety awareness of laparoscopy trainees, we will provide and validate a software environment for authoring simulation-based virtual-reality (VR) training modules. This environment empowers surgeon-educators to define a module's focus and scope, to have all of its components initialized automatically, and to fine-tune by themselves the crucial details that convey surgical expertise. This approach encourages growing a rich collection of VR training modules across a variety of surgical procedures.