Over the past eight years of continuous NIH funding, our laboratory has developed the concept of in situ image guidance, which permits a real-time tomographic scan to be displayed as a virtual image within the patient, superimposed on the clinician's direct view of hands, tools, and target. One result has been a new ultrasound device, the Sonic Flashlight (SF), with clinical trials successfully demonstrating the placement of deep vascular catheters, and a substantial body of psychophysics research validating the approach. The present grant extends the concept of in situ image guidance into the magnified action space of microsurgery. The surgical microscope is a well-established clinical tool that provides an altered but usable form of hand-eye coordination. Because photons from a virtual image behave exactly as if they emanated from a real object at the same location, the in situ image under a surgical microscope permits the viewer to use binocular disparity, convergence, and accommodation as if the image were a real target. Incorporation of in situ images into the surgical microscope is, to our knowledge, completely novel. The magnified workspace differs significantly from that of the SF, particularly in the perception of depth, and little research has been done in this area. Our preliminary results viewing the cornea through a surgical microscope with optical coherence tomography (OCT) images displayed in situ have generated strong enthusiasm from our colleagues in ophthalmology, motivating us to develop a clinical system for a number of important surgical applications in the anterior segment of the eye, including those to treat glaucoma and pathology of the cornea.
Towards this goal, we propose the following Aims: (1) Implement a new in situ OCT display apparatus for the surgical microscope, (2) Develop 3D rendering techniques using the in situ display and stereo shutters, (3) Develop methods for automated analysis of target and surgical tool location, and (4) Determine psychophysical factors and validate the microsurgical in situ guidance device. Upon achieving these four Specific Aims we will be positioned, with a follow-on grant, to test our image guidance system in vivo, first on animals and soon thereafter on patients. We expect to write a competitive renewal for such trials during the final year of the current proposal.
We propose development of a new device to display Optical Coherence Tomography (OCT) images under a stereo surgical microscope, for guidance of surgical procedures by means of an in situ virtual image that floats within the target tissue at its actual location. The initial clinical applications are the treatment of glaucoma and pathologies of the cornea, but the broader significance includes improved guidance of microsurgical procedures in general.
Lee, Randy; Klatzky, Roberta L; Stetten, George D (2017) In-Situ Force Augmentation Improves Surface Contact and Force Control. IEEE Trans Haptics 10:545-554
Galeotti, J; Macdonald, K; Wang, J et al. (2017) Generating an image that affords slant perception from stereo, without pictorial cues. Displays 46:16-24
Horvath, Samantha; Macdonald, Kori; Galeotti, John et al. (2017) Slant Perception Under Stereomicroscopy. Hum Factors 59:1128-1138
Gershon, Pnina; Klatzky, Roberta L; Lee, Randy (2015) Handedness in a virtual haptic environment: assessments from kinematic behavior and modeling. Acta Psychol (Amst) 155:37-42
Wu, Bing; Klatzky, Roberta; Lee, Randy et al. (2015) Psychophysical evaluation of haptic perception under augmentation by a handheld device. Hum Factors 57:523-537
Horvath, Samantha; Galeotti, John; Siegel, Mel et al. (2014) Refocusing a scanned laser projector for small and bright images: simultaneously controlling the profile of the laser beam and the boundary of the image. Appl Opt 53:5421-5424
Klatzky, Roberta L; Gershon, Pnina; Shivaprabhu, Vikas et al. (2013) A model of motor performance during surface penetration: from physics to voluntary control. Exp Brain Res 230:251-260
Wu, Bing; Klatzky, Roberta L; Stetten, George D (2012) Mental visualization of objects from cross-sectional images. Cognition 123:33-49