The current explosion of virtual and augmented reality (VR and AR) technologies is the subject of much academic, industrial and popular enthusiasm. Unfortunately, these technologies have also been shown to induce a host of serious user complaints, including headaches, nausea and blurred vision, and the problem is especially acute for stereoscopic 3D displays and content; the National Academy of Engineering has identified "enhancing virtual reality interfaces" as a grand challenge for the 21st century. Studies have suggested that many of these psychophysical problems stem from the vergence-accommodation conflict: the eyes converge on a virtual object at its simulated depth, while the lens of each eye must remain focused on the fixed focal plane of the display. The PI's goal in this project is to address this mismatch by exploiting advances in silicon nanophotonics to develop a prototype natural-to-the-senses multifocal VR and AR display, along with the requisite visual computing algorithms for multifocal and automultiscopic displays. The PI will also design and carry out user studies to validate the new technology (display and algorithms). Project outcomes will directly impact not only computer graphics and visualization but an array of other fields as well. Multifocal, accommodation-accurate stereo visualization of large-scale fluid flow simulations would represent a breakthrough across science and engineering, in areas ranging from the design of artificial arterial valves and aerial vehicles to rational drug design via protein docking, and to surgical advances that exploit real-time MRI and other high-dimensional medical imaging data. No less important are the potential educational benefits of the new technology, which would unleash the power of VR and AR to immerse young students in worlds to which they would not normally have access, providing excitement, engagement, and a breadth and depth of context that is nearly impossible to achieve in a traditional classroom setting.
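To make the conflict concrete, it can be quantified in diopters (inverse meters of focal distance): the eyes converge at the simulated depth of a virtual object, while each eye's lens must stay focused at the display's fixed focal distance. A minimal worked example, with distances chosen purely for illustration rather than taken from the project:

```latex
% Vergence-accommodation conflict in diopters (D); the example distances
% below are illustrative assumptions, not values from the proposal.
\[
  C = \left| \frac{1}{d_v} - \frac{1}{d_a} \right|
    = \left| \frac{1}{0.4\,\mathrm{m}} - \frac{1}{1.5\,\mathrm{m}} \right|
    \approx 1.83\ \mathrm{D},
\]
```

where \(d_v\) is the vergence distance of the virtual object and \(d_a\) is the fixed accommodation distance of the display. A multifocal display attacks exactly this term: by presenting content on, or optically blending it between, several focal planes, it drives \(C\) toward zero across the depth range of the scene.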
The heart of this project lies in the design and implementation of a novel nanophotonic phased-array chip that will enable the generation of arbitrary radiation patterns with large-scale phased arrays. This would extend the functionality of phased arrays beyond conventional beam focusing and steering, communication, and radar, opening up new opportunities in image processing, 3D interactive holography, and virtual reality. The project brings together collaborators with significant expertise in 3D computer graphics and scientific visualization; in nanophotonics and plasmonics for the sub-wavelength confinement and steering of light; and in microelectronics, integrated circuits, and microstructure science and technology. The research involves three thrusts: (a) exploiting slow light to significantly reduce the size and power requirements of the nanophotonic chip; (b) separating the electronic and optical components of the chip so that each can be optimized independently; and (c) developing efficient algorithms to render, and to validate through user studies, multifocal 3D graphics in the Fourier domain. The design of the multifocal display is the highest-risk, highest-gain component of the work; should it prove necessary, the team has contingency plans to develop the multifocal and automultiscopic display algorithms, and to conduct the user studies, on existing multiple-focal-plane displays.
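Since, under the Fraunhofer approximation, the far field of a planar emitter array is the Fourier transform of the complex field across its aperture, the Fourier-domain rendering thrust can be previewed numerically. The sketch below is a simplified illustration only (the array size, emitter pitch, and wavelength are assumed values, not the project's chip parameters): it programs a phase-only 1D array with a linear phase ramp, which steers the main lobe, and recovers the resulting far-field pattern with an FFT.

```python
import numpy as np

# Illustrative parameters only; not taken from the proposal.
N = 128               # number of emitters in a 1D phased array
pitch = 0.6e-6        # emitter spacing (m); sub-wavelength, so no grating lobes
wavelength = 1.55e-6  # operating wavelength (m)

theta_target = np.radians(10.0)   # desired steering angle off broadside

# A linear phase ramp across the aperture steers the main lobe; the sign
# here matches numpy's forward-FFT convention so the peak lands at +theta.
k = 2.0 * np.pi / wavelength
x = np.arange(N) * pitch
aperture = np.exp(1j * k * x * np.sin(theta_target))  # phase-only emitters

# Fraunhofer far field = Fourier transform of the aperture field;
# zero-padding just refines the angular sampling.
pad = 16 * N
far_field = np.fft.fftshift(np.fft.fft(aperture, n=pad))
intensity = np.abs(far_field) ** 2

# Map FFT spatial frequency fx (cycles/m) to angle via sin(theta) = fx * lambda.
fx = np.fft.fftshift(np.fft.fftfreq(pad, d=pitch))
sin_theta = np.clip(fx * wavelength, -1.0, 1.0)
peak = np.argmax(intensity)
print(f"main lobe at {np.degrees(np.arcsin(sin_theta[peak])):.2f} deg "
      f"(target {np.degrees(theta_target):.2f} deg)")
```

Replacing the linear ramp with an arbitrary per-emitter phase profile, such as a quadratic, lens-like profile to focus at a finite depth or a computed hologram phase, yields the arbitrary radiation patterns described above; rendering directly in the Fourier domain amounts to computing such phase profiles for each focal plane of the scene.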