This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
The objective of this research is to equip highly mobile computers and cellular phones with the image-interpretation capabilities of large bench-top computers. The approach combines new event-based hardware with energy-efficient algorithms for identifying objects in an image stream. The algorithmic core of this research is a lightweight implementation of a biologically plausible engine for size- and position-independent object recognition in images. The use of event-based sensors and processing hardware has the potential to increase computational throughput and the efficiency of synthetic vision.
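As a hypothetical illustration of the kind of processing such a pipeline implies (not the project's actual algorithm), the sketch below accumulates address-events into a descriptor that is roughly invariant to an object's position and size by centering on the event centroid and rescaling by the event cloud's spread; all function and variable names are assumptions made for this sketch.

import numpy as np

# Hypothetical sketch: accumulate address-events (pixel coordinates) into a
# count map, then produce a crude size- and position-normalized descriptor by
# centering on the event centroid and rescaling by the event cloud's spread.
# This is not the project's algorithm, only an illustration of invariant
# feature extraction from an event stream.

def normalize_events(events, out_size=32):
    """events: array of shape (N, 2) holding pixel coordinates (x, y)."""
    xy = events.astype(float)
    centroid = xy.mean(axis=0)                 # position invariance: subtract centroid
    spread = xy.std(axis=0).mean() + 1e-6      # size invariance: divide by spread
    norm = (xy - centroid) / spread
    # Bin the normalized events into a fixed-size grid descriptor.
    grid = np.zeros((out_size, out_size))
    idx = np.clip(((norm + 3.0) / 6.0 * out_size).astype(int), 0, out_size - 1)
    for x, y in idx:
        grid[y, x] += 1
    return grid / max(grid.sum(), 1)           # normalize total event count

# Example: event clouds from the same shape at different positions and scales
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 2))
small_near = base * 5 + [20, 20]
large_far = base * 15 + [90, 60]
d1, d2 = normalize_events(small_near), normalize_events(large_far)
print("descriptor distance:", np.abs(d1 - d2).sum())   # close to zero

In this toy example the two descriptors nearly coincide even though the event clouds differ in location and extent, which is the behavior a size- and position-independent recognition engine requires.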
With respect to intellectual merit, the proposed research is intended to give event-based imaging devices the ability to extract specific information from visual streams. The size- and position-independent algorithms implemented in hardware are an enabling technology for efficient object recognition and are necessary for advancing event-based artificial vision platforms. The research targets the design of event-driven processing hardware that extracts object features with lightweight digital circuitry and exploits spatial and temporal redundancy to increase identification reliability.
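The following sketch illustrates, in highly simplified form, one way temporal redundancy can raise identification reliability: noisy per-window labels are accumulated until enough agreement exists to report an object. The label stream and vote threshold are placeholders invented for this sketch, not the project's hardware design.

from collections import Counter

# Hypothetical sketch of temporal redundancy: individual event windows yield
# noisy labels, but accumulating votes over successive windows makes the final
# identification more reliable than any single window.

def identify(window_labels, min_votes=5):
    """window_labels: sequence of per-window label guesses (possibly noisy)."""
    votes = Counter()
    for label in window_labels:
        votes[label] += 1
        best, count = votes.most_common(1)[0]
        if count >= min_votes:            # enough temporal agreement
            return best
    return None                           # not yet confident

# Example: a noisy label stream whose true object is "mug"
stream = ["mug", "cup", "mug", "mug", "bottle", "mug", "mug"]
print(identify(stream))                   # -> "mug"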
With respect to broader impacts, this research has the potential to allow a cellular phone to serve as a virtual cane for the blind, help assisted-living patients, provide environmental awareness, and extend human senses. More broadly, the work can be applied to artificial vision, robot vision, image sensor networks, assisted-living systems, and monitoring systems. The project leverages the biomimetic nature of the approach to train students, particularly women and students from other underrepresented groups, and to motivate them to pursue careers in research and academia through seminars, classes, and projects.