Wireless imaging is an important tool for non-disruptive environmental monitoring, including habitat monitoring of birds and endangered species, which can provide important insight into the behavioral patterns and distribution of those species. The key requirement for imagers supporting wireless imaging is that they draw very low power from a battery source so that the battery lasts a long time, since imagers used for environmental monitoring are typically deployed in locations without access to a wired power source. To understand the power requirement, the operation of a wireless imager can be divided into two phases: 1) image acquisition and 2) transmission of the image over the wireless network. Recent advances in CMOS imager techniques have significantly reduced power consumption during the image acquisition phase. However, image transmission still consumes several orders of magnitude more energy than image acquisition, which limits battery life to a few weeks. This project will result in ultra-low-power wireless CMOS imagers that can run from a standard battery source for several months instead of just weeks. While the specific research aims of this project relate to wireless imagers, the same principles can be extended to the design of high-energy-efficiency edge devices for the internet of things (IoT) and wearable healthcare. The fundamental research topics addressed in this project are likely to appeal to a broad set of students and will be leveraged by the investigator for outreach programs involving high school and undergraduate students, motivating them to pursue graduate studies in STEM fields.

To reduce the high energy consumed during image transmission, this project will leverage artificial intelligence (AI) through a two-pronged approach: a) compress the raw images, and b) transmit images only upon identification of an object-of-interest. Analog circuit design techniques will be used to implement the AI algorithms in hardware at very low area and energy cost. The project has three components: 1) development of AI algorithms to reduce transmission power, 2) design of circuits to implement the AI algorithms on-chip, and 3) validation of the project aims. A multi-task learning AI model will be developed that performs two shared tasks within the same neural network: a) compress the raw image, and b) identify the object-of-interest (a target animal species in its natural habitat) and transmit only the compressed image of that object. To suppress non-idealities associated with analog design, a hardware-software co-design methodology will be used in which physical transistor models are incorporated into the offline AI model training phase, greatly reducing deviations between software training results and the hardware implementation. The project will result in a CMOS chip with the AI model embedded, which will be tested with images of wildlife from publicly available datasets (such as CIFAR-100).
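As a rough illustration of the multi-task idea only (not the project's actual architecture), the sketch below shows a shared convolutional encoder whose latent code feeds both an image-reconstruction (compression) decoder and an object-of-interest classifier, with Gaussian noise injected into the latent activations during training as a simple stand-in for the analog non-idealities that the hardware-software co-design would capture with physical transistor models. All layer sizes, the noise model, and the joint loss weighting are hypothetical assumptions for the example.

```python
# Hypothetical sketch: multi-task "compress + detect" network with noise
# injection standing in for analog hardware non-idealities during training.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared front end: maps a 32x32 RGB image to a compact latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv2d(32, latent_dim, 3, stride=2, padding=1),     # 8 -> 4
        )

    def forward(self, x):
        return self.net(x)

class MultiTaskImager(nn.Module):
    """Latent code serves (a) a reconstruction decoder and (b) a classifier."""
    def __init__(self, latent_dim=64, num_classes=100, analog_noise_std=0.05):
        super().__init__()
        self.encoder = SharedEncoder(latent_dim)
        self.analog_noise_std = analog_noise_std
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(latent_dim * 4 * 4, num_classes)
        )

    def forward(self, x):
        z = self.encoder(x)
        if self.training:
            # Crude stand-in for analog non-idealities: additive noise on the
            # latent activations so the learned weights tolerate circuit deviations.
            z = z + self.analog_noise_std * torch.randn_like(z)
        return self.decoder(z), self.classifier(z)

def multitask_loss(recon, logits, image, label, lambda_cls=1.0):
    """Joint loss: reconstruction quality plus object-of-interest detection."""
    return nn.functional.mse_loss(recon, image) + \
           lambda_cls * nn.functional.cross_entropy(logits, label)

if __name__ == "__main__":
    model = MultiTaskImager()
    images = torch.rand(8, 3, 32, 32)            # batch of 32x32 RGB images
    labels = torch.randint(0, 100, (8,))         # hypothetical class labels
    recon, logits = model(images)
    loss = multitask_loss(recon, logits, images, labels)
    loss.backward()
    print(recon.shape, logits.shape, float(loss))
```

In a sketch like this, sharing one encoder between the compression and detection tasks is what keeps the on-chip area and energy cost low, and training with injected noise is one simple way to approximate the goal of closing the gap between offline software training and the analog hardware implementation.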

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2020-06-01
Budget End: 2022-05-31
Support Year:
Fiscal Year: 2019
Total Cost: $174,663
Indirect Cost:
Name: SUNY at Buffalo
Department:
Type:
DUNS #:
City: Buffalo
State: NY
Country: United States
Zip Code: 14228