Machine learning-based autonomous navigation presents a unique opportunity for electronic sensors, such as cameras, to proactively explore their application spaces. For example, an autonomous flying camera can locate infected plants in an agricultural field to prevent disease from spreading. Similarly, in an industrial plant, self-flying gas sensors can swiftly identify gas leaks. Enlivened by machine learning-based self-navigation, many electronic sensors can thus serve dramatically expanded use-cases. For practicality, however, the flying vehicle (i.e., drone) must be small enough to be inconspicuous and non-intrusive to users, such as people in offices. A small size is also necessary for navigating through constricted spaces. Since a tiny drone can carry only a tiny battery as payload, minimizing the power dissipated by onboard processing for machine learning-based navigation is critical. Furthermore, the flying space can be highly dynamic; in indoor applications, for example, people move about and lighting conditions change. The drone's navigation must therefore be resilient against these factors. This research is expected to develop new hardware implementations for machine learning-based autonomous navigation that can be sustained by a tiny battery. The new hardware will also have a minimal footprint for easy integration with tiny drones. The platform will robustly handle real-world uncertainties, such as changes in indoor lighting conditions and people's movement. The investigator will also pursue synergistic educational activities, such as organizing workshops on machine learning at local high schools, developing a new course on machine learning hardware, and mentoring undergraduate students through this research.
The investigator will specifically develop a platform for deep learning-based continuous tracking of a drone's position and orientation. The platform will operate on visual inputs from a camera alone to minimize cost and hardware footprint. A compute-in-memory approach will be employed to minimize the platform's power dissipation. Specifically, the research will investigate co-designing navigational models with the physical and operating constraints of compute-in-memory to dramatically improve the platform's computational efficiency. To improve the robustness of prediction, the deep learning framework of the low-power chip will also be augmented with probabilistic inference. With this procedure, not only the prediction itself but also its confidence will be extracted, making the drone self-aware of when mispredictions from deep learning models are likely, e.g., due to dramatic changes in the flying scene. To operate under uncertainties, the drone will also encompass a computing framework based on probabilistic reasoning. The probabilistic framework will operate by considering many predictive hypotheses and sequentially filtering out the unlikely ones based on measurements. Unlike deep learning-based predictions, which are extracted in a single-shot processing flow, reasoning-based predictions are more energy-expensive because they consider a multitude of hypotheses and measurements. Therefore, the deep learning and reasoning-based frameworks will be synergistically integrated to concurrently optimize robustness and energy efficiency. Processing cores in the developed platform will be reconfigurable between deep learning and reasoning models to minimize the necessary resources.
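The hypothesis-filtering scheme described above, in which many predictive hypotheses are carried forward and unlikely ones are pruned against measurements, resembles a particle filter. The following is a minimal sketch of that idea in Python with NumPy; the pose representation ([x, y, heading]), the motion and measurement models, and all noise parameters are illustrative assumptions, not details from this award.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.05, meas_noise=0.1):
    """One predict/update/resample cycle over pose hypotheses.

    particles: (N, 3) array of hypothetical [x, y, heading] poses.
    weights:   (N,) array of hypothesis probabilities.
    """
    # Predict: propagate each hypothesis through the motion model plus noise.
    particles = particles + control + rng.normal(0, motion_noise, particles.shape)
    # Update: reweight hypotheses by the likelihood of the observed
    # measurement (here, a noisy observation of the [x, y] position).
    err = particles[:, :2] - measurement
    likelihood = np.exp(-0.5 * np.sum(err**2, axis=1) / meas_noise**2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample: discard unlikely hypotheses, duplicating likely ones.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: track a simulated drone drifting with a constant control input.
N = 500
particles = rng.normal(0, 1.0, (N, 3))     # initial pose hypotheses
weights = np.full(N, 1.0 / N)
true_pose = np.zeros(3)
control = np.array([0.1, 0.05, 0.0])
for _ in range(30):
    true_pose = true_pose + control
    z = true_pose[:2] + rng.normal(0, 0.1, 2)  # noisy position measurement
    particles, weights = particle_filter_step(particles, weights, control, z)

estimate = particles.mean(axis=0)  # fused pose estimate from surviving hypotheses
```

The per-step cost scales with the number of hypotheses N, which illustrates why reasoning-based prediction is more energy-expensive than a single-shot deep learning forward pass, and why the award proposes invoking it selectively.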
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.