The advancement of deep learning techniques, a sub-field of machine learning, is profoundly changing the field of mobile edge computing, thanks to recent research demonstrating that deep learning methods provide significant performance gains. However, the requirement of heavy computation and resources prevents deep learning methods from being widely deployed in mobile edge devices, such as smartphones and Internet of Things (IoT) devices. A significant advantage of enabling deep learning methods on mobile edge devices is that they can drastically reduce the response delay and energy consumption of mobile applications because the computations are executed locally. By removing the barrier that keeps deep learning techniques away from pervasive low-power mobile edge computing devices, this research enables high-accuracy, low-latency applications in future mobile edge computing. In particular, this research systematically investigates the fundamental and challenging issues in significantly reducing the cost of the deep learning inference process on mobile edge devices with guaranteed performance. The success of this project could significantly benefit the entire spectrum of deep learning across various research domains, including computer architecture, mobile sensing, cyber security, and human-computer interaction. This project also aims to develop new curricula and encourage the participation of female engineering students.

The primary goal of this research is to build a software accelerator that enables the broad deployment of computationally expensive deep learning models on resource-constrained, heterogeneous mobile edge devices (e.g., low-cost sensing platforms and IoT devices). The basic idea is to develop deep-learning resource management algorithms that can adjust the structures of different deep learning models according to the hardware constraints of heterogeneous edge devices. More specifically, this research analyzes distinct deep learning behaviors on mobile edge devices and designs different strategies to improve the efficiency of multiple deep-learning-based inference models. Furthermore, this research develops algorithms that can adjust the complexity of different deep learning models to reduce their energy and memory consumption on mobile edge devices. In addition, this project designs power-centric resource reallocation algorithms to verify and deploy the mobile-friendly deep learning models.
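The idea of adjusting model complexity to a device's hardware constraints can be illustrated with a minimal, hypothetical sketch: shrink a network's layer widths with a uniform width multiplier (in the spirit of MobileNet-style slimming) until the parameter count fits a memory budget. All function names, layer widths, and budgets below are illustrative assumptions, not artifacts of this project.

```python
def conv_params(channels):
    """Parameter count of a chain of bias-free 3x3 conv layers."""
    return sum(3 * 3 * c_in * c_out
               for c_in, c_out in zip(channels, channels[1:]))

def fit_to_budget(channels, budget, step=0.05):
    """Scale hidden-layer widths down until the model fits the budget.

    Returns (scaled_channels, multiplier). The input layer (index 0)
    stays fixed, since its width is set by the sensor data format.
    """
    m = 1.0
    while m > step:
        scaled = [channels[0]] + [max(1, int(c * m)) for c in channels[1:]]
        if conv_params(scaled) <= budget:
            return scaled, m
        m -= step
    # fall back to the narrowest configuration considered
    scaled = [channels[0]] + [max(1, int(c * step)) for c in channels[1:]]
    return scaled, step

channels = [3, 64, 128, 256]            # toy backbone widths
full = conv_params(channels)            # full-size parameter count
slim, mult = fit_to_budget(channels, budget=full // 4)
```

In a real deployment pipeline, the per-device budget would come from profiling the target edge hardware, and non-uniform (per-layer) multipliers would typically recover more accuracy than this uniform sketch.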

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2019-09-01
Budget End: 2022-09-30
Support Year:
Fiscal Year: 2020
Total Cost: $179,999
Indirect Cost:
Name: Temple University
Department:
Type:
DUNS #:
City: Philadelphia
State: PA
Country: United States
Zip Code: 19122