Deep learning (DL) has become a foundational means of solving diverse problems, ranging from computer vision, natural language processing, and digital surveillance to finance and healthcare. Securing deep neural network (DNN) inference engines and trained DNN models across platforms has become one of the biggest challenges in deploying artificial intelligence. A confidentiality breach of a DNN model can facilitate manipulation of DNN inference, with potentially devastating consequences. This project aims to promote broader application of DNNs in security-critical scenarios by ensuring secure execution of DNN inference engines against side-channel and fault injection attacks.

The project is composed of three salient and interdependent thrusts. SpyNet will study the vulnerability of DNNs implemented on mainstream platforms to model reverse engineering via passive side-channel attacks. DisruptNet will investigate the feasibility of active fault injection attacks that disrupt the execution of DNN inference engines. SecureNet will identify protection, detection, and hardening mechanisms for secure execution of DNN inference engines. Together, these thrusts will deepen the understanding of the inherent information leakage and fault tolerance of DNN models.

The unprecedented rise of DL technology across diverse application domains has made secure execution, primarily confidentiality and integrity, a top priority. This project significantly advances the state of the art in DL implementations, computer architecture and heterogeneous systems, hardware security, and formal methods and verification. Research results and insights on secure DNN design techniques will be incorporated into courses developed by the researchers. The interdisciplinary research will provide unique training opportunities for graduate and undergraduate students, as well as industry partners, through a newly established Industry-University Collaborative Research Center. The project will leverage Northeastern University's Experiential Education model to engage undergraduates, women, and minority students in independent research projects.

The attack library, metrics, methodologies, and software tools will all be made publicly available on a dedicated project website (https://tescase.coe.neu.edu), and the protected and hardened DL models will be released on GitHub to facilitate community use. The repository will be maintained during and beyond the project.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1929300
Program Officer: Alexander Jones
Budget Start: 2019-10-01
Budget End: 2022-09-30
Fiscal Year: 2019
Total Cost: $1,200,000
Name: Northeastern University
City: Boston
State: MA
Country: United States
Zip Code: 02115