Artificial intelligence (AI) has recently approached or even surpassed human-level performance in many applications. However, successful deployment of AI requires sufficient robustness against adversarial attacks of all types and in all phases of the model life cycle. Although much progress has been made in enhancing the robustness of AI algorithms, systematic studies of hardware-oriented vulnerabilities and countermeasures are lacking, which also creates a demand for AI security education. Given this pressing need, this project aims to explore novel hardware-oriented adversarial AI concepts and to develop fundamental defensive strategies against such vulnerabilities to protect next-generation AI systems.

This project has four thrusts. In Thrust 1, this project will explore new adversarial attacks on deep neural network systems, featuring the design of an algorithm-hardware collaborative backdoor attack. In Thrust 2, it will develop methodologies that incorporate the hardware aspect into defenses, enhancing adversarial robustness against vulnerabilities in the untrusted semiconductor supply chain. In Thrust 3, it will develop novel signature-embedding frameworks to protect the integrity of deep neural network models in the untrusted model-building supply chain. Finally, in Thrust 4, it will develop model recovery strategies as an innovative approach to mitigating hardware-oriented fault attacks in the untrusted user space.

This project will yield novel methodologies for ensuring trust in AI systems from both the algorithm and hardware perspectives to meet the future needs of commercial products and national defense. In addition, it will catalyze advances in emerging AI applications across a broad range of sectors, including healthcare, autonomous vehicles, and the Internet of Things (IoT), enabling widespread deployment of AI on mobile and edge devices. New theories and techniques developed in this project will be integrated into undergraduate and graduate education and used to raise public awareness and promote understanding of the importance of AI security.

Data, code, and results generated in this project will be stored, when appropriate, in the research database managed by the Holcombe Department of Electrical and Computer Engineering at Clemson University. All data will be retained for at least five years after the end of the project or at least five years after publication, whichever is later. Longer retention periods will apply when questions arise from inquiries or investigations related to the research. The project repository will be maintained at http://ylao.people.clemson.edu/hardware_AI_security

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Application #: 2047384
Program Officer: Alexander Jones
Budget Start: 2021-05-01
Budget End: 2026-04-30
Fiscal Year: 2020
Total Cost: $114,546
Name: Clemson University
City: Clemson
State: SC
Country: United States
Zip Code: 29634