Machine learning has made tremendous advances in the past decade and is rapidly becoming embedded in our daily lives. We experience its power directly when we interact with voice assistants and automated translation engines, which are improving rapidly every year. Machine learning tools also enable many of the functionalities underlying search engines, e-commerce sites, and social media. Thus, machine learning has become an essential component of cyberspace and our interactions with it, and is now poised to enter our physical space, for example, as a core component of perception for autonomous vehicles and drones. Much of the recent progress in machine learning has been in the area of multilayer, or deep, neural networks, which can be trained to learn complex relationships by leveraging the availability of large amounts of data and massive computing power. However, before we rely on such capabilities for safety-critical applications such as vehicular autonomy, we must ensure the robustness and security of deep networks. Recent research shows, for example, that an adversary can induce deep networks to make errors (e.g., to misclassify images) by adding tiny perturbations that are imperceptible to humans. This project develops a systematic framework for defending against such adversarial perturbations, blending classical model-based techniques with the modern data-driven approach that characterizes machine learning practice today. The project will be validated through two key applications of deep learning: image classification and speech recognition.

When the vulnerability of deep networks to adversarial perturbations was discovered a few years ago, it was initially conjectured that the vulnerability was due to the complex and nonlinear nature of neural networks. However, there is now general agreement that it is actually due to the excessive linearity of deep networks. Motivated by this observation, this project aims to develop a systematic approach to adversarial machine learning that exploits the sparsity inherent in natural data for defense, and locally linear models of the network for attack. The proposed approach is based on exploiting signal sparsity to develop provably efficient defense mechanisms. In particular, the project first investigates a sparsifying frontend, designed to preserve desired input information while attenuating perturbations before they enter the neural network. This then leads to a defense mechanism based on sparsifying the neural network itself, with the goal of mitigating the impact of an adversarial perturbation as it flows up the network. The methodology brings together ideas from sparse signal processing, optimization, and machine learning, and aims to bridge the gap between systematic theoretical understanding and machine learning practice. The project includes an extensive evaluation plan that focuses on two important real-world applications of adversarial machine learning: image classification and speech recognition.
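To make the sparsifying-frontend idea concrete, the following is a minimal sketch, assuming a discrete cosine transform (DCT) as the sparsifying basis and a simple keep-the-largest-coefficients rule; the basis, the retention rule, and the keep_fraction parameter are illustrative assumptions rather than the project's actual design.

```python
# Sketch of a sparsifying frontend: project the input onto a sparsifying basis,
# retain only the largest-magnitude coefficients, and reconstruct. Natural images
# concentrate most of their energy in a few transform coefficients, whereas small
# dense adversarial perturbations spread theirs thinly, so the projection
# preserves the signal while attenuating the perturbation.
import numpy as np
from scipy.fft import dctn, idctn

def sparsify_frontend(x, keep_fraction=0.05):
    """Keep only the top keep_fraction of DCT coefficients of x (by magnitude)."""
    coeffs = dctn(x, norm='ortho')                 # transform to the sparsifying basis
    k = max(1, int(keep_fraction * coeffs.size))   # number of coefficients to retain
    kth_largest = np.sort(np.abs(coeffs).ravel())[-k]
    coeffs_sparse = np.where(np.abs(coeffs) >= kth_largest, coeffs, 0.0)
    return idctn(coeffs_sparse, norm='ortho')      # reconstruct the cleaned input

# Usage: apply the frontend to a (possibly perturbed) input before the classifier.
image = np.random.rand(32, 32)                     # stand-in for an input image
perturbed = image + 0.01 * np.random.randn(32, 32) # small dense perturbation
cleaned = sparsify_frontend(perturbed, keep_fraction=0.05)
```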

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2019-10-01
Budget End: 2022-09-30
Fiscal Year: 2019
Total Cost: $515,856
Name: University of California Santa Barbara
City: Santa Barbara
State: CA
Country: United States
Zip Code: 93106