The rapid development of machine learning in healthcare presents clear privacy issues: deep neural networks and other models are built from patients' personal and highly sensitive data, such as clinical records or tracked health data. Moreover, these models can be vulnerable to attackers who try to infer the sensitive data used to build them. This raises important research questions about how to develop machine learning models that protect private data against inference attacks while remaining accurate and useful predictors, as well as important practical considerations about how these risks to patient data may expose healthcare providers to legal action under HIPAA and related regulations. To address these questions, this project will develop a framework, called PrivateNet, for privacy preservation in deep neural networks under model attacks, offering strong privacy protections for data used in deep learning. PrivateNet will be built on top of commonly used machine learning frameworks, giving the project's findings a path to impact in both industry and educational contexts.

A key thrust of the project is to better understand and defend against model inference attacks, including both well-known fundamental model attacks and novel attacks developed through the prism of the classical confidentiality and integrity models. Through an extensive analysis of these attacks, the team will develop an understanding of the relative risks of key aspects of learning approaches. In particular, vulnerable features, parameters, and correlations, which are essential to conducting model attacks, will be automatically identified and protected in a novel threat-aware privacy-preserving approach based on ideas from differential privacy. Specifically, the team will develop adaptive privacy-preserving mechanisms that distribute noise across the most vulnerable aspects of the learning process, providing strong differential privacy protections in deep learning models while maintaining high model utility. The project is expected to lay a foundation of key privacy-preserving techniques to protect users' personal and highly sensitive data in deep learning under model attacks.
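To make the idea of threat-aware noise allocation concrete, the sketch below illustrates one plausible reading of such a mechanism: a total privacy budget is split across model coordinates so that coordinates flagged as more vulnerable receive a smaller budget share and therefore larger Laplace noise. This is a minimal illustration under assumed inputs (the `vulnerability` scores and per-coordinate budget split are hypothetical), not the project's actual PrivateNet algorithm.

```python
import numpy as np

def adaptive_laplace_noise(values, vulnerability, epsilon, sensitivity=1.0, rng=None):
    """Sketch of adaptive, vulnerability-weighted Laplace noise.

    Splits a total privacy budget `epsilon` across coordinates in inverse
    proportion to an assumed per-coordinate `vulnerability` score, so that
    more vulnerable coordinates get a smaller epsilon share and hence a
    larger Laplace noise scale (scale = sensitivity / epsilon_i).
    Returns the noisy values and the per-coordinate budget allocation.
    """
    rng = rng if rng is not None else np.random.default_rng()
    vulnerability = np.asarray(vulnerability, dtype=float)
    # Inverse weighting: high vulnerability -> small epsilon share -> more noise.
    inv = 1.0 / vulnerability
    eps_alloc = epsilon * inv / inv.sum()
    scales = sensitivity / eps_alloc
    noisy = np.asarray(values, dtype=float) + rng.laplace(loc=0.0, scale=scales)
    return noisy, eps_alloc
```

For example, with vulnerability scores `[3.0, 1.0, 1.0]`, the first coordinate receives the smallest share of `epsilon` and is therefore perturbed most heavily, while the per-coordinate allocations still sum to the total budget.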

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency
National Science Foundation (NSF)
Institute
Division of Computer and Network Systems (CNS)
Type
Standard Grant (Standard)
Application #
1850094
Program Officer
Wei-Shinn Ku
Project Start
Project End
Budget Start
2019-02-15
Budget End
2022-01-31
Support Year
Fiscal Year
2018
Total Cost
$174,006
Indirect Cost
Name
Rutgers University
Department
Type
DUNS #
City
Newark
State
NJ
Country
United States
Zip Code
07102