Advances in artificial intelligence and machine learning have produced algorithms and technologies that improve cybersecurity. However, machine learning is itself vulnerable to novel and sophisticated privacy attacks that leak information about the data used for training and prediction. For example, by querying a machine learning model that discovers the genetic basis of a particular disease, a privacy attacker can infer whether a certain patient's clinical record was used to train that model. Such attacks can be discriminatory in the sense that they have a higher success rate for certain demographic groups (e.g., females) than for others (e.g., males). Yet none of the existing defense mechanisms against these attacks account for this disparate vulnerability, and they therefore provide unequal protection across groups. This raises a serious concern of fair privacy: how can all groups and individuals be protected equitably?
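The membership inference attack described above can be illustrated with a minimal sketch. This is not the project's method, only a toy confidence-threshold attack under assumed synthetic data: an overfit "model" is highly confident on records it was trained on, so an attacker who observes a high confidence score guesses "member."

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "patient records": 200 points in 10-D; the first half is
# the (hypothetical) training set, the second half is held out.
data = rng.normal(size=(200, 10))
train, holdout = data[:100], data[100:]

def confidence(train_set, x):
    # Deliberately overfit scoring rule: confidence decays with the
    # distance to the nearest training record, so true members
    # (distance 0) always score 1.0.
    d = np.linalg.norm(train_set - x, axis=1).min()
    return np.exp(-d)

# Threshold attack: guess "member" when the model is unusually
# confident on the queried record.
threshold = 0.5
member_guesses = [confidence(train, x) > threshold for x in train]
nonmember_guesses = [confidence(train, x) > threshold for x in holdout]

tpr = np.mean(member_guesses)      # true members correctly identified
fpr = np.mean(nonmember_guesses)   # non-members falsely flagged
print(f"attack TPR={tpr:.2f}  FPR={fpr:.2f}")
```

A gap between TPR and FPR is exactly the privacy leakage the project studies; measuring that gap separately per demographic group would expose the disparate vulnerability the abstract describes.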

This project will address the core issues of fair privacy from both technical and social perspectives. The project has five research thrusts: (1) formalizing the concept of fair privacy quantitatively; (2) demonstrating the existence of disparate vulnerability to two widely studied, machine-learning-enabled privacy attacks, namely the membership inference attack (MIA) and the attribute inference attack (AIA), and investigating the underlying causes of this unfairness; (3) examining the fairness of existing defense mechanisms against MIA and AIA, and studying how these defenses affect vulnerability unfairness; (4) designing effective mitigation mechanisms that enable defenses to provide equitable protection against MIA and AIA; and (5) performing extensive social studies to explore important social issues related to fair privacy, and using social science to shape fair-privacy research. The research outcomes will be disseminated broadly through new courses for both STEM and social science curricula and by involving students in research through various events and student societies. Students at different levels, in both STEM and the liberal arts, will be exposed to cutting-edge research in security, privacy, and machine learning.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 2029038
Program Officer: James Joshi
Project Start:
Project End:
Budget Start: 2021-01-01
Budget End: 2024-12-31
Support Year:
Fiscal Year: 2020
Total Cost: $699,540
Indirect Cost:
Name: Stevens Institute of Technology
Department:
Type:
DUNS #:
City: Hoboken
State: NJ
Country: United States
Zip Code: 07030