Biometric authentication, which identifies users by traits such as a fingerprint, voice, or face, has become a very common authentication mechanism. Most smartphones and a growing number of other devices and systems feature some form of biometrics, partly because biometrics are seen as faster, easier, and sometimes more secure alternatives to passwords. However, recent studies suggest that the machine learning methods at the core of biometric authentication systems have serious vulnerabilities. In particular, it is sometimes possible to mount a "dictionary attack," in which an attacker assembles a set of existing tokens that together have a high probability of bypassing the authentication system. In this project, the researchers will study such attacks on various biometric systems and develop effective defenses against them. The project builds on prior work in which the investigators used modern machine learning approaches to find vulnerabilities in fingerprint and voice authentication. The methods developed will improve the overall security and reliability of biometric authentication mechanisms.

In contrast to well-known spoofing attacks, dictionary attacks do not rely on biometric samples of a targeted individual, e.g., voice recordings or latent prints, but instead exploit weaknesses of the specific biometric modality (or its deployment). They allow targeting of entire populations and rely on fortuitous matches of common biometric features. Recent advances in machine learning, and in particular in generative models such as Generative Adversarial Networks, have made such attacks possible for biometrics. The goal of this project is to systematically study the security of biometrics in commonly used unsupervised and mobile deployments, e.g., in smartphones, home assistants, IoT devices, or voice calls. The researchers will focus on the newly discovered dictionary attacks on the fingerprint, voice, and face modalities. The investigators will study practical threat models and propose attack detection and mitigation strategies. The project will address questions focused on understanding this type of vulnerability, its associated attacks, and how they can best be defended against:

- What are the most practical threat models, and what capabilities do attackers need to possess?
- Do the attack strategies generalize between modalities?
- Are the identified "master-examples" universal? Do they transfer between user populations and authentication systems?
- What is the optimal mitigation strategy? Is it possible to reliably detect the presented synthetic content?
- Is it possible to improve the enrollment policy to maximize security and/or warn the user about higher vulnerability of the enrolled examples?
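The core idea behind a dictionary attack can be illustrated with a toy simulation: generate many synthetic candidate templates and keep the one that fortuitously matches the largest fraction of an enrolled population. This is only a minimal sketch under assumed conditions; the matcher (thresholded cosine similarity), the dimensions, and the threshold are all illustrative stand-ins, not taken from any real biometric system or from the project itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not from any real system.
DIM, N_USERS, THRESHOLD = 16, 200, 0.6

def enroll(n_users, n_samples=3):
    """Each user enrolls a few noisy samples of a 'true' template."""
    base = rng.normal(size=(n_users, DIM))
    return base[:, None, :] + 0.3 * rng.normal(size=(n_users, n_samples, DIM))

def matches(candidate, gallery):
    """A candidate bypasses a user if it matches ANY of that user's
    enrolled samples (a permissive multi-sample policy, as on many
    mobile devices)."""
    sims = (gallery @ candidate) / (
        np.linalg.norm(gallery, axis=-1) * np.linalg.norm(candidate) + 1e-9)
    return (sims > THRESHOLD).any(axis=-1)

gallery = enroll(N_USERS)

# "Dictionary attack": sample many synthetic candidates untargeted at any
# individual, then keep the best "master-example" by population coverage.
candidates = rng.normal(size=(500, DIM))
coverage = np.array([matches(c, gallery).sum() for c in candidates])
best = candidates[coverage.argmax()]
print(f"best master-example matches {coverage.max()}/{N_USERS} users")
```

In the attacks the project builds on, the random sampling above is replaced by a search through the latent space of a generative model, which steers candidates toward common biometric features and raises the coverage well beyond chance.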

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1956200
Program Officer: James Joshi
Budget Start: 2020-10-01
Budget End: 2023-09-30
Fiscal Year: 2019
Total Cost: $483,437
Name: New York University
City: New York
State: NY
Country: United States
Zip Code: 10012