This application addresses NIH's call to promote data sharing and patient privacy. A major obstacle to the sharing of recorded video has been the need to protect participants' identity. Similarly, concern about stigma is a reason that many people in need of mental health services (e.g., in the military) fail to seek them. We propose a system to de-identify patients and research participants in video. Face de-identification automatically transfers facial expression from source face images, which are confidential, to target face images, which are not. The system safeguards face anonymity while preserving the facial expression of the original source video. The target video can then communicate the emotion, communicative intent, pain, and neurological or physiological status of the source person without displaying the source person's face. Face de-identification would enable video archive sharing among researchers and clinicians without compromising privacy or confidentiality. Moreover, a version of this system could potentially be used to preserve privacy and anonymity in internet-based interviews.

Innovation. The project has four innovations. The approach (1) removes identity information while retaining facial dynamics, thus preserving the information value of the face to communicate emotion, pain, and related states; (2) accommodates subtle and spontaneous facial actions, rather than handling only a few predefined molar expressions (e.g., happy or sad); (3) requires no training steps by target persons; and (4) requires no hand annotation of video. The system is entirely automatic.

Approach. The software will take as input a video of a subject's face (source) and automatically output a video with the face de-identified. The project will use new machine learning and computer vision algorithms for transferring subtle facial expression from a source subject (original video) to a target subject, using only one frontal image of the target subject. A major novelty of the approach is making the process completely automatic. The algorithm will be validated using commercially available software for face recognition and custom software for facial expression analysis.
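The overall pipeline (source video in, de-identified video out, driven by a single frontal target image) can be pictured with a minimal landmark-driven sketch. This is an assumption-laden stand-in, not the proposed learning-based algorithm: it assumes dlib's 68-point landmark model and a scikit-image piecewise-affine warp, and the file names and function names are hypothetical.

    # Illustrative sketch only: a landmark-driven stand-in for the proposed
    # learning-based expression transfer. dlib's 68-point predictor and a
    # scikit-image piecewise-affine warp are assumptions, not the funded method.
    import cv2
    import dlib
    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    DETECTOR = dlib.get_frontal_face_detector()
    PREDICTOR = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

    def face_landmarks(img_bgr):
        # Return the 68 (x, y) landmarks of the first detected face, or None.
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        faces = DETECTOR(gray)
        if not faces:
            return None
        pts = PREDICTOR(gray, faces[0]).parts()
        return np.array([[p.x, p.y] for p in pts], dtype=np.float64)

    def deidentify(source_video, target_image, output_video):
        # Re-render each source frame with the target face driven by the source
        # frame's landmark configuration (a crude proxy for expression transfer).
        target = cv2.imread(target_image)
        target_pts = face_landmarks(target)
        if target_pts is None:
            raise ValueError("no face found in the target image")
        h, w = target.shape[:2]
        cap = cv2.VideoCapture(source_video)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        out = cv2.VideoWriter(output_video, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            src_pts = face_landmarks(frame)
            if src_pts is None:
                continue  # no face in this frame; skip it
            # Warp target pixels so the rendered face follows the source landmarks.
            tform = PiecewiseAffineTransform()
            tform.estimate(src_pts, target_pts)  # output coords -> target-image coords
            warped = warp(target, tform, output_shape=(h, w), preserve_range=True)
            out.write(warped.astype(np.uint8))
        cap.release()
        out.release()

A warp of this kind preserves only gross landmark geometry; the subtle and spontaneous facial actions emphasized in the proposal are exactly what such a simple substitute cannot capture, which is why a learning-based transfer is proposed instead.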

Public Health Relevance

Sharing of video recordings for research and clinical uses would significantly contribute to scientific discovery and patient diagnosis, treatment, and evaluation. We will develop and validate a fully automatic system that preserves facial expression in video while fully protecting face identity.

Agency
National Institutes of Health (NIH)
Institute
National Institute of Mental Health (NIMH)
Type
Exploratory/Developmental Grants (R21)
Project #
5R21MH099487-02
Application #
8908047
Study Section
Social Psychology, Personality and Interpersonal Processes Study Section (SPIP)
Program Officer
Friedman, Fred K
Project Start
2014-09-01
Project End
2016-08-31
Budget Start
2015-09-01
Budget End
2016-08-31
Support Year
2
Fiscal Year
2015
Total Cost
$193,512
Indirect Cost
$43,016
Name
Carnegie-Mellon University
Department
Biostatistics & Other Math Sci
Type
Schools of Arts and Sciences
DUNS #
052184116
City
Pittsburgh
State
PA
Country
United States
Zip Code
15213
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando et al. (2016) Confidence Preserving Machine for Facial Action Unit Detection. IEEE Trans Image Process 25:4753-4767
Chu, Wen-Sheng; Zeng, Jiabei; De la Torre, Fernando et al. (2015) Unsupervised Synchrony Discovery in Human Interaction. Proc IEEE Int Conf Comput Vis 2015:3146-3154