This supplement will address the unintended consequences and collateral damage that arise when facial recognition software is used for medical purposes, such as the syndrome diagnosis performed in our Facebase-funded project.
In Aim 1, we will determine whether the accuracy of this technology varies with self-reported race, sex, and age. We will examine our existing database for evidence of bias along these variables and quantify the extent to which they influence classification performance. To the extent sample sizes allow, we will carry this analysis down to the level of specific syndromes. Finally, we will use anonymized reference datasets of non-syndromic faces to compare false-positive rates across NIH race categories, sex, and age. The outcome of this aim will be an objective assessment of bias and an estimate of the effects of under-representation across race, sex, and age categories within our data.
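To illustrate the kind of stratified analysis Aim 1 envisions, the following is a minimal Python sketch. The column names, toy data, and grouping choices are hypothetical placeholders, not the project's actual pipeline or data: the sketch simply shows per-group accuracy and false-positive rates computed from per-image classifier output, with a chi-square test for whether false-positive rates differ across race groups.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical per-image results table: one row per face image, with the
# classifier's syndromic/non-syndromic call and self-reported demographics.
df = pd.DataFrame({
    "race":       ["A", "A", "B", "B", "B", "A"],
    "sex":        ["F", "M", "F", "M", "F", "M"],
    "age_group":  ["0-5", "6-12", "0-5", "6-12", "0-5", "6-12"],
    "true_label": [1, 0, 1, 0, 0, 1],   # 1 = syndromic, 0 = non-syndromic
    "predicted":  [1, 0, 0, 1, 0, 1],
})

def group_metrics(data: pd.DataFrame, by: str) -> pd.DataFrame:
    """Accuracy and false-positive rate stratified by one demographic column."""
    rows = []
    for group, g in data.groupby(by):
        negatives = g[g["true_label"] == 0]
        rows.append({
            by: group,
            "n": len(g),
            "accuracy": (g["predicted"] == g["true_label"]).mean(),
            # FPR = fraction of truly non-syndromic faces flagged as syndromic.
            "fpr": (negatives["predicted"] == 1).mean()
                   if len(negatives) else float("nan"),
        })
    return pd.DataFrame(rows)

for column in ["race", "sex", "age_group"]:
    print(group_metrics(df, column))

# Test whether false-positive rates differ across race groups: build a
# race-by-prediction contingency table over the truly non-syndromic faces.
negatives = df[df["true_label"] == 0]
table = pd.crosstab(negatives["race"], negatives["predicted"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```

In the planned analysis, the same stratification would be extended to specific syndromes where sample sizes permit, and the false-positive comparison would use the anonymized non-syndromic reference datasets described above.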
In Aim 2, we will determine how reports of race-, sex-, and age-based bias in facial recognition technology may influence researchers' and clinicians' views of the technology and its application.
This aim will also establish the extent to which storing large databases of facial images, and applying machine learning to them for diagnostic purposes, raises privacy concerns. The concerns investigated will include potential breaches of protected health information; fear of the bias documented in some facial recognition software (and, potentially, present in the Facebase database); and fear of discriminatory application of the technology, for example by insurers. The outcome will be a white paper, targeted at a high-profile journal, that summarizes our findings and defines the crucial issues that should guide the development of facial imaging for disease diagnosis and clinical use.
In summary, this supplement application addresses the important question of unintended consequences and collateral damage when facial recognition software is used for medical purposes. The use of large facial recognition databases in medicine is a frontier with tremendous potential but undeniable risks. Our central aims are: (1) to determine whether the accuracy of this technology varies with self-reported race, sex, and age; and (2) to determine how reports of race-, sex-, and age-based bias in facial recognition technology may influence researchers' and clinicians' views of the technology and its application.