The computational algorithms that analyze our personal data online and in myriad medical, credit card, and other databases make it increasingly easy to infer personal, intimate details about us (such as our personality, political ideology, or sexual preference) from seemingly mundane data (such as which pages someone has "Liked" on Facebook). People may not notice or know about these risks, and when they do, they must make ongoing decisions about which algorithms to provide with their personal information, which to ignore, and which to decry as invasive or unethical. The behaviors that emerge around these systems are poorly accounted for in current technology design practices. To address this shortcoming, this work will answer the following questions: How do people navigate a world in which data collection and analysis are a continuous feature of their environment? What strategies do people adopt based on their varied privacy perceptions, attitudes, and needs? How can interfaces and other features of internet systems be designed to support different styles of data privacy?
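To make the inference risk concrete, the following is a minimal, hypothetical sketch of the kind of analysis at issue: predicting a sensitive binary trait from a matrix of page "Likes" with an off-the-shelf classifier, in the spirit of published findings that such traits are predictable from Facebook Likes (e.g., Kosinski et al., 2013). All data here is synthetic and the variable names are illustrative; this is not the project's method.

```python
# Hypothetical sketch: inferring a private trait from "Like" data.
# All data is synthetic; the signal structure is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 300

# Binary matrix: entry [i, j] = 1 if user i "Liked" page j.
likes = rng.binomial(1, 0.05, size=(n_users, n_pages))

# Synthetic sensitive trait correlated with a handful of pages,
# standing in for attributes like political ideology.
signal_pages = rng.choice(n_pages, size=10, replace=False)
logits = likes[:, signal_pages].sum(axis=1) - 0.5
trait = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0)

# A plain logistic regression is enough to recover the trait
# from seemingly mundane Like patterns.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC for inferring the trait from Likes: {roc_auc_score(y_test, probs):.2f}")
```

The point of the sketch is that no special-purpose surveillance machinery is required; a generic classifier over mundane behavioral traces suffices, which is what makes these risks easy for people to overlook.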
This proposal uses search-related behaviors as a research and design context for examining how users adapt and respond to pervasive data collection. Through a series of surveys, the project team will develop a validated measure of privacy patterns, or styles: the technologies, tools, and beliefs that people habitually rely on to guide their data privacy behaviors. The researchers will also conduct qualitative interviews with users of data protection tools. These interviews will establish what specific strategies, if any, these users employ to protect their privacy from algorithmic analysis, as well as the mental models that give rise to their beliefs about the efficacy of those strategies. The team will also create a collection of experimental prototype tools to explore the design space around algorithmic privacy. Collectively, this work will expand the technology design vocabulary so that it can account for different styles of privacy practice. The research has the potential to significantly improve privacy policies and technologies by describing the specific challenges people face as they attempt to protect their privacy.
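As one illustration of what "validated measure" entails, scale development typically includes checking that the survey items for a given privacy style hang together statistically. The sketch below computes Cronbach's alpha, a standard internal-consistency statistic, for a synthetic six-item Likert scale; the item structure and data are invented for illustration and do not represent the project's actual instrument.

```python
# Hypothetical sketch: internal consistency (Cronbach's alpha) for a
# synthetic "privacy style" survey scale; all data is simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert-scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
n_respondents, n_items = 400, 6

# Simulate correlated 5-point Likert responses driven by one latent
# "privacy concern" factor plus item-level noise.
latent = rng.normal(size=(n_respondents, 1))
raw = latent + 0.8 * rng.normal(size=(n_respondents, n_items))
likert = np.clip(np.round(3 + raw), 1, 5)

print(f"Cronbach's alpha: {cronbach_alpha(likert):.2f}")
```

In practice, an alpha of roughly 0.7 or higher is the conventional threshold for treating a set of items as a coherent scale; items that drag the statistic down are revised or dropped across survey iterations.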
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.