Recommender systems provide personalized suggestions to users of e-commerce, social media, and many other types of applications. As they have become more prevalent, recommender systems have moved beyond areas of consumer taste into areas with greater social impact and sensitivity, such as financial services, employment, and housing. Concern has grown that personalized recommendations may exhibit bias, produce unfair results, and entrench problems of inequity. This limits the potential utility of recommender systems in environments such as employment, where fair treatment of users is legally mandated. Lack of attention to fairness has also meant that recommender systems have tended to reinforce biases and to limit users' exposure to diverse items. In spite of the importance of this issue to the public and the recent work of researchers, there has been little progress on key aspects of fairness-aware recommendation. Companies whose sites depend heavily on personalized recommendation therefore have little guidance from the research community about how to apply fairness-aware recommendation and how to evaluate their efforts relative to the state of the art. At the same time, recommender systems researchers have difficulty making progress in the field because of the lack of established datasets and metrics. This project will advance fairness-aware recommendation to make it suitable for real-world applications.
To meet these needs, the project will develop recommendation models and algorithms that can achieve high accuracy while preserving fairness across multiple intersectional dimensions, and will explore their effectiveness in three fairness-critical domains: philanthropy, employment, and news. Existing fairness-aware recommendation algorithms have, with few exceptions, been developed and evaluated in contexts where only a single dimension of fairness, defining a single protected group, is considered. The research team will extend these algorithms to be sensitive to multiple protected features and to incorporate multiple sides of the recommendation transaction. It is well known that explanations support users in their use of recommender systems, engendering greater trust. However, the greater complexity of fairness-aware recommendation makes it difficult to produce explanations, and the introduction of fairness objectives may decrease trust among users who perceive the system as insufficiently responsive to their interests. The project will therefore develop explanation mechanisms for fairness-aware recommendation that support transparency in the application of fairness criteria. Finally, to put fairness-aware recommendation research on a firmer foundation, the project will develop techniques for generating synthetic datasets that can be used in developing and evaluating recommendation algorithms. The project will use latent factor methods to represent patterns of user-item associations, including associations with users of different types, and then apply sampling to these factors to generate synthetic data containing realistic rating patterns, as sketched below. The software developed throughout the project will be incorporated into open-source platforms for the benefit of other researchers.
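The abstract does not specify a particular factorization or sampling scheme; the following minimal Python sketch shows one plausible instantiation of the idea, assuming a truncated SVD factorization and per-group Gaussian sampling of user factors. All names, shapes, and parameters here are illustrative assumptions, not the project's actual method.

    # Illustrative sketch: factor a rating matrix, fit per-group Gaussians
    # over the user latent factors, then sample synthetic users whose
    # generated ratings mimic realistic rating patterns.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy observed rating matrix (users x items) with two user groups,
    # where "group" stands in for a protected feature.
    R = rng.integers(1, 6, size=(40, 15)).astype(float)
    group = rng.integers(0, 2, size=40)

    # Low-rank factorization via truncated SVD: R is approximated by U @ V.T
    k = 4
    u, s, vt = np.linalg.svd(R, full_matrices=False)
    U = u[:, :k] * s[:k]   # user latent factors
    V = vt[:k].T           # item latent factors

    def sample_group(g, n):
        # Fit a Gaussian to this group's user factors and draw n samples,
        # preserving group-specific association patterns.
        Ug = U[group == g]
        mean, cov = Ug.mean(axis=0), np.cov(Ug, rowvar=False)
        return rng.multivariate_normal(mean, cov, size=n)

    # Generate 50 synthetic users per group and project back to item space.
    synth_U = np.vstack([sample_group(0, 50), sample_group(1, 50)])
    synth_R = np.clip(np.rint(synth_U @ V.T), 1, 5)  # synthetic 1-5 ratings
    print(synth_R.shape)                             # (100, 15)

Fitting the factor distribution separately for each user group is one simple way a generator of this kind could retain the group-conditioned rating patterns that fairness evaluation depends on.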
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.