Individuals’ lives and societal outcomes are increasingly mediated by opaque machine learning algorithms chosen and run by multi-sided online platforms using private data. Although platforms often claim that their algorithms take the interests of all sides into account and preserve privacy, these claims are not well-defined. Furthermore, the platforms’ algorithms also optimize for the platforms’ own objectives, such as financial gain or user growth. The resulting algorithmic decision-making systems and their outcomes may be at odds with the interests of platform participants and with societal values.
The project pursues a research agenda consisting of two main thrusts. The first aims to enable the deployment of differential privacy for data sharing in platform-specific contexts, ensuring rigorous privacy protections for platform participants while allowing the platform to pursue its objectives. The project takes advantage of platform-specific capabilities to develop learning-augmented and security-augmented frameworks for reasoning about and deploying differential privacy.
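As a minimal illustration of the kind of guarantee at stake (not the project's specific mechanisms), the following Python sketch releases a count computed on private data under epsilon-differential privacy via the standard Laplace mechanism; the function name, parameter values, and scenario are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), so the distribution
    of the released value changes by at most a factor of e^epsilon when any
    single participant's record is added or removed.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: a platform shares how many participants clicked an ad.
# A count query has sensitivity 1 (one record changes the count by at most 1).
true_count = 1024  # computed on private data
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Noisy count released to the other side: {noisy_count:.1f}")
```

Smaller values of epsilon give stronger privacy but noisier releases; reconciling that trade-off with the platform's own objectives is the kind of tension this thrust examines.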
The second research thrust investigates undesirable consequences of opaque optimizations and proposes definitions that encode platform participants’ or societal desiderata regarding the outcomes of such optimizations. It then analyzes algorithmic, systems, and policy approaches for achieving these desiderata and quantitatively evaluates the impact of enforcing such constraints.
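To make the idea of encoding a desideratum as a quantitative constraint concrete, here is an illustrative Python sketch (the metric, the exposure model, and the tolerance tau are assumptions, not the project's actual definitions): a demographic-parity-style check that two groups of participants receive comparable exposure in a platform's ranking.

```python
import numpy as np

def exposure_gap(scores: np.ndarray, groups: np.ndarray) -> float:
    """Illustrative constraint: gap in average exposure between two groups.

    Items are ranked by score; the exposure of rank r is 1 / log2(r + 2),
    a common position-bias model. A desideratum might require this gap
    to stay below some tolerance tau.
    """
    order = np.argsort(-scores)                        # best score ranked first
    exposure = 1.0 / np.log2(np.arange(len(scores)) + 2)
    exp_by_item = np.empty(len(scores))
    exp_by_item[order] = exposure                      # exposure at each item's rank
    g = groups.astype(bool)
    return abs(exp_by_item[g].mean() - exp_by_item[~g].mean())

# Hypothetical data: ranking scores and a binary group label per participant.
scores = np.array([0.9, 0.8, 0.7, 0.6])
groups = np.array([1, 0, 1, 0])
gap = exposure_gap(scores, groups)
print(f"Exposure gap: {gap:.3f}  (constraint satisfied if gap <= tau)")
```

Enforcing such a constraint generally costs the platform some objective value, which is exactly the impact this thrust proposes to quantify.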
Both research thrusts advance the societally important goal of enabling data-driven innovation by multi-sided platforms while preserving privacy and fairness for their participants.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.