When a program accesses data stored in memory, on disk, or on a remote server, its access patterns can leak information about its secret inputs and data. Decades of work have investigated how to prevent programs from leaking any such information by making their access patterns "oblivious", i.e., independent of their secret inputs. This project is motivated by the significant overhead that past techniques incur. The project introduces and investigates new relaxed notions of access pattern obliviousness, and discovers new algorithms that achieve these notions with significantly reduced overhead. The project includes training of Ph.D. students and postdoctoral researchers, and mentoring activities focused on high school, undergraduate, and graduate students.
This project rethinks the definition of access pattern privacy, and considers (but is not limited to) a new notion called "differential obliviousness". In analogy with differential privacy, differential obliviousness requires that the access patterns resulting from executing a program on similar inputs be hard to distinguish. The project establishes theoretical understanding, including new lower and upper bounds, of the extent to which various notions of obliviousness impact the performance of programs. The investigators also explore the practical performance of new privacy-preserving algorithms in cloud outsourcing and database scenarios. The project develops open-source libraries for the new, differentially oblivious algorithms and data structures.
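To make the analogy with differential privacy concrete, the following minimal sketch (illustrative only; the function names, predicate, and noise parameters are assumptions, not the project's actual algorithms) contrasts a filter whose observable number of memory writes reveals exactly how many records match, with a variant that pads its output with dummy entries drawn from geometric noise, so that output lengths on inputs differing in one record are statistically close:

```python
import random

def filter_leaky(records, pred):
    """Copies matching records; the number of observable writes
    reveals exactly how many records satisfy the predicate."""
    out = []
    for r in records:
        if pred(r):
            out.append(r)  # an observable write occurs only for matches
    return out

def filter_noisy(records, pred, epsilon=1.0):
    """Hypothetical differentially-oblivious-style variant: pads the
    observable output length with nonnegative geometric noise (mean
    roughly 1/epsilon), hiding the exact match count."""
    out = [r for r in records if pred(r)]
    # Sample geometric noise: count failures before the first success.
    p = 1 - pow(2.718281828459045, -epsilon)
    noise = 0
    while random.random() > p:
        noise += 1
    pad = [None] * noise  # dummy writes mask the true count
    return out + pad
```

Fully oblivious filtering would instead pad every output to the worst-case length (all records), which is the overhead the relaxed notion aims to avoid; here the padding is only additive noise on top of the true count.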
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.