Making the ``right'' privacy decision --- that is, balancing revelation and protection of personal information in ways that maximize a user's welfare --- is difficult. The complexity is such that our judgments in this area are prone to errors, leading to decisions that we may later come to regret. These errors stem not only from lack of information or computational ability, but also from problems of self-control and limited self-insight. Our research focuses on designing and testing systems that anticipate and counter the cognitive and behavioral biases that hamper users' privacy (as well as security) decision making. Our approach is informed by the growing body of behavioral economics research on ``soft,'' or asymmetric, paternalism, as well as by work in behavioral decision research and usability. Inspired by these streams of research, we design and study systems that ``nudge'' users towards privacy or security behaviors that the users themselves have stated they prefer, or that empirical evidence has shown to be beneficial. Helping users avoid mistakes, decrease regret, and more rapidly achieve the desired balance between sharing and protecting personal information has clear and significant societal importance. Moreover, our research will also inform the work of privacy (and security) technologists and policy makers by advancing our understanding of what makes privacy decision making difficult, and of how to counter biases that adversely affect privacy- and security-sensitive behavior.
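As a purely hypothetical sketch of what such a nudge could look like (the names below are invented for illustration and do not refer to any system from this research), consider a posting dialog that re-frames the default audience while leaving every option available, in the spirit of asymmetric paternalism:

\begin{verbatim}
# Illustrative sketch only: a hypothetical "soft paternalism" nudge.
# Post, Audience, and nudge_before_posting are invented names.

from dataclasses import dataclass
from enum import Enum


class Audience(Enum):
    PUBLIC = "public"
    FRIENDS = "friends"
    ONLY_ME = "only me"


@dataclass
class Post:
    text: str
    audience: Audience


def nudge_before_posting(post: Post, confirm) -> Post:
    """Ask the user to confirm a broad audience, falling back to a
    more protective default if they decline. No option is removed:
    the user can still post publicly after confirming."""
    if post.audience is Audience.PUBLIC:
        keep_public = confirm(
            "This post will be visible to everyone. Keep it public?"
        )
        if not keep_public:
            post.audience = Audience.FRIENDS  # protective default
    return post


if __name__ == "__main__":
    draft = Post("Off to the airport for two weeks!", Audience.PUBLIC)
    # A real system would show a dialog; here we simulate a "no".
    final = nudge_before_posting(draft, confirm=lambda prompt: False)
    print(final.audience)  # Audience.FRIENDS
\end{verbatim}

The design choice that makes such an intervention asymmetric is that it restricts nothing: a user who deliberately wants a public post simply confirms and proceeds, so the nudge imposes little cost on confident decision makers while protecting error-prone ones.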