The World Wide Web and other networked information systems provide enormous benefits by enabling access to unprecedented amounts of information. However, these systems also create significant problems that have frustrated users for many years. Sensitive personal data are disclosed, confidential corporate data are stolen, copyrights are infringed, and databases owned by one government organization are accessed by members of another in violation of government policy. The frequency of such incidents continues to increase, and an incident must now be truly outrageous to be considered newsworthy. This project takes the view that, when security violations occur, it should be possible to punish the violators in some fashion.
Although "accountability" is widely agreed to be important and desirable, there has been little theoretical work on the subject; indeed, there does not even seem to be a standard definition of "accountability," and researchers in different areas use it to mean different things. This project addresses these issues, the relationship between accountability and other goals (such as user privacy), and the requirements (such as identifiability of violators and violations) for accountability in real-world systems. This clarification of the important notion of accountability will help propel a next generation of network-mediated interaction and services that users understand and trust.
The project's technical approach to accountability as an essential component of trustworthiness involves two intertwined research thrusts. The first thrust focuses on definitions and foundational theory. Intuitively, accountability is present in any system in which actions are governed by well-defined rules, and violations of those rules are punished. Project goals are to identify ambiguities and gaps in this intuitive notion, provide formal definitions that capture important accountability desiderata, and explicate relationships of accountability to well-studied notions such as identifiability, authentication, authorization, privacy, and anonymity. The second thrust focuses on analysis, design, and abstraction. The project studies fundamental accountability and identifiability requirements in real-world systems, both technological and social. One project goal is to use the resulting, improved understanding of the extent to which accountability is truly at odds with privacy and other desirable system properties to design new protocols with provable accountability properties. Building on that understanding and insights gained in designing protocols, the project also addresses fundamental trade-offs and impossibility results about accountability and identifiability in various settings. The broader impacts of the work include not only engagement with students but also a new perspective on real-world accountability in trustworthy systems.
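As a minimal, informal sketch of this intuition (and not the formal definition developed in the project), suppose each agent $i$ has a utility function $U_i$ over event traces, as in the framework described below. One plausible way to say that a violation by $i$ in trace $e$ is punished is to require that $i$'s expected utility be lower than it would have been along a compliant baseline trace $e'$:
\[
\mathbb{E}\left[ U_i(e) \right] \; < \; \mathbb{E}\left[ U_i(e') \right].
\]
The choice of baseline trace $e'$ and the probability space over which the expectation is taken are assumptions of this sketch; pinning down such choices is exactly the kind of gap the project's formal definitions are meant to close.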
``Accountability'' has recently been recognized as a mechanism for promoting security. We present a formal definition of accountability in information systems. The definition is more general and potentially more widely applicable than the accountability notions that have previously appeared in the security literature. In particular, it treats in a unified manner scenarios in which accountability is enforced automatically and those in which enforcement must be mediated by an authority; similarly, the formalism includes scenarios in which the parties who are held accountable can remain anonymous and those in which they must be identified by the authorities to whom they are accountable. Essential elements include event traces and utility functions and the use of these to define punishment and related notions. Presented at and published in the proceedings of the 2011 ACM New Security Paradigms Workshop.

Global Internet routing involves coordination among mutually distrustful parties, leading to the requirements that BGP (the Border Gateway Protocol, the most widely deployed global-routing protocol) provide policy autonomy, flexibility, and privacy. BGP provides these properties via the distributed execution of policy-based decisions during the iterative route-computation process. This approach has poor convergence properties, makes planning and failover difficult, and is extremely difficult to change. To rectify these and other problems, we propose a radically different approach to global route computation, based on secure multi-party computation (SMPC). Our approach provides stronger privacy guarantees than BGP and enables the deployment of new policy paradigms. We report on an initial exploration of this idea and outline future directions for research. Presented at and published in the proceedings of the 2012 ACM Workshop on Hot Topics in Networks.

As organizations and individuals have begun to rely more and more heavily on cloud-service providers for critical tasks, cloud-service reliability has become a top priority. It is natural for cloud-service providers to use redundancy to achieve reliability. For example, a provider may replicate critical state in two data centers. If the two data centers use the same power supply, however, then a power outage will cause them to fail simultaneously; replication per se does not, therefore, enable the cloud-service provider to make strong reliability guarantees to its users. Zhai et al. (2013) present a system, which they refer to as a structural-reliability auditor (SRA), to discover common dependencies in seemingly disjoint cloud-infrastructure components (such as the power supply in the example above) and quantify the risks that they pose to a cloud service. We focus on the need for structural-reliability auditing to be done in a privacy-preserving manner. We present a privacy-preserving structural-reliability auditor (P-SRA), discuss its privacy properties, and evaluate a prototype implementation built on the Sharemind SecreC platform. P-SRA is an interesting application of secure multi-party computation (SMPC), which has not often been used for graph problems. It achieves acceptable running times even on large cloud structures by using a novel data-partitioning technique that may be useful in other applications of SMPC. Presented at and published in the proceedings of the 2013 ACM Workshop on Cloud-Computing Security.
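To illustrate the kind of common-dependency discovery an SRA performs (setting aside P-SRA's privacy-preserving machinery), one can model the infrastructure as a dependency graph and look for components on which supposedly independent services both depend. The following Python sketch is illustrative only; the graph encoding and function names are assumptions, not part of the published system.

    from collections import deque

    def transitive_dependencies(graph, component):
        # All nodes that `component` depends on, directly or transitively (BFS).
        seen, queue = set(), deque([component])
        while queue:
            for dep in graph.get(queue.popleft(), []):
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

    def common_dependencies(graph, a, b):
        # Shared dependencies of two "independent" components: potential single points of failure.
        return transitive_dependencies(graph, a) & transitive_dependencies(graph, b)

    # Toy example: two replica data centers that unknowingly share one power supply.
    infrastructure = {
        "data_center_1": ["power_supply_A", "router_1"],
        "data_center_2": ["power_supply_A", "router_2"],
    }
    print(common_dependencies(infrastructure, "data_center_1", "data_center_2"))
    # {'power_supply_A'}

In P-SRA, an analogous computation is carried out under SMPC so that no provider must reveal its internal dependency graph in the clear.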
We use our accountability framework to define notions of ``open'' and ``closed'' systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system. Presented at and published in the proceedings of the 2014 ACM Symposium and Bootcamp on the Science of Security.

We address the question of whether intelligence and law-enforcement agencies can gather actionable, relevant information without conducting dragnet surveillance. We formulate principles that effective, lawful surveillance protocols should adhere to in an era of big data and global communication networks. We then focus on the intersection of cell-tower dumps, a specific surveillance operation that the FBI has used effectively. We present a system that computes such intersections in a privacy-preserving, accountable fashion. Preliminary experiments indicate that the system is efficient and usable, leading us to conclude that privacy and accountability need not be barriers to effective intelligence gathering. Presented at and published in the proceedings of the 2014 USENIX Workshop on Free and Open Communications on the Internet.
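At its core, intersecting cell-tower dumps means identifying the device identifiers that appear in every dump, i.e., devices that were near each relevant tower during the relevant time window. The plaintext version of that operation is simple, as the sketch below shows (the function and identifier names are illustrative, not taken from the published system); the contribution of the system described above is to compute the same intersection in a privacy-preserving, accountable way, so that agencies learn only the identifiers in the intersection.

    def intersect_dumps(dumps):
        # Each dump is the set of device identifiers observed by one tower
        # during the time window of interest. Only identifiers present in
        # every dump are returned.
        result = set(dumps[0])
        for dump in dumps[1:]:
            result &= set(dump)
        return result

    # Toy example: only device "D3" was near all three towers.
    print(intersect_dumps([{"D1", "D2", "D3"}, {"D3", "D4"}, {"D2", "D3", "D5"}]))
    # {'D3'}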