The project on the Theory and Practice of Accountable Systems investigates the computational and social properties of information networks necessary to provide reliable assessments of compliance with the rules and policies governing the use of information. In prior research, the project leaders have demonstrated that achieving basic social policy goals in open information networks will require increased reliance on information accountability through after-the-fact detection of rule violations. This approach stands in contrast to the traditional mechanisms of policy compliance in network environments, which rely on security technology to enforce rules by denying access to resources at risk of abuse; access-based systems must therefore be supplemented with accountability-based systems. To ensure that accountable systems can provide a stable, reliable, trustworthy basis on which to ground social policy arrangements in the future, it is necessary: 1) to research practical engineering approaches to designing these systems at scale, and 2) to develop a theory of the operating dynamics of accountable systems in order to establish what types of accountability assessments can be made, when those assessments are reliable, and what vulnerabilities accountable systems may have to attack, intrusion, and manipulation. The key hypothesis to be tested regarding Information Accountability is that people are more likely to comply with rules (social or legal) if they believe that their non-compliance will be noticed. Successful study and development of accountable systems will ultimately enable real people, communities, and institutions to take advantage of Information Accountability as a means of achieving better privacy and compliance with other information usage rules.

Project Report

The project, conducted by an interdisciplinary team of scholars from MIT and Rensselaer Polytechnic Institute, addresses one of the most challenging privacy problems we face today: systems in which privacy violations are caused by people who have legitimate access to personal data but use it in ways that are impermissible. This is the canonical privacy conundrum the world faces – how can we protect privacy in a world where much, if not all, personal data has been collected somewhere for some legitimate purpose? While there may be enormous benefits to the analysis of such data, there are also real privacy harms if this data is abused. Consider an electronic medical record system in a hospital: a wide variety of hospital staff may need access to data in emergency situations, but they are only supposed to use this data for treatment purposes. Such a rule cannot be enforced, or even audited after the fact, solely through technology that limits access to data.

Recent disclosures about the broad collection of metadata on Internet users’ communication activities by the US National Security Agency sharpen the need for a new approach to privacy protection. Law enforcement and national security agencies may need broad access to personal data in order to defend the country, but such data can be subject to misuse unless systems are put in place that formalize rules about how data can and cannot be used. Given the large scale of data collection and analysis now common (often called ‘big data’), we need to be able to design reliable, scalable computational mechanisms to help prevent such misuse.

The research team has pioneered the design of ‘accountable systems’: information systems that are able to detect violations of rules and hold those who break the rules accountable for their actions. The overall goal of this research is to ensure that accountable systems can provide a stable, reliable, trustworthy basis on which to ground social policy arrangements in the future. In furtherance of this goal, the TPAS project has developed practical engineering approaches to designing these systems at scale and has explored a theory of the operating dynamics of accountable systems in order to establish what types of accountability assessments can be made and when those assessments are reliable. Through NSF-supported research, we have developed working systems that can represent information rules as machine-readable policies and then use those policies to compute automated assessments of whether information usage in a given system complies with the applicable rules (a simplified sketch of this kind of checking appears below). Not only can we determine whether or not information usage complies with the rules, we can also offer the user human-readable explanations of how the system reached its policy analysis. The reasoning done by our computerized reasoning system is presented in a form that is close to the legal reasoning often employed by privacy lawyers. Our research also addresses the fact that organizations are often subject to many different rules: using linked data technology, we are able to bring together a number of different rule sets and apply them to diverse data formats (also sketched below).

Recognizing that privacy implicates significant legal and public policy issues, our research has been conducted in close cooperation with legal and policy experts. We have developed technical tools that meet the needs of legal experts and conducted research workshops that bring together computer scientists, legal scholars, government regulators, and policy officials. These interdisciplinary activities will help ensure that our research has broader impact on the privacy policy issues that have motivated our work.
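
To make the compliance-assessment idea concrete, here is a minimal Python sketch of after-the-fact policy checking that produces human-readable findings. It is illustrative only, not the project's actual system: the UsageEvent and Rule classes, the treatment-only rule, and all field names are hypothetical stand-ins for a real machine-readable policy language.

    """Illustrative sketch of after-the-fact compliance checking with
    human-readable explanations. All names here are hypothetical, not
    taken from the project's actual policy language."""
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class UsageEvent:
        """A single logged use of personal data."""
        actor_role: str   # e.g. "nurse", "billing"
        purpose: str      # e.g. "treatment", "marketing"
        data_kind: str    # e.g. "medical-record"

    @dataclass
    class Rule:
        """One machine-readable usage rule plus an explanation template."""
        name: str
        applies: Callable[[UsageEvent], bool]   # does this rule govern the event?
        permits: Callable[[UsageEvent], bool]   # is the event compliant?
        rationale: str                          # template for the explanation

    def assess(event: UsageEvent, rules: list[Rule]) -> list[str]:
        """Check a logged event against every applicable rule and
        return one human-readable finding per applicable rule."""
        findings = []
        for rule in rules:
            if not rule.applies(event):
                continue
            verdict = "compliant" if rule.permits(event) else "NON-COMPLIANT"
            findings.append(
                f"{rule.name}: {verdict} -- {rule.rationale.format(e=event)}"
            )
        return findings

    # A hypothetical hospital rule: medical records may be used only
    # for treatment purposes.
    treatment_only = Rule(
        name="treatment-only",
        applies=lambda e: e.data_kind == "medical-record",
        permits=lambda e: e.purpose == "treatment",
        rationale="role '{e.actor_role}' used a {e.data_kind} for '{e.purpose}'",
    )

    if __name__ == "__main__":
        ok = UsageEvent("nurse", "treatment", "medical-record")
        bad = UsageEvent("billing", "marketing", "medical-record")
        for event in (ok, bad):
            for line in assess(event, [treatment_only]):
                print(line)

Run on the two sample events, the sketch flags the marketing use of a medical record as non-compliant and states why, mirroring the explanation style described above, where every verdict is traceable to a rule.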
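The second sketch hints at why linked data suits the many-rule-sets problem: RDF graphs merge by simple set union, so independently authored rule sets can be pooled and queried uniformly. It uses the rdflib library; the example.org vocabulary and the rule names are invented for illustration and do not reflect the project's actual ontologies.

    # Sketch: combining two independently authored rule sets expressed
    # as linked data (RDF/Turtle). The ex: vocabulary is hypothetical.
    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/policy#")

    hospital_rules = Graph().parse(data="""
        @prefix ex: <http://example.org/policy#> .
        ex:TreatmentOnly a ex:UsageRule ; ex:permitsPurpose "treatment" .
    """, format="turtle")

    state_rules = Graph().parse(data="""
        @prefix ex: <http://example.org/policy#> .
        ex:NoMarketing a ex:UsageRule ; ex:forbidsPurpose "marketing" .
    """, format="turtle")

    # RDF graphs merge by set union, so the two rule sets can be
    # pooled and queried as one, regardless of who authored each.
    combined = hospital_rules + state_rules
    for rule in combined.subjects(RDF.type, EX.UsageRule):
        print("combined rule set contains:", rule)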

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Application #: 0831442
Program Officer: Jeremy Epstein
Budget Start: 2008-10-01
Budget End: 2013-09-30
Fiscal Year: 2008
Total Cost: $1,200,000
Name: Massachusetts Institute of Technology
City: Cambridge
State: MA
Country: United States
Zip Code: 02139