Computer users are increasingly faced with decisions that impact their personal privacy and the security of the systems they manage. The range of users confronting these challenges has broadened since the early days of computing to include everyone from home users to administrators of large enterprise networks. Privacy policies are frequently obscure, and security settings are typically complex. What is missing from the options presented to a user is a decision support mechanism that can assist her in making informed choices; in particular, she is currently not shown the consequences of the decisions she is asked to make.

This work introduces formal argumentation as a framework for helping users make informed decisions about the security of their computer systems and the privacy of their electronically stored information. Argumentation, a mature theoretical discipline, provides a mechanism for reaching substantiated conclusions in the face of incomplete and inconsistent information. It provides the basis for presenting arguments to a user for or against a position, along with well-founded methods for assessing the outcome of interactions among those arguments. An elegant theory of argumentation has been developed based on meta-rules characterizing relationships between arguments, and rules for argument construction and evaluation have been devised for specific domains such as medical diagnosis. This project investigates argumentation as the basis for helping users make informed security- and privacy-related decisions about their computer systems.

Three specific aims are addressed: 1) implementation of an inference engine that reasons using argumentation; 2) facilitation of security management through the argumentation inference engine, a rule base specialized for security management, and sensors providing security alerts, all enhanced with an interactive front end; and 3) reasoning about the consistency and completeness of domain knowledge as it evolves. To understand the kinds of domain-specific inference rules required, diverse security applications are studied, such as determining whether an attack imperils a particular system, finding the root cause of an attack, deciding on appropriate actions to take in the presence of an uncertain diagnosis of an attack, and deciding on privacy settings.

The project will produce a prototype that advances the practice of usable security. The team is working with organizations responsible for the security administration of large enterprise networks and will make the prototype tools available to them. The team is also working with everyday users drawn from a cross-section of the community. Curricular modules covering the intersection of argumentation and security are being developed and shared.
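As a concrete illustration of the kind of evaluation such well-founded methods support, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework: the set of arguments that can be defended against every attacker. This is a minimal, self-contained example under simplifying assumptions; the argument names and the attack relation are hypothetical, and the project's actual inference engine, rule base, and sensors are not shown.

```python
# Minimal sketch of Dung-style abstract argumentation, for illustration only.
# Arguments and the "attacks" relation below are hypothetical examples.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension: the least fixed point of the
    characteristic function F(S) = {a | every attacker of a is attacked by S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def defended(candidate, s):
        # 'candidate' is defended by S if S attacks every attacker of 'candidate'.
        return all(any((d, b) in attacks for d in s) for b in attackers[candidate])

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Hypothetical security-management scenario: an alert argues the host is
# compromised, a counter-argument claims the alert is a known false positive,
# and a third argument undercuts that counter-argument.
args = {"host_compromised", "alert_false_positive", "signature_recently_validated"}
atts = {("alert_false_positive", "host_compromised"),
        ("signature_recently_validated", "alert_false_positive")}
print(grounded_extension(args, atts))
# -> {'host_compromised', 'signature_recently_validated'} (set order may vary)
```

In this toy example, "host_compromised" is accepted because its only attacker is itself defeated, mirroring how an argumentation-based engine could justify a security conclusion to a user despite conflicting evidence.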

Budget Start: 2011-08-01
Budget End: 2016-07-31
Fiscal Year: 2011
Total Cost: $264,596
Name: CUNY Brooklyn College
City: Brooklyn
State: NY
Country: United States
Zip Code: 11210