Computer users increasingly face decisions that affect their personal privacy and the security of the systems they manage. The range of users confronting these challenges has broadened since the early days of computing and now includes everyone from home users to administrators of large enterprise networks. Privacy policies are frequently obscure, and security settings are typically complex. Missing from the options presented to a user is a decision-support mechanism that can assist her in making informed choices. In particular, presenting her with the consequences of the decisions she is asked to make, among other information, is a necessary component that current interfaces lack.

This work introduces formal argumentation as a framework for helping users make informed decisions about the security of their computer systems and the privacy of their electronically stored information. Argumentation, a mature theoretical discipline, provides a mechanism for reaching substantiated conclusions in the face of incomplete and inconsistent information. It provides the basis for presenting arguments to a user for or against a position, along with well-founded methods for assessing the outcome of interactions among the arguments. An elegant theory of argumentation has been developed based on meta-rules characterizing relationships between arguments, and rules for argument construction and evaluation have been devised for specific domains such as medical diagnosis.

This project investigates argumentation as the basis for helping users make informed security- and privacy-related decisions about their computer systems. Three specific aims are addressed: 1) implementing an inference engine that reasons using argumentation; 2) facilitating security management through that inference engine, a rule base specialized for security management, and sensors that provide security alerts, all enhanced with an interactive front end; and 3) reasoning about the consistency and completeness of the domain knowledge as it evolves. To understand the kinds of domain-specific inference rules required, diverse security applications are studied, such as determining whether an attack imperils a particular system, finding the root cause of an attack, deciding on appropriate actions to take given an uncertain diagnosis of an attack, and deciding on privacy settings. This project will produce a prototype that advances the practice of usable security. The team is working with organizations responsible for the security administration of large enterprise networks and will make the prototype tools available to these organizations.
The team is also working with everyday users drawn from a cross-section of community members. Curricular modules covering the intersection of argumentation and security are being developed and shared.
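To make the evaluation step concrete, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework, one standard well-founded method for deciding which arguments survive their attackers. This is an illustrative sketch, not the project's actual inference engine, and the security scenario (argument names and attack relation) is hypothetical.

```python
# Minimal sketch of Dung-style abstract argumentation: compute the grounded
# extension by iterating the characteristic function to a fixed point.
# (Assumption: this is illustrative, not the project's implemented engine.)

def grounded_extension(arguments, attacks):
    """arguments: set of argument labels; attacks: set of (attacker, target) pairs."""

    def defended(arg, in_set):
        # arg is acceptable w.r.t. in_set if every attacker of arg
        # is counter-attacked by some member of in_set.
        attackers = {a for (a, t) in attacks if t == arg}
        return all(any((d, a) in attacks for d in in_set) for a in attackers)

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Hypothetical security scenario: an alert argues a host is compromised,
# a patch record attacks that claim, and a bypass report attacks the patch claim.
args = {"compromised", "patched", "patch_bypassed"}
atts = {("patched", "compromised"), ("patch_bypassed", "patched")}
print(grounded_extension(args, atts))  # accepts patch_bypassed and compromised
```

Because "patch_bypassed" is unattacked it is accepted immediately, and it reinstates "compromised" by defeating its only attacker, showing how the formalism yields a substantiated conclusion from conflicting evidence.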

Agency: National Science Foundation (NSF)
Institute: Division of Computing and Communication Foundations (CCF)
Type: Standard Grant (Standard)
Application #: 1118077
Program Officer: Sol J. Greenspan
Project Start:
Project End:
Budget Start: 2011-08-01
Budget End: 2014-07-31
Support Year:
Fiscal Year: 2011
Total Cost: $279,032
Indirect Cost:
Name: University of California Davis
Department:
Type:
DUNS #:
City: Davis
State: CA
Country: United States
Zip Code: 95618