This project focuses on improving the effectiveness of computer security warning dialogs: on-screen prompts that warn users about potential security risks and give users a choice between two or more courses of action. Security dialogs should help users avoid unsafe actions while allowing them to take safe actions, by presenting information that allows users to make informed decisions that the system cannot make without user input. This research takes a novel approach to the design and rigorous evaluation of computer security warning dialogs, with the goal of developing generalizable guidelines for designing effective warning dialogs for software products. This has the potential to help end users make better security decisions that keep their information and computer systems safer, and to improve the computer security ecosystem.

Based on the Carnegie Mellon team's previous work, a review of the literature, and discussions with collaborators, they have developed a set of candidate features expected to have a significant impact on the effectiveness of security dialogs. These features include the amount and placement of text, severity of tone, how to help users decide, descriptions of risks and consequences, the use of recommended and default options, and more. They plan to systematically study each feature, applied to a variety of security dialogs, to determine the impact of each (individually and in combination) and to develop guidelines on how to use each feature to best effect. They will follow an iterative design and evaluation approach involving five types of studies: exploratory interviews, Mechanical Turk studies, laboratory studies, field studies, and interface designer studies. In the Mechanical Turk studies, participants will be presented with a scenario and a security dialog triggered by that scenario, and asked how they would be most likely to respond.
They will also be asked follow-up questions to learn why they made that decision, their perception of the risks associated with each warning dialog, their understanding of the warning dialog, their beliefs about how well they understand the warning dialog, and their knowledge of the concepts and vocabulary included in each dialog. The team will measure the tendency of users to take the recommended action in risky scenarios and the non-recommended action in benign scenarios. The follow-up questions will help determine why users behave the way they do and how to most effectively design security warning dialogs to influence that behavior. It is important to determine how to communicate effectively about risks and consequences, but also to determine how much users need to understand before they can make appropriate decisions. It is anticipated that some aspects of the situation will be correlated with behavior, but that some information will increase understanding with little or no impact on behavior. In addition, the features are likely to have varying impacts on understanding of risks and consequences, motivation to take the safe course of action, and behavior.

To test the generalizability of the guidelines, a large set of security dialogs from a wide range of software products will be collected. As candidate guidelines emerge, the team will apply them to a variety of dialogs in their catalog and observe which guidelines are generally applicable and which apply only to certain types of dialogs in the collection. Based on the final set of guidelines, the team will provide a number of example redesigns in a final project report and a security dialog design tutorial that they will make publicly available.
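The effectiveness measure described above can be sketched in code. The function below is a hypothetical illustration (not from the project itself): it takes a list of participant responses, each labeled with the scenario type ("risky" or "benign") and whether the participant took the dialog's recommended action, and computes the fraction of appropriate responses per scenario type. A response is counted as appropriate when the recommended action is taken in a risky scenario or the non-recommended action is taken in a benign one.

```python
def response_rates(responses):
    """Compute the fraction of appropriate responses per scenario type.

    `responses` is an iterable of (scenario, took_recommended) pairs,
    where scenario is "risky" or "benign" and took_recommended is a bool.
    An effective dialog yields high rates for both scenario types.
    """
    counts = {"risky": [0, 0], "benign": [0, 0]}  # [appropriate, total]
    for scenario, took_recommended in responses:
        # Appropriate = recommended action under risk,
        # non-recommended action when the scenario is benign.
        appropriate = took_recommended if scenario == "risky" else not took_recommended
        counts[scenario][0] += int(appropriate)
        counts[scenario][1] += 1
    return {s: (c[0] / c[1] if c[1] else None) for s, c in counts.items()}

# Hypothetical sample data: 2 of 3 risky responses and 1 of 2 benign
# responses are appropriate.
sample = [
    ("risky", True), ("risky", False), ("risky", True),
    ("benign", False), ("benign", True),
]
print(response_rates(sample))
```

Separating the two rates matters because a dialog that frightens users into always complying would score perfectly on risky scenarios while failing on benign ones, where compliance imposes unnecessary cost.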