Advances in artificial intelligence (AI) have introduced new opportunities and challenges in cybersecurity. Social engineering, although responsible for the majority of cyberattacks, poses a uniquely difficult problem for several reasons. First, social engineering attacks are low cost and draw on multiple, increasingly complex and subtle attack models. Second, most computer users are not cybersecurity-literate, with fewer than 30% judged competent on basic security knowledge. Third, social engineering exploits human vulnerabilities such as habit formation and susceptibility to persuasive techniques. The result is a significant security gap: individuals are unprepared to counteract social engineering. To educate casual computer users against these attacks, this project proposes a novel approach that leverages human psychology, just as the attacks themselves do. The project team will create an accessible and engaging learning experience that promotes changes in attitude and behavior by teaching computer users about social engineering techniques and how to detect them. The project fills an important gap by focusing on users often marginalized by current cybersecurity education efforts, including casual computer users and those with computer anxiety, such as the elderly and low-income families.

To address the dual problems of low cybersecurity literacy and increasing social engineering attacks, the multidisciplinary project team proposes to integrate AI techniques into a customized social engineering education experience built on the principles of entertainment-education. The effort targets non-security professionals and will use pretext design maps to train AI systems to generate social engineering scenarios. Transformer-based natural language processing models, informed by humor theory, will then generate explainable, humorous training schemas from these scenarios. The resulting scenarios will be applied in a classroom setting, where learning patterns and specific psychological markers will be used to refine the AI-generated scenarios. Together, these approaches will yield an effective, AI-powered cybersecurity pedagogical tool for casual computer users.
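The abstract describes this pipeline only at a high level. As a purely illustrative sketch, and not the project's actual system, the snippet below shows how an off-the-shelf transformer text-generation model could turn a hypothetical pretext entry (persona, request, persuasion cue) into a draft training scenario; the model choice, prompt wording, and pretext fields are all assumptions made for demonstration.

    # Illustrative sketch only: draft a social engineering training scenario
    # from a hypothetical "pretext" entry using an off-the-shelf transformer.
    # Model, prompt, and fields are assumptions, not the project's pipeline.
    from transformers import pipeline, set_seed

    set_seed(42)  # make the illustrative output repeatable

    # Hypothetical pretext design map entry: who the attacker pretends to be,
    # what they ask for, and which persuasion lever they lean on.
    pretext = {
        "persona": "IT help desk technician",
        "request": "confirm your account password to avoid losing email access",
        "persuasion_cue": "urgency",
    }

    prompt = (
        f"Training example of a social engineering message. "
        f"The sender pretends to be a {pretext['persona']} and uses "
        f"{pretext['persuasion_cue']} to {pretext['request']}. Message:\n"
    )

    generator = pipeline("text-generation", model="gpt2")
    draft = generator(prompt, max_new_tokens=60, num_return_sequences=1)

    # In a real curriculum, a draft like this would be reviewed, annotated
    # with the cues learners should spot, and paired with an explanation
    # before reaching a classroom.
    print(draft[0]["generated_text"])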

This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored, collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Division: Division of Graduate Education (DGE)
Award Type: Standard Grant
Application #: 2039605
Program Officer: Victor Piotrowski
Budget Start: 2020-09-01
Budget End: 2022-08-31
Fiscal Year: 2020
Total Cost: $299,566
Institution: Purdue University
City: West Lafayette
State: IN
Country: United States
Zip Code: 47907