Security and design flaws in artificial intelligence (AI) algorithms and computer systems can leave our personal information, including sensitive data such as medical records, dangerously exposed, or can give rise to biases that disadvantage or threaten parts of the population. Finding these security and design flaws before they cause harm depends on qualified engineers, researchers, and policymakers who understand threats to computer systems and algorithms. However, threat modeling is typically taught only in advanced computer science courses, which come late in the curriculum and which not all students elect to take. This project investigates whether earlier and continued exposure to material on threat modeling and a mindset called "adversarial thinking" improves students' ability to recognize and address challenges in privacy, cybersecurity, and new AI technologies. Adversarial thinking refers to adopting the perspective of an adversary who seeks to exploit weaknesses in a system, algorithm, or model. The resulting course materials and findings will be disseminated, and the findings are expected to motivate changes in how computer science curricula are designed.
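To make the mindset concrete, the following is a minimal, hypothetical Python sketch (not drawn from the project's course materials) of the kind of question adversarial thinking trains students to ask: how could even routine code be abused by someone probing for weaknesses?

    # Hypothetical illustration of adversarial thinking: a naive token check
    # leaks timing information an adversary can exploit, while a constant-time
    # comparison does not.
    import hmac

    def check_token_naive(supplied: str, secret: str) -> bool:
        # Flaw: '==' returns as soon as characters differ, so response time
        # reveals how many leading characters of the guess were correct,
        # letting an attacker recover the secret one character at a time.
        return supplied == secret

    def check_token_safe(supplied: str, secret: str) -> bool:
        # Defense: compare in constant time, removing the timing side channel.
        return hmac.compare_digest(supplied.encode(), secret.encode())

    if __name__ == "__main__":
        SECRET = "s3cr3t-token"
        print(check_token_naive("guess!", SECRET))  # False; timing leaks info
        print(check_token_safe(SECRET, SECRET))     # True, in constant time

An adversarially minded student looks at the first function and asks not "does it return the right answer?" but "what does its behavior reveal to someone who controls the input?"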

The project proposes to develop material on adversarial thinking and integrate it into courses at the introductory, intermediate, and advanced levels of Brown University's computer science curriculum. The project team will measure students' performance and progression within each course as well as across courses. The data collected will help answer the project's central research question: do students who encounter adversarial thinking early in, and repeatedly throughout, their computer science education show improved ability to recognize and address threats and flaws in computer systems security and AI models? The project will impact academic computer science education through pedagogical methods, skills, and recommendations for curricular structures that help prepare students for the complexities, risks, and opportunities of new technologies.

This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Graduate Education (DGE)
Type: Standard Grant (Standard)
Application #: 2039354
Program Officer: Nigamanth Sridhar
Budget Start: 2020-09-01
Budget End: 2022-08-31
Fiscal Year: 2020
Total Cost: $297,881
Name: Brown University
City: Providence
State: RI
Country: United States
Zip Code: 02912