This project explores how to teach undergraduate computer science students about security in systems that use artificial intelligence (AI), an important step toward educating a workforce that is knowledgeable about robust and trustworthy AI. The aim is to design an AI curriculum that fosters a security mindset for identifying vulnerabilities that could cause harm, whether through attacks by a malicious actor or through perpetuating or amplifying social biases. The educational approach centers on transparency and contextualization. Transparency means making the inner workings of a system accessible to students so they can understand which aspects of the system's construction lead to its vulnerabilities. Contextualization means situating AI techniques in real-world environments to understand their specific security implications. Contextualization is fundamental for teaching conventional security topics: for instance, accessing personal location data can serve a legitimate purpose in Google Maps but is typically suspicious behavior in a game. The same piece of code may be used in each case, yet its legitimacy is determined by its broader context. The team will conduct research on integrating transparency and contextualization into the undergraduate AI curriculum and will develop instructional materials and assessment tools for doing so. Since security in AI is a new area within computer science education research, the main goal is to develop initial designs for instruction and assessment that integrate transparency and contextualization at a level appropriate for undergraduates.
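To make the contextualization idea concrete, here is a minimal, purely illustrative Python sketch (not taken from the project's materials): the same permission request is flagged or accepted depending on an app's declared category. The category names, permission strings, and helper function are all hypothetical.

```python
# Hypothetical illustration of context-dependent security judgments:
# the identical permission request is legitimate in one app context
# and suspicious in another.
from dataclasses import dataclass

# Assumed app categories where fine-grained location access is expected.
LOCATION_JUSTIFIED = {"navigation", "ride_sharing", "weather"}

@dataclass
class App:
    name: str
    category: str
    permissions: set

def flag_suspicious_permissions(app: App) -> list:
    """Return permissions that look out of place for this app's context."""
    flags = []
    if "fine_location" in app.permissions and app.category not in LOCATION_JUSTIFIED:
        flags.append("fine_location")
    return flags

maps = App("Maps", "navigation", {"fine_location", "network"})
game = App("PuzzleGame", "game", {"fine_location", "network"})

print(flag_suspicious_permissions(maps))   # []: location fits a navigation app
print(flag_suspicious_permissions(game))   # ['fine_location']: context mismatch
```

The point of the sketch is that the check depends on metadata about the system's purpose, not on the permission-requesting code itself, which is identical in both cases.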

The goal is to develop proof-of-concept instructional materials, techniques, and assessments for security concepts and skills in undergraduate AI courses. Instruction will be designed around four kinds of learning objectives. Students should: (1) know that AI systems can cause harm and are not immune to attack; (2) be able to explain the sources of vulnerabilities; (3) be able to identify vulnerabilities in a specific system, which could include attacking it; and (4) be able to defend an AI system by modifying it to mitigate threats. (An illustrative sketch of the kind of attack exercise objective (3) suggests appears below.) The team will identify AI topics in existing curricula that have security implications, create tasks that illustrate the concrete security issues, and conduct cognitive task analyses with experts in AI and security to see how they approach those problems; this process will yield the initial learning goals. The team will then survey students who have taken the undergraduate AI course on those goals to establish a baseline level of knowledge and elicit potential misconceptions. Building on that foundation and the learning goals, the team will design initial instruction, iterating on the design through one-on-one think-alouds and small-group tutoring sessions with student participants. The team will test the instruction in a controlled experiment comparing the AI-plus-security materials to AI-only materials, using pre- and post-tests to measure learning. Finally, a study using the designed instruction in an undergraduate course will illustrate how it works in a typical setting.
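As one illustration only, and with no claim that this is what the project's exercises contain, the following self-contained NumPy sketch mounts a classic attack, the fast gradient sign method (FGSM, Goodfellow et al., 2015), against a toy logistic-regression classifier. All data, parameters, and names here are hypothetical.

```python
# Hypothetical exercise for objective (3): demonstrate a model's
# vulnerability by attacking it with FGSM on a toy 2-D classifier.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: Gaussian blobs around (-1,-1) and (1,1).
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
               rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by gradient descent on the cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Attack the correctly classified point nearest the decision boundary,
# where a small perturbation is most likely to matter.
scores = X @ w + b
correct = (scores > 0) == (y == 1)
idx = np.argmin(np.where(correct, np.abs(scores), np.inf))
x, label = X[idx], y[idx]

# FGSM: step in the sign of the input gradient of the loss. For
# logistic regression that gradient is simply (p - y) * w.
grad_x = (sigmoid(x @ w + b) - label) * w
x_adv = x + 0.5 * np.sign(grad_x)

print("clean prediction:      ", int(x @ w + b > 0), "(true label:", int(label), ")")
print("adversarial prediction:", int(x_adv @ w + b > 0))
```

On this toy setup the perturbation typically pushes the point across the decision boundary. Objective (4) could then be exercised by a standard mitigation from the literature, such as adversarial training, where perturbed examples like `x_adv` are added back into the training set.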

This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Graduate Education (DGE)
Type: Standard Grant
Application #: 2041960
Program Officer: Nigamanth Sridhar
Budget Start: 2020-09-01
Budget End: 2022-08-31
Fiscal Year: 2020
Total Cost: $300,000
Name: University of Utah
City: Salt Lake City
State: UT
Country: United States
Zip Code: 84112