Artificial intelligence (AI) has significant applications in many emerging data-intensive domains such as automated vehicles, computer-assisted medical imaging, behavior analysis, user authentication, cybersecurity, and embedded systems for smart infrastructures. However, there are unanswered questions relating to trust in AI systems. There is increasing evidence that machine learning algorithms can be maliciously manipulated to cause misclassification and false detection of objects and speech. With the growing adoption of AI-based techniques, it is therefore important to teach students the skills needed to analyze how AI-based systems may fail and where their vulnerabilities lie, as well as how to mitigate such issues and thereby create more trustworthy AI-based systems. This project brings together experts from the areas of education, AI, and cybersecurity to identify challenges and potential solutions to teaching topics in trustworthy AI, with the goal of evolving coursework that will appeal to, and engage, a diverse student body. It is critical to diversify the workforce operating at the intersection of cybersecurity and AI because AI-based systems can be prone to implicit vulnerabilities and blind spots due to imbalanced datasets or training methods that focus only on the overall accuracy of available datasets.
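The blind spot described above can be illustrated concretely. The following is a minimal sketch, using hypothetical data, of how a model that ignores a minority class entirely can still report high overall accuracy on an imbalanced dataset:

```python
# Minimal sketch (hypothetical data): on an imbalanced dataset, a classifier
# that always predicts the majority class scores high overall accuracy while
# completely failing on the minority class.
labels = [0] * 95 + [1] * 5   # 95% majority class, 5% minority class
predictions = [0] * 100       # degenerate model: always predict the majority class

# Overall accuracy: fraction of all examples predicted correctly.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Minority-class recall: fraction of minority examples correctly identified.
minority_recall = (
    sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    / sum(y == 1 for y in labels)
)

print(f"overall accuracy: {accuracy:.0%}")    # 95%
print(f"minority recall:  {minority_recall:.0%}")  # 0%
```

A metric of 95% accuracy looks strong, yet the model detects none of the minority-class cases; this is why evaluation focused only on overall accuracy can mask differential impacts on segments of a population.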

The project team proposes to teach and study three courses at the intersection of cybersecurity and AI, including creating a new course on trustworthy AI. Coursework will address topics that will spur students to consider how segments of the population may be differentially impacted in areas such as authentication, privacy, and user safety. Learning science and educational psychology approaches (specifically focus groups and clinical interviews) will be used to identify learning and teaching challenges and to characterize conceptions and misconceptions. The project will produce five deliverables: model curricula at the crossroads of cybersecurity and AI; strategies for managing cross-disciplinarity in such curricula; characterizations of student conceptions and misconceptions; identification of student learning challenges; and identification of new research directions in cybersecurity and AI. The findings and curricular ideas will be disseminated broadly.

This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored, collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Graduate Education (DGE)
Type: Standard Grant (Standard)
Application #: 2039445
Program Officer: Nigamanth Sridhar
Budget Start: 2020-09-01
Budget End: 2022-08-31
Fiscal Year: 2020
Total Cost: $300,000
Name: Regents of the University of Michigan - Ann Arbor
City: Ann Arbor
State: MI
Country: United States
Zip Code: 48109