The emergence of artificial intelligence (AI) systems that can create hyper-realistic data (e.g., images of human faces or network traffic data) presents challenges to both people and computers trying to determine what is authentic and what is fake. These advances pose both a threat and an opportunity for STEM learners and cybersecurity networks. On one hand, the ability of AI to generate hyper-realistic data has the potential to increase students' interest in AI, STEM, and cybersecurity. On the other hand, AI-generated data, without robust cybersecurity guarantees, have the potential to reduce the veracity of knowledge that is publicly available online. This project proposes to conduct a series of studies in which learners are presented with AI-generated STEM content and asked to determine its authenticity. The project seeks to discover whether the level of vulnerability differs across diverse populations (K-12, higher education, and the adult workforce). The project will lay the foundation for a deeper understanding of the interconnectedness between STEM education materials and cybersecurity networks, and the common challenges both face in the presence of hyper-realistic AI-generated data.

This NSF EAGER project brings together researchers from K-12 education (Challenger Center), higher education (Carnegie Mellon University), and the workforce (RAND Corporation) to investigate the risks posed to the free flow of STEM education materials and computer network traffic data in the age of hyper-realistic AI-generated data. Participants in the study will be randomly shown either fake STEM content (i.e., STEM content that is generated by generative neural networks and has been modified to include misinformation) or STEM content that is authentic in its communication of STEM information. Each participant will be asked to classify whether the displayed STEM content is fake or authentic. Additional questions will probe how specific characteristics of the displayed content serve as indicators of authenticity, by randomly assigning participants versions of the content that contain or omit those characteristics. The study of different learner populations (K-12, higher education, and the adult workforce) will elucidate the variability in learners' ability to distinguish factual education material from AI-altered STEM education material, given the age and experience level of each population.
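A minimal sketch of this randomized presentation protocol is shown below. It is illustrative only; the content pools, condition labels, and response format are assumptions for the sketch, not the project's actual materials or implementation.

```python
import random

# Hypothetical content pools; the real study materials are not specified in the abstract.
AUTHENTIC_ITEMS = ["authentic_item_01", "authentic_item_02"]
AI_GENERATED_ITEMS = ["ai_item_01", "ai_item_02"]  # AI-generated and modified to include misinformation


def assign_trial():
    """Randomly assign one trial: authentic vs. AI-generated content, and a version
    that either contains or omits a target authenticity-indicating characteristic."""
    is_fake = random.random() < 0.5
    item = random.choice(AI_GENERATED_ITEMS if is_fake else AUTHENTIC_ITEMS)
    return {
        "item": item,
        "ground_truth": "fake" if is_fake else "authentic",
        "characteristic_shown": random.random() < 0.5,
    }


def record_response(trial, participant_label):
    """Store a participant's fake/authentic judgment alongside the trial metadata."""
    result = dict(trial)
    result["response"] = participant_label  # "fake" or "authentic"
    result["correct"] = participant_label == trial["ground_truth"]
    return result


if __name__ == "__main__":
    trial = assign_trial()
    print(record_response(trial, participant_label="authentic"))
```

In an actual study, responses recorded this way could then be compared across the K-12, higher-education, and workforce samples to estimate group-level differences in detection accuracy.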

This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Graduate Education (DGE)
Type: Standard Grant (Standard)
Application #: 2039613
Program Officer: Nigamanth Sridhar
Budget Start: 2020-09-01
Budget End: 2021-08-31
Fiscal Year: 2020
Total Cost: $100,000
Name: Carnegie-Mellon University
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15213