Artificial intelligence (AI) is being rapidly deployed in many security-critical applications. This has fueled the use of AI to improve cybersecurity through faster reasoning and reaction (AI for cybersecurity). At the same time, the widespread use of AI introduces new adversarial threats to AI systems and highlights a need for robustness and resilience guarantees for AI (cybersecurity for AI), while ensuring fairness of and trust in AI algorithmic decision making. Not surprisingly, privacy-enhancing technologies and innovations are critical to mitigating the adverse effects of intentional exploitation and to protecting AI systems. However, resources for AI-cybersecurity cross-training are limited, and even fewer programs integrate topics, techniques, and research innovations pertaining to privacy in their basic curricula covering AI or cybersecurity. To bridge this cross-training gap and to advance AI-cybersecurity education, this project will create a pilot program on privacy-enhancing AI-cybersecurity cross-training, which will provide a transformative learning experience for students. The results of this project will equip students with the AI-cybersecurity knowledge and skills to enter the workforce and contribute to a secure and trustworthy AI-cybersecurity environment that simultaneously supports AI safety, AI privacy, and AI fairness for all.
The intellectual merit of this project stems from the development of a first-of-its-kind research and teaching methodology that will provide effective AI-cybersecurity cross-training in the context of privacy. This will include developing a privacy foundation virtual laboratory (vLab) and three advanced topic vLabs, each representing a unique educational innovation for AI-cybersecurity cross-training. The AI for Security vLab will enable students to learn that privacy is a critical system property for all AI-enabled cybersecurity systems and applications. The Security of AI vLab will assist students in learning that privacy is an important safety guarantee against a variety of privacy leakage risks. The AI Fairness and Trust vLab will empower students to learn that privacy is an essential measure of trust and fairness in AI systems, upholding the right to privacy and AI ethics for all. By participating in these vLabs, students will learn to use risk assessment tools to understand new vulnerabilities of AI models to attack and to design risk-mitigation tools that protect AI model learning and reasoning against security or privacy violations and algorithmic biases.
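As one illustration of the kind of privacy-leakage risk assessment the Security of AI vLab describes (this is a minimal sketch for exposition, not the project's actual lab materials), a simple loss-threshold membership inference test measures whether a trained model reveals which records were in its training set; attack accuracy near 0.5 indicates little leakage, while higher values signal privacy risk:

```python
# Minimal sketch of a loss-threshold membership inference test, one common
# way to quantify privacy leakage of a trained model. Assumes numpy and
# scikit-learn; the model and data here are stand-ins, not project tooling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Weak regularization (large C) encourages mild overfitting, which is
# what a membership inference attack exploits.
model = LogisticRegression(C=100.0, max_iter=1000).fit(X_train, y_train)

def per_sample_loss(model, X, y):
    """Cross-entropy loss of each individual sample under the model."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_in = per_sample_loss(model, X_train, y_train)   # members
loss_out = per_sample_loss(model, X_test, y_test)    # non-members

# The attack guesses "member" when a sample's loss falls below a
# threshold chosen from the pooled losses.
threshold = np.median(np.concatenate([loss_in, loss_out]))
attack_acc = 0.5 * ((loss_in < threshold).mean()
                    + (loss_out >= threshold).mean())
print(f"membership inference attack accuracy: {attack_acc:.3f}")
```

A risk-mitigation counterpart, such as training with differentially private noise added to gradients, would be evaluated by rerunning the same test and confirming the attack accuracy drops toward 0.5.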
This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.