Machine learning (ML) algorithms and artificial intelligence (AI) systems have already had an immense impact on our society. AI systems have recently demonstrated performance comparable to, or even exceeding, human cognition in some applications. AI is also being applied to achieve cybersecurity (i.e., AI for cybersecurity), for example by detecting anomalies, adapting security parameters in response to ongoing cyber-attacks, and reacting in real time to combat cyber-adversaries. However, ML algorithms and AI systems can be manipulated, evaded, biased, and misled through flawed learning models and input data. Therefore, ML and AI need robust security and correctness (i.e., cybersecurity for AI) to enable fair and trustworthy AI. Unfortunately, AI and cybersecurity have been treated as two separate domains and are not taught as cross-cutting technologies. The primary goal of this project is to explore, develop, and integrate a scalable instructional approach for AI-driven cybersecurity and cybersecurity for AI in undergraduate and graduate curricula. This will be accomplished by creating a "learning by doing" environment to address emerging AI and cybersecurity issues that are not covered in an integrated way, if at all, in traditional curricula. This project will help to train the next-generation STEM workforce with knowledge of integrated cybersecurity and AI, which will help not only to meet the evolving demands of the US government and industry but also to improve the nation's economic security and preparedness.
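As one concrete illustration of how crafted input data can mislead a trained model, the sketch below implements the well-known fast gradient sign method (FGSM) for generating evasion examples. This is a minimal sketch, not part of the project's stated methods; `model`, `x`, and `y` are hypothetical placeholders for a PyTorch classifier and a correctly labeled input.

```python
# Minimal FGSM evasion-attack sketch (PyTorch). The classifier `model`
# and labeled input (x, y) are assumed placeholders; epsilon bounds the
# per-feature perturbation magnitude.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial example x' = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()  # populates x_adv.grad with the input gradient
    # Step in the direction that increases the loss, then clamp inputs
    # back to the valid [0, 1] range (assuming normalized features).
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation bounded this tightly is often imperceptible to a human yet sufficient to flip the model's prediction, which is the kind of vulnerability motivating cybersecurity for AI.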
The core scientific contributions of the proposed research effort will be the development and enhancement of integrated AI and cybersecurity education and research programs at Howard University, leveraging the proposed Discovery, Analysis, Research and Exploration (DARE-AI)-based experiential learning platform to address emerging issues and challenges. The project team proposes to design, develop, use, and refine reproducible hands-on activities that integrate cybersecurity and AI education and research with open-ended problem-solving activities. The effectiveness of coupling AI for cybersecurity with cybersecurity for AI in the DARE-AI modules will be evaluated. The project team will also design, develop, use, and refine machine learning models that incorporate privacy, security, and distributed learning. Machine learning algorithms and AI systems will be designed, developed, and analyzed for robustness and fairness, and for the extent to which they are explainable and accountable. The research results from this project will be disseminated through peer-reviewed publications and presentations. The DARE-AI modules will also be published on the project's dedicated website to make them available to the public.
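As a minimal sketch of the distributed-learning component mentioned above, the following code illustrates federated averaging (FedAvg), a standard aggregation rule in which a server combines client-trained models weighted by local dataset size so that raw data never leaves the clients. The client weight vectors and dataset sizes here are hypothetical NumPy placeholders, not an artifact of the project itself.

```python
# Minimal federated-averaging (FedAvg) sketch: a server aggregates
# locally trained model weights without ever seeing the clients' data.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Example: three clients with different data volumes.
clients = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
sizes = [100, 50, 50]
global_model = federated_average(clients, sizes)  # -> array([1.0, 2.0])
```

Weighting by dataset size keeps the aggregate equivalent to training on the pooled data under idealized assumptions, while the decentralized setup is what opens the privacy and security questions (e.g., poisoned client updates) that such modules could examine.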
This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.