Artificial intelligence (AI) algorithms have emerged as key drivers of technology and innovation. These algorithms distill large amounts of data into valuable insights. Nearly all industries have been transformed by these technologies, and new industries that are only possible because of them are emerging. Unfortunately, the general-purpose hardware that enabled the past 50 years of computing is fundamentally unsuited to running AI algorithms efficiently. This research focuses on developing material growth techniques and specialized devices tailored to the needs of next-generation computing algorithms. The outcome of this research is materials and devices that can be integrated directly with traditional hardware to improve the speed and power efficiency of AI. Thus, the results of this work can impact a wide variety of fields, ranging from semiconductors to self-driving cars. Graduate students working on this project are trained in materials, devices, and algorithms for AI, a critical need for the workforce of the future. Activities to bolster high school students' math and science skills are incorporated to encourage their interest in science- and engineering-related fields. A focus of these activities is the inclusion of underrepresented groups. The results of this research are published in journals, presented at conferences, and incorporated into both undergraduate- and graduate-level classes.

Technical Abstract

The primary goal of this project is to utilize crystalline III-V semiconductors grown on amorphous substrates as the material and device building blocks for future generations of neuromorphic processors. Neuromorphic computing architectures and systems have the potential to rapidly generate insight from massive datasets at very low power compared to current von Neumann processor architectures. However, current implementations of analog accelerators based on non-volatile memory elements exhibit unacceptably low classification accuracy on model problems. Additionally, artificial neural network architectures still consume roughly 6-8 orders of magnitude more energy than the brain. Here, an algorithm-driven approach is used to design devices for artificial neural network and spiking neural network accelerators. These devices are fabricated and characterized using III-V high-mobility channels grown directly on amorphous substrates below 400 °C, enabling compatibility with CMOS back-end integration. First, the performance limits of III-Vs grown on amorphous substrates are identified. Next, memory devices for artificial neural network synapses are designed, fabricated, and characterized. Finally, spiking synaptic devices (artificial devices that mimic biological synapses) are explored. The results of this work have the potential to enable specialized hardware for AI to be fabricated directly on traditional processors, impacting all areas that presently utilize AI.
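The analog-accelerator concept mentioned above can be illustrated with a minimal sketch (not from the award itself; all names and the noise level are illustrative assumptions): a non-volatile memory crossbar stores a weight matrix as conductances and computes a matrix-vector product as summed currents, and imperfect conductance programming is one common source of the classification-accuracy loss the abstract refers to.

```python
import numpy as np

# Hypothetical sketch: a crossbar stores weights W as conductances G and
# computes I = G @ V via Ohm's and Kirchhoff's laws (one step per layer).
rng = np.random.default_rng(0)

W = rng.normal(size=(4, 8))   # trained weights (illustrative values)
V = rng.normal(size=8)        # input vector, applied as voltages

ideal = W @ V                 # exact digital reference result

# Non-ideality: multiplicative write noise when programming conductances
# (sigma is an assumed noise level, not a measured device parameter).
sigma = 0.1
G = W * (1 + sigma * rng.normal(size=W.shape))
analog = G @ V                # what the noisy crossbar actually computes

# Relative output error caused by the device non-ideality.
error = np.linalg.norm(analog - ideal) / np.linalg.norm(ideal)
print(f"relative output error: {error:.3f}")
```

In a full network, errors like this accumulate across layers, which is why improving the precision and uniformity of the underlying memory devices is central to raising accelerator accuracy.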

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency
National Science Foundation (NSF)
Institute
Division of Materials Research (DMR)
Application #
2004791
Program Officer
James H. Edgar
Budget Start
2020-07-15
Budget End
2023-06-30
Fiscal Year
2020
Total Cost
$125,000
Name
University of Southern California
City
Los Angeles
State
CA
Country
United States
Zip Code
90089