Deep learning architectures such as convolutional and recurrent neural networks have achieved unprecedented, sometimes super-human, accuracy on many modern artificial-intelligence applications, such as image classification and speech recognition. Power dissipation, however, is a major concern in these energy-hungry machine-learning architectures, and reducing it requires designs that combine hardware and machine-learning algorithms more energy-efficiently. There is increasing emphasis on leveraging parallelism and specialization to improve performance and energy efficiency. To dramatically reduce power consumption, silicon photonics has been proposed as a way to improve performance-per-Watt over electrical implementations.

This project leverages photonic technology and heterogeneous multicores to design deep-neural-network accelerators that improve parallelism, concurrency, energy efficiency, and scalability across a range of machine-learning applications. The first task identifies and characterizes photonic devices that can implement accelerator functionality such as multiply-and-accumulate, summation, and other arithmetic operations; the characterized devices are then composed into single-layer and multi-layer photonic topologies that implement the accelerator. The second task maps various deep-learning architectures onto the proposed photonic neural-network accelerator to maximize the gains offered by the photonic technology. The third task builds an extensive simulation and modeling infrastructure that combines the photonic technology, network architectures, accelerator functionality, and machine-learning algorithms developed in the first two tasks, in order to validate the projected reduction in energy consumption enabled by the photonic neural-network accelerator.
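As an illustration of the accelerator functionality targeted in the first task, the toy model below sketches a single photonic multiply-and-accumulate operation in Python. It is a minimal sketch only: it assumes weights are realized as the power transmission of ideal Mach-Zehnder interferometers and that summation occurs as incoherent power detection on a photodetector; the function names (mzi_transmission, photonic_mac) and parameters are illustrative, not the project's actual device models.

    import numpy as np

    def mzi_transmission(phase):
        # Power transmission of an ideal Mach-Zehnder interferometer
        # as a function of its internal phase shift (radians).
        return np.cos(phase / 2.0) ** 2

    def photonic_mac(inputs, phases, responsivity=1.0):
        # One multiply-and-accumulate: each input is carried as optical
        # power, attenuated by an MZI weighting element, and summed on
        # a photodetector whose photocurrent is the accumulated result.
        weights = mzi_transmission(phases)  # weights in [0, 1]
        return responsivity * (inputs * weights).sum()

    # Example: a 4-input MAC with weights 1.0, 0.5, 0.75, and 0.0
    x = np.array([0.8, 0.2, 0.5, 1.0])                  # optical input powers
    phi = np.array([0.0, np.pi / 2, np.pi / 3, np.pi])  # MZI phase settings
    print(photonic_mac(x, phi))                         # prints 1.275

Under these assumptions the multiplications are performed by passive attenuation and the accumulation by the detector itself, which reflects the kind of energy-efficiency argument commonly made for photonic multiply-and-accumulate units.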

The proposed research bridges an important gap between photonic technology, hardware architecture, and machine learning. Owing to its cross-cutting nature, it is expected to have far-reaching impact on the design of next-generation multicore architectures and to foster new research directions across computer architecture, optical technology, and machine-learning algorithms and applications. The research will also play a major role in education by integrating discovery with teaching and training. All research findings and simulation toolkits will be disseminated to the community through conference and journal publications and a dedicated website.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computing and Communication Foundations (CCF)
Application #: 1901165
Program Officer: Almadena Chtchelkanova
Budget Start: 2019-08-01
Budget End: 2022-07-31
Fiscal Year: 2019
Total Cost: $600,000
Institution: George Washington University
City: Washington
State: DC
Country: United States
Zip Code: 20052