Deep neural networks (DNNs) have achieved remarkable success in many applications because of their powerful data-processing capability. The objective of this project is to investigate a software/hardware co-design methodology for DNN acceleration that can be applied to both traditional von Neumann and emerging neuromorphic architectures. The project fits into the general area of "brain-inspired," energy-efficient computing paradigms, which has attracted much recent interest. The investigators are also active in outreach and educational activities, including curriculum development and the engagement of minority and underrepresented students in research. Undergraduate and graduate students involved in this research will also be trained for the next-generation computer engineering and semiconductor industry workforce.

From a more technical standpoint, a novel neural network sparsification process will be explored to preserve state-of-the-art accuracy while establishing hardware-friendly models of neural network computations. The result is expected to be a holistic methodology that combines neural network model sparsification, hardware acceleration, and integrated software/hardware co-design. The project also benefits big-data research and industry at large by inspiring an interactive design philosophy between learning algorithms and their computational platforms, enhancing system performance and scalability.
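To make the sparsification idea concrete: one common family of techniques is magnitude-based weight pruning, in which the smallest-magnitude weights of a trained layer are set to zero so that hardware can skip the corresponding multiply-accumulate operations. The sketch below is purely illustrative and is not the project's actual method; the function name `magnitude_prune` and the example weight values are assumptions for demonstration.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights of a layer.

    Illustrative sketch of magnitude-based pruning (an assumption,
    not the project's specific algorithm). `sparsity` is the fraction
    of weights to remove, e.g. 0.75 zeroes out 75% of the entries.
    """
    k = int(sparsity * len(weights))  # number of weights to drop
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value; weights at or below
    # the threshold are pruned to exact zero.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]

# Hypothetical 8-weight layer pruned to 75% sparsity:
layer = [0.9, -0.05, 0.4, 0.02, -0.7, 0.1, -0.3, 0.01]
sparse_layer = magnitude_prune(layer, 0.75)
# Only the two largest-magnitude weights (0.9 and -0.7) survive.
```

The zeroed entries make the layer hardware-friendly: a sparse accelerator can store only the surviving weights and their indices, reducing both memory traffic and arithmetic work.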

Project Start:
Project End:
Budget Start: 2016-07-01
Budget End: 2017-06-30
Support Year:
Fiscal Year: 2016
Total Cost: $450,000
Indirect Cost:
Name: University of Pittsburgh
Department:
Type:
DUNS #:
City: Pittsburgh
State: PA
Country: United States
Zip Code: 15260