In the next generation of big science experiments, the demands for computing resources are expected to outstrip the capabilities of existing computing infrastructure, and contending with these demands requires a radical rethinking of the cyberinfrastructure. Parallelized processing architectures, in particular Field Programmable Gate Arrays (FPGAs), combined with deep learning algorithms have been shown to give large speedups in computing when compared with conventional CPUs. This project aims to bring machine-learning-based accelerated computing with FPGAs into the scientific community by targeting two big-data physics experiments: the Large Hadron Collider (LHC) and the Laser Interferometer Gravitational-wave Observatory (LIGO). The project will push the frontiers of deep learning at scale, demonstrating the versatility and scalability of these methods to accelerate and enable new physics in the big-data era. It serves the national interest, as stated in NSF's mission, by promoting the progress of science. The PIs and their collaborators will build upon their recent work to design and exploit state-of-the-art neural network models for real-time data analytics, reducing overall computing latency. This new computing paradigm aims to significantly increase the processing capability at the LHC and LIGO, leading to increased scientific output from these facilities and, potentially, foundational discoveries. The students mentored and trained in this research will interact closely with industry partners, creating new career opportunities and strengthening synergies between academia and industry. In addition to sharing algorithms with the community through open-source repositories, the team will continue to educate the community regarding credit and citation of scientific software.
In this project, the PIs will build upon their recent work developing high-quality deep learning algorithms for real-time analysis of time-series and image datasets, using FPGAs to accelerate low-latency inference of machine learning algorithms. The team will develop machine-learning-based acceleration tools focused on FPGAs for use within the LIGO and LHC experiments. The team's immediate goal is to take benchmark examples of LHC high-level trigger processing and LIGO gravitational-wave processing and construct demonstrators in each scenario. For these benchmarks, they aim to design and implement FPGA-based accelerators that can perform low-latency gravitational-wave identification and LHC event reconstruction. Additionally, the PIs aim to add the capability of graph-based neural network accelerators for FPGAs. The open-source tools developed as part of these activities will be readily shared with LIGO, the LHC, and the Large Synoptic Survey Telescope (LSST). The project will create an advisory group that includes members of large and small projects, members of the neutrino physics and multi-messenger astronomy communities, industry partners, computer scientists, and computational biologists, bringing together representatives of the different communities that will benefit from and can contribute to this work. The PIs will organize deep learning workshops and boot camps to train students and researchers on how to use and contribute to the framework, creating a wide network of contributors and developers across key science missions.
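To illustrate the kind of low-latency inference flow described above, the sketch below compiles a small neural network classifier into an FPGA firmware project. The abstract does not name a specific software package, so this example assumes the open-source hls4ml converter, which is widely used for this purpose in the LHC community; the model architecture, output directory, and FPGA part number are hypothetical placeholders, not project specifics.

```python
# Minimal sketch: converting a small Keras classifier to an FPGA HLS project
# with hls4ml (an assumption; the abstract does not name a specific tool).
import hls4ml
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A small fully connected classifier, e.g. for an LHC-trigger-like task
# (hypothetical sizes: 16 input features, 5 output classes).
model = Sequential([
    Dense(64, activation='relu', input_shape=(16,)),
    Dense(32, activation='relu'),
    Dense(5, activation='softmax'),
])

# Derive a per-model HLS configuration (fixed-point precision, parallelism).
config = hls4ml.utils.config_from_keras_model(model, granularity='model')

# Convert to an HLS project targeting a specific FPGA part; an HLS tool then
# synthesizes this project into low-latency firmware.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_prj',      # hypothetical output directory
    part='xcku115-flvb2104-2-i',  # example Xilinx part number
)
hls_model.compile()  # builds a bit-accurate C++ emulation for validation
```

In flows of this kind, the key design choices are the fixed-point precision of weights and activations and the degree of parallelism, which together trade inference latency against FPGA resource usage.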
This project is part of the National Science Foundation's Harnessing the Data Revolution Big Idea activity. The effort is jointly funded by the Office of Advanced Cyberinfrastructure.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.