Neural networks (NNs) have led to many recent successes in machine learning (ML). However, this success comes at a prohibitive cost: to obtain better ML models, larger and larger NNs need to be trained and deployed. This is a problem for mobile ML applications, where model training and inference must be carried out in a timely fashion on computation-, communication-, and energy-limited platforms. Such applications run on handheld devices, drones, and edge infrastructure, and they introduce new challenges: the heterogeneity of edge networks, the unreliability of mobile devices, the computational and energy restrictions on such devices, and the communication bottlenecks in wireless networks. This project will address these challenges by investigating a new paradigm for computation- and communication-light, energy-limited distributed NN learning. Success in this project will produce fundamental ideas and tools that make mobile distributed learning practical. Further, the project will generate courses and open-education resources that can attract diverse groups of students.

The specific idea investigated is a new class of distributed NN training algorithms, called independent subnetwork training (IST). IST decomposes a NN into a set of independent subnetworks. Each of those subnetworks is trained on a different device, for one or more backpropagation steps, before a synchronization step. Updated subnetworks are sent from edge devices to the parameter server for reassembly into the original NN, before the next round of decomposition and local training. Because the subnetworks share no parameters, synchronization requires no aggregation: it is just an exchange of parameters. Moreover, each of the subnetworks is a fully operational classifier by itself; no synchronization pipelines between subnetworks are required. Key benefits of the proposed IST are that: i) IST assigns fewer training parameters to each mobile node per iteration, significantly reducing the communication overhead, and ii) each device trains a much smaller model, resulting in lower computational cost and energy consumption. Thus, there is good reason to expect that IST will scale much better than classic training algorithms for mobile applications. The project will investigate how to incorporate/extend IST to various NN architectures, develop new theories that explain the efficiency of IST, and unify theory with practice by proposing hardware-level system implementations that scale up and out for mobile applications. A minimal sketch of the idea appears below.
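To make the decompose/train/reassemble cycle concrete, here is a minimal, hypothetical sketch of one IST round for a one-hidden-layer MLP, assuming the hidden units are partitioned into disjoint groups, one per simulated device. The function names (decompose, local_train) and all hyperparameters are illustrative assumptions, not taken from the project's actual system.

```python
# Hypothetical sketch of one IST round, assuming a one-hidden-layer MLP
# whose hidden units are split into disjoint groups (one per device).
import torch

d_in, d_hidden, d_out, n_devices = 20, 64, 3, 4

# Global model parameters held at the parameter server:
# W1 (d_hidden x d_in), W2 (d_out x d_hidden).
W1 = torch.randn(d_hidden, d_in) * 0.1
W2 = torch.randn(d_out, d_hidden) * 0.1

def decompose(W1, W2, n_devices):
    """Randomly partition hidden units into disjoint groups; each group,
    with its incoming weights (rows of W1) and outgoing weights (columns
    of W2), forms an independent, fully operational subnetwork."""
    groups = torch.randperm(d_hidden).chunk(n_devices)
    return [(idx, W1[idx].clone(), W2[:, idx].clone()) for idx in groups]

def local_train(w1, w2, x, y, steps=5, lr=0.1):
    """Train one subnetwork independently for a few SGD steps,
    standing in for the local backpropagation done on one device."""
    w1, w2 = w1.requires_grad_(True), w2.requires_grad_(True)
    for _ in range(steps):
        logits = torch.relu(x @ w1.T) @ w2.T
        loss = torch.nn.functional.cross_entropy(logits, y)
        g1, g2 = torch.autograd.grad(loss, (w1, w2))
        with torch.no_grad():
            w1 -= lr * g1
            w2 -= lr * g2
    return w1.detach(), w2.detach()

# One IST round (toy data shared by all devices for brevity; in practice
# each device would use its own local data). Because the groups are
# disjoint, reassembly is a pure parameter exchange: updated rows/columns
# are written back with no averaging or other aggregation.
x, y = torch.randn(128, d_in), torch.randint(0, d_out, (128,))
for idx, w1, w2 in decompose(W1, W2, n_devices):
    w1, w2 = local_train(w1, w2, x, y)
    W1[idx], W2[:, idx] = w1, w2
```

The sketch highlights why IST reduces both costs: each device touches only 1/n_devices of the hidden layer's parameters (less communication per round) and runs backpropagation through a proportionally smaller model (less computation and energy).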

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant
Application #: 2003137
Program Officer: Balakrishnan Prabhakaran
Budget Start: 2020-06-01
Budget End: 2023-05-31
Fiscal Year: 2020
Total Cost: $300,000
Name: Rice University
City: Houston
State: TX
Country: United States
Zip Code: 77005