Moving data, computation, and control into the cloud has been a significant trend in the past decade. However, the Internet of Things (IoT) is generating an unprecedented volume and variety of data; by the time that data reaches the cloud for analysis and decision making, the opportunity to act on it may be gone. Moving computation and analytics from the cloud to the wireless edge makes it possible to meet application delay requirements, improves the scalability and energy efficiency of IoT devices, and mitigates the traffic burden on the network. Machine learning (ML) at the wireless-network edge, referred to as edge ML, leverages local computing power and uses edge devices (e.g., smartphones) as edge servers to provide intelligent services (e.g., control and decision making). However, enabling ML at the network edge introduces fundamental new challenges in the joint design of model training/inference and communication under latency and privacy constraints. Although the project is aimed at addressing these fundamental problems, it has the potential for long-term broader impacts on basic science, education, and technology. It brings together expertise from computing, communication, and machine learning to enable artificial intelligence in the Internet of Things. It also fosters collaboration between industry (e.g., Intel) and academia and thereby facilitates technology transfer.

The ideal place to analyze most IoT data appears to be near the devices that produce and act on that data. This calls for distributed, low-latency, and reliable edge ML. The project addresses ML over data available at the wireless edge in two high-level scenarios: (i) federated learning, and (ii) collaborative training and inference for network-level intelligence. Several challenges must be addressed to support edge ML, especially real-time ML over limited wireless bandwidth: (i) training data is unevenly distributed over a large number of edge devices; (ii) each edge device has access to only a tiny fraction of the data, so not only training but also inference must be carried out collaboratively; and (iii) sharing partially computed models or data across the network raises privacy issues. In edge ML, therefore, neural-network model design, training, and inference are entangled with both wireless communication and on-device resource constraints. This project will leverage techniques such as random linear coding, nomographic function representation, and analog joint source-channel coding to develop distributed ML frameworks that are tailored to the desired edge ML tasks and cognizant of constraints at the edge.
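To make the federated-learning scenario concrete, the sketch below shows federated averaging (FedAvg), the standard baseline in which each client updates a shared model on its local data and a server averages the updates weighted by local dataset size. This is an illustrative toy (least-squares model, synthetic non-IID data, function names invented here), not the project's actual algorithm.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=1):
    """One client's local update: full-batch gradient steps on a
    least-squares loss (illustrative stand-in for local training)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server step: average client models, weighted by local data size.
    Only model parameters are shared, never the raw local data."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    return sum(n * w for n, w in zip(sizes, updates)) / sizes.sum()

# Toy non-IID setup: each client observes a different region of
# feature space, but all data follows the same true linear model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for shift in (-1.0, 0.0, 1.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
# After enough rounds, w approaches w_true.
```

Note that the server only needs the weighted *sum* of client updates, which is exactly the kind of nomographic function that over-the-air (analog) aggregation schemes mentioned above aim to compute directly through wireless superposition instead of decoding each client's message separately.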

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 2003002
Program Officer: Murat Torlak
Project Start:
Project End:
Budget Start: 2020-07-01
Budget End: 2023-06-30
Support Year:
Fiscal Year: 2020
Total Cost: $210,000
Indirect Cost:
Name: Georgia Tech Research Corporation
Department:
Type:
DUNS #:
City: Atlanta
State: GA
Country: United States
Zip Code: 30332