The penetration of technologies such as wireless broadband and artificial intelligence (AI) is propelling rapid adoption of network cameras across the household, industrial, and commercial sectors. These cameras, such as surveillance cameras, dash cameras, and wearable cameras, can capture voluminous amounts of visual data that can be turned into valuable information for public safety, autonomous driving, service robots, augmented/mixed reality, assisted living, etc. To realize this potential, new methods are needed for efficiently and effectively extracting, transferring, and sharing useful information from ubiquitous cameras while preserving user privacy. This project uses techniques and perspectives from wireless networking, computer vision, and edge computing to analyze and solve the problems in ubiquitous camera systems, fosters interdisciplinary research, provides a unique training program for undergraduate and graduate students, and has a high potential to introduce transformative technologies that enable new real-life products and services.

This project aims to realize ubiquitous machine vision (UbiVision) and enable efficient utilization of networked cameras for information extraction and sharing. Toward this end, three fundamental research problems are investigated: 1) how to dynamically manage highly coupled resources and functions across multiple technology domains: camera functions, network resources, and computation resources on edge servers; 2) how to design adaptive and efficient machine vision algorithms for resource-constrained smart cameras; and 3) how to engineer reliable machine learning frameworks for robust vision analysis on edge servers. First, a new model-free, end-to-end resource orchestration method is designed to improve the efficiency of wireless networking and computing by combining the merits of conventional optimization and emerging machine learning techniques. Second, a novel universal convolutional neural network (CNN) and corresponding CNN optimization methods are developed for efficient multi-task feature learning on smart cameras. Third, a teacher-student network learning paradigm is introduced to develop memory- and computation-efficient machine vision algorithms that achieve robust performance under adverse conditions caused by varying network conditions and limited server computation budgets.
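The abstract does not give the project's exact formulation of the teacher-student paradigm, but the standard technique it names, knowledge distillation, trains a compact student network against the softened outputs of a larger teacher. The sketch below (plain NumPy; the function names and the blending weight `alpha` are illustrative assumptions, not the project's code) shows the usual loss: a temperature-scaled KL term toward the teacher plus a cross-entropy term on the ground-truth labels.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Standard knowledge-distillation objective (illustrative sketch):
    alpha * T^2 * KL(teacher || student) + (1 - alpha) * cross-entropy."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL divergence from the softened teacher to the softened student
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    # T^2 factor keeps gradient magnitudes comparable across temperatures
    soft_term = (temperature ** 2) * kl.mean()
    # ordinary cross-entropy on the hard ground-truth labels
    p = softmax(student_logits)
    hard_term = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft_term + (1 - alpha) * hard_term
```

A student whose logits already match the teacher's incurs only the hard-label term; any disagreement adds a positive KL penalty, which is what pushes the small on-camera model toward the larger server-side model's behavior.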

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1910844
Program Officer: Murat Torlak
Budget Start: 2019-10-01
Budget End: 2022-09-30
Fiscal Year: 2019
Total Cost: $419,794
Name: University of North Carolina at Charlotte
City: Charlotte
State: NC
Country: United States
Zip Code: 28223