Self-driving vehicles and mobile robots have the potential to deliver transformative technological and societal changes. To make autonomous decisions, nodes need a reasonable degree of situational awareness, achieved through recognition and tracking of entities in dynamic environments. This may not be possible in partially occluded environments, where individual nodes may have limited visibility, unless nodes participate in collaborative sensing, i.e., share sensed information. Sharing raw or processed real-time sensing data with centralized resources in the cloud or at the network edge poses potentially high communication and computational burdens, particularly in safety-critical settings requiring low latency. This motivates the study of distributed collaborative sensing frameworks that leverage powerful algorithms for tracking and deep learning models for reliable recognition/classification tasks. Of particular interest is a characterization of what collaborating sensors can ``see'' in occluded environments and how one should realize information sharing in resource-constrained settings to fairly optimize what nodes ``know'', i.e., their situational awareness. The proposed research effort will advance the state of the art in collaborative sensing systems and is expected to benefit the field and society more broadly through planned efforts in education innovation, achieving diversity, engaging the community and industry, and disseminating results to the wider public.

This proposal centers on the study of collaborative sensing in obstructed/dynamic environments, such as might be used to enable self-driving vehicles and autonomous robots. The central challenge is to achieve an unprecedented level of real-time situational awareness based on distributed sensing resources in a possibly communication- and/or computation-constrained setting. The proposed research integrates three research thrusts. The first is the advancement of the fundamental understanding of what is visible to sets of distributed sensing units in stochastic environments. This work will leverage stochastic geometric models and analysis to provide robust quantitative performance assessment of ``visibility'' for typical random environments. The performance limits determined in this research thrust will inform what a distributed system can ``know'' in resource-constrained settings. The second thrust is the development of fundamental underpinnings of distributed collaborative sensing, with a focus on the optimization of interactive information sharing and/or adaptation to changing environmental contexts so as to jointly maximize situational awareness amongst autonomous yet collaborating nodes. We will provide new approaches driven by structural properties of the optimization problems (e.g., submodularity) and interactive information sharing protocols to facilitate distributed object recognition and tracking. The third thrust is the development of a scaled-down platform for controlled and reproducible experimentation with alternative collaborative sensing system designs. This last thrust not only provides a platform to advance the research but also serves to engage a substantial number of undergraduates and acts as a springboard for our educational efforts.
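To make concrete how the submodularity mentioned in the second thrust could be exploited, the following is a minimal, hypothetical Python sketch; it is not the proposal's actual algorithm, and all names and the toy environment are illustrative assumptions. Joint visibility coverage (the set of cells seen by at least one selected sensor) is a monotone submodular set function, so greedy selection under a cardinality constraint (e.g., a communication budget) attains the classic (1 - 1/e) approximation guarantee of Nemhauser, Wolsey, and Fisher.

```python
import random

def greedy_select(coverage, budget):
    """Greedily pick up to `budget` sensors maximizing jointly covered cells.

    coverage -- dict mapping sensor id -> set of cell ids it can see
    budget   -- maximum number of sensors allowed to share information
    """
    chosen, covered = [], set()
    for _ in range(budget):
        # Marginal gain of each remaining sensor, given cells already covered.
        gains = {s: len(c - covered) for s, c in coverage.items() if s not in chosen}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] == 0:
            break  # no remaining sensor adds new visibility
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

if __name__ == "__main__":
    random.seed(0)
    # Toy occluded environment: each of 10 sensors sees a random 15-cell
    # subset of a 100-cell region (a stand-in for occlusion patterns).
    cov = {s: set(random.sample(range(100), 15)) for s in range(10)}
    picked, seen = greedy_select(cov, budget=4)
    print(f"selected sensors {picked}, covering {len(seen)} of 100 cells")
```

The key design point is that greedy selection only ever evaluates marginal coverage gains, which is precisely the quantity submodularity makes meaningful and which distributed nodes could estimate locally.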

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2018-08-15
Budget End: 2021-07-31
Fiscal Year: 2018
Total Cost: $450,000
Name: University of Texas Austin
City: Austin
State: TX
Country: United States
Zip Code: 78759