Road accidents cause approximately 1.35 million fatalities worldwide each year. Automated driving and driver-assist technologies can greatly improve vehicular safety. However, ensuring that these technologies are dependable across a broad set of unusual traffic situations remains a key challenge. To be socially acceptable, they must match or exceed human driving safety levels (e.g., 100 million miles between fatalities). Today, each autonomous vehicle relies on its own sensors to perceive the environment and make independent driving decisions. Unfortunately, these sensors are limited to line-of-sight perception, and a vehicle's view can be blocked by other vehicles.
With advanced wireless communication technologies such as 5G and Qualcomm's Cellular-V2X, vehicles will be able to share their sensor data with other vehicles, either directly or through roadside infrastructure, so that vehicles can effectively see through obstacles, a capability we call network-enabled cooperative perception. While network-enabled cooperative perception is a compelling technology, the network remains a fundamental bottleneck: today's vehicular communication technologies achieve about 6-10 Mbps in practice, yet advanced vehicular sensors generate hundreds of Mbps of raw data. This project seeks to resolve the tension between the richness of raw sensor data and the network bottleneck, while scaling network-enabled cooperative perception to extremely dense, multi-modal traffic (pedestrians, bicycles, three-wheelers, trucks, cars, etc.) in which not all participants may be sensor-equipped.
The project will develop abstractions, algorithms, and tools for network-enabled cooperative perception at scale, building upon an abstraction called a glimpse, which is a processed representation of a part of a vehicle's sensor view. Glimpses can represent individual objects within the view, or a grid in 3D space, and can be represented at different granularities that trade off bandwidth for detail. Given this abstraction, the project will develop: methods to determine, in a complex and highly dynamic traffic setting, which glimpse representations are needed for which vehicles and by when; scheduling algorithms to coordinate the transmission of these glimpses to vehicles while respecting channel capacity constraints; methods to train machine learning models to make control decisions using glimpse-enhanced composite views; and techniques to ensure robustness to glimpse poisoning.
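To make the bandwidth-versus-detail trade-off concrete, the sketch below models a glimpse with a few representative granularity levels and picks the finest one that fits a byte budget derived from the channel rate and a delivery deadline. This is a minimal illustration under assumptions of our own: the granularity names, payload sizes, and the `finest_affordable` selection rule are hypothetical, not part of the project's actual design.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical granularity levels for a glimpse; names and byte
# estimates are illustrative assumptions, not from the project.
class Granularity(Enum):
    BOUNDING_BOX = 1   # coarse: 3D box + velocity of one object
    VOXEL_GRID = 2     # medium: occupancy grid over a region
    POINT_CLOUD = 3    # fine: raw LiDAR points in the region

# Rough per-glimpse payload sizes in bytes (illustrative).
PAYLOAD_BYTES = {
    Granularity.BOUNDING_BOX: 64,
    Granularity.VOXEL_GRID: 8_000,
    Granularity.POINT_CLOUD: 400_000,
}

@dataclass
class Glimpse:
    region_id: str            # which part of the sender's view this covers
    granularity: Granularity
    deadline_ms: float        # latest useful delivery time for the receiver

    def size_bytes(self) -> int:
        return PAYLOAD_BYTES[self.granularity]

def finest_affordable(region_id: str, deadline_ms: float,
                      budget_bytes: int) -> Optional[Glimpse]:
    """Pick the most detailed representation that fits the byte budget."""
    for g in (Granularity.POINT_CLOUD, Granularity.VOXEL_GRID,
              Granularity.BOUNDING_BOX):
        if PAYLOAD_BYTES[g] <= budget_bytes:
            return Glimpse(region_id, g, deadline_ms)
    return None  # even the coarsest form does not fit

# Example: a 6 Mbps link with a 100 ms deadline allows ~75,000 bytes,
# enough (under these assumptions) for a voxel grid but not a point cloud.
budget = int(6_000_000 / 8 * 0.1)   # bits/s -> bytes over 100 ms
g = finest_affordable("lane-2-ahead", 100.0, budget)
print(g.granularity)  # Granularity.VOXEL_GRID
```

A real scheduler would choose representations jointly across many vehicles and regions under shared channel capacity, which is the coordination problem the project's scheduling algorithms address.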
Beyond the societal benefits of reliable autonomous driving enabled by network-enabled cooperative perception, the project will incorporate research results into curricula. Participants will mentor undergraduates and contribute to efforts to broaden participation in computing, specifically seeking to expose students from under-represented groups in the Crenshaw area of South-Central Los Angeles and in the Montebello Unified School District to exciting topics in computing. Collaboration with General Motors will ease the path toward technology transfer and will expose PhD students to topics relevant to automotive systems.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.