The automotive industry finds itself at a crossroads. Advances in MEMS sensor technology, the emergence of embedded control software, rapid progress in computing, digital image processing, machine learning, and control algorithms, along with ever-increasing investment in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies, are about to revolutionize the way we use vehicles and commute in everyday life. Automotive active safety systems, in particular, have been used with enormous success over the past 50 years and have helped keep traffic accidents in check. Still, more than 30,000 deaths and 2,000,000 injuries occur each year in the US alone, and many more worldwide. The economic impact of traffic accidents in the US is estimated to be as high as $300B per year. Further improvement in driving safety (and comfort) requires that the next generation of active safety systems be proactive, rather than merely reactive, and be able to comprehend and interpret driver intent. Future active safety systems will have to account for the diversity of drivers' skills, the behavior of drivers in traffic, and the overall traffic conditions.

This research aims to improve the current capabilities of automotive active safety control systems (ASCS) by taking into account the interactions between the driver, the vehicle, the ASCS, and the environment. Beyond solving a fundamental problem in the automotive industry, this research will have ramifications in other cyber-physical domains where humans manually control vehicles or equipment, including aviation, heavy-machinery operation, mining, tele-robotics, and robotic medicine. Making autonomous or automated systems feel and behave "naturally" to human operators is not always easy. As these systems and machines take part in ever more everyday interactions with humans, the need to make them operate in a predictable manner is more urgent than ever.

To achieve these goals, the project will estimate the driver's cognitive state and adapt the ASCS accordingly, so that the system operates seamlessly with the driver. Specifically, new methodologies will be developed to infer the long-term and short-term behavior of drivers, using Bayesian networks and neuromorphic algorithms to estimate the driver's skill and current state of attention from eye-movement data, together with dynamic motion cues obtained from steering and pedal inputs. This information will be injected into the ASCS to enhance its performance, taking advantage of recent results from the theory of adaptive and real-time model-predictive optimal control. Choosing the correct level of autonomy and workload distribution between the driver and the ASCS will ensure that no conflicts arise between the driver and the control system, and that safety and passenger comfort are not compromised. A comprehensive experimental plan will test and validate the developed theory by collecting measurements from several human subjects while they operate a virtual-reality driving simulator.
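The abstract describes the information flow but, naturally, gives no implementation details. As a purely illustrative sketch, the fragment below shows one way a discrete Bayesian (HMM-style) filter could fuse coarse eye-gaze and steering features into an attention estimate that then shifts control authority between the driver and the ASCS. All state names, probabilities, feature discretizations, and the blending rule are assumptions made here for illustration; they are not part of the funded work.

```python
# Illustrative sketch only: a two-state Bayesian filter over driver attention,
# fed by discretized gaze and steering features, whose posterior scales the
# authority shared between the driver's command and the ASCS command.
import numpy as np

STATES = ("attentive", "distracted")

# Assumed state-transition model: attention tends to persist between samples.
TRANSITION = np.array([[0.95, 0.05],   # attentive  -> attentive, distracted
                       [0.10, 0.90]])  # distracted -> attentive, distracted

# Assumed observation models: P(feature value | hidden state).
# gaze_on_road in {0: low, 1: high}; steering in {0: smooth, 1: erratic}.
P_GAZE = np.array([[0.15, 0.85],   # attentive:  low, high gaze-on-road fraction
                   [0.70, 0.30]])  # distracted
P_STEER = np.array([[0.80, 0.20],  # attentive:  smooth, erratic steering
                    [0.35, 0.65]]) # distracted

def update_belief(belief, gaze_obs, steer_obs):
    """One predict/update step of the discrete Bayes filter."""
    predicted = TRANSITION.T @ belief                       # predict
    likelihood = P_GAZE[:, gaze_obs] * P_STEER[:, steer_obs]
    posterior = likelihood * predicted                      # update
    return posterior / posterior.sum()                      # normalize

def blend_control(u_driver, u_ascs, p_attentive):
    """Shift authority toward the ASCS as estimated attention drops."""
    return p_attentive * u_driver + (1.0 - p_attentive) * u_ascs

if __name__ == "__main__":
    belief = np.array([0.5, 0.5])  # uninformative prior over driver state
    # Synthetic observation stream: (gaze_on_road, steering) pairs.
    for gaze, steer in [(1, 0), (1, 0), (0, 1), (0, 1), (0, 1)]:
        belief = update_belief(belief, gaze, steer)
        u = blend_control(u_driver=0.4, u_ascs=0.1, p_attentive=belief[0])
        print(f"P(attentive)={belief[0]:.2f}  blended steering cmd={u:.3f}")
```

In the actual project, the driver-state estimate would more plausibly adapt the cost weights or constraints of the model-predictive controller rather than a simple convex blend of commands; the sketch only illustrates how an inferred attention state can modulate the ASCS online.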

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1544814
Program Officer: Ralph Wachter
Project Start:
Project End:
Budget Start: 2015-09-15
Budget End: 2021-03-31
Support Year:
Fiscal Year: 2015
Total Cost: $576,001
Indirect Cost:
Name: Georgia Tech Research Corporation
Department:
Type:
DUNS #:
City: Atlanta
State: GA
Country: United States
Zip Code: 30332