#405 Safety Assurance System Utilizing Visual Attention for Advanced Driver-Assistance Systems

Principal Investigator: Yorie Nakahira
Start Date: July 1, 2022
End Date: June 30, 2023
Research Type:
Grant Type:
Grant Program: FAST Act - Mobility National (2016 - 2022)
Grant Cycle: 2022 Mobility21 UTC


We will apply our results from neuroscience and safe control to improve driver-assistance technology as follows. First, we will study the use of visual attention information to detect risks early, before a driver's failure to detect risk-critical obstacles can be identified from their control actions and the vehicle state. Second, we will find safer control actions in scenarios where conventional methods, which are unnecessarily conservative, fail to find feasible safe intervention strategies.
In this proposal, we will explore the application of our parallel projects on neuroscience and safe control theory to driver-assistance technology and, ultimately, autonomous driving technology. This application is expected to improve current driver-assistance technology in the following two respects.
Conventional driver assistance is triggered by the vehicle state. In contrast, we will study the use of visual attention information to detect risky events before they are reflected in the control action or vehicle state. Human drivers must first identify risk-critical obstacles before they can change their control actions, which is subject to sensorimotor delay, and the control action in turn needs time to be reflected in the vehicle's state. This suggests that visual attention can be used to detect risky events much earlier than the vehicle state can. We aim to achieve early intervention by driver-assistance technology in risky events by estimating, from drivers' visual attention trajectories, whether they have failed to recognize risk-critical objects. Conventionally, such information was believed to be detectable only from brain signals, whose measurement requires sensors too expensive for standard driver-assistance systems. However, recent advances in neuroscience suggest that this information may also be contained in eye movements. Our parallel project with experts in neuroscience (Dr. Minoru Nakayama, Dr. Takeshi Sakurada) and automobile control (Dr. Takanori Fukao) explores metrics that quantify, from eye movement measurements alone, whether human drivers have recognized certain risk-critical objects. The proposed project will study the use of these metrics to enable early detection of risky events in driver-assistance technology.
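To illustrate the kind of attention-based check involved (this is a minimal hypothetical sketch, not the project's actual metric, which is still under development), the code below flags an object as unrecognized when the driver's gaze trajectory accumulates too little dwell time on it. The sampling rate, dwell threshold, and bounding-box representation are all placeholder assumptions.

```python
import numpy as np

def recognized(gaze_xy, obj_box, dt=1.0 / 60, min_dwell=0.1):
    """Heuristic: the driver 'recognized' the object if cumulative gaze
    dwell time inside its bounding box (x0, y0, x1, y1) reaches
    min_dwell seconds. gaze_xy is an (N, 2) array of gaze samples."""
    x0, y0, x1, y1 = obj_box
    inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
              (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    return bool(inside.sum() * dt >= min_dwell)

# Hypothetical example: 30 gaze samples at 60 Hz sweeping toward a
# pedestrian's bounding box in image coordinates; the gaze only grazes
# the box, so the sketch would flag a missed detection.
gaze = np.column_stack([np.linspace(200, 420, 30),
                        np.linspace(300, 310, 30)])
missed = not recognized(gaze, (400, 290, 480, 330))
```

In a real system, such a flag would be one input to the estimator rather than a decision rule on its own, since recognition can occur through peripheral vision without a direct fixation.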

Driver-assistance technology and safe planning techniques for autonomous vehicles usually rely on conventional safe control methods that are unnecessarily conservative. These methods attempt to guarantee safety under all possible worst-case scenarios and thus often cannot find feasible solutions in risky scenarios. For example, human drivers can merge into the gap between two vehicles in close proximity, while such methods cannot find safe trajectories for doing so. Motivated by the need for less conservative safe control methods, we have developed a set of tools efficient enough to compute the safe probability and/or find control actions with high safe probability in real time. These methods adopt a stochastic framework and find safer actions even when conventional safe control ends in deadlock or infeasibility. This proposal will explore the application of these methods to driver-assistance technology and safe planning for autonomous vehicles.
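To make the contrast concrete, the toy sketch below shows one simple instantiation of the stochastic view (our illustrative assumption, not the project's actual algorithm): it estimates the safe probability of each candidate action by Monte Carlo rollouts of noisy dynamics and picks the candidate with the highest estimate. The dynamics, noise level, horizon, and safety set are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def safe_probability(x0, u, f, is_safe, horizon=20, n_samples=200, sigma=0.1):
    """Monte Carlo estimate of the probability that the trajectory
    x+ = f(x, u) + noise stays in the safe set for `horizon` steps."""
    n_safe = 0
    for _ in range(n_samples):
        x = x0.copy()
        ok = True
        for _ in range(horizon):
            x = f(x, u) + sigma * rng.standard_normal(x.shape)
            if not is_safe(x):
                ok = False
                break
        n_safe += ok
    return n_safe / n_samples

# Toy 1-D placeholder: the state is the gap to a lead vehicle, the
# control changes the gap each step, and "safe" means the gap stays
# positive.
f = lambda x, u: x + 0.1 * u
is_safe = lambda x: x[0] > 0.0
x0 = np.array([1.0])
candidates = [np.array([-1.0]), np.array([0.0]), np.array([2.0])]
best = max(candidates, key=lambda u: safe_probability(x0, u, f, is_safe))
```

Rather than rejecting every action that could fail in some worst case, this view ranks actions by estimated safe probability, so a "safest available" action exists even in scenarios where no action is safe under all disturbances.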

The milestones of the proposed project are modeling, experiments, and verification. We aim to complete these milestones in months 1-6, months 3-8, and months 6-12, respectively.
Deployment Plan
Industry partnership: We will jointly investigate this project with Jin Ge (Toyota Research Institute). She leads research projects on the Toyota Guardian Driver Assist System, which we will use as a source of use cases. The algorithms developed in Nakahira's group will also be made publicly available so that more researchers and engineers can build upon our framework.

Educational outreach: PI Nakahira teaches two courses, fundamentals of control and autonomous control systems. The insights obtained in this project will be disseminated in these courses as motivating examples to boost students' interest in learning control techniques and the human sensorimotor control system.

Expected Accomplishments and Metrics
By quantifying visual attention for safe driving, we expect to improve the safety features of driver-assistance technology by enabling earlier intervention. This early intervention is achieved by estimating, from drivers' visual attention trajectories, whether they have failed to recognize risk-critical objects. Additionally, by applying our recent stochastic safe control techniques, we expect the driver-assistance system and/or autonomous vehicle controller to find safer actions in highly risky scenarios where conventional safe control methods would yield infeasible solutions. Ultimately, these technologies can lead to affordable and safe driver-assistance and autonomous driving technology.

Individuals Involved

Email Name Affiliation Role Position
jbelke@andrew.cmu.edu Belke, Jordan Carnegie Mellon University Other Other
ksgeiger@andrew.cmu.edu Geiger, Kristen Carnegie Mellon School of Engineering Other Other
shirado@cmu.edu Shirado, Hirokazu CMU Other Faculty - Untenured, Tenure Track
mkoeske@cmu.edu Koeske, Matt CMU Other Staff - Business Manager
ynakahir@andrew.cmu.edu Nakahira, Yorie Carnegie Mellon School of Engineering PI Faculty - Untenured, Tenure Track


Amount of UTC Funds Awarded
Total Project Budget (from all funding sources)


Type Name Uploaded
Data Management Plan DMP_statement.pdf Nov. 19, 2021, 8:43 a.m.
Data Management Plan 2022_03_Data_management_plan.docx March 4, 2022, 7:49 a.m.
Data Management Plan Yorie_Mobility21_Data_Management_Plan_Revised.docx March 8, 2022, 11:21 a.m.
Publication P1_Myopically_Verifiable_Probabilistic_Certificate_for_Long-term_Safety.pdf Sept. 29, 2022, 8 a.m.
Publication P2_Adaptive_Safe_Control_for_Driving_in_Uncertain_Environments.pdf Sept. 29, 2022, 8 a.m.
Publication P3_Probabilistic_safety_certificate_for_multi_agent_systems.pdf Sept. 29, 2022, 8 a.m.
Publication P4_ACC23__Scalable_Safe_Control.pdf Sept. 29, 2022, 8 a.m.
Publication P5_2023_AAAI__game_theory_and_safety.pdf Sept. 29, 2022, 8 a.m.
Presentation Presentation_1_American_control_conference.pdf Sept. 29, 2022, 8 a.m.
Presentation Presentation_2_MIT.pdf Sept. 29, 2022, 8 a.m.
Presentation Presentation_3_Intelligent_vehicle_symposium.pdf Sept. 29, 2022, 8 a.m.
Presentation Presentation_4_Keio_university_22-06.pdf Sept. 29, 2022, 8 a.m.
Presentation Presentation_5_Kyoto_University_22-06.pdf Sept. 29, 2022, 8 a.m.
Presentation Presentation_6_Tokyo_institute_of_technology_22-06.pdf Sept. 29, 2022, 8 a.m.
Presentation presentation_7_Tokyo_university_22-06.pdf Sept. 29, 2022, 8 a.m.
Progress Report 405_Progress_Report_2022-09-30 Sept. 29, 2022, 8 a.m.

Match Sources

No match sources!


Name Type
Toyota Research Institute Deployment Partner