Project

#156 Sensory Augmentation for Increased Awareness of Driving Environment


Principal Investigator
John Dolan
Status
Completed
Start Date
Jan. 1, 2014
End Date
Jan. 1, 2015
Research Type
Advanced
Grant Type
Research
Grant Program
MAP-21 TSET - Tier 1 (2012 - 2016)
Grant Cycle
2012 TSET UTC
Visibility
Public

Abstract

The goals of this project are to extend the state of the art of vehicle perception systems for use in roadway traffic and to develop systems that can model and predict the actions of multiple simultaneous road users, so as to identify potentially hazardous situations before they can turn into accidents. We propose augmenting vehicles with sensors and processing capabilities to perceive obstacles (both static and dynamic), predict how those obstacles might move over time, identify locations where unseen hazards might appear, and continually evaluate these predictions to determine the possibility that an unsafe condition might occur in the immediate future. While the Urban Challenge focused on fully autonomous vehicles, similar perception systems could also be deployed in manually driven cars to alert the human driver when an unsafe road condition is approaching. We use behavioral models of traffic to identify the perceived intent of nearby vehicles, use those intent models to predict the most likely future positions of those vehicles, and determine whether a potentially unsafe condition may arise in the near future.

For a vehicle to automatically predict unsafe situations that may occur in traffic, it must successfully detect, track, model, and predict the motions of other moving objects (e.g., cars, bicyclists, and pedestrians) in its surroundings. We will develop novel and robust approaches for the modeling and prediction of vehicle motion (Figure 4). Once each object's intent has been identified, these behavior models can be used to predict a series of "idealized" potential future trajectories. Each of these future trajectories is weighted by the likelihood of it occurring as well as by its potential to cause an unsafe traffic condition. With this information, our proposed perception system will be able to continuously measure the risk of the current situation turning into an unsafe situation that could end up causing an accident. Similarly, by reasoning about the known roadmap on which the vehicles and other road users are operating and evaluating the visible and blind areas of the sensors (e.g., blind corners and obstacles blocking sensors' views), the perception system can alert the driver that they are approaching a potentially unsafe situation in time to take appropriate action (e.g., slow down, change lanes, pull over, etc.).
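The trajectory-weighting idea above can be expressed compactly: the risk of a scene is the sum, over all predicted trajectories, of each trajectory's likelihood times its hazard potential. The sketch below is illustrative only; the `Trajectory` structure and field names are assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    """A hypothetical predicted future path for one tracked road user."""
    probability: float  # likelihood of this trajectory occurring
    hazard: float       # potential (0..1) to cause an unsafe condition

def situation_risk(trajectories):
    """Weight each predicted trajectory by its likelihood and its
    hazard potential, then accumulate into a single scene-level risk."""
    return sum(t.probability * t.hazard for t in trajectories)

# Example: a likely, benign lane-keep hypothesis and an unlikely,
# dangerous cut-in hypothesis for the same tracked vehicle.
risk = situation_risk([
    Trajectory(probability=0.8, hazard=0.05),
    Trajectory(probability=0.2, hazard=0.90),
])
print(risk)  # 0.8*0.05 + 0.2*0.9 = 0.22
```

A monitoring loop would recompute this value as tracks update, raising a driver alert when the risk crosses a threshold.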
Description
Sensory Augmentation for Increased Awareness of Driving Environment (DOT Goal: Safety; Topic: Technology-Related Research)
 
 
 
Figure 4. Various kinds of automobiles and bicyclists detected.
 
Desired Outcomes and Metrics

Year 1:
(a) Integrate visual detection/tracking algorithms.
(b) Train a suite of visual object classification and tracking models that extends to the full set of road users (cars, motorcycles, bicycles, and pedestrians).
(c) Develop probabilistic behavior models for each object type, covering the multiple intents that can arise in different traffic situations.

Year 2:
(a) A classifier that evaluates multiple potential intents for each category of tracked object and identifies, over a short window of trajectory history, the target's most likely intent.
(b) A classifier capable of reasoning over the set of observed intents to determine the likelihood of a hazardous situation and report the best mitigation strategy to avoid the hazard.
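The Year 2(a) deliverable, identifying a target's most likely intent from a short window of trajectory history, can be sketched as a maximum-likelihood comparison across per-intent models. The intent names, the choice of lateral velocity as the feature, and the Gaussian parameters below are all illustrative assumptions, not the project's actual models.

```python
import math

# Hypothetical per-intent models: each intent predicts a Gaussian
# distribution over observed lateral velocity (mean, std in m/s).
INTENT_MODELS = {
    "lane_keep":   (0.0, 0.1),
    "lane_change": (0.5, 0.2),
}

def log_likelihood(observations, mean, std):
    """Log-likelihood of a window of observations under one Gaussian model."""
    return sum(
        -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))
        for x in observations
    )

def most_likely_intent(lateral_velocities):
    """Score every intent model over the trajectory window and return
    the intent with the highest log-likelihood."""
    return max(
        INTENT_MODELS,
        key=lambda intent: log_likelihood(lateral_velocities, *INTENT_MODELS[intent]),
    )

# A window of lateral velocities drifting toward the lane boundary
# is scored far higher by the lane-change model than the lane-keep model.
print(most_likely_intent([0.3, 0.4, 0.5, 0.6]))   # lane_change
print(most_likely_intent([0.0, 0.05, -0.05, 0.0]))  # lane_keep
```

The Year 2(b) hazard classifier would then consume these per-object intent estimates, much as the risk-weighting described in the abstract consumes predicted trajectories.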
 
Capabilities and Experience Lead: Dr. Paul Rybski (CMU). Dr. Rybski was the perception lead on Carnegie Mellon's first-place entry in the DARPA Urban Challenge, and he continues his research into automated vehicle perception through collaborative efforts sponsored by General Motors. He has been very active on several projects focused on perception for automated intent recognition (recognizing the intents of both humans and vehicles), sponsored by DARPA, L3Com, NSF, and, most recently, Draper Labs.

This research is funded in part by the U.S. Department of Transportation's University Transportation Centers Program.
Timeline
February 2012 - December 2013    
Deployment Plan
http://utc.ices.cmu.edu/utc/Rybski%20project%20description.pdf     
Expected Accomplishments and Metrics
-    

Individuals Involved

Name: Dolan, John
Email: jmd@cs.cmu.edu
Affiliation: General Motors
Role: PI
Position: Faculty - Tenured

Budget

Amount of UTC Funds Awarded
$217,859.00
Total Project Budget (from all funding sources)
$217,859.00

Documents

Type Name Uploaded
Final Report Sensory_Augmentation_for_Increased_Awareness_of_Driving_Environment_CEPUJLB.pdf March 21, 2018, 8:27 a.m.
Publication Continuous behavioral prediction in lane-change for autonomous driving cars in dynamic environments April 19, 2021, 6:56 a.m.
Publication DLT-Net: Joint detection of drivable areas, lane lines, and traffic objects April 19, 2021, 6:58 a.m.

Match Sources

No match sources!

Partners

No partners!