Project

#301 Some Core Techniques for Safe Autonomous Driving


Principal Investigator
Raj Rajkumar
Status
Overdue Project
Start Date
Nov. 1, 2019
End Date
June 30, 2023
Project Type
Research Advanced
Grant Program
FAST Act - Mobility National (2016 - 2022)
Grant Cycle
2019 Mobility21 UTC
Visibility
Public

Abstract

Reliable curb detection is critical for safe autonomous driving in urban contexts. Curb detection and tracking are also useful in vehicle localization and path planning. Past work utilized a 3D LiDAR sensor to determine accurate distance information and the geometric attributes of curbs. However, such an approach requires dense point-cloud data and is vulnerable to false positives from obstacles present in both road and off-road areas. In this effort, we propose an approach to detect and track curbs by fusing data from multiple sensors: sparse LiDAR data, a mono camera, and low-cost ultrasonic sensors. The detection algorithm will use a single 3D LiDAR and a mono camera to detect candidate curb features while effectively removing false positives arising from surrounding static and moving obstacles. The accuracy of the tracking algorithm will be boosted by Kalman filter-based prediction and by fusion with lateral distance information from low-cost ultrasonic sensors.
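
To make the fusion step concrete, here is a minimal sketch of the kind of Kalman-filter update described above: a one-dimensional constant-velocity filter over the lateral distance to a tracked curb that ingests both LiDAR-derived and ultrasonic measurements. The state layout, noise values, and names are illustrative assumptions, not the project's implementation.

    import numpy as np

    # Minimal 1-D constant-velocity Kalman filter for the lateral distance
    # to a tracked curb. State x = [lateral_offset, lateral_velocity].
    # All noise parameters below are assumed for illustration.
    class CurbTracker:
        def __init__(self, dt=0.1):
            self.x = np.zeros(2)                        # [offset (m), velocity (m/s)]
            self.P = np.eye(2)                          # state covariance
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
            self.Q = np.diag([0.01, 0.05])              # process noise (assumed)
            self.H = np.array([[1.0, 0.0]])             # we observe the offset only

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, z, r):
            # z: measured lateral distance; r: sensor variance (small for a
            # LiDAR line fit, larger for a noisy ultrasonic ping).
            S = self.H @ self.P @ self.H.T + r          # innovation covariance
            K = self.P @ self.H.T / S                   # Kalman gain
            self.x = self.x + (K * (z - self.H @ self.x)).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P

    tracker = CurbTracker()
    tracker.predict()
    tracker.update(z=1.82, r=0.02)   # LiDAR-derived lateral distance
    tracker.update(z=1.79, r=0.10)   # ultrasonic reading of the same curb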

We will also conduct a complementary effort with the following goals. Autonomous vehicles promise significant advances in transportation safety, efficiency, and comfort. However, achieving the goal of full autonomy is impeded by the need to address several operational challenges encountered in practice. Gesture recognition of flagmen on roads is one such challenge. An autonomous vehicle needs to make safe decisions and facilitate forward progress in the presence of road construction workers and flagmen. However, human gestures vary widely under diverse environmental conditions and present significant complexity. In this effort, we propose (i) a taxonomy of challenges for organizing traffic gestures, (ii) a sizeable flagman gesture dataset, and (iii) extensive experiments on practical algorithms for gesture recognition. We will categorize traffic gestures according to their semantics, flagman appearances, and the environmental context. We will then collect a dataset covering a range of common flagman gestures with and without props such as signs and flags. Finally, we will develop a recognition algorithm using different feature representations of the human pose and perform extensive ablation experiments on each component.
Description
Complex and dynamic road environments pose substantial challenges in guaranteeing safety for vehicle localization and path planning. For example, moving or parked vehicles, construction zones, and urban roads change the geometric characteristics of the environment. This in turn can lead to large and unsafe localization and path-planning errors. A curb is a strong road feature that delimits road boundaries and can provide rich information for vehicle positioning. Therefore, the ability to detect road curbs is paramount to ensuring the safe navigation of autonomous vehicles. A 3D LiDAR sensor has been used in recent work to determine accurate distance information and the geometric characteristics of curbs. Small form-factor LiDAR sensors such as the Velodyne VLP-16 have been used to reduce sensor costs and to improve vehicle appearance. However, the lower resolution of these sensors means that curbs must be detected from only a few sparse points. We need a sensor-fusion solution that detects and tracks road curbs in real time using sparse 3D LiDAR, a mono camera, and ultrasonic sensors. This algorithm is to be developed for use in Carnegie Mellon University’s autonomous driving research vehicle.
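
As background on the detection step, one common way to extract candidate curb points from sparse LiDAR is to look for curb-sized height steps within each scan ring. The sketch below illustrates that idea under assumed thresholds; it is not the project's detector, which additionally fuses camera and ultrasonic data to reject false positives.

    import numpy as np

    # Illustrative ring-wise curb-candidate extraction from sparse LiDAR.
    # Assumes each scan ring arrives as an (N, 3) array of x/y/z points
    # ordered by azimuth; thresholds are assumed (curbs are ~10-20 cm high).
    def curb_candidates(ring, min_jump=0.05, max_jump=0.25):
        dz = np.abs(np.diff(ring[:, 2]))           # height change between neighbors
        steps = (dz > min_jump) & (dz < max_jump)  # keep curb-sized steps; larger
                                                   # jumps are likely obstacles
        return ring[1:][steps]

    # Synthetic ring with a 15 cm step near y = 2 (the "curb")
    ring = np.array([[5.0, y, 0.0 if y < 2.0 else 0.15]
                     for y in np.linspace(0, 4, 40)])
    print(curb_candidates(ring))                   # points at the height step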

For an autonomous vehicle to navigate safely, it must also follow the rules of the road and abide by traffic control signals. These control signals typically come from road infrastructure such as traffic lights and lane markers. However, in the presence of a work zone, a flagman may provide temporary traffic control instead. A flagman uses hand gestures, often in conjunction with a prop such as a sign paddle, to guide vehicles safely through the zone. A work zone can also appear on short notice due to traffic emergencies or road maintenance, making it difficult to update a map database in time. Therefore, an autonomous vehicle needs to recognize these traffic gestures on the fly. Understanding traffic gestures is a challenging task for computers. A flagman might wear different uniforms depending on the weather. When a good Samaritan acts as a flagman, they may not wear a uniform at all. Besides clothing, flagmen naturally have different body sizes and skin colors. As there is no strict standard, traffic gestures are often flexible, and flagmen have their individual styles. Sometimes these actions can be as subtle as finger movements or, in rarer cases, emulate dance moves. Moreover, a flagman commonly uses props together with body poses. For example, a flagman may carry a reflective sign paddle or a light stick to make the signal more noticeable at night. Therefore, the recognition system must perceive a large variety of human body poses and their interactions with miscellaneous objects. Furthermore, the recognition system must run in real time on the vehicle's computationally constrained resources.
Timeline
The project components started in Q3 2019 and are expected to be completed in Q1 2021.
Strategic Description / RD&T

Deployment Plan
We have been working closely with General Motors on the curb detection and flagman recognition schemes. We will provide demonstration videos of these schemes running in real time.
Expected Outcomes/Impacts
Expected Accomplishments in Flagman Recognition: 

1. A taxonomy of challenges related to varied flagman traffic gestures.

2. The development of a dataset that covers a range of common flagman gestures, including the use of props. 

3. A recognition technique that uses keypoints, hand images, and object bounding boxes to represent gestures. 
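
A minimal sketch of how these three feature representations could be combined in a single network follows. The branch architectures, input sizes, and class count are assumptions for illustration, not the project's model.

    import torch
    import torch.nn as nn

    # Three-branch gesture classifier: body keypoints, a cropped hand image,
    # and a prop bounding box are encoded separately and fused for the final
    # gesture prediction. All layer sizes are assumed.
    class GestureNet(nn.Module):
        def __init__(self, num_gestures=10):
            super().__init__()
            self.pose_branch = nn.Sequential(      # 17 COCO keypoints, (x, y) each
                nn.Linear(17 * 2, 64), nn.ReLU())
            self.hand_branch = nn.Sequential(      # tiny CNN over a 32x32 hand crop
                nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
                nn.Flatten(), nn.LazyLinear(64), nn.ReLU())
            self.prop_branch = nn.Sequential(      # prop box: (x, y, w, h, class_id)
                nn.Linear(5, 16), nn.ReLU())
            self.head = nn.Linear(64 + 64 + 16, num_gestures)

        def forward(self, keypoints, hand_crop, prop_box):
            f = torch.cat([self.pose_branch(keypoints),
                           self.hand_branch(hand_crop),
                           self.prop_branch(prop_box)], dim=1)
            return self.head(f)                    # gesture class logits

    net = GestureNet()
    logits = net(torch.randn(1, 34), torch.randn(1, 3, 32, 32), torch.randn(1, 5))
    print(logits.shape)                            # torch.Size([1, 10])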

Metrics for Flagman Recognition:

o Detection accuracy compared to approaches using raw images or heatmaps. 

Expected Accomplishments for Curb Detection:

1. A real-time solution to detect and track curb lines using multiple sensors: sparse LiDAR data, a mono camera, and low-cost ultrasonic sensors. 

2. A new regression algorithm to provide robust curb line-fitting results (see the sketch after this list).

3. An evaluation of the benefit of fusing lateral distance information from low-cost ultrasonic sensors.

4. The deployment of the technique on a real car and testing in real-world situations.
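
For context on item 2, the sketch below shows a standard robust line-fitting baseline (RANSAC followed by a least-squares refit on the inliers) of the kind a new regression algorithm would be compared against. The thresholds and iteration counts are assumed; this is not the project's algorithm.

    import numpy as np

    # Standard RANSAC line fit: repeatedly fit a line through two random
    # points and keep the model with the most inliers, then refit on those
    # inliers. Curb points are assumed to arrive as an (N, 2) array.
    def ransac_line(points, n_iters=200, inlier_tol=0.1, seed=0):
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_iters):
            p1, p2 = points[rng.choice(len(points), 2, replace=False)]
            d = p2 - p1
            norm = np.linalg.norm(d)
            if norm < 1e-9:
                continue
            # perpendicular distance of every point to the candidate line
            dist = np.abs(d[0] * (points[:, 1] - p1[1])
                          - d[1] * (points[:, 0] - p1[0])) / norm
            inliers = dist < inlier_tol
            if best is None or inliers.sum() > best.sum():
                best = inliers
        x, y = points[best].T                      # least-squares refit on inliers
        return np.polyfit(x, y, 1)                 # (slope, intercept)

    x = np.linspace(0, 10, 50)
    pts = np.column_stack([x, 0.1 * x + 1.8])      # synthetic curb points
    pts[::10, 1] += 2.0                            # inject outliers
    print(ransac_line(pts))                        # ~[0.1, 1.8] despite outliers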

Metrics for Curb Detection:
o Detection accuracy and precision measured for various combinations of active modules.
Expected Outputs

TRID

Individuals Involved

Email Name Affiliation Role Position
ibaek@andrew.cmu.edu Baek, Iljoo Carnegie Mellon University Other Student - PhD
swapnild@andrew.cmu.edu Das, Swapnil Carnegie Mellon University Other Student - PhD
mengwen2@andrew.cmu.edu He, Mengwen Carnegie Mellon University Other Student - PhD
rajkumar@cmu.edu Rajkumar, Raj Carnegie Mellon University PI Faculty - Tenured
weijings@andrew.cmu.edu Shi, Weijing Carnegie Mellon University Other Student - PhD

Budget

Amount of UTC Funds Awarded
$224,572.00
Total Project Budget (from all funding sources)
$224,572.00

Documents

Type Name Uploaded
Data Management Plan Data_Management_Plan_-_Some_Core_Techniques_for_Safe_Autonomous_Driving.pdf Feb. 11, 2021, 8:56 a.m.
Publication CurbScan: Curb Detection and Tracking Using Multi-Sensor Fusion Oct. 24, 2021, 8:55 p.m.
Publication Human-Robot Cooperation for Autonomous Vehicles and Human Drivers: Challenges and Solutions Oct. 24, 2021, 8:56 p.m.
Publication Control systems and methods using parametric driver model April 10, 2023, 8:02 p.m.
Publication A Clustering and Image Processing Approach to Unsupervised Real-Time Road Segmentation for Autonomous Vehicles April 10, 2023, 8:02 p.m.
Publication Enhancing Flagman Gesture Recognition with Virtual Augmentation April 10, 2023, 8:03 p.m.
Publication Self-supervised pretraining for point cloud object detection in autonomous driving April 10, 2023, 8:04 p.m.
Publication Connected and automated vehicles, driving systems, and control logic for info-rich eco-autonomous driving April 10, 2023, 8:05 p.m.
Publication Safe intersection management with cooperative perception for mixed traffic of human-driven and autonomous vehicles April 10, 2023, 8:05 p.m.
Publication A-DRIVE: Autonomous Deadlock Detection and Recovery at Road Intersections for Connected and Automated Vehicles April 10, 2023, 8:06 p.m.
Publication Time-Sensitive Cooperative Perception for Real-Time Data Sharing over Vehicular Communications: Overview, Challenges, and Future Directions April 10, 2023, 8:07 p.m.
Publication Cyber traffic light: safe cooperation for autonomous vehicles at dynamic intersections April 10, 2023, 8:08 p.m.
Publication LaneMatch: A Practical Real-Time Localization Method Via Lane-Matching April 10, 2023, 8:08 p.m.
Publication Connected and autonomous vehicles as a grand challenge for middleware in cyber-physical systems April 10, 2023, 8:09 p.m.
Publication Extended VINS-MONO: A systematic approach for absolute and relative vehicle localization in large-scale outdoor environments April 10, 2023, 8:09 p.m.
Publication Work zone detection for autonomous vehicles April 10, 2023, 8:10 p.m.
Publication Human-robot cooperation for autonomous vehicles and human drivers: Challenges and solutions April 10, 2023, 8:11 p.m.
Publication Practical Object Detection Using Thermal Infrared Image Sensors April 10, 2023, 8:11 p.m.
Publication Multicruise: eco-lane selection strategy with eco-cruise control for connected and automated vehicles April 10, 2023, 8:12 p.m.
Publication Opportunities and challenges for flagman recognition in autonomous vehicles April 10, 2023, 8:12 p.m.

Match Sources

No match sources!

Partners

Name Type
General Motors Deployment Partner