Project

#40 Low-Cost Vehicle Localization for Driving and Mapping


Principal Investigator
John Dolan
Status
Completed
Start Date
Jan. 1, 2016
End Date
Dec. 31, 2016
Project Type
Research Advanced
Grant Program
MAP-21 TSET National (2013 - 2018)
Grant Cycle
2016 TSET UTC
Visibility
Public

Abstract

Localization is a key component technology for intelligent transportation. Vehicles need to know where they are in order to properly navigate roadways, avoid other traffic, gauge the likelihood of interference with static and dynamic objects in the near future, and determine the position of features in the environment in order to aid in their recognition, classification, and registration. Current production vehicle location systems are based on GPS devices that typically have road-level accuracy (~10 m error), and occasionally far worse accuracy, as in dense urban environments. Research vehicles reduce the typical error to ~10 cm via military-grade GPS (with integrated IMU) costing more than $50K, but are still degraded in urban environments, and are far too expensive for production vehicles. This project aims at the development of a low-cost localization system capable of lane-level accuracy. The resultant system will improve vehicle safety in human-driven, autonomously driven, and V2V scenarios, and is a fundamental building block for production-viable semi-autonomous and autonomous vehicles.    
Description
Problem statement
Localization is a key component technology for intelligent transportation. Vehicles need to know where they are in order to properly navigate roadways, avoid other traffic, gauge the likelihood of interference with static and dynamic objects in the near future, and determine the position of features in the environment in order to aid in their recognition, classification, and registration. Current production vehicle location systems are based on GPS devices that typically have road-level accuracy (~10 m error), and occasionally far worse accuracy, as in dense urban environments. Research vehicles reduce the typical error to ~10 cm via military-grade GPS (with integrated IMU) costing more than $50K, but are still degraded in urban environments, and are far too expensive for production vehicles. A low-cost localization system capable of lane-level accuracy is needed.

Application
Accurate low-cost localization significantly improves safety. Current autonomous vehicles depend on accurate localization combined with a high-quality map; significant localization error can lead to dangerous situations, such as veering or over-turning into neighboring lanes or trying to exit or turn from a non-exiting or non-turning lane. Human-driven vehicles need accurate localization in order to implement various driver assistance functions, including lane-keeping, lane-changing, and collision avoidance. Existing lane-keeping systems depend on visual acquisition of lane markings, which are not always available, and are black boxes that cannot be easily integrated into a full localization system. If V2V communications are able to access and send more accurate vehicle locations, potential accidents can be predicted/avoided and smoother traffic interactions facilitated. Finally, when vehicle sensors are used to detect features from which maps are built, accurate vehicle localization improves map quality.

Approach
This project investigates the use of low-cost sensors to perform vehicle localization for driver-assisted, V2V, autonomous, and map-survey driving. “Low-cost” sensors are sensors whose cost enables them to be put on a production vehicle, or whose cost is believed to be on a path towards production viability. Sensors in this category are as follows. First, an automotive-grade GPS can provide absolute positioning information. Second, several sensors in combination with a vehicle model can provide relative positioning information (odometry): a low-cost IMU, wheel sensors, and a steering angle sensor. Third, position information relative to features in the environment can be gained from camera(s) (e.g., a Mobileye for lane markings) and LIDAR (e.g., to detect curbs, poles, and other physical features). The most expensive and furthest from production relevance of these sensors is the LIDAR, but companies such as Valeo are working towards automotive LIDAR priced at several hundred dollars apiece (http://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-price-ranges-for-robocars-budget-deluxe-and-out-of-sight). 
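As a sketch of the relative-positioning source described above (wheel sensors and a steering angle sensor combined with a vehicle model), a minimal kinematic bicycle model can dead-reckon pose between GPS fixes. The wheelbase, variable names, and sample rate below are illustrative assumptions, not project parameters:

```python
import math

# Assumed wheelbase for a mid-size sedan (illustrative value only)
WHEELBASE_M = 2.84

def propagate(x, y, heading, speed_mps, steer_rad, dt):
    """Dead-reckon one step from wheel-speed and steering-angle readings
    using a kinematic bicycle model."""
    heading += speed_mps * math.tan(steer_rad) / WHEELBASE_M * dt
    x += speed_mps * math.cos(heading) * dt
    y += speed_mps * math.sin(heading) * dt
    return x, y, heading

# Example: drive straight for 1 s at 10 m/s, integrating at 100 Hz
x, y, h = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, h = propagate(x, y, h, 10.0, 0.0, 0.01)
```

Odometry of this kind drifts without bound, which is why the project fuses it with absolute (GPS) and feature-relative (camera, LIDAR) measurements rather than using it alone.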

The goal is then to combine and process the output of this ensemble of sensors in order to produce localization that is continuously “lane-level” accurate, i.e., with no more than several centimeters of error. We already have initial results based on combining GPS and IMU alone via Adaptive Kalman Filtering that reduce the large errors (50+ m) that can occur due to loss of satellites in urban areas like Oakland to less than 10 m. In the proposed work, we will do online estimation of IMU bias and gain to further reduce this error. We will also fuse with the following three additional sources of localization information: odometry based on wheel and steering angle sensors, Mobileye-based lane marking detection, and LIDAR-based landmark detection.
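The fading-memory idea behind the Adaptive Kalman Filtering mentioned above can be illustrated with a toy 1-D filter: state is [position, velocity], IMU acceleration drives the prediction, GPS position is the measurement, and a fading factor greater than one inflates the predicted covariance so old residuals are gradually "forgotten." All values and the 1-D simplification are illustrative assumptions, not the project's actual formulation or tuning:

```python
import numpy as np

dt, lam = 0.1, 1.02            # sample period; fading factor lam > 1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
B = np.array([[0.5 * dt**2], [dt]])     # IMU acceleration input model
H = np.array([[1.0, 0.0]])              # GPS observes position only
Q = 0.01 * np.eye(2)                    # process noise
R = np.array([[4.0]])                   # GPS noise (~2 m std dev)

def step(x, P, accel, z):
    # Predict with IMU acceleration; fading factor inflates covariance
    x = F @ x + B * accel
    P = lam * (F @ P @ F.T) + Q
    # Update with GPS position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example: stationary vehicle, repeated GPS fixes at position 1.0
x, P = np.zeros((2, 1)), np.eye(2)
for _ in range(50):
    x, P = step(x, P, accel=0.0, z=1.0)
```

The project's multiple-model variant would additionally switch filter parameters based on GPS solution quality; this sketch shows only the fading-memory update.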

Tasks
1. Use the most promising filtering techniques to achieve the best possible fused low-cost localization (LCL) with low-cost GPS and IMU. Currently the most promising technique is Adaptive Kalman Filtering using a multiple-model-based approach (based on GPS solution quality) combined with adaptive fading to “forget” old residual error, along with refined estimation of calibration parameters for the INS.
2. Combine results of item 1 with odometry information from the wheel and steering sensors.
3. Combine results of item 2 with state information extracted from map matching. This includes both left and right lane markings for lateral positioning and stop lines for longitudinal positioning (longitudinal “snapping” at intersections).
4. Combine results of item 3 with landmark information from LIDAR.
5. Implement developed localization algorithms in C++ on the Cadillac SRX.
6. Conduct extensive experiments comparing LCL to ground truth with various sensor combinations (GPS/IMU alone, adding odometry, adding map matching, adding LIDAR, and other combinations) in various urban driving scenarios on the Cadillac SRX.
7. Document architecture, software implementation, experiments, and results in a final report.
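The lateral map-matching correction in task 3 can be sketched simply: given camera-reported lateral distances to the left and right lane markings (e.g., from a Mobileye-style unit), the vehicle's offset from the lane center follows from their difference. The function name and values are illustrative assumptions:

```python
def lateral_offset(dist_to_left_m, dist_to_right_m):
    """Vehicle's lateral offset from lane center, given distances to the
    left and right lane markings. Positive = right of center."""
    return (dist_to_left_m - dist_to_right_m) / 2.0

# Centered in a 3.6 m lane: 1.8 m to each marking
centered = lateral_offset(1.8, 1.8)
# 0.2 m right of center: closer to the right marking
offset = lateral_offset(2.0, 1.6)
```

Combined with the map's lane geometry, this converts a relative marking detection into an absolute lateral position constraint; stop lines play the analogous role longitudinally.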

Validation
The developed algorithms will be tested and validated on CMU’s autonomous Cadillac SRX. Ground truth will be obtained in two ways. First, the SRX is equipped with a high-quality, high-cost GPS/IMU whose readings are accurate to 2 cm when and where fixed RTK readings are available. Second, where fixed RTK readings are not available owing to loss of satellites, LIDAR-based landmark detection can be used in real time to find the ground-truth position. Validation tests will be performed on a variety of challenging urban scenarios, with the principal metrics being the RMS and maximum error of the estimated position compared to the ground truth.
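The two validation metrics named above reduce to a short computation over matched estimate/ground-truth trajectories; the point values below are illustrative, not experimental data:

```python
import math

def rms_and_max_error(estimates, ground_truth):
    """RMS and maximum Euclidean position error between an estimated
    track and time-aligned ground-truth points (both lists of (x, y))."""
    errs = [math.hypot(ex - gx, ey - gy)
            for (ex, ey), (gx, gy) in zip(estimates, ground_truth)]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    return rms, max(errs)

# Example: two matched points with 3 cm and 4 cm error
rms_err, max_err = rms_and_max_error([(0.0, 0.0), (1.0, 0.0)],
                                     [(0.0, 0.03), (1.0, 0.04)])
```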

Synergy with Other UTC Projects
Although the proposed Low-Cost Localization (LCL) project stands alone, it has important synergy with Christoph Mertz’s simultaneously proposed “Continuous City-Wide Mapping” (CCWM) project. LCL can make good use of two CCWM outputs: detailed maps containing landmarks useful for localization and a stop sign detection algorithm which reliably recognizes a particular landmark type. Conversely, CCWM can install its data collection system in the SRX and compare its mapping results with the ground truth acquired by the SRX’s high-quality GPS/IMU.
Timeline
Duration: 1 year (Jan. 1 – Dec. 31, 2016)

High-level schedule:
- Mar. 31: Architecture, theoretical development, GPS/IMU validation on SRX
- June 30: Add odometry, validate on SRX
- Aug. 31: Add map-matching/lane marking detection, validate on SRX
- Oct. 31: Add LIDAR-based landmark matching, validate on SRX
- Dec. 31: Final report, documentation of hardware/software
Strategic Description / RD&T

Deployment Plan
1. Validate a basic capability to perform low-cost localization during the course of the proposed project.
2. Incorporate the capability into other T-SET projects that will benefit from low-cost localization, such as Christoph Mertz’s “Continuous City-Wide Mapping” project.
3. Low-cost localization is a fundamental building block for production-viable semi-autonomous and autonomous vehicles, so if it can be achieved it will be in high demand as a basic intelligent vehicle component.
Expected Outcomes/Impacts
Accomplishments
- Creation of a low-cost localization system with significant safety benefits
- Validation of the system
- If possible, integration of the system into Christoph Mertz’s complementary “Continuous City-Wide Mapping” project
- Documentation of the results in a form appropriate for hand-off to a partner capable of full-scale deployment

Metrics
- Achieve RMS localization error of <= 5 cm and maximum error of <= 15 cm in urban environments
- Handle 80% of the urban routes in the Oakland area surrounding CMU and Pitt
Expected Outputs

TRID



Individuals Involved

Email Name Affiliation Role Position
jmd@cs.cmu.edu Dolan, John Robotics Institute PI Faculty - Research/Systems
jmd@cs.cmu.edu Dolan, John Robotics Institute Other Student - Masters
chiyud@andrew.cmu.edu Dong, Chiyu ECE Other Student - Masters

Budget

Amount of UTC Funds Awarded
$79,917.00
Total Project Budget (from all funding sources)
$79,917.00

Documents

Type Name Uploaded
Final Report 2016_Dolan.pdf May 11, 2018, 4:21 a.m.
Publication Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor April 19, 2021, 6:44 a.m.
Publication Camera-based semantic enhanced vehicle segmentation for planar lidar April 19, 2021, 6:45 a.m.
Publication Multiagent sensor fusion for connected & autonomous vehicles to enhance navigation safety April 19, 2021, 6:46 a.m.
Publication Lidar and monocular camera fusion: On-road depth completion for autonomous driving April 19, 2021, 6:48 a.m.
Publication Low-cost LIDAR based Vehicle Pose Estimation and Tracking April 19, 2021, 6:49 a.m.
Publication Depth Completion via Inductive Fusion of Planar LIDAR and Monocular Camera April 19, 2021, 6:51 a.m.
Publication Real-time localization method for autonomous vehicle using 3D-LIDAR April 19, 2021, 6:52 a.m.

Match Sources

No match sources!

Partners

No partners!