Abstract
A major operational expense for an autonomous vehicle (AV) is capturing, processing, and updating the high-definition maps it uses to localize itself while driving. To navigate safely, an AV needs to know where it is on a given map in order to determine its trajectory to the next waypoint. Precise localization is challenging in GPS-denied areas such as dense urban corridors, and motion tracking suffers dropouts in large open spaces such as rural highways. Classic localization algorithms are iterative, and their performance relies on direct feature matching between the stored map and the current sensor observations. This makes them prone to errors in large open spaces with few distinguishing surface features. They are also expensive to run onboard, accounting for a large share of the vehicle's computation cost and power consumption. Better accuracy and faster localization directly improve AV safety as vehicles navigate around people and cluttered environments.
This project will develop an AV localization service that is low-cost, accurate, and able to operate in real time on any AV at a fraction of the computation and power budget of current approaches. In 2023-24 we developed a preliminary version of this localization approach using a specific type of neural network, the invertible neural network, to compress the map and look up the vehicle's pose efficiently. We demonstrated its accuracy and operating cost on 1/10th-scale vehicles and benchmarked its performance on localization datasets. In 2024-25, we will undertake real-world evaluation on full-scale AVs with our deployment partner, The Autoware Foundation. We will focus on localizing an electric autonomous cart for goods and passenger intralogistics, with both indoor and outdoor navigation. The outcome of this work will be a portable and easy-to-use localization system for Safety21 projects.
Technical details: AV localization is the problem of finding a robot's pose using a map and sensor measurements, such as LiDAR scans and camera images. Finding injective mappings between measurements and poses is difficult because sensor measurements from multiple distant poses can be similar. To resolve this ambiguity, Monte Carlo Localization, the most widely adopted method, uses random hypothesis sampling and sensor measurement updates to infer the pose. Other common approaches use Bayesian filtering or seek more distinguishable global descriptors on the map. Recent developments in localization research usually propose better measurement models or feature extractors within these frameworks. In this project, we propose a radically new approach: frame the localization problem as an ambiguous inverse problem and solve it with an invertible neural network (INN). We have recently demonstrated that INNs are naturally suited to the localization problem, with benefits in accuracy (within 0.25 m on city-scale maps), speed (>150 Hz), and the ability to run on low-cost embedded hardware. We will demonstrate this on point-cloud and camera datasets with evaluation on indoor and outdoor localization benchmarks, and also deploy it on real autonomous vehicles around the 23-acre Pennovation campus to show real-time, scalable operation.
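To make the INN framing concrete, the sketch below implements a toy affine coupling layer, the basic invertible building block used in INN architectures. It splits the input, transforms one half conditioned on the other, and can be inverted exactly, which is what lets a single network map measurements to poses and back. The dimensions, weights, and single-linear-layer conditioning network here are illustrative simplifications, not the actual Local_INN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 4                                         # toy input dimension (even split)
W = rng.normal(size=(D // 2, D // 2)) * 0.1   # conditioning net: one linear layer
b = rng.normal(size=D // 2) * 0.1

def forward(x):
    """Forward pass x -> y of one affine coupling layer (exactly invertible)."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    s = np.tanh(W @ x1 + b)                   # log-scale, bounded for stability
    t = W @ x1 + b                            # translation (weights shared for brevity)
    y2 = x2 * np.exp(s) + t                   # only the second half is transformed
    return np.concatenate([x1, y2])

def inverse(y):
    """Inverse pass y -> x, reusing the same conditioning network on y1 = x1."""
    y1, y2 = y[: D // 2], y[D // 2 :]
    s = np.tanh(W @ y1 + b)
    t = W @ y1 + b
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = rng.normal(size=D)
assert np.allclose(inverse(forward(x)), x)    # exact reconstruction, no iteration
```

Because inversion is a closed-form pass rather than an iterative search, a trained stack of such layers can answer pose queries at a fixed, low cost, which is the property the >150 Hz figure relies on.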
Description
Timeline
Strategic Description / RD&T
Precise and fast localization of the autonomous system with respect to obstacles and landmarks is essential to maintaining safety in future transportation systems. The work proposed here is applicable across advanced driver assistance systems, self-driving vehicles, and infrastructure-assisted safety systems, and is platform agnostic.
Deployment Plan
We will develop this into an open-source toolkit for the robotics and transportation community to use. We will deploy it on a variety of scaled and real autonomous vehicles. We will benchmark the reliability and efficiency on real hardware across multiple driving scenarios.
Q1: Develop 3D LiDAR-based Local_INN with benchmarking for Ouster and Velodyne LiDARs
Q2: Develop camera-based Pose_INN with benchmarking on common datasets and demonstration on indoor and outdoor autonomous vehicles
Q3: Release open-source toolbox and host tutorials at robotics and transportation conferences
Q4: Develop a complete INN-based approach that outperforms current ROS2-based SLAM implementations
INN stands for Invertible Neural Network
Expected Outcomes/Impacts
Benefits of Local_INN:
1. Cheap, fast, and low-latency: it uses a small neural network.
2. Accurate localization: comparable to a particle filter at low speed and more precise than a particle filter at high speed.
3. Expandable from 2D LiDAR to 3D LiDAR and cameras.
4. No map file needed: Local_INN compresses the map into the neural network.
5. Fast convergence in global localization, which is critical when the vehicle loses localization or starts up in a new location.
Expected Outputs
This project, called Local_INN, has four components to the deployment plan:
1. Map Compression: Local_INN provides an implicit map representation and a localization method within one neural network, so map files are no longer needed at localization time. We will develop the basic INN structure for pose inference and map regeneration in this step to demonstrate the functionality for indoor and outdoor localization.
2. Uncertainty Estimation: Local_INN outputs not just a pose but a distribution of inferred poses, the covariance of which can be used as the confidence of the neural network when fusing with other sensors, enhancing the overall robustness. We will use this to provide safety guarantees for localization and pose estimation in high-risk driving scenarios.
3. Fast and Accurate: Our 2D LiDAR experiments demonstrate that Local_INN's localization performance is comparable to a particle filter at low speed and better at high speed, with much lower latency. We will conduct extensive experiments deploying Local_INN on real and scaled autonomous vehicles.
4. Ability to Generalize: We demonstrate that the Local_INN framework can learn complex 3D open-world environments and provide accurate localization, and we provide an algorithm for global localization with Local_INN. We will work with partners to show how this scheme can work for infrastructure-mounted sensors and how multiple such sensors can be combined.
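The uncertainty-estimation component above can be sketched as follows: the INN's inverse pass maps a measurement plus a latent vector z to a pose, so sampling many z values yields a pose distribution whose covariance serves as the network's confidence for sensor fusion (e.g. as the measurement noise in an EKF update). The `inn_inverse` function below is a hypothetical stand-in for the trained network's inverse pass, not the real Local_INN model.

```python
import numpy as np

rng = np.random.default_rng(1)

def inn_inverse(measurement, z):
    # Stand-in for the trained INN's inverse pass: pose = f^-1(measurement, z).
    # Returns a toy (x, y, heading) pose; the latent z spreads the hypotheses.
    return measurement[:3] + 0.05 * z

measurement = np.array([1.0, 2.0, 0.3, 0.7])    # toy encoded sensor reading

# Sample the latent space to get a distribution of inferred poses.
poses = np.stack([inn_inverse(measurement, rng.normal(size=3))
                  for _ in range(200)])

mean_pose = poses.mean(axis=0)                  # fused pose estimate
cov = np.cov(poses.T)                           # 3x3 covariance = confidence
```

A downstream fusion filter can then weight this estimate against odometry or other sensors in proportion to `cov`, which is how the distribution-valued output enhances robustness.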
TRID
In this project we propose a radically new approach: frame the vehicle localization problem as an ambiguous inverse problem and solve it with an invertible neural network (INN). We claim that INNs are naturally suited to the localization problem, with benefits in accuracy (within 0.25 m on city-scale maps), speed (>150 Hz), and the ability to run on low-cost embedded hardware. We will demonstrate this on point-cloud and camera datasets with evaluation on indoor and outdoor localization benchmarks, and also deploy it on electric autonomous vehicles to show real-time, scalable operation. While TRID lists several projects on the general topics of robot and vehicle localization, none use invertible neural networks. Most use Monte Carlo and Bayesian iterative approaches and require the map data at runtime; our approach does not require map data during inference.
Individuals Involved
Email | Name | Affiliation | Role | Position
rahulm@seas.upenn.edu | Mangharam, Rahul | University of Pennsylvania | PI | Faculty - Tenured
Budget
Amount of UTC Funds Awarded
$
Total Project Budget (from all funding sources)
$100000.00
Documents
Match Sources
No match sources!
Partners
Name | Type
The Autoware Foundation | Deployment & Equity Partner
Carnegie Mellon School of Engineering | Deployment & Equity Partner
Carnegie Mellon University | Deployment & Equity Partner