#439 Computational Imaging for Improving Vehicle Safety


Principal Investigator
Aswin Sankaranarayanan
Status
Active
Start Date
July 1, 2023
End Date
June 30, 2024
Project Type
Research Advanced
Grant Program
US DOT BIL, Safety21, 2023 - 2028 (4811)
Grant Cycle
Safety21 : 23-24
Visibility
Public

Abstract

This project investigates the design and deployment of sensors and associated algorithms for handling harsh imaging conditions. We are particularly interested in depth perception in rain, snow, and fog. By expanding the depth range at which objects can be reliably detected, especially in dense fog, the project will facilitate a higher level of safety for vulnerable road users. The project aims to improve perception in both autonomous and assisted settings; in the latter, we will augment the perception of human drivers to better identify vulnerable road users in dense fog and rain.

Rain, snow, and fog present challenging operating scenarios for camera- and LIDAR-based depth perception. For passive cameras, imaging in such weather suffers a loss of contrast. This, in turn, makes it harder to match features across views, which reduces the effectiveness of stereo-based depth estimation. LIDAR, on the other hand, works on the principle of time of flight, measured by pulsing a laser and using a single-photon avalanche diode (SPAD) to record the arrival time of the first returning photon. Here, fog and rain generate spurious photon arrivals that severely compromise the quality of depth measurements.
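To make this failure mode concrete, the following minimal Monte Carlo sketch (ours, not the project's sensor model) simulates a single-pixel first-photon SPAD: each laser pulse may produce a scene return plus a Poisson number of early, exponentially distributed backscatter photons from the fog, and only the earliest arrival is recorded. All numeric values (detection probability, fog rates, timing jitter) are illustrative assumptions.

```python
import numpy as np

C = 3e8                      # speed of light (m/s)
BIN = 1e-10                  # histogram bin width: 100 ps
N_PULSES = 10_000            # laser pulses per measurement
DEPTH = 20.0                 # true scene depth (m)
T_SCENE = 2 * DEPTH / C      # round-trip time of the scene return

rng = np.random.default_rng(0)

def first_photon_times(fog_rate):
    """Per pulse, draw candidate arrivals from the scene return and from
    fog backscatter; keep only the earliest (a first-photon SPAD records
    nothing after its first detection in each cycle)."""
    times = []
    for _ in range(N_PULSES):
        arrivals = []
        if rng.random() < 0.5:                   # scene return detected
            arrivals.append(rng.normal(T_SCENE, 2e-10))
        for _ in range(rng.poisson(fog_rate)):   # spurious fog photons
            arrivals.append(rng.exponential(T_SCENE / 4))
        if arrivals:
            times.append(min(arrivals))
    return np.array(times)

bins = np.arange(0, 2 * T_SCENE, BIN)
for fog_rate in [0.0, 1.0, 5.0]:                 # mean fog photons/pulse
    hist, edges = np.histogram(first_photon_times(fog_rate), bins=bins)
    est = edges[np.argmax(hist)] * C / 2         # depth from histogram peak
    print(f"fog rate {fog_rate}: estimated depth {est:5.1f} m (true {DEPTH} m)")
```

At a high fog rate, early backscatter photons dominate the first-photon histogram and pull the recovered depth toward zero, which is precisely the corruption described above.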

We approach this problem as one of jointly designing sensors and algorithms to overcome the challenges imposed by the physics of image formation. Our main insight is that depth perception can be improved via careful imaging and algorithmic design that blocks photons scattered by the medium while preserving those from the scene of interest. This approach is central to very successful microscopy techniques, such as confocal imaging and diffuse optical tomography, for imaging in highly scattering media like biological tissue. We will leverage this core intuition but expand it to macroscopic imaging in the real world.
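As a toy illustration of this photon-blocking principle, the sketch below applies simple time gating to the same kind of photon-arrival mixture as above: arrivals earlier than a gate-open time, which come predominantly from single scattering in the medium, are discarded before histogramming. The project's actual mechanism is spatio-temporal rather than a fixed gate, and the gate position here assumes a coarse prior on range; all values are illustrative.

```python
import numpy as np

C = 3e8
rng = np.random.default_rng(1)

# Simulated photon arrival times (s): fog backscatter arrives early, while
# scene returns cluster around the round-trip time of a 20 m target.
t_scene = 2 * 20.0 / C
arrivals = np.concatenate([
    rng.exponential(t_scene / 4, size=300_000),  # photons from the medium
    rng.normal(t_scene, 2e-10, size=2_000),      # photons from the scene
])

def depth_from_histogram(times, bin_width=1e-10):
    hist, edges = np.histogram(times, bins=np.arange(0, 2 * t_scene, bin_width))
    return edges[np.argmax(hist)] * C / 2

# Ungated: fog photons dominate the histogram peak.
print(f"ungated estimate: {depth_from_histogram(arrivals):6.2f} m")

# Gated: block photons arriving before the gate opens, preserving scene
# photons while rejecting most of the medium's single-scattered light.
gate_open = 0.8 * t_scene
gated = arrivals[arrivals >= gate_open]
print(f"gated estimate  : {depth_from_histogram(gated):6.2f} m (true 20.00 m)")
```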

Improving perception in dense scattering media by blocking undesired photons requires imaging systems that can selectively favor certain light paths in the scene over others. Our proposition is to build a structured light system with a high-speed projector and an ultra-high-speed SPAD array. With this setup, we will design patterns that avoid single-bounce light paths off the medium. An example of this can be seen in prior work by the PI (see Figure 1a), where a laser line is used to illuminate a scene while sensing it with a line sensor. This imaging configuration ensures that light at the intersection of the laser and sensor planes (which is a line in the world) is preferred over other light paths induced by scattering. We will also leverage work by the PI on high-speed depth imaging using SPAD devices (see Figure 1b) and expand upon that concept for the case of scattering media.
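The sketch below is a crude numerical caricature (our assumption-laden toy, not the PI's actual system) of why line-plane illumination helps: under flood illumination, scattered light from the whole illuminated volume veils the image, whereas scanning one illuminated line at a time, and reading out only the matching sensor line, rejects off-plane scattered light. The fog is modeled as a uniform veiling term proportional to the projected power, which grossly simplifies the real physics.

```python
import numpy as np

H, W = 64, 64

# Toy scene: a bright square target on a dark background.
scene = np.zeros((H, W))
scene[24:40, 24:40] = 1.0

def fog_veil(illuminated_power):
    """Veiling glare from single-bounce scattering, crudely modeled as
    spatially uniform and proportional to the total projected power."""
    return 0.05 * illuminated_power

# Flood illumination: all rows lit at once -> strong veiling glare.
flood = scene + fog_veil(illuminated_power=H)

# Line scanning: light one row at a time and read only the matching
# sensor row, so scattered light from off-plane paths is rejected.
scanned = np.zeros_like(scene)
for r in range(H):
    frame = np.zeros((H, W))
    frame[r, :] = scene[r, :]            # only row r is illuminated
    frame += fog_veil(illuminated_power=1)
    scanned[r, :] = frame[r, :]          # keep only the epipolar row

def contrast(img):
    return (img.max() - img.min()) / (img.max() + img.min())

print(f"flood-lit contrast   : {contrast(flood):.3f}")    # ~0.14
print(f"line-scanned contrast: {contrast(scanned):.3f}")  # ~0.91
```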

This project will expand the range of scenarios in which a vehicle can safely operate. Specifically, it will lead to increased range in depth perception in fog and rain, improving the safety of vulnerable road users. In subsequent years, we will look at imaging around visual occlusions (like cars) using non-line-of-sight techniques. This will lead to broader adoption of computational imaging, with the eventual goal of increasing safety in transportation systems.
Description

Timeline

Strategic Description / RD&T
This project falls under "RESEARCH PRIORITY: DATA-DRIVEN SYSTEM SAFETY" (Page 18 of the RD&T plan). 

Specifically, it provides a "Safe Design" solution that targets the following goal:
• Identify and support strategies to increase vulnerable road user safety (e.g., pedestrians, bicyclists, motorcyclists, and people with disabilities). [Page 19]

This is achieved by building tools that see better and farther in adverse weather (fog, rain, and snow), thereby improving the safe operation of vehicles and increasing the safety of vulnerable road users.


Deployment Plan
Quarter 1
- Build a detailed simulator for fog in autonomous vehicles using physically-accurate ray tracers, allowing for realistic simulation of passive and active depth sensing techniques (a toy contrast-attenuation model in this spirit is sketched after this list)
- Evaluate effectiveness of conventional depth imaging techniques in terms of depth range in various bad weather scenarios
- Build initial models for proposed imaging instrument in simulations
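For the contrast-loss side of this evaluation, a back-of-the-envelope starting point is Koschmieder's model, under which the apparent contrast of a target decays exponentially with distance. The sketch below (far simpler than the ray-traced simulator planned above) computes the distance at which contrast falls below a detection threshold; the intrinsic contrast, threshold, and extinction coefficients are assumed, illustrative values.

```python
import numpy as np

# Koschmieder's model: apparent contrast at distance d in a homogeneous
# medium decays as C(d) = C0 * exp(-beta * d), with extinction beta (1/m).
C0 = 0.9         # intrinsic target contrast (assumed)
C_MIN = 0.05     # contrast needed for reliable detection (assumed)

def detection_range(beta):
    """Largest distance at which apparent contrast stays above C_MIN."""
    return np.log(C0 / C_MIN) / beta

# Extinction coefficients roughly spanning clear air to dense fog.
for label, beta in [("clear", 0.001), ("haze", 0.01),
                    ("light fog", 0.05), ("dense fog", 0.2)]:
    print(f"{label:9s}: beta = {beta:5.3f} /m -> range ~ {detection_range(beta):7.1f} m")
```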

Quarter 2
- Finalize design of proposed imaging instrument
- Perform detailed simulations characterizing performance in various bad weather scenarios
- Characterize performance of detection of vulnerable road users and improvements over conventional designs

Quarter 3
- Build hardware prototype of design
- Develop interfaces to augment human perception in bad weather

Quarter 4
- Deploy on vehicles to obtain real-world performance metrics 
Expected Outcomes/Impacts
This project aims to improve perception for vehicles operating in adverse weather conditions, with the goal of improving the safety of vulnerable road users. Since perception is the primary means by which a vehicle senses the world, improving it helps both autonomous vehicles and assisted vehicular technologies. For autonomous vehicles, extending the range of perception in bad weather improves safe operation. For assisted driving, the technology developed in this project will help human drivers drive more safely in bad weather by augmenting their visual perception with measurements made by our system.
Expected Outputs
The primary output of this project will be novel depth imaging devices and associated algorithms that can image farther in bad weather. We expect the following tangible outputs:
- Imaging hardware, schematics and prototypes 
- Software for processing data released publicly under open source licenses 
- Papers, invention disclosures, and (likely) patents
- Dataset of measurements made in bad weather
- Presentations
TRID
The proposed work primarily focuses on the operation of depth imaging in fog, where the range of LIDAR and stereo-based systems is restricted.

The TRID system has 22 articles that look at performance degradation of depth sensing in fog. These were retrieved using the following keywords:
- LIDAR in fog
- depth fog
- stereo fog
- computer vision in bad weather

Most, if not all, of these papers look purely at algorithmic techniques for improving depth perception in fog. In contrast, the proposed work advances imaging instruments, an avenue for innovation that has been relatively underexplored.

Individuals Involved

Email | Name | Affiliation | Role | Position
kumar@ece.cmu.edu | Bhagavatula, Vijayakumar | Carnegie Mellon University | Co-PI | Faculty - Tenured
saswin@andrew.cmu.edu | Sankaranarayanan, Aswin | Carnegie Mellon University | PI | Faculty - Tenured

Budget

Amount of UTC Funds Awarded
$98,000.00
Total Project Budget (from all funding sources)
$255,951.00

Documents

Type | Name | Uploaded
Data Management Plan | ACS_Safety21-DMP.pdf | Oct. 15, 2023, 8:14 p.m.
Progress Report | 439_Progress_Report_2024-03-31 | March 31, 2024, 9:43 p.m.

Match Sources

No match sources!

Partners

Name | Type
Ubicept Inc. | Deployment Partner
City of Pittsburgh, Department of Mobility & Infrastructure | Equity Partner