Abstract
Autonomous vehicles (AVs) continue to make good progress, but widespread deployment still seems years away. A primary reason for this lack of large-scale adoption, despite significant effort and investment, is that many common "edge" scenarios arise in day-to-day driving that AVs cannot yet navigate safely. These scenarios stem from the large number of operating conditions that can be encountered: bad weather, poor lighting (with or without streetlights at night), road conditions (e.g., wet/icy/snow-covered driving surfaces, potholes, speed bumps, work zones) and traffic conditions (e.g., dense merging traffic, misbehaving/disabled vehicles, flagmen and jaywalking pedestrians). Some of these conditions, like heavy rain or heavy snow, may not be encountered in arid and warm regions of the planet but are very common across many temperate and colder regions of the world. Our work at CMU has been systematically tackling each of these dimensions (various weather phenomena as well as different lighting, traffic and road conditions). In this proposed effort, we will continue to advance safe autonomous navigation of work zones and other unsafe scenarios. The key elements of our proposed work include having contextual awareness of (a) the operating environmental driving conditions and (b) the sensory and behavioral limitations of the AV under these operating conditions. We therefore propose to add to AVs the ability to recognize and react safely to the operating environment. We believe that this is a critical step toward full safe autonomy, which represents our focused pathway to the global adoption of AVs in the coming years.
Description
Timeline
Strategic Description / RD&T
Section left blank until USDOT’s new priorities and RD&T strategic goals are available in Spring 2026.
Deployment Plan
Q1: Design of an AI framework to minimize false positives and negatives + design of a safety framework to find the safest route from an origin to a destination given constraints such as work zones along some roads ahead (see the sketch after this plan)
Q2: Testing in Simulation Environment
Q3: Validation in Relatively Sparse Traffic
Q4: Demonstration in Real-World Traffic
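To make the Q1 routing objective concrete, the following is a minimal sketch of a risk-weighted route search that prefers avoiding road segments flagged with work zones. The graph layout, penalty weight, and `work_zone` flag are illustrative assumptions, not the proposal's actual framework.

```python
# Minimal sketch (illustrative assumptions): Dijkstra search where edge cost
# combines travel time with a penalty for work-zone segments.
import heapq

# Each edge: (neighbor, travel_time_minutes, work_zone_flag) -- hypothetical data
ROAD_GRAPH = {
    "origin":      [("a", 5, False), ("b", 4, True)],
    "a":           [("destination", 6, False)],
    "b":           [("destination", 3, True)],
    "destination": [],
}

WORK_ZONE_PENALTY = 10.0  # assumed cost added per work-zone segment


def safest_route(graph, start, goal):
    """Return (cost, path) minimizing travel time plus work-zone penalties."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, minutes, work_zone in graph[node]:
            new_cost = cost + minutes + (WORK_ZONE_PENALTY if work_zone else 0.0)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return float("inf"), []


if __name__ == "__main__":
    cost, path = safest_route(ROAD_GRAPH, "origin", "destination")
    print(f"chosen route: {path} (risk-weighted cost {cost:.1f})")
```

In this toy example the search chooses the slightly longer route through "a" because both segments through "b" carry work-zone penalties; the actual framework would derive such penalties from the contextual awareness described in the abstract.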
Expected Outcomes/Impacts
AVs are inherently safety-critical systems: self-driving vehicular technologies must not only be technologically reliable but must also garner societal acceptance by being trustworthy. The roadway transportation infrastructure that we drive on was originally designed and deployed by humans for humans, meaning that it is heavily dependent on human vision. Correspondingly, cameras are used by AVs to recognize cues, signs and artifacts in the road environment, and machine learning (using AI) is how AVs interpret camera data. Unfortunately, today's state-of-the-art AI-based computer vision techniques can yield false positives (leading to perilous phantom-braking incidents) and false negatives (causing dangerous crashes and fatalities), both today and in the foreseeable future. One of the key objectives of the proposed effort is to understand and mitigate the causes of false positives and false negatives, leading to more trustworthy AI techniques. A complementary objective of our effort is to use other non-camera-based sensors to make an AV safer. The use of such redundancy is fundamental to safety-critical systems like nuclear power plants, spacecraft and aviation. Last but not least, an AV must know when it has entered operating conditions under which it cannot guarantee safe driving behaviors, and take action to mitigate the situation as soon as viable (e.g., by slowing down, pulling over to the side of the road, and stopping with flashers on). We will also supplement the system using vehicular communications technologies (like C-V2X) promoted by the US DOT to warn other surrounding vehicles and infrastructure when safety concerns arise (such as disabled vehicles, malfunctioning traffic signals and work zones set up to clear debris from an automotive crash).
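As a concrete illustration of the "know your limits and react" behavior described above, here is a minimal sketch of an operating-condition monitor that escalates to conservative fallback actions. The condition names, thresholds, and fallback labels are assumptions for illustration, not the proposed system's design.

```python
# Minimal sketch (illustrative assumptions): map sensed operating conditions
# to a conservative fallback behavior, most severe condition first.
from dataclasses import dataclass


@dataclass
class OperatingConditions:
    visibility_m: float        # estimated visibility from camera/lidar (assumed)
    camera_confidence: float   # 0..1 detector confidence under current lighting/weather (assumed)
    in_work_zone: bool         # whether a work zone has been detected


def choose_fallback(cond: OperatingConditions) -> str:
    """Pick a conservative behavior when safe driving cannot be guaranteed."""
    if cond.camera_confidence < 0.3 or cond.visibility_m < 20:
        # Perception cannot be trusted: pull over, stop with flashers on,
        # and (in the proposed effort) warn nearby vehicles/infrastructure
        # over C-V2X.
        return "pull_over_and_stop"
    if cond.in_work_zone or cond.camera_confidence < 0.6:
        return "slow_down_and_increase_following_distance"
    return "continue_nominal_driving"


if __name__ == "__main__":
    print(choose_fallback(OperatingConditions(150.0, 0.9, False)))  # nominal
    print(choose_fallback(OperatingConditions(80.0, 0.5, True)))    # degraded
    print(choose_fallback(OperatingConditions(15.0, 0.2, False)))   # unsafe
```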
Expected Outputs
The holy grail of "Vision Zero", with zero fatalities and zero injuries, can be (nearly) accomplished only when vehicles can drive themselves, given the human tendency to be distracted and not always focused on the roadway. In turn, members of society who are legally blind or otherwise unable to drive themselves can experience a much better standard of living by not being dependent on others for their transportation needs. From an economic perspective, when vehicles drive themselves, those who commute to and from work can reclaim that time for productivity and enjoy the benefits of having a personal chauffeur. However, the journey to full autonomy has been longer than many even in the industry anticipated, meaning that there is a strong need to rethink how the all-too-common "edge" cases are handled. Our proposed effort aims to lay strong foundations and demonstrable capabilities that will enable safe operations and adoption of AVs. Most importantly, our proposed innovations must be realized on a functional but highly complex platform that is capable of driving on real-world road networks. The PI's group has testing permits from both state and city government agencies to test its AVs on public roads. As such, we are uniquely positioned to demonstrate our capabilities to Safety21 sponsors, the research and user communities, as well as visitors to Safety21 and CMU. We will work closely with the Autoware Foundation, a non-profit focused on open-source AV software, to develop and promote the transformational capabilities we propose to research, develop, test and deploy.
TRID
Individuals Involved
| Email | Name | Affiliation | Role | Position |
| --- | --- | --- | --- | --- |
| rajkumar@cmu.edu | Rajkumar, Raj | ECE | PI | Faculty - Tenured |
Budget
Amount of UTC Funds Awarded
$300,000.00
Total Project Budget (from all funding sources)
$6,000,000.00
Documents
Match Sources
No match sources!
Partners
| Name | Type |
| --- | --- |
| RIDC | Deployment Partner |