#582 ViCARUS: Vision-centric Connected Autonomy for Vulnerable Road Users' Safety


Principal Investigator: Vijayakumar Bhagavatula
Status: Active
Start Date: July 1, 2025
End Date: June 30, 2026
Project Type: Research Applied
Grant Program: US DOT BIL, Safety21, 2023 - 2028 (4811)
Grant Cycle: Safety21 : 25-26
Visibility: Public

Abstract

Motivation
In the United States, pedestrian fatalities have risen dramatically, with a 57% increase from 4,779 deaths in 2013 to 7,522 in 2022 (NHTSA, 2024). Intersections remain particularly dangerous, accounting for approximately 25% of all traffic fatalities and 50% of injuries (FHWA, 2024). Vulnerable Road Users (VRUs), including pedestrians, cyclists, and motorcyclists, are especially at risk in these scenarios. At the same time, early road tests suggest that autonomous driving systems have the potential to outperform human drivers in terms of safety (Waymo Safety Report, 2024). However, our previous work under the Safety21 UTC project #440, Connected Vision for Improved Pedestrian Safety (CVIPS) (Bhagavatula et al., 2024), has shown that many state-of-the-art computer vision algorithms exhibit significant performance degradation when detecting VRUs on autonomous driving benchmarks such as nuScenes (nuScenes detection task, 2020), as shown in Fig. 1 of the supplementary materials.

Recent studies suggest that connected autonomous vehicles using V2X communication can offer enhanced capability in detecting and responding to potentially safety-critical scenarios (Gao et al., 2022), particularly in complex urban environments. However, current V2X datasets and methods are largely limited in scope to single-vehicle detection. Extending them by benchmarking perception performance for VRUs and other road users in a multi-agent environment is a critical opportunity for improving road safety (Xiang et al., 2024). In addition, as part of CVIPS (Bhagavatula et al., 2024), we simulated the impact of communication limitations (Shenkut & Vijaya Kumar, 2024) such as latency, bandwidth constraints, and communication interruptions; Figs. 3 and 4 of the supplementary materials show these impacts in different V2X scenarios. As shown in Fig. 3, a communication interruption from even a single agent causes roughly 25% performance degradation. Fig. 4 illustrates the impact of communication latency, where a delay of about 200 ms causes a 10% drop in detection performance. These communication limitations make it hard to consistently detect and track VRUs, especially in scenarios with occluded views and varying weather conditions. Building upon the findings from CVIPS, ViCARUS aims to enhance VRU safety through four primary research thrusts:

Primary Research Thrusts
1. Investigation of Detection Performance Gap: Extending our CVIPS findings on the inferior VRU detection performance of camera-only systems by analyzing the disparity in detection performance between camera-only systems and sensor fusion methods, studying the specific scenarios where detection gaps are most pronounced, and developing mechanisms to minimize these gaps through cooperative perception.

2. Cooperative Perception Framework Development: Implementing a connected perception approach in which cameras in vehicles and infrastructure units share information via V2X to improve VRU detection performance (a simplified late-fusion sketch, including C-V2X impairments, follows this list).

3. Simulation-to-Real Adaptation: Creating comprehensive synthetic datasets using CARLA; developing scenarios not well represented in existing datasets; leveraging existing real-world datasets such as WTS (Kong et al., 2024), V2X-Real (Xiang et al., 2024), and TUMTraf V2X (Zimmer et al., 2024); designing adaptation techniques to bridge the gap between synthetic training data and real-world performance; and developing computationally lightweight methods suitable for real-world deployment.

4. V2X Communication Analysis: Using advanced C-V2X simulators (e.g., the Fraunhofer IIS C-V2XSim platform (Simulation C-V2X, 2024)) to investigate the impact of critical C-V2X parameters such as bandwidth, latency, packet losses and error rates, and GNSS-related location errors, and to quantify the tradeoffs between communication constraints and system performance.
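As a rough, self-contained illustration of Thrusts 2 and 4 (not the project's actual implementation), the Python sketch below pools camera detections shared by multiple agents into a common frame and injects simple C-V2X impairments (message latency, packet loss, and GNSS position error) before fusion. All class names, thresholds, and impairment models here are placeholder assumptions.

```python
import random
from dataclasses import dataclass
from typing import List, Optional

import numpy as np

# Toy late-fusion pipeline with injected C-V2X impairments (latency, packet
# loss, GNSS error). Illustrative only; not the ViCARUS implementation.

@dataclass
class Detection:
    center: np.ndarray   # (x, y) position in meters
    score: float         # detector confidence
    cls: str             # e.g., "pedestrian", "cyclist" (placeholder labels)

@dataclass
class AgentMessage:
    pose: np.ndarray             # (x, y, yaw) of the sending agent in the shared frame
    detections: List[Detection]  # detections in the agent's local frame
    timestamp: float             # send time in seconds

def to_shared_frame(msg: AgentMessage) -> List[Detection]:
    """Rotate/translate an agent's local detections into the shared map frame."""
    x, y, yaw = msg.pose
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return [Detection(center=rot @ d.center + np.array([x, y]),
                      score=d.score, cls=d.cls) for d in msg.detections]

def degrade(msg: AgentMessage, now: float, latency_s: float,
            drop_prob: float, gnss_sigma_m: float) -> Optional[AgentMessage]:
    """Apply simplified C-V2X impairments to one agent's message."""
    if random.random() < drop_prob:       # packet loss: message never arrives
        return None
    if now - msg.timestamp < latency_s:   # latency: message not yet received
        return None
    noisy_pose = msg.pose.copy()
    noisy_pose[:2] += np.random.normal(0.0, gnss_sigma_m, size=2)  # GNSS error
    return AgentMessage(pose=noisy_pose, detections=msg.detections,
                        timestamp=msg.timestamp)

def fuse(messages: List[AgentMessage], now: float, merge_radius_m: float = 1.0,
         latency_s: float = 0.2, drop_prob: float = 0.1,
         gnss_sigma_m: float = 0.5) -> List[Detection]:
    """Late fusion: pool usable shared-frame detections, merge near-duplicates."""
    pooled: List[Detection] = []
    for msg in messages:
        usable = degrade(msg, now, latency_s, drop_prob, gnss_sigma_m)
        if usable is not None:
            pooled.extend(to_shared_frame(usable))
    fused: List[Detection] = []
    for det in sorted(pooled, key=lambda d: d.score, reverse=True):
        if all(np.linalg.norm(det.center - kept.center) > merge_radius_m
               or det.cls != kept.cls for kept in fused):
            fused.append(det)
    return fused
```

Sweeping the latency, drop-probability, and GNSS-noise parameters in such a loop while scoring the fused detections against ground truth is one simple way to produce sensitivity curves of the kind reported in CVIPS (e.g., the roughly 10% detection drop at 200 ms latency).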

Implementation Strategy
The project will be implemented through the following tasks:
1. Creation of synthetic image sequences and acquisition of existing real datasets covering different VRU scenarios (a minimal CARLA data-capture sketch follows this list)
2. Development and testing of deep learning-based cooperative perception algorithms on both synthetic and real datasets
3. Development of methods to adapt algorithms trained on synthetic data to real-world datasets (Sim2Real adaptation)
4. C-V2X parameter impact analysis
5. Evaluation under challenging imaging conditions and demonstration of safety improvements
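As a rough illustration of task 1, the sketch below shows how a CARLA client might spawn pedestrians and attach an infrastructure-style RGB camera to record VRU scenes under degraded weather. The blueprints, weather values, camera pose, and output paths are placeholder assumptions and do not reflect the project's actual data-generation pipeline.

```python
import random

import carla  # CARLA Python API; assumes a simulator running on localhost:2000

# Illustrative sketch of synthetic VRU data capture, not the project's pipeline.
# Walker AI controllers and vehicle traffic are omitted for brevity.

def spawn_intersection_scene(num_walkers: int = 20, out_dir: str = "_out"):
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    bp_lib = world.get_blueprint_library()

    # Challenging imaging condition: wet, foggy dusk (placeholder values).
    world.set_weather(carla.WeatherParameters(
        cloudiness=80.0, precipitation=60.0, fog_density=30.0,
        sun_altitude_angle=5.0))

    # Spawn pedestrians at random navigable locations.
    walkers = []
    for _ in range(num_walkers):
        bp = random.choice(bp_lib.filter("walker.pedestrian.*"))
        loc = world.get_random_location_from_navigation()
        if loc is None:
            continue
        actor = world.try_spawn_actor(bp, carla.Transform(loc))
        if actor is not None:
            walkers.append(actor)

    # Infrastructure-style RGB camera overlooking the scene (pose is a placeholder).
    cam_bp = bp_lib.find("sensor.camera.rgb")
    cam_bp.set_attribute("image_size_x", "1280")
    cam_bp.set_attribute("image_size_y", "720")
    cam_tf = carla.Transform(carla.Location(x=0.0, y=0.0, z=8.0),
                             carla.Rotation(pitch=-25.0))
    camera = world.spawn_actor(cam_bp, cam_tf)
    camera.listen(lambda image: image.save_to_disk(
        f"{out_dir}/{image.frame:06d}.png"))

    return camera, walkers
```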
Description

    
Timeline

    
Strategic Description / RD&T
Section left blank until USDOT’s new priorities and RD&T strategic goals are available in Spring 2026.
Deployment Plan
July – September 2025
1.	Technical Brief: Prepare and release a technical brief outlining ViCARUS motivation, methodology, and expected outcomes for improved vulnerable road user (VRU) safety through C-V2X communications.

2.	Project Website Launch: Deploy a dedicated project website featuring downloadable resources, interactive visualizations of detection performance gaps, and regular progress updates for stakeholders.

October – December 2025
1.	Open Dataset Release: Publish available real and simulated data alongside our new CARLA-based VRU dataset on Hugging Face (https://huggingface.co/) to enable open research. Include documentation for various scenarios and C-V2X configurations, with downloadable training samples (a minimal download example follows this quarter's items).

2.	Research Publication: Release a peer-reviewed paper detailing baseline cooperative perception algorithm performance across different VRU classes, with comparative analysis against current detection methods.

3.	Participate in Safety21 UTC's Deployment Partners Conference to showcase the progress of the ViCARUS project.
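Once the dataset is public, partners could retrieve it with standard Hugging Face tooling along the following lines; the repository identifier is a placeholder, since the final name has not yet been announced.

```python
from huggingface_hub import snapshot_download

# Placeholder repository id; the actual ViCARUS dataset name is not yet published.
local_path = snapshot_download(
    repo_id="vicarus/carla-vru-cooperative",  # hypothetical
    repo_type="dataset",
)
print("Dataset downloaded to", local_path)
```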

January – March 2026
1.	Hugging Face Model Repository: Deploy an open-access model repository featuring cooperative perception algorithms, pre-trained weights, inference examples, and integration guidelines for V2X-based approaches.

2.	Publication on adapting algorithms from simulation to real data: Publish research on Sim2Real adaptation techniques for connected vehicle systems with a focus on VRUs, accompanied by demonstration recordings and implementation guides for transportation research and deployment partners (one candidate approach is sketched below).
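One common Sim2Real strategy such a publication could evaluate is fine-tuning a synthetically pre-trained detector on a small set of labeled real frames. The PyTorch sketch below illustrates that generic idea only; the detector, frozen-backbone choice, and data loader are assumptions, not the project's models.

```python
import torch
from torch.utils.data import DataLoader
import torchvision

# Illustrative Sim2Real fine-tuning loop; 'real_dataset' is a hypothetical
# torch Dataset yielding (image, target) pairs in torchvision detection format.

def finetune_on_real(real_dataset, epochs: int = 5, lr: float = 1e-4):
    # A detector pre-trained on synthetic (CARLA) data would be loaded here;
    # COCO weights stand in for that checkpoint in this sketch.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()

    # Freeze the backbone so only the detection heads adapt to the real domain.
    for p in model.backbone.parameters():
        p.requires_grad = False

    loader = DataLoader(real_dataset, batch_size=2, shuffle=True,
                        collate_fn=lambda batch: tuple(zip(*batch)))
    optim = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr)

    for _ in range(epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)  # dict of detection losses
            loss = sum(losses.values())
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model
```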

April – June 2026
1.	Interactive Demo Platform: Launch a publicly accessible demo web application showcasing improved VRU detection through C-V2X cooperative perception. The platform will feature visualization of VRU detection with and without cooperative perception, an interactive upload capability allowing users to test the system with their own data samples, and a comparative performance dashboard highlighting detection accuracy, latency, and reliability metrics across different C-V2X communication scenarios.

2.	Deployment Guidebook Publication: Release a deployment guidebook with step-by-step implementation strategies, technical requirements, and best practices for deployment partners to adopt ViCARUS technology in their applications.

Expected Outcomes/Impacts
- Enhanced VRU safety through improved detection and prediction capabilities in real-world scenarios
- Robust VRU detection performance in degraded imaging conditions
- Practical insights into C-V2X system constraints and their impact on cooperative VRU perception
- Multi-agent perception dataset with various weather and lighting conditions
- Adaptation techniques to bridge the gap between synthetic training data and real-world performance
- Computationally lightweight methods suitable for real-world deployment

These outcomes will bring the vision of connected vehicles improving road safety, particularly for vulnerable road users, closer to reality.
Expected Outputs
The anticipated outputs from this project are as follows.

•	Synthetic image and image-sequence datasets that include vehicles, pedestrians, and other VRUs at intersections equipped with connected cameras, including under challenging imaging conditions.
•	Publications describing the deep learning algorithms for VRU trajectory estimation in images and image sequences collected by multiple cameras at intersections, subject to C-V2X constraints.
•	Software implementations of the developed deep learning algorithms.
•	Technical report quantifying the tradeoffs between multi-agent communication parameters and VRU trajectory estimation accuracy.
TRID


    

Individuals Involved

Email                    Name                       Affiliation                            Role   Position
kumar@ece.cmu.edu        Bhagavatula, Vijayakumar   CMU                                    PI     Faculty - Tenured
dshenkut@andrew.cmu.edu  Shenkut, Dereje            Carnegie Mellon School of Engineering  Other  Student - PhD

Budget

Amount of UTC Funds Awarded: $79,247.00
Total Project Budget (from all funding sources): $175,526.00

Documents

Type                  Name                           Uploaded
Data Management Plan  VICARUS-DMP_Rousqr6.pdf        March 21, 2025, 8:18 a.m.
Presentation          ViCARUS_Proposal_ODO0dke.pptx  March 21, 2025, 8:22 a.m.

Match Sources

No match sources!

Partners

Name            Type
General Motors  Deployment Partner