#339 Vehicle Trajectory and Gap Estimation for Conflict Prediction


Principal Investigator
Alex Hauptmann
Status
Active
Start Date
July 1, 2020
End Date
Dec. 31, 2021
Research Type
Applied
Grant Type
Research
Grant Program
FAST Act - Mobility National (2016 - 2022)
Grant Cycle
2020 Mobility21 UTC
Visibility
Public

Abstract

We propose a system to identify and measure vehicle trajectories, speeds, and inter-vehicle gaps using video feeds from arbitrary traffic surveillance cameras. The potential impact on transportation safety is the ability to detect crashes in real time and to capture near-crashes and their context. Real-time analysis allows for immediate notification and for detecting traffic density and speeds. Alerting safety planners to near-miss crash events and related contextual information could provide critical input for safety enhancement through appropriate infrastructure safety modifications.
Description
We propose to build a system to identify and measure vehicle trajectories, speeds, inter-vehicle gaps, and vehicle-pedestrian gaps using video feeds from arbitrary traffic surveillance cameras, most of which will not have been specially calibrated and for which no training data is available. Based on previous research, it appears possible to create a system that can monitor a video stream in real time for this purpose.
The potential impact of such a system on transportation safety lies in its ability to detect car crashes and capture near-crashes in real time, allowing for immediate notification, detection of traffic density and speeds, and alerting of safety planners to near-miss crash events and related contextual information about hazards that could be mitigated through appropriate infrastructure safety modifications. The approach is to automatically estimate the intrinsic and extrinsic camera parameters, which allows a 3-D reconstruction of the camera's field of view. Given this reconstruction, exact estimates of the location, speed, and spacing of vehicles and other traffic participants become possible.
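As an illustration, once detections are mapped to world coordinates on the road plane, speed follows directly from finite differences of successive positions. This is a minimal sketch; the function name and units are our own assumptions, not the project's actual interface:

```python
import numpy as np

def speed_mps(prev_xy, curr_xy, dt):
    """Vehicle speed from two consecutive road-plane positions.

    prev_xy, curr_xy: (x, y) world coordinates in metres, dt seconds apart.
    Returns speed in metres per second.
    """
    displacement = np.asarray(curr_xy, dtype=float) - np.asarray(prev_xy, dtype=float)
    return float(np.linalg.norm(displacement) / dt)
```

In practice a tracker would smooth these estimates over several frames to suppress detection jitter.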
Based on the 3-D reconstruction of the road plane and the prediction of trajectories, the method will achieve prediction, detection, and measurement of relevant vehicle and vehicle-pedestrian interaction events without specific labeled training data. Since gap distance is important for predicting vehicle flow, measuring it from a roadside camera can be very useful for traffic operations.
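A minimal sketch of how gap distance and the corresponding time headway could be computed once both vehicles' bumper positions are known on the road plane. The function names and bumper-point inputs are illustrative assumptions, not the project's actual interface:

```python
import numpy as np

def longitudinal_gap(lead_rear_xy, follow_front_xy):
    """Bumper-to-bumper gap between two vehicles on the road plane.

    lead_rear_xy: (x, y) world point of the leading vehicle's rear bumper.
    follow_front_xy: (x, y) of the following vehicle's front bumper.
    Both would come from the 3-D reconstruction. Returns metres.
    """
    diff = np.asarray(lead_rear_xy, dtype=float) - np.asarray(follow_front_xy, dtype=float)
    return float(np.linalg.norm(diff))

def time_headway(gap_m, follower_speed_mps):
    """Time gap (seconds) at the follower's current speed."""
    return gap_m / follower_speed_mps if follower_speed_mps > 0 else float("inf")
```

A short time headway (commonly under about two seconds) is one simple indicator of a potential conflict.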
By examining the kinematic states in monocular 3-D representations of vehicles and pedestrians, our system will perform much better than a system based on hand-annotated training data from specific cameras and views.
The system will work in several steps. 
In step 1, we build an approach to robustly estimate the extrinsic and intrinsic camera parameters, which gives us a camera model that places the camera at the origin, at a distance equal to its focal length from the observed image. Within the world coordinate system, we can then estimate the road plane.
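Assuming a standard pinhole model with known intrinsics K and world-to-camera extrinsics (R, t), the mapping from an image pixel to the road plane z = 0 can be sketched as follows. This is illustrative only; in the project these parameters are recovered automatically rather than given:

```python
import numpy as np

def pixel_to_road_plane(u, v, K, R, t):
    """Back-project image pixel (u, v) onto the road plane z = 0.

    K: 3x3 intrinsic matrix; R (3x3), t (3,): world-to-camera transform.
    Returns the (x, y) world point where the viewing ray meets the road.
    """
    # Viewing-ray direction of the pixel, first in camera then world frame
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t                # camera position in world frame
    # Intersect the ray with the plane z = 0
    s = -cam_center[2] / ray_world[2]
    point = cam_center + s * ray_world
    return point[:2]
```

Once every detection footprint is mapped through this function, distances and speeds can be measured in metric world units rather than pixels.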
Step 2 is the reliable detection and tracking of vehicles and pedestrians, for which several approaches have been proposed. Methods also exist to estimate a 3-D bounding box around a vehicle or a pedestrian, which we can then transform into our world coordinate system. 
Step 3 is the trajectory prediction of the vehicle or the pedestrian. This will be based on our previous work on "Predicting Future Person Activities and Locations in Videos" presented at CVPR 2019.
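As a stand-in for the learned predictor described above, a constant-velocity baseline over world-coordinate tracks can be sketched as follows (the function and sampling interval are assumptions for illustration, not the CVPR 2019 model):

```python
import numpy as np

def predict_constant_velocity(track, horizon, dt=1.0):
    """Extrapolate a road-plane track assuming constant velocity.

    track: (N, 2) array of past (x, y) positions sampled every dt seconds,
    with N >= 2. Returns a (horizon, 2) array of predicted positions.
    """
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt        # latest velocity estimate
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    return track[-1] + steps * velocity * dt
```

Predicted trajectories of two road users can then be checked for intersection within a short time horizon to flag a potential conflict.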
After the core system is implemented and made to work robustly over several data sets of traffic accidents and real traffic cameras, we will improve the efficiency of the analytics process to allow real-time results with minimal latency on a single computational device.
In later stages of the research, we will investigate scenarios where the vehicle crashes are due to changes in heading or speed.  
Finally, we will consider extending the work to pan, tilt, zoom cameras which have a changing field of view.      
Timeline
Q1 - Collection of publicly available video data sets recorded from traffic cameras, definition of base test data
        Implementation of automatic camera parameter estimation and world coordinate mapping of the road plane
        Initial car and pedestrian detection with 3D bounding boxes
        Initial speed and trajectory predictions
        Coordination with FHWA partners on the key desired functionality and metrics of interest
        Milestone: baseline accuracy of speed, location, trajectory estimations
Q2 - Analysis of car and pedestrian interactions
        Expanding to more diverse traffic data sets
        Improving efficiency to real-time processing with low latency
Q3 - Analysis of car motions with changes of heading and speed
        Analysis of the context of car crashes in the datasets
Q4 - Extending the system to pan/tilt/zoom cameras
        Documenting the system code
        Making the code portable using Docker images
        Report on final evaluations of developed system     
Deployment Plan
We expect to deliver code to the FHWA partner at the end of every quarter. 
Based on their feedback, we will modify our research plan to accommodate their suggestions.
Expected Accomplishments and Metrics
Accurate camera parameter estimation and 3D reconstruction using the ground plane
Accurate pedestrian and vehicle detection 
3D bounding boxes for the detected objects
Accurate speed estimation
Accurate trajectory prediction 
Successful transfer of research models and code to FHWA
    

Individuals Involved

Email Name Affiliation Role Position
alex@cs.cmu.edu Hauptmann, Alex CMU PI Faculty - Researcher/Post-Doc
lijun@cmu.edu Yu, Lijun CMU Other Student - Masters

Budget

Amount of UTC Funds Awarded
$98,551.00
Total Project Budget (from all funding sources)
$98,551.00

Documents

Type Name Uploaded
Data Management Plan DataManagementPlan_Vehicle_Trajectory_and_Gap_Estimation_for_Conflict_Prediction.pdf March 26, 2020, 5:54 a.m.
Publication Traffic Danger Recognition With Surveillance Cameras Without Training Data Sept. 30, 2020, 3:07 p.m.
Publication ELECTRICITY: An efficient multi-camera vehicle tracking system for intelligent city Sept. 30, 2020, 3:07 p.m.
Publication Training-free monocular 3d event detection system for traffic surveillance Sept. 30, 2020, 3:07 p.m.
Publication Argus: Efficient Activity Detection System for Extended Video Analysis Sept. 30, 2020, 3:07 p.m.
Publication Qian_Adaptive_Feature_Aggregation_for_Video_Object_Detection_WACVW_2020_paper2.pdf Sept. 30, 2020, 3:07 p.m.
Presentation Zero-VIRUS: Zero-shot vehicle route understanding system for intelligent transportation Sept. 30, 2020, 3:09 p.m.
Presentation ELECTRICITY: An efficient multi-camera vehicle tracking system for intelligent city Sept. 30, 2020, 3:24 p.m.
Presentation Argus: Efficient Activity Detection System for Extended Video Analysis Sept. 30, 2020, 3:24 p.m.
Presentation Training-free monocular 3d event detection system for traffic surveillance Sept. 30, 2020, 3:24 p.m.
Presentation Traffic Danger Recognition with Surveillance Cameras Without Training Data Sept. 30, 2020, 3:24 p.m.
Progress Report 339_Progress_Report_2020-09-30 Sept. 30, 2020, 3:37 p.m.
Publication Accident forecasting in CCTV traffic camera videos Feb. 28, 2021, 5:54 a.m.
Publication CMU Informedia at TRECVID 2020: Activity Detection with Dense Spatio-temporal Proposals March 30, 2021, 6:18 p.m.
Presentation CMU Informedia at TRECVID 2020: Activity Detection with Dense Spatio-temporal Proposals March 30, 2021, 6:18 p.m.
Presentation Real-time Activity Detection in Unknown Facilities with Dense Spatio-temporal Proposals March 30, 2021, 6:18 p.m.
Progress Report 339_Progress_Report_2021-03-31 March 30, 2021, 6:19 p.m.

Match Sources

No match sources!

Partners

Name Type
FHWA Deployment Partner
Common Caches LLC Deployment Partner
Quality Counts LLC Deployment Partner