Abstract
This project will explore several potential applications of image processing, including NN/deep learning technologies, to the analysis of traffic scenes involving passenger and transit vehicles. We outline three potential applications below; the exact distribution of effort and topics addressed will depend on the availability of student and research staff.
1. Investigating the Safety and Robustness of End-to-End Vision-Based Automated Driving Systems
2. Methodologies for extracting information from transit vehicle video for traffic flow estimation
3. Optical flow for automated vehicle lane following
Description
1. Investigating the safety and robustness of end-to-end vision-based automated driving systems
Increasing the mobility of people and goods via automated transportation requires robust and safe intelligent vehicle systems. Computer vision is one field that can open new frontiers toward this end. The exteroceptive capabilities of vision-based scene perception algorithms are approaching human-level performance. However, mapping the image space to the vehicle control space can still be considered an open problem. More recent, end-to-end Deep Reinforcement Learning (DRL) based automated driving algorithms have shown promising results for mapping image pixels to vehicle control signals. However, pure learning-based approaches lack the hard-coded safety measures of their model-based counterparts. We propose a hybrid approach that integrates a model-based path planning pipeline into a vision-based DRL framework, combining the strengths of both and alleviating the shortcomings of each. Our overall research objective is to develop novel algorithms that integrate planning into deep reinforcement learning frameworks. We will identify the variables needed for our methodologies and develop, evaluate, and over time improve algorithms to increase the robustness of learning-based autonomous driving systems.
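As a minimal illustration of the kind of hybrid structure described above, the following Python sketch has a model-based planner supply waypoints that are fused with image features before a learned policy acts, with a hard-coded safety layer able to override the learned command. All class names, feature sizes, and thresholds are hypothetical placeholders, not the project's actual implementation.

# Hypothetical sketch of a hybrid DRL + model-based planning control loop.
# Names and values are illustrative assumptions only.
import numpy as np

class ModelBasedPlanner:
    """Stand-in for a classical path planner (e.g. a spline or graph-search planner)."""
    def plan(self, vehicle_state):
        # Return the next few waypoints relative to the vehicle as (x, y) pairs.
        return np.array([[5.0, 0.1], [10.0, 0.3], [15.0, 0.6]])

class DRLPolicy:
    """Stand-in for a learned policy mapping (image features, waypoints) to controls."""
    def act(self, image_features, waypoints):
        # A trained network would go here; we fake steering toward the first waypoint.
        steering = np.clip(waypoints[0, 1] / waypoints[0, 0], -1.0, 1.0)
        throttle = 0.5
        return np.array([steering, throttle])

def safety_filter(action, min_ttc, ttc_threshold=2.0):
    """Hard-coded safety measure: cut throttle if time-to-collision is too small."""
    steering, throttle = action
    if min_ttc < ttc_threshold:
        return np.array([steering, 0.0])
    return action

if __name__ == "__main__":
    planner, policy = ModelBasedPlanner(), DRLPolicy()
    image_features = np.zeros(128)            # placeholder CNN embedding
    waypoints = planner.plan(vehicle_state=None)
    action = policy.act(image_features, waypoints)
    action = safety_filter(action, min_ttc=1.5)
    print("control command (steering, throttle):", action)

The design intent captured here is that the planner's waypoints constrain and inform the learned policy, while the rule-based filter retains the hard safety guarantees that a pure learning approach lacks.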
2. Methodologies for extracting information from transit vehicle video for traffic flow estimation
Transit vehicles cover a nontrivial fraction of the road network as they follow their assigned routes, so they have the potential to act as traffic probes providing observations distributed over significant periods of time and space. Many transit systems have installed cameras and video capture systems that, as a byproduct of their primary monitoring and safety functions, also capture traffic scenes and the presence and motion of vehicles, bicycles, and pedestrians surrounding the transit vehicle. We will explore working with image processing research collaborators at CMU and OSU to share video collected from the OSU TTM CABS campus transit system, identify the variables needed for our traffic flow estimation methodologies, and develop, evaluate, and over time improve algorithms and methodologies to automatically process CABS video data. These results will be evaluated and compared with results generated using our existing techniques. Possibilities for expansion into new traffic variables, for example vehicle speeds, which are currently obtained from cost-prohibitive LIDAR sensors, will be explored. We expect the developments and applications of this activity to deliver useful information to a stakeholder (OSU TTM) and to demonstrate the potential for large-scale implementation.
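The sketch below illustrates one possible aggregation step under stated assumptions: an upstream detector (not shown) already yields per-frame vehicle and pedestrian counts, and each frame is tagged with the transit vehicle's timestamp and GPS position. Field names, the time window, and the binning scheme are illustrative only, not the CABS processing pipeline itself.

# Hypothetical aggregation of per-frame detections from transit video into
# coarse traffic-flow observations. All field names are assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FrameObservation:
    timestamp_s: float       # seconds since start of the trip
    lat: float               # bus GPS latitude
    lon: float               # bus GPS longitude
    vehicle_count: int       # vehicles detected in this frame
    pedestrian_count: int    # pedestrians detected in this frame

def aggregate_flow(observations, window_s=60.0):
    """Average detected road users per frame, binned into fixed time windows."""
    bins = defaultdict(list)
    for obs in observations:
        bins[int(obs.timestamp_s // window_s)].append(obs)
    summary = {}
    for window, frames in sorted(bins.items()):
        n = len(frames)
        summary[window] = {
            "mean_vehicles_per_frame": sum(f.vehicle_count for f in frames) / n,
            "mean_pedestrians_per_frame": sum(f.pedestrian_count for f in frames) / n,
            "frames": n,
        }
    return summary

if __name__ == "__main__":
    demo = [FrameObservation(t, 40.0, -83.0, t % 4, t % 2) for t in range(0, 180, 10)]
    for window, stats in aggregate_flow(demo).items():
        print(f"minute {window}: {stats}")

Because each observation also carries a GPS position, the same binning could be done by route segment rather than by time, which is closer to the probe-vehicle use case described above.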
3. Optical flow for automated vehicle lane following
We will examine the application of optical flow with sliding mode control for vehicle navigation and compare traditional methods of optical flow extraction to NN/deep learning approaches. This will initially be done in CARLA, a driving simulator with virtual reality environment capabilities, and may later be extended to the Transportation Research Center testing facility. Topics of interest include optical flow and the focus of expansion, road and road-boundary potential field design, and gradient-tracking sliding mode controllers for lateral and longitudinal control.
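As a rough sketch of the traditional end of that comparison, the code below computes dense optical flow with OpenCV's Farneback method and estimates the focus of expansion (FOE) by least squares: under pure forward translation each flow vector (u, v) at pixel (x, y) points away from the FOE, so u*(y - y_f) - v*(x - x_f) is approximately zero, which is linear in (x_f, y_f). A learned flow network could be substituted for the Farneback call for the NN/deep-learning comparison. Function names, thresholds, and the synthetic test are assumptions for illustration, not the project's perception stack or controller.

# Hypothetical sketch: classical dense optical flow plus a least-squares
# focus-of-expansion estimate. Names and parameters are illustrative.
import cv2
import numpy as np

def dense_flow(prev_gray, next_gray):
    """Farneback dense optical flow; returns an (H, W, 2) array of (u, v)."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

def estimate_foe(flow, mag_threshold=1.0):
    """Least-squares FOE from flow vectors with sufficient magnitude."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    mask = np.hypot(u, v) > mag_threshold
    # Each row enforces v*x_f - u*y_f = v*x - u*y (flow parallel to p - FOE).
    A = np.stack([v[mask], -u[mask]], axis=1)
    b = v[mask] * x[mask] - u[mask] * y[mask]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (x_f, y_f) in pixel coordinates

if __name__ == "__main__":
    # Synthetic check: build an expanding flow field around a known FOE and
    # confirm the estimator recovers it (real use would call dense_flow on
    # consecutive camera frames instead).
    h, w, foe_true = 240, 320, (160.0, 120.0)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    flow = np.dstack([0.05 * (xs - foe_true[0]), 0.05 * (ys - foe_true[1])])
    print("estimated FOE (x, y):", estimate_foe(flow))

The estimated FOE gives the instantaneous heading direction in the image, which is the perceptual input a lane-following sliding mode controller would track.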
Timeline
Deployment Plan
Expected Accomplishments and Metrics
Individuals Involved
Email | Name | Affiliation | Role | Position
hillstrom.7@osu.edu | Hillstrom, Stacy | The Ohio State University | Other | Other
redmill.1@osu.edu | Redmill, Keith | The Ohio State University | PI | Other
yurtsever.2@osu.edu | Yurtsever, Ekim | The Ohio State University | Co-PI | Faculty - Researcher/Post-Doc
Budget
Amount of UTC Funds Awarded
$327,425.00
Total Project Budget (from all funding sources)
$516,517.00
Documents
Type | Name | Uploaded
Data Management Plan | dmp-Redmill-2019.docx | May 1, 2019, 1:57 p.m.
Project Brief | Redmill_et_a_2019_slides.pptx | May 10, 2019, 8:07 a.m.
Progress Report | 304_Progress_Report_2019-09-30 | Oct. 1, 2019, 7:52 a.m.
Publication | Visual potential field based control for autonomous navigation in unseen environment | March 31, 2020, 2:57 p.m.
Publication | OpticalFlow_IV2020_final.pdf | April 3, 2021, 12:54 p.m.
Publication | Risky_Action_Recognition_In_Lane_Change_Video_Clips_Using_Deep_Spatiotemporal_Networks_With_Segmentation_Mask_Transfers.pdf | March 31, 2020, 3:06 p.m.
Progress Report | 304_Progress_Report_2020-03-31 | March 31, 2020, 3:07 p.m.
Publication | Photorealism_in_Driving_Simulations_Blending_Generative_Adversarial_Image_Synthesis_With_Rendering.pdf | Oct. 3, 2022, 8:44 a.m.
Publication | IV2020_0252_FI3.pdf | April 3, 2021, 12:54 p.m.
Publication | sensors-21-04608.pdf | Oct. 6, 2021, 1:33 p.m.
Progress Report | 304_Progress_Report_2020-09-30 | Sept. 30, 2020, 4:30 p.m.
Publication | Faraway-Frustum_Dealing_with_Lidar_Sparsity_for_3D_Object_Detection_using_Fusion.pdf | March 29, 2022, 8:07 a.m.
Publication | Predicting_Pedestrian_Crossing_Intention_With_Feature_Fusion_and_Spatio-Temporal_Attention.pdf | Oct. 3, 2022, 8:45 a.m.
Publication | A_Modeled_Approach_for_Online_Adversarial_Test_of_Operational_Vehicle_Safety.pdf | March 29, 2022, 8:10 a.m.
Progress Report | 304_Progress_Report_2021-03-31 | April 3, 2021, 12:57 p.m.
Publication | Temp-Frustum_Net_3D_Object_Detection_with_Temporal_Fusion.pdf | March 29, 2022, 8:16 a.m.
Progress Report | 304_Progress_Report_2021-09-30 | Oct. 6, 2021, 1:36 p.m.
Publication | Integrating_Deep_Reinforcement_Learning_with_Model-based_Path_Planners_for_Automated_Driving.pdf | March 29, 2022, 8:16 a.m.
Publication | Pedestrian_Emergence_Estimation_and_Occlusion-Aware_Risk_Assessment_for_Urban_Autonomous_Driving.pdf | March 29, 2022, 8:18 a.m.
Progress Report | 304_Progress_Report_2022-03-30 | March 29, 2022, 8:20 a.m.
Publication | A finite-sampling, operational domain specific, and provably unbiased connected and automated vehicle safety metric | May 2, 2022, 9:47 a.m.
Publication | A Formal Safety Characterization of Advanced Driver Assist Systems in the Car-Following Regime with Scenario-Sampling | May 2, 2022, 9:48 a.m.
Publication | A Formal Characterization of Black-Box System Safety Performance with Scenario Sampling | May 2, 2022, 9:48 a.m.
Publication | Towards Guaranteed Safety Assurance of Automated Driving Systems with Scenario Sampling: An Invariant Set Perspective | May 2, 2022, 9:49 a.m.
Publication | A modeled approach for online adversarial test of operational vehicle safety | May 2, 2022, 9:49 a.m.
Publication | Towards Guaranteed Safety Assurance of Automated Driving Systems with Scenario Sampling: An Invariant Set Perspective (Extended Version) | May 2, 2022, 9:50 a.m.
Publication | Point_Cloud_Registration_with_Object-Centric_Alignment.pdf | Oct. 3, 2022, 8:51 a.m.
Progress Report | 304_Progress_Report_2022-09-30 | Oct. 3, 2022, 9:09 a.m.
Progress Report | 304_Progress_Report_2023-03-31 | March 31, 2023, 1:30 p.m.
Match Sources
No match sources!
Partners
Name | Type
The Ohio State University | Deployment Partner
Technical University of Munich | Deployment Partner