#304 Image Processing Approaches to Traffic Situation Understanding, Risk Assessment, and Safety

Principal Investigator
Keith Redmill
Start Date
March 1, 2019
End Date
June 30, 2022
Research Type
Grant Type
Grant Program
FAST Act - Mobility National (2016 - 2022)
Grant Cycle
Mobility21 - The Ohio State University


This project will explore several potential applications of image processing, including NN/deep learning technologies, to the analysis of traffic scenes involving passenger and transit vehicles.  We outline three potential applications below; the exact distribution of effort and topics addressed will depend on the availability of student and research staff.
1.	Investigating the Safety and Robustness of End-to-End Vision-Based Automated Driving Systems
2.	Methodologies for extracting information from transit vehicle video for traffic flow estimation
3.	Optical flow for automated vehicle lane following    
1.	Investigating the safety and robustness of end-to-end vision-based automated driving systems
Increasing the mobility of people and goods via automated transportation requires robust and safe intelligent vehicle systems. Computer vision is one field that can open new frontiers toward this end. The exteroceptive capabilities of vision-based scene perception algorithms are approaching human-level performance. However, mapping the image space to the vehicle control space can still be considered an open problem. More recent, end-to-end Deep Reinforcement Learning (DRL) based automated driving algorithms have shown promising results for mapping image pixels to vehicle control signals. However, pure learning-based approaches lack the hard-coded safety measures of their model-based counterparts. We propose a hybrid approach that integrates a model-based path-planning pipeline into a vision-based DRL framework to alleviate the shortcomings of each. Our overall research objective is to develop novel algorithms that integrate planning into deep reinforcement learning frameworks. We will identify the variables needed for our methodologies and develop, evaluate, and over time improve algorithms to increase the robustness of learning-based autonomous driving systems.
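One way to combine a learned policy with a model-based plan, sketched minimally below, is a safety filter that bounds how far the DRL action may deviate from the planner's reference action. This is only an illustration of the hybrid idea, not the project's actual algorithm; the action layout, the deviation bound, and both input vectors are hypothetical.

```python
import numpy as np

def safety_filter(learned_action, planned_action, max_deviation=0.2):
    """Bound the learned action to a box around the model-based reference.

    If the DRL policy's output strays more than max_deviation from the
    planner's action in any dimension, it is pulled back to that bound.
    """
    deviation = learned_action - planned_action
    clipped = np.clip(deviation, -max_deviation, max_deviation)
    return planned_action + clipped

# Hypothetical outputs: [steering, throttle] from the DRL policy and
# the reference produced by the model-based path-planning pipeline.
learned = np.array([0.45, 0.9])
planned = np.array([0.10, 0.6])

safe = safety_filter(learned, planned)  # steering pulled back toward the plan
```

In a real system the bound would itself come from the planner (e.g. distance to lane boundaries or obstacles) rather than a fixed constant.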

2.	Methodologies for extracting information from transit vehicle video for traffic flow estimation
Transit vehicles cover a nontrivial fraction of the road network as they follow their assigned routes and have the potential to act as traffic probes providing observations distributed over significant periods of time and space.  Many transit systems have installed cameras and video capture systems that, as a byproduct of their primary monitoring and safety functions, also capture traffic scenes and the presence and motion of vehicles, bicycles, and pedestrians surrounding the transit vehicle.  We will explore working with image processing research collaborators at CMU and OSU to share video collected from the OSU TTM CABS campus transit system, identify the variables needed for our traffic flow estimation methodologies, and develop, evaluate, and over time improve algorithms and methodologies to automatically process CABS video data.  These results will be evaluated and compared with results generated using our existing techniques.  Possibilities for expansion into new traffic variables will be explored, for example extracting vehicle speeds, which are currently obtained from cost-prohibitive LIDAR sensors.  We expect the developments and applications of this activity to allow us to deliver useful information to a stakeholder (OSU TTM) and to demonstrate the potential for large-scale implementation.
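Once vehicles have been detected and tracked in the video, a common way to turn tracks into a flow estimate is to count crossings of a virtual counting line in the image. The sketch below assumes tracked vehicle positions are already available (e.g. x-coordinates of bounding-box centroids per frame); the data structure and values are hypothetical, not the project's pipeline.

```python
def count_line_crossings(tracks, line_x):
    """Count vehicles whose tracked x-position crosses a virtual counting line.

    tracks: dict mapping track_id -> list of x-coordinates over successive frames.
    Returns the number of distinct tracks that cross line_x in either direction.
    """
    crossings = 0
    for positions in tracks.values():
        for prev, cur in zip(positions, positions[1:]):
            if (prev - line_x) * (cur - line_x) < 0:  # sign change => crossing
                crossings += 1
                break  # count each vehicle at most once
    return crossings

# Hypothetical centroid tracks (pixels) from three tracked vehicles.
tracks = {
    1: [10, 30, 55, 80],   # crosses the line at x=50
    2: [90, 70, 60, 58],   # stays on one side
    3: [20, 45, 52, 60],   # crosses
}
flow_count = count_line_crossings(tracks, line_x=50)
```

Dividing the crossing count by the observation interval gives a flow rate; speed estimation would additionally need a camera calibration mapping pixels to road distance.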

3.	Optical flow for automated vehicle lane following
We will examine the application of optical flow with sliding mode control for vehicle navigation and compare traditional methods of optical flow extraction to NN/Deep Learning approaches.  This will initially be done in CARLA, a driving simulator with virtual reality environment capabilities, and may later be extended to the Transportation Research Center testing facility.  Topics of interest include optical flow and the focus of expansion, road and road-boundary potential field design, and a gradient-tracking sliding-mode controller for lateral and longitudinal control.
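For the focus-of-expansion topic mentioned above, one standard estimate comes from the fact that, under pure translation, each optical-flow vector points away from the focus of expansion, so each vector defines a line through its pixel and the FoE is the least-squares intersection of those lines. The sketch below illustrates that computation on a synthetic radial flow field; it is a generic textbook method, not necessarily the estimator the project will use.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion from a set of flow vectors.

    Each flow vector v at point p should satisfy (p - foe) x v = 0
    (cross product), i.e.  v_y * foe_x - v_x * foe_y = v_y * p_x - v_x * p_y.
    Stacking one such equation per vector gives an overdetermined linear
    system solved in the least-squares sense.
    """
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic flow field expanding radially from a known point (forward motion).
true_foe = np.array([320.0, 240.0])
rng = np.random.default_rng(0)
pts = rng.uniform(0, 640, size=(50, 2))
flows = 0.05 * (pts - true_foe)  # pure expansion, no noise

est = focus_of_expansion(pts, flows)  # recovers true_foe
```

With real (noisy) flow, robust variants such as RANSAC over the same line constraints are typically needed before the FoE can drive a lateral controller.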
Deployment Plan
Expected Accomplishments and Metrics

Individuals Involved

Email Name Affiliation Role Position
hillstrom.7@osu.edu Hillstrom, Stacy The Ohio State University Other Other
redmill.1@osu.edu Redmill, Keith The Ohio State University PI Other
yurtsever.2@osu.edu Yurtsever, Ekim The Ohio State University Co-PI Faculty - Researcher/Post-Doc


Amount of UTC Funds Awarded
Total Project Budget (from all funding sources)


Type Name Uploaded
Data Management Plan dmp-Redmill-2019.docx May 1, 2019, 1:57 p.m.
Project Brief Redmill_et_a_2019_slides.pptx May 10, 2019, 8:07 a.m.
Progress Report 304_Progress_Report_2019-09-30 Oct. 1, 2019, 7:52 a.m.
Publication Visual potential field based control for autonomous navigation in unseen environment March 31, 2020, 2:57 p.m.
Publication OpticalFlow_IV2020_final.pdf Sept. 30, 2020, 4 p.m.
Publication Risky_Action_Recognition_In_Lane_Change_Video_Clips_Using_Deep_Spatiotemporal_Networks_With_Segmentation_Mask_Transfers.pdf March 31, 2020, 3:06 p.m.
Progress Report 304_Progress_Report_2020-03-31 March 31, 2020, 3:07 p.m.
Publication 2007.15820.pdf Sept. 30, 2020, 4:04 p.m.
Publication IV2020_0252_FI3.pdf Sept. 30, 2020, 4:08 p.m.
Publication TII-20-3544_Proof_hi.pdf Sept. 30, 2020, 4:07 p.m.
Progress Report 304_Progress_Report_2020-09-30 Sept. 30, 2020, 4:30 p.m.

Match Sources

No match sources!


Name Type
The Ohio State University 1