

#398 Automatic Detection and Understanding of Roadworks


Principal Investigator
Srinivasa Narasimhan
Status
Completed
Start Date
July 1, 2022
End Date
June 30, 2023
Project Type
Research Applied
Grant Program
FAST Act - Mobility National (2016 - 2022)
Grant Cycle
2022 Mobility21 UTC
Visibility
Public

Abstract

Roadwork zones present a serious impediment to vehicular mobility. Whether new construction or maintenance is taking place, work in road environments causes lower vehicle speeds, congestion, increased risk of rear-end collisions, and more difficult maneuvering. Crowd-sourced navigation systems like Waze warn drivers of roadworks, but those data must be entered manually, which distracts the driver. Google Maps now automatically shows roadworks, but those data are often slow to update and do not distinguish between active and inactive work zones or specify lane restrictions and changes. In the proposed work, we seek to address these issues by developing computer vision and machine learning methods that automatically identify and understand road work zones (e.g., a lane is closed and two lanes merge into one). The computed information can be shared with other drivers and can also enable dynamic route planning for navigation systems, driver-assist systems, and self-driving cars to maneuver efficiently and safely through or around road work zones. Moreover, a comprehensive view of road work activity in a region can be constructed from information shared by users. Such a view may prove to be a useful tool for optimizing traffic flow along detour routes (e.g., traffic lights stay green longer to accommodate the additional volume).
Description
Identifying roadworks from cameras (visual data), as well as from other sensors, is an extremely challenging problem because work sites are heterogeneous in appearance (Figure 1, supp.). No two roadwork sites (construction or maintenance) look alike. As a result, driver-assist and self-driving systems have difficulty navigating within these zones. For example, lane markings may not be changed during a lane shift, which can cause lane-keep-assist systems to follow the old markings and steer toward barriers or other objects. In this case, automatic recognition of a roadwork zone on approach could warn the driver and disable lane keep assist. To our knowledge, little work has been done in this area. Our approach is to detect objects commonly located within roadwork sites and, based on their location relative to surfaces (e.g., roads, sidewalks, bike lanes) and other heuristics, determine whether roadwork is present.
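To make the heuristic concrete, below is a minimal sketch in Python of how detected objects and surface regions could be combined into a roadwork decision. The class names, the rectangle-overlap test, and the object-count threshold are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of the object-plus-surface heuristic described above.
# Class names, thresholds, and the overlap test are illustrative assumptions.
from dataclasses import dataclass

ROADWORK_CLASSES = {"cone", "barricade", "work_sign", "work_vehicle", "worker"}

@dataclass
class Detection:
    label: str    # predicted object class
    box: tuple    # (x1, y1, x2, y2) in image pixels

def overlaps(box, surface_box):
    """True if the object's bounding box intersects the surface region."""
    ax1, ay1, ax2, ay2 = box
    bx1, by1, bx2, by2 = surface_box
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def roadwork_present(detections, road_regions, min_objects=3):
    """Flag a frame as containing roadwork when enough roadwork-related
    objects are detected on or near a drivable surface."""
    on_road = [
        d for d in detections
        if d.label in ROADWORK_CLASSES
        and any(overlaps(d.box, r) for r in road_regions)
    ]
    return len(on_road) >= min_objects
```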

Objects of Interest: As previously mentioned, we are interested in detecting as many objects as possible that are commonly associated with roadworks (Figure 2, supp.). Objects include devices such as cones and barricades, signage, vehicles, and workers. In addition to detecting roadwork objects, it is critical to detect surfaces to provide context for the detected objects.
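As an illustration of the kind of label taxonomy such an effort might use, here is a small sketch; the specific class names and groupings are assumptions and may differ from the project's final annotation schema.

```python
# Illustrative label taxonomy for the objects and surfaces named above.
# The exact class list and grouping are assumptions, not the final schema.
OBJECT_CLASSES = {
    "devices":  ["cone", "drum", "barricade", "delineator"],
    "signage":  ["work_sign", "arrow_board", "message_board"],
    "vehicles": ["work_vehicle", "dump_truck", "excavator"],
    "people":   ["worker"],
}
SURFACE_CLASSES = ["road", "sidewalk", "bike_lane", "shoulder"]

# Flat list of labels as they might appear in an annotation tool.
ALL_LABELS = [c for group in OBJECT_CLASSES.values() for c in group] + SURFACE_CLASSES
```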

Dataset Creation: There are no known datasets that provide the needed information about roadworks. Therefore, a novel, comprehensive dataset will be created. The dataset will include exemplar images, descriptive tags for various conditions (e.g., weather, time, location), and manually segmented objects of interest. Visual data will be obtained from several sources, including cameras in the Greater Pittsburgh area, cameras mounted on vehicles, and live video streams shared on the internet. Existing annotation software will be used to perform the labeling and segmentation. An example of an image with roadworks and the results of manually segmenting objects is shown in Figure 3, supp.
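A minimal sketch of what one record in such a dataset could look like, assuming a COCO-style layout, is shown below; the file name, tag fields, and polygon format are placeholders rather than the project's published schema.

```python
# Hypothetical example of a single dataset record (COCO-style layout assumed).
example_record = {
    "image": {
        "file_name": "pittsburgh_cam_012_000341.jpg",   # placeholder name
        "width": 1920,
        "height": 1080,
        "source": "fixed_camera",   # fixed_camera | vehicle | web_stream
    },
    "tags": {
        "weather": "overcast",
        "time_of_day": "day",
        "location": "Pittsburgh, PA",
    },
    "annotations": [
        {
            "label": "cone",
            # polygon as a flat list of (x, y) pairs
            "segmentation": [[812, 640, 830, 640, 824, 601, 818, 601]],
        },
        {
            "label": "road",
            "segmentation": [[0, 700, 1920, 700, 1920, 1080, 0, 1080]],
        },
    ],
}
```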

Detection and Understanding of Roadworks: The illustration in Figure 4, supp. depicts the steps for automatically detecting and understanding roadwork zones. First, data from Visual Data Sources are manually labeled and segmented. These Training Data will be used to train deep learning models for objects commonly associated with roadwork zones. Algorithms will perform Object Detection and Surface Detection (e.g., roads, sidewalks) and use Heuristics to determine whether roadworks are being performed, whether lane obstructions exist, and to estimate traffic density. Once the models are trained and the algorithms are developed, the training data can be used to Evaluate performance, since ground-truth labels and segmentations exist. Once the system achieves high accuracy and precision, it will be ready for Deployment. Images from Visual Data Sources will then be input directly into the Algorithms to compute the various Outputs.
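As a rough sketch, the pipeline described above could be expressed as plain function calls, reusing the roadwork_present() heuristic and ROADWORK_CLASSES from the earlier sketch. The model interfaces (callables returning Detection-like objects with .label and .box) and the outputs chosen here are assumptions for illustration only.

```python
# Sketch of the Figure 4 pipeline as ordinary function calls; model names,
# interfaces, and outputs are placeholders, not the project's implementation.
def run_pipeline(frames, object_model, surface_model):
    outputs = []
    for frame in frames:
        objects = object_model(frame)     # roadwork object detections
        surfaces = surface_model(frame)   # road / sidewalk / bike-lane regions
        road_regions = [s.box for s in surfaces if s.label == "road"]
        outputs.append({
            "roadwork_detected": roadwork_present(objects, road_regions),
            "num_roadwork_objects": sum(o.label in ROADWORK_CLASSES for o in objects),
        })
    return outputs
```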

Partners: We have established partnerships via previous UTC projects with the City of Pittsburgh's Department of Mobility and Infrastructure, Shaler, and Monessen. Those partnerships permitted us to deploy cameras for data collection. We have an established partnership with NavLab to use their Jeep for data collection whenever needed. We also have a partnership with Christoph Mertz on a mutual NSF grant, which will permit sharing data collected from passenger buses.
Timeline
•	July 1, 2022: Collect relevant image data containing roadwork objects and scenes
•	August 1, 2022: Set up annotation tool for labeling and segmenting visual data
•	September 1, 2022: Label and segment visual data
•	November 1, 2022: Algorithm development.
•	February 1, 2023: Test, re-test, and revise algorithms.
•	March 1, 2023: Present results to stakeholders. 
•	April 1, 2023: Build project website.
•	June 1, 2023 – June 30, 2023: Present final report
Strategic Description / RD&T

    
Deployment Plan
We have, or will have, camera deployments from past projects funded by the Heinz Endowments, Metro21, and Mobility21. These camera deployments are in the City of Pittsburgh, Shaler, and Monessen. It is unlikely that there will be much roadwork in these areas when needed, so we will seek out roadwork using the NavLab Jeep and collect data using a smartphone camera. Christoph Mertz will also give us access to camera data that he is collecting from a passenger bus for his BusEdge project. We will also actively seek out online video feeds. Once the algorithms are developed, we will deploy and test them on the NavLab Jeep and on Christoph Mertz's BusEdge project. The algorithms will also be deployed on our cluster of computers, which currently processes data from 100 cameras.
Expected Outcomes/Impacts
By the conclusion of the award period, a comprehensive dataset of roadwork zones will have been created. The dataset will consist of thousands of images, labels, and segmented objects, which can be used for training and evaluating machine learning models. Images will be captured from a variety of viewpoints to allow for generality when training machine learning models. This unique dataset will be published online for use by researchers to advance work in smart cities, intelligent transportation systems, computer vision, artificial intelligence, and related fields. All novel algorithms that are developed will be thoroughly tested and analyzed with real data collected from all data sources. Algorithms will be benchmarked against any published algorithms that accomplish similar tasks. Algorithms and experimental results will be submitted for publication at top-tier conferences. The following will be delivered: 1) semi-annual reports, 2) final project report, 3) dataset of anonymized visual data and labeled data, 4) experimental findings, 5) project website, and 6) prepared paper for conference submission.
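For the evaluation and benchmarking mentioned above, a minimal sketch of per-frame precision and recall against the dataset's ground-truth labels is given below; the metric choice is a standard one and not a committed evaluation protocol for this project.

```python
# Sketch of scoring per-frame roadwork predictions against ground truth.
def precision_recall(predictions, ground_truth):
    """Both arguments are lists of booleans, one entry per image."""
    tp = sum(p and g for p, g in zip(predictions, ground_truth))
    fp = sum(p and not g for p, g in zip(predictions, ground_truth))
    fn = sum(not p and g for p, g in zip(predictions, ground_truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: three of four frames classified correctly.
print(precision_recall([True, False, True, True], [True, False, False, True]))
```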
Expected Outputs

    
TRID


    

Individuals Involved

Email | Name | Affiliation | Role | Position
dnarapur@andrew.cmu.edu | Narapureddy, Dinesh | Carnegie Mellon University | Other | Student - PhD
srinivas@cs.cmu.edu | Narasimhan, Srinivasa | Carnegie Mellon University | PI | Faculty - Tenured
rtamburo@cmu.edu | Tamburo, Robert | Carnegie Mellon University | Co-PI | Other
kvuong@andrew.cmu.edu | Vuong, Khiem | Carnegie Mellon University | Other | Student - Masters

Budget

Amount of UTC Funds Awarded
$99,945.00
Total Project Budget (from all funding sources)
$99,945.00

Documents

Type | Name | Uploaded
Data Management Plan | road_work_zones_dmp.pdf | Nov. 18, 2021, 5:51 p.m.
Project Brief | project_brief.pptx | Dec. 1, 2022, 8:57 a.m.
Publication | Quantifying the Benefits of Dynamic Partial Reconfiguration for Embedded Vision Applications | April 10, 2023, 8:57 p.m.
Publication | Dynamic 3D Reconstruction of Vehicles for Safer Intersections | April 10, 2023, 8:58 p.m.
Publication | Learned Two-Plane Perspective Prior based Image Resampling for Efficient Object Detection | April 10, 2023, 8:58 p.m.
Publication | Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits | April 10, 2023, 8:59 p.m.
Publication | High resolution diffuse optical tomography using short range indirect imaging | April 10, 2023, 9 p.m.
Publication | Programmable light curtains | April 10, 2023, 9 p.m.
Publication | Method for epipolar time of flight imaging | April 10, 2023, 9:01 p.m.
Publication | Agile depth sensing using triangulation light curtains | April 10, 2023, 9:02 p.m.
Publication | Semantically supervised appearance decomposition for virtual staging from a single panorama | April 10, 2023, 9:02 p.m.
Progress Report | 398_Progress_Report_2023-03-30 | April 11, 2023, 9:25 p.m.
Final Report | Project_398_Final_Report.pdf | April 11, 2024, 11:37 a.m.

Match Sources

No match sources!

Partners

No partners!