#103 Evaluation of Road Monitoring System


Principal Investigator
Christoph Mertz
Status
Completed
Start Date
Sept. 1, 2017
End Date
Feb. 1, 2018
Project Type
Research Applied
Grant Program
Private Funding
Grant Cycle
2017 Traffic21
Visibility
Public

Abstract

The Navlab group at Carnegie Mellon University has developed a smartphone-based system to assess pavement and traffic signs. This system is much more cost-effective than traditional methods. With the help of the City of Pittsburgh, Navlab has collected image and sensor data from large parts of the city (“Pittsburgh smartphone data”). Additionally, the City of Pittsburgh recently contracted the company Cartegraph to inspect all of its roads using specialized sensors on a dedicated vehicle (“Cartegraph data”). Navlab is seeking funding to evaluate its smartphone-based system and compare it with the results from the specialized vehicle system. In the following sections we describe the current system, discuss how we plan to do the evaluation, and present a half-year plan.
Existing system
The data collection for the system is done by a smartphone that is mounted on the windshield of a vehicle and powered by the cigarette lighter (Figure 1, left). While the vehicle is driving, the smartphone collects images or videos of the outside and tags them with time, GPS, and other selected information.

Figure 1 Left: Smartphone mounted inside a vehicle and powered by the cigarette lighter. Left-middle: Example of road image displayed on Google Earth. The small yellow arrows on the street are markers pointing in the driving direction. When clicking on the marker the corresponding image appears. Right-middle: A typical road image. Right: Classification result. The color indicates the severity of distress: blue = no cracks, light to dark red = light to severe cracks.
One of the key ideas behind the collection system is that it can be easily mounted on any vehicle, especially those that drive on the roads on a regular basis; e.g., garbage trucks drive through every neighborhood once a week. It is therefore possible to collect data frequently without the need for a dedicated vehicle or a dedicated driver.
The images can be displayed in the asset management system of the maintenance department or with free software. An example is shown in Figure 1 (left-middle), where the data is displayed on Google Earth. This allows the user to inspect the road from a computer instead of physically going to the site.
The images can also be analyzed automatically. Figure 1 (right-middle) shows a typical image of a road with cracks, and Figure 1 (right) shows the detected crack areas, where the intensity of red indicates the severity of the cracks. A distress score can then be calculated: the ratio of the cracked to the un-cracked road area in front of the vehicle. We calculated this score for a set of roads in the City of Pittsburgh; the scores are shown in Figure 2 (left).
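As a rough illustration, the distress score could be computed from a binary crack mask along these lines (a minimal sketch in Python; the mask layout and function name are illustrative assumptions, not the actual Navlab implementation):

```python
def distress_score(crack_mask):
    """Ratio of cracked to un-cracked road area in the image.

    crack_mask: 2-D list of booleans over the road region in front
    of the vehicle, True where the classifier detected a crack.
    """
    cracked = sum(cell for row in crack_mask for cell in row)
    total = sum(len(row) for row in crack_mask)
    uncracked = total - cracked
    if uncracked == 0:
        return float("inf")  # entire visible road surface is cracked
    return cracked / uncracked

# Example: 4 cracked cells out of 16 -> ratio 4/12
mask = [[True] * 4] + [[False] * 4 for _ in range(3)]
print(round(distress_score(mask), 3))  # 0.333
```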

Figure 2 Left: Distress score overlaid on a GIS street map of Pittsburgh. Green, yellow, and red indicate low, medium, and high numbers of cracks, respectively. Right: Map of the data collected in Pittsburgh. Yellow, light blue, and dark blue indicate the time of day it was collected: day, dawn/dusk, and night, respectively.
Since that analysis was done, we have improved the code and the way we label the training data. Figure 2 (right) shows an overview of the data that has been collected with the help of the City of Pittsburgh. We want to analyze this new data with the improved code and evaluate the results. As ground truth we will use the inspection that was done with the specialized vehicle system. That data was collected in spring/summer of 2016, and the outside company will soon provide the results to Pittsburgh. Pittsburgh has agreed to share these results with CMU.
Disclosure: The technology has been licensed to RoadBotics, and the PI, Christoph Mertz, is a co-founder and part-owner of this spin-off company. RoadBotics will not be involved in this project.
    
Description
The CMU contact person communicating and meeting with the City of Pittsburgh regarding this work will be Christoph Mertz. The general relationship between the City of Pittsburgh and CMU will be managed by UTC/Traffic21 leadership; the contact person is Courtney Ehrlichman, who will be cc’ed on all CMU-City communications.

Task 1: Analysis of Pittsburgh smartphone data (This task will be done by CMU)
CMU will analyze the Pittsburgh smartphone data with the improved texture detection code, which detects the amount of cracked road visible in each image. CMU will spot-check the results manually and test them for consistency. The consistency test will be done by comparing results from stretches of road that have been driven multiple times. Whenever necessary, CMU will adjust parameters and expand the training to get the best results. The results are damage scores for Pittsburgh’s streets. CMU will also iterate with Subtask 3b: whenever problems with the damage scores are found that can be improved on during Subtask 3b, CMU will adjust parameters, retrain, etc.
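The consistency test described above could be sketched as follows (a minimal sketch under an assumed data layout: each repeated drive contributes one score per stretch of road; the stretch names and the tolerance are illustrative, not from the project):

```python
from statistics import mean, stdev

def flag_inconsistent(scores_per_stretch, tol=0.1):
    """Flag road stretches whose damage scores vary too much across drives.

    scores_per_stretch: dict mapping a stretch id to the scores obtained
    on each pass over that stretch. Returns {stretch: (mean, stdev)} for
    stretches whose spread exceeds `tol`.
    """
    flagged = {}
    for stretch, scores in scores_per_stretch.items():
        if len(scores) < 2:
            continue  # need at least two passes to measure spread
        spread = stdev(scores)
        if spread > tol:
            flagged[stretch] = (mean(scores), spread)
    return flagged

drives = {
    "Forbes Ave 100-200": [0.21, 0.23, 0.22],  # stable across three passes
    "Murray Ave 300-400": [0.10, 0.45],        # large disagreement
}
print(flag_inconsistent(drives))  # flags only "Murray Ave 300-400"
```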

Task 2: Exchange of Cartegraph data (This task will be done jointly by City of Pittsburgh and CMU – proposed, needs confirmation from City of Pittsburgh) 
The City of Pittsburgh will share the Cartegraph data with CMU. CMU will meet with [TBD contact person] to receive the data in electronic form; that person will explain the data format and provide any other pertinent information needed for CMU to understand and work with the Cartegraph data.

Task 3: Data comparison (This task will be done by CMU)
CMU will compare the damage scores from the Pittsburgh smartphone data with the Cartegraph inspection results. The Cartegraph data will be used as ground truth in this evaluation. 
Subtask 3a: Data alignment: For the comparison to be meaningful, CMU has to ensure that both data sets use the same measurement units. For example, if the Cartegraph data scores road segments while the Pittsburgh smartphone data scores individual locations, then the Pittsburgh smartphone data needs to be averaged over each road segment to get one score per segment. CMU also needs to account for the fact that the two data sets were not taken at the same time, and will develop methods to extrapolate the Pittsburgh smartphone data scores to the time the Cartegraph data was taken.
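The segment-averaging step could look like this (a minimal sketch; the segment ids and the assumption that each GPS-tagged image has already been matched to a road segment are illustrative):

```python
from collections import defaultdict

def average_per_segment(point_scores):
    """Collapse per-location scores to one score per road segment.

    point_scores: iterable of (segment_id, score) pairs, one per
    GPS-tagged image already matched to a road segment.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for segment, score in point_scores:
        totals[segment] += score
        counts[segment] += 1
    return {segment: totals[segment] / counts[segment] for segment in totals}

points = [("seg-1", 0.2), ("seg-1", 0.4), ("seg-2", 0.9)]
print(average_per_segment(points))  # one averaged score per segment
```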
Subtask 3b: Quantifying differences: After the data alignment, the Cartegraph data will be used as ground truth to evaluate the scores from the Pittsburgh smartphone data. The evaluation measures are the standard deviation of the differences and a list of error cases. CMU will iterate with Task 1 to improve the crack detection method. After the iterations, CMU will do a final evaluation with a subset of the Cartegraph data that was set aside and not part of the iterations, so that the final evaluation is unbiased.
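The two evaluation measures could be computed roughly as follows (a minimal sketch; the per-segment dictionaries and the error threshold are illustrative assumptions, not project specifications):

```python
from statistics import stdev

def evaluate(smartphone, ground_truth, error_tol=0.2):
    """Compare aligned per-segment scores against Cartegraph ground truth.

    smartphone, ground_truth: dicts mapping segment id -> score.
    Returns the standard deviation of the differences and a list of
    segments whose absolute difference exceeds `error_tol`.
    """
    common = sorted(set(smartphone) & set(ground_truth))
    diffs = [smartphone[s] - ground_truth[s] for s in common]
    error_cases = [s for s, d in zip(common, diffs) if abs(d) > error_tol]
    return stdev(diffs), error_cases

phone = {"seg-1": 0.30, "seg-2": 0.90, "seg-3": 0.50}
truth = {"seg-1": 0.25, "seg-2": 0.50, "seg-3": 0.55}
sd, errors = evaluate(phone, truth)
print(errors)  # ['seg-2']
```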

Task 4: Regular and final meetings (This task will be done jointly by City of Pittsburgh and CMU – proposed, needs confirmation from City of Pittsburgh)
The City of Pittsburgh and CMU will have regular meetings over the phone or in person [suggestion: once a month] and one final in-person meeting. The regular meetings will be with [TBD City of Pittsburgh contact person], during which CMU will report on the progress of the work and ask questions about the Cartegraph data and other relevant issues. The final meeting will be a presentation by CMU to all interested people at the City of Pittsburgh. If the City of Pittsburgh would like a copy of the final report and the final database with the crack scores, CMU can provide an electronic copy during one of the meetings.

Task 5: Final report and GIS database (This task will be done by CMU)
CMU will summarize the findings in a final report. It will contain a description of the data collection method, the data pipeline, the texture detection algorithm, and the road damage score. It will give an overview of the Cartegraph data and explain how the Pittsburgh smartphone data road scores were aligned and compared with the Cartegraph scores. The evaluation results will be displayed in tables and graphs. The report will list lessons learned and finish with a conclusion and outlook. CMU will also create a final GIS database that displays the image data and the road damage scores, similar to the data shown in Figure 2.

Deliverables (to the Traffic21 sponsor): Final report and GIS database.
Timeline
6 months
Strategic Description / RD&T

    
Deployment Plan
Expected Outcomes/Impacts

    
Expected Outputs

    
TRID


    

Individuals Involved

Email Name Affiliation Role Position
cmertz@andrew.cmu.edu Mertz, Christoph Carnegie Mellon University PI Faculty - Research/Systems

Budget

Amount of UTC Funds Awarded
$30000.00
Total Project Budget (from all funding sources)
$30000.00

Documents

Type Name Uploaded
Publication Evaluation_of_Road_Pavement_Monitoring_System_final.pdf March 12, 2018, 5:42 a.m.
Presentation Road_Monitoring_evaluation_10172017.pdf March 12, 2018, 5:42 a.m.
Progress Report 103_Progress_Report_2018-03-31 March 12, 2018, 5:44 a.m.
Final Report 103_-_Evaluation_of_Road_Monitoring_System_103_final.pdf July 9, 2018, 11:04 a.m.
Publication A method of objects classification for intelligent vehicles based on number of projected points.(*Christoph Mertz) Nov. 27, 2020, 6:27 p.m.
Publication Driver-less Vision: Learning to See the Way Cars Do(*Christoph Mertz) Nov. 27, 2020, 6:28 p.m.
Publication LIDAR and Monocular Camera Fusion: On-road Depth Completion for Autonomous Driving Nov. 27, 2020, 6:32 p.m.

Match Sources

No match sources!

Partners

No partners!