
#217 A Video Analytics Infrastructure Platform for Connected Vehicles and Transportation Planning


Principal Investigator
Srinivasa Narasimhan
Status
Completed
Start Date
Aug. 1, 2018
End Date
June 30, 2020
Project Type
Research Applied
Grant Program
FAST Act - Mobility National (2016 - 2022)
Grant Cycle
2018 Traffic21
Visibility
Public

Abstract

Self-driving cars are the future of mobility. Once considered science fiction and fantasy, self-driving cars offer independence for seniors and people with disabilities, greater road safety, cost savings through ride sharing, increased productivity, reduced congestion, and reduced fuel use and carbon emissions. The technology enabling self-driving cars is rapidly improving. However, to get the best performance from self-driving cars, especially in busy urban areas, they need to be connected to a smarter roadway infrastructure.

The proposed research focuses on instrumenting busy intersections with image sensors and edge computing capable of sharing timely information about the road environment. Visual data contains extremely valuable information, but is expensive, in bandwidth and time, to transfer to a computer for processing and interpretation. Cameras located high above the road in the infrastructure, for example on utility or traffic signal poles, provide a broad view of the road that is free of the occlusions suffered by sensors on vehicles at ground level.

The infrastructure platform will automatically capture, process, and interpret visual data, which can be wirelessly transmitted to connected vehicles for use by driver-assist and autonomous systems, or to workstations for real-time observation. In the first year of the proposed research, one intersection will be instrumented and algorithms will be developed to improve mobility at intersections. These include detecting vehicles and performing 3D reconstruction, which can be used for path planning and vehicle navigation, and detecting and counting notable events such as near collisions, which can be useful for traffic planning.

In the second year, the platform will be installed at three additional intersections. Algorithms will be developed for detecting and tracking pedestrians and bicyclists, detecting and counting notable events such as unexpected pedestrian or bicyclist roadway entry, and capturing detailed statistics of vehicle and human behavior, e.g., how often pedestrians jaywalk.
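A toy Python illustration of one such statistic is sketched below: counting pedestrian roadway entries that occur outside a marked crosswalk, given tracked positions. This is not the project's method; the roadway and crosswalk rectangles and the track format are invented for the example.

ROADWAY = (0.0, 0.0, 30.0, 20.0)     # roadway bounds: x_min, y_min, x_max, y_max (meters)
CROSSWALK = (12.0, 0.0, 16.0, 20.0)  # marked crossing inside the roadway

def inside(box, x, y):
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def count_jaywalks(tracks):
    """tracks: dict mapping track_id -> list of (x, y) positions over time."""
    count = 0
    for positions in tracks.values():
        was_on_road = False
        for x, y in positions:
            on_road = inside(ROADWAY, x, y)
            # Count the moment a pedestrian steps onto the roadway
            # anywhere other than the marked crosswalk.
            if on_road and not was_on_road and not inside(CROSSWALK, x, y):
                count += 1
            was_on_road = on_road
    return count

# One pedestrian entering the roadway at x = 5, outside the crosswalk:
print(count_jaywalks({1: [(5.0, -1.0), (5.0, 1.0), (5.0, 5.0)]}))  # prints 1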
    
Description
The proposed work addresses improving mobility through the development and deployment of video analytics at the edge, relaying critical information to vehicles via infrastructure-to-vehicle (I2V) communication for path planning and navigation. As illustrated in Figure 1, up to 8 cameras (1) will be mounted above the intersection on traffic signal poles to acquire overlapping views of the entire intersection. Images will be transferred to a video analytics compute node (2) mounted on a utility pole, where vehicles and pedestrians will be detected and tracked with the proposed methods. Information will then be wirelessly transmitted to vehicles (3) in the vicinity of the intersection.
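The Python sketch below illustrates the capture-process-transmit loop just described, under loud assumptions: the camera URLs are placeholders, an off-the-shelf OpenCV pedestrian detector stands in for the project's (unspecified) detection models, and a UDP broadcast stands in for the actual I2V radio link. The point is the data flow: full frames stay at the edge node, and only compact detections leave it.

import json
import socket

import cv2  # OpenCV, for capture and a stand-in detector

CAMERA_URLS = [f"rtsp://camera{i}.local/stream" for i in range(8)]  # placeholders
I2V_ADDR = ("255.255.255.255", 9000)                                # placeholder port

def main():
    captures = [cv2.VideoCapture(url) for url in CAMERA_URLS]
    # HOG pedestrian detector as a stand-in for the proposed methods.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    while True:
        detections = []
        for cam_id, cap in enumerate(captures):
            ok, frame = cap.read()
            if not ok:
                continue
            boxes, _weights = hog.detectMultiScale(frame)
            detections += [{"camera": cam_id, "box": [int(v) for v in box]}
                           for box in boxes]
        # Transmit only the compact detections, never the raw video.
        sock.sendto(json.dumps(detections).encode(), I2V_ADDR)

if __name__ == "__main__":
    main()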

Human error causes more than 90% of crashes, so it is not surprising that there has been so much excitement surrounding the development of advanced driver-assist systems and autonomous vehicles. The groundwork for autonomous vehicles started over three decades ago at Carnegie Mellon University's NavLab. Since that time, the Uber and Carnegie Mellon University/General Motors vehicles are just two of many examples of autonomous systems being developed in industry and academia. Advanced driver-assist systems for lane departure and collision warning are already available in some consumer vehicles and will rapidly penetrate the automobile market. Autonomous driving systems, however, require much more work to achieve a rich understanding of the driving environment and to observe and anticipate anomalies. Because human drivers and the sensors of autonomous systems are located at ground level, they have a limited view of the road environment due to occlusions, especially at busy intersections. If these systems had an unimpeded, broad view of the intersection, their path planning and navigation could be greatly enhanced. Cameras located high above the road in the infrastructure, for example on utility or traffic signal poles, provide such a view. Thus, the first goal of this proposal is to design and develop a weatherproof video analytics system consisting of a networked edge computer and multiple cameras. One system will be deployed in the first year and three additional systems will be deployed in the second year.

Transmitting images or video wirelessly from infrastructure to vehicle (I2V) is bandwidth limited and would be too slow to be of use in a high-speed environment. Computer vision, however, can mine large amounts of visual data to extract useful information for understanding the world, and this information is only a small fraction of the imaging data. Advanced algorithms developed in the areas of object detection, recognition, tracking, and 3D reconstruction have been applied to the problem of road scene understanding for pedestrian and vehicle detection and avoidance. The detection and tracking of objects can yield valuable information such as object type (vehicle type, pedestrian, bicyclist, etc.), speed, density, trajectory, and anomalies (accidents, large debris, etc.). These data can be provided by the I2V system to vehicles approaching the intersection for direct use by a human driver or autonomous system, or fused with data generated by the approaching vehicles. The second goal of this proposal is to develop algorithms for detecting and tracking vehicles, as well as predicting their trajectories. Additionally, events such as near collisions and statistics on vehicle behavior will be captured for traffic planning purposes. The third goal (second year) of this proposal focuses on developing algorithms for detecting and tracking pedestrians and bicyclists.
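To make the information listed above concrete, here is one hypothetical shape such an I2V message could take, written as Python dataclasses. The field names and units are illustrative only, not a standardized message format such as SAE J2735.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackedObject:
    object_id: int
    object_type: str                 # "car", "truck", "pedestrian", "bicyclist", ...
    position_m: Tuple[float, float]  # intersection-local coordinates, meters
    speed_mps: float
    heading_deg: float
    predicted_path: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class IntersectionStatus:
    intersection_id: str
    timestamp_s: float
    objects: List[TrackedObject]
    anomalies: List[str] = field(default_factory=list)  # e.g., "near_collision"

A message like this is on the order of kilobytes, versus the megabits per second of the video it summarizes, which is what makes I2V transmission feasible.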

Multiple cameras with overlapping fields of view enable 3D reconstruction of the scene. Traditional methods require that the cameras be temporally synchronized and calibrated. Synchronization requires additional, cumbersome cables during installation or precise software control of the cameras, and camera calibration is a tedious and disruptive process. Moreover, since the cameras will be in a dynamic outdoor environment, calibration will likely need to be repeated on a regular basis. Thus, the fourth goal of this proposal is to develop methods for 3D reconstruction from unsynchronized and uncalibrated cameras.
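For contrast, the sketch below shows the traditional baseline that this goal aims to relax: triangulating a 3D point from two cameras that are already calibrated (known projection matrices) and synchronized (matched pixels captured at the same instant). All numbers are made up; OpenCV's triangulatePoints performs the linear triangulation.

import numpy as np
import cv2

# Shared intrinsics: focal length 800 px, principal point at the image center.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Projection matrices P = K [R | t]; camera 2 is offset 0.5 m along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[0.5], [0.0], [0.0]])])

# One matched pixel in each synchronized view (2xN arrays for OpenCV).
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[420.0], [240.0]])

point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous, 4x1
point_3d = (point_h[:3] / point_h[3]).ravel()
print("triangulated point (X, Y, Z):", point_3d)     # ~ (0, 0, 4) meters

With unsynchronized, uncalibrated cameras, neither the projection matrices nor same-instant correspondences are available, which is precisely what the proposed methods must work around.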

The proposed applied research will be conducted by a team of experts in computer vision, imaging and optics, robotics, machine learning, and systems development with strong experience in deploying transportation-related applications. Carnegie Mellon University, NREC, and the City of Pittsburgh provide a perfect ecosystem for this research. Carnegie Mellon University and NREC have had a long tradition in autonomous navigation of robots and vehicles since the early 1980s and have strongly influenced current technology. Key personnel have been part of the University Transportation Center (UTC) for Technologies for Safe and Efficient Transportation (TSET), where they conduct basic and applied research in safety and scene understanding, and of a National Science Foundation award on Cyber-Physical Systems with a Transition to Practice (TTP) option. NREC specializes in developing proof-of-concept prototypes from basic research for deployment. Metro21 supports numerous applied research projects at Carnegie Mellon University and will serve as a liaison with the City of Pittsburgh and other deployment stakeholders. This proposal fulfills a critical need in bringing together the results from those basic research efforts in a principled manner to design, develop, and deploy an I2V system for improving intersection safety. The fifth goal of this proposal is to deploy and test four systems, with eight cameras each, at four busy intersections in the City of Pittsburgh.

The proposed research brings cutting-edge video analytics to the future of transportation technology with an emphasis on improving intersection safety. As V2X technology and assistive driving systems (including autonomous vehicles) come to fruition, this research will be primed for broad proliferation. Key milestones for the project include semi-annual reports covering progress on system development and experimental results. The PIs will benchmark the results of the research on a concrete list of stakeholder-identified, mission-critical tasks. Comprehensive testing of the computational methods and system design will be performed in all lighting and weather conditions, and qualitative and quantitative results will be compared to similar methods. At the conclusion of the two-year research program, the following will be delivered: 1) semi-annual reports, 2) a final program report, 3) designs of the physical system, 4) databases of collected images, 5) experimental findings, 6) conference and/or journal publications, 7) outreach to the community via conference presentations, 8) documentation on construction, installation, and usage of system components, and 9) site visits by interested stakeholders.

To ensure success, we will work closely with the City of Pittsburgh so that our system is deployed as an I2V testbed for providing situational awareness at signalized intersections. Bosch will provide some camera equipment and technical support, and Comcast will aid in providing internet connectivity through the ongoing Smart City project with Carnegie Mellon University. The planned scope of the proposed work includes transitioning and adapting basic research currently underway for deployment, with support by the City of Pittsburgh.
 
Timeline
Year 1
Months 01-03: Design and Build Hardware Platform
Months 04-06: Instrument Intersection
Months 01-06: Develop Software and Algorithms
Months 01-06: Develop Data Privacy and Management Policy
Months 06-08: System Revision
Months 06-12: Refine Software and Algorithms (Site Testing)
Months 06-12: Data Collection
Months 10-12: Build Website
Months 10-12: Documentation and Final Report


Year 2
Months 01-03: Build Three Hardware Platforms
Months 04-06: Instrument Three Intersections
Months 01-06: Develop Algorithms
Months 06-08: System Revision
Months 06-12: Refine Algorithms (Site Testing)
Months 06-12: Data Collection
Months 10-12: Documentation and Final Report

Strategic Description / RD&T

    
Deployment Plan
We already have collaborations with Traffic21 (UTC TSET) and with NSF-funded CPS project partners in the Robotics Institute and ECE (Hebert, Mertz); collaborations with industry (Bosch, Comcast, Intel, NVIDIA); and endorsements for collaborations with the City of Pittsburgh and local partners such as GASP and the CMU CREATE Lab to study the environmental effects (air quality) of traffic and industrial sources. Additional support comes from Metro21, the Heinz Endowments, the National Science Foundation, and the Office of Naval Research.

The City of Pittsburgh Department of Mobility Infrastructure will support the deployment of our platform on City infrastructure. In addition to providing broadband internet service, Comcast will assist in the deployment and maintenance of our platform. Support will also come from members of the Intel Center for Visual Computing where faculty members are performing research relevant to large scale video processing.

The funding from Mobility21 will help build a larger, longer-term effort to solve the mobility and safety problems facing America now and in the future.
Expected Outcomes/Impacts
By the conclusion of this award, four systems will be deployed at intersections in the City of Pittsburgh. The systems will serve as a test bed for collecting and processing data for use by researchers. Novel algorithms will be developed and published along with experimental results, and will be thoroughly tested and analyzed with real data collected at the installation sites.

A set of policy guidelines will be developed for the responsible capture, use, and retention of data. This is a valuable deliverable that will protect the privacy of citizens and could apply to future developments. The guidelines will be developed and refined for the full duration of the project. Datasets will be collected in a variety of weather and lighting conditions, and these unique datasets will be published online for use by researchers to advance fields such as computer vision and artificial intelligence.

Results will be benchmarked on a concrete list of stakeholder-identified, mission-critical tasks. Comprehensive testing will be performed in multiple lighting and weather conditions. The following will be delivered: 1) semi-annual reports, 2) a final program report, 3) designs of the physical system, 4) databases of collected images, 5) experimental findings, 6) conference and/or journal publications, 7) outreach to the community via conference presentations, 8) documentation on construction, installation, and usage of system components, and 9) site visits by interested stakeholders.
Expected Outputs

    
TRID


    

Individuals Involved

Email Name Affiliation Role Position
srinivas@cs.cmu.edu Narasimhan, Srinivasa Robotics Institute PI Faculty - Tenured
rtamburo@cmu.edu Tamburo, Robert Robotics Institute Co-PI Faculty - Adjunct

Budget

Amount of UTC Funds Awarded
$124,559.00
Total Project Budget (from all funding sources)
$124,559.00

Documents

Type Name Uploaded
Data Management Plan DMP.pdf Sept. 25, 2018, 5:28 p.m.
Progress Report 217_Progress_Report_2019-03-30 March 29, 2019, 5:30 p.m.
Progress Report 217_Progress_Report_2019-09-30 Sept. 26, 2019, 11:33 a.m.
Progress Report 217_Progress_Report_2020-03-30 March 16, 2020, 11:54 a.m.
Final Report final_report_o12adRg.pdf July 14, 2020, 6:12 a.m.
Publication RVN-CVPR18.pdf June 30, 2020, 3:15 p.m.
Publication RVN-CVPR19.pdf June 30, 2020, 3:17 p.m.
Publication Active Perception using Light Curtains for Autonomous Driving. Nov. 28, 2020, 10:27 a.m.

Match Sources

No match sources!

Partners

Name Type
Comcast Deployment Partner
Intel Deployment Partner
Bosch Deployment Partner