Abstract
Central to a smart transportation system is access to real-time data, especially at points of planned conflict such as intersections. Unfortunately, raw visual data is too expensive in bandwidth to transmit for remote analysis. The proposed research focuses on performing analysis at the edge and transmitting only compact analysis results. Algorithms will be developed to detect and track vehicles, pedestrians, and bicyclists to summarize counts, travel direction, and notable events such as near collisions. Summary results will be submitted to a website for a real-time view of the road environment.
Description
Introduction
City planners are increasingly considering smart mobility solutions to address transportation needs. For example, self-driving cars offer independence for seniors and people with disabilities, greater road safety, cost savings through ride sharing, increased productivity, reduced congestion, and reduced fuel use and carbon emissions. But what type of information and data is needed for city planners to accommodate smart transportation systems?
Valuable information can be extracted from visual data, but transferring that data to a remote computer for processing and interpretation is expensive in bandwidth and time. The proposed research is focused on developing algorithms that can analyze visual data at the edge and share the results in real time. Intersections are of particular interest because they are bottlenecks for efficient travel. As part of this research, we will develop an edge-computing platform, deploy the platform to an intersection, and build a web-based interface for displaying the real-time analytics.
Edge-Computing Platform
The platform will automatically capture, process, and interpret visual data, and wirelessly transmit the results. It will consist of hardware protected by a weatherproof enclosure and a software architecture for processing visual data and transmitting results. The primary hardware components are a CPU, GPU, Ethernet switch, edge router, and remote power switch. This platform permits high-performance computing, ingestion of large amounts of data from cameras, and remote control of the system. We expect to install at least four cameras at the intersection.
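As a rough illustration of the intended capture-analyze-transmit flow, the following Python sketch shows one possible loop on the edge node; the camera URLs, the detect() stub, and the reporting endpoint are hypothetical placeholders, not the deployed configuration.

    # Minimal sketch of the capture-analyze-transmit loop on the edge node.
    # Camera URLs, the detect() stub, and the reporting endpoint are
    # illustrative placeholders, not the deployed system.
    import time, json
    import cv2          # OpenCV for camera capture
    import requests     # HTTP client for transmitting compact results

    CAMERA_URLS = ["rtsp://cam1/stream", "rtsp://cam2/stream"]   # hypothetical
    REPORT_URL = "https://example.org/api/intersection/summary"  # hypothetical

    def detect(frame):
        """Placeholder for the on-board detector; returns a list of
        (class_label, bounding_box) tuples."""
        return []

    caps = [cv2.VideoCapture(url) for url in CAMERA_URLS]
    while True:
        summary = {"timestamp": time.time(), "counts": {}}
        for cap in caps:
            ok, frame = cap.read()
            if not ok:
                continue
            for label, box in detect(frame):
                summary["counts"][label] = summary["counts"].get(label, 0) + 1
        # Only the compact JSON summary leaves the edge node, never raw video.
        requests.post(REPORT_URL, data=json.dumps(summary), timeout=2)
        time.sleep(1.0)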
Algorithms and Analytics
We have experience training deep learning models to detect and classify vehicles, pedestrians, and bicyclists. Crucial for computing critical analytics is tracking objects to observe temporal behavior. This is challenging due to occlusions caused by objects on the road. Capturing data from multiple cameras provides different viewpoints, but correspondences between images from all of the cameras need to be computed. We will develop methods for identifying corresponding objects across multiple, unsynchronized cameras. Tracking objects will permit observing the path that each object traverses through the intersection. This will allow us to compute statistics on how vehicles travel. Novel methods will be developed to identify and predict anomalous behavior such as near collisions, unexpected entry into the roadway, and jaywalking.
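To make the tracking step concrete, the sketch below shows one simple way to associate per-frame detections into tracks using bounding-box overlap (IoU); this greedy matcher is only an illustration, not the proposed multi-camera tracking method.

    # Illustrative sketch: turning per-frame detections into tracks so that
    # counts and travel directions can be summarized. A simple greedy IoU
    # matcher is shown only as an example, not the proposed method.
    def iou(a, b):
        # boxes as (x1, y1, x2, y2)
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area_a + area_b - inter + 1e-9)

    class Track:
        def __init__(self, track_id, box):
            self.id, self.boxes = track_id, [box]

    def update_tracks(tracks, detections, next_id, iou_thresh=0.3):
        """Greedily associate new detections with existing tracks by IoU."""
        unmatched = list(detections)
        for t in tracks:
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(t.boxes[-1], d))
            if iou(t.boxes[-1], best) >= iou_thresh:
                t.boxes.append(best)
                unmatched.remove(best)
        for d in unmatched:                 # start new tracks for leftovers
            tracks.append(Track(next_id, d))
            next_id += 1
        return tracks, next_id

Each track's first and last positions then give a coarse path and travel direction through the intersection, from which counts and turn statistics can be summarized.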
Analytics Reporting Platform
Information computed by the edge-computing platform will be displayed in an intuitive, web-based platform. We will build this platform on a privately controlled server on Carnegie Mellon University’s campus. The platform will show live video feeds as well as all of the analytics over time. Users with permitted access will be able to navigate the temporal data to view analytics within specific time frames.
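As a sketch of how the back end might expose temporal analytics to the web interface, the snippet below shows a minimal HTTP endpoint that returns summaries within a requested time window; the route, parameters, and in-memory store are illustrative assumptions, not the final design.

    # Minimal sketch of a back-end query endpoint for time-windowed analytics.
    # The route, parameters, and in-memory store are hypothetical.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    SUMMARIES = []   # in deployment this would be a database of edge summaries

    @app.route("/api/analytics")
    def analytics():
        start = float(request.args.get("start", 0))
        end = float(request.args.get("end", float("inf")))
        window = [s for s in SUMMARIES if start <= s["timestamp"] <= end]
        return jsonify(window)

    if __name__ == "__main__":
        app.run(port=8080)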
The Team
The proposed applied research will be conducted by a team of experts in computer vision, imaging and optics, robotics, machine learning, and systems development, with strong experience in deploying transportation-related applications.
Carnegie Mellon University and the City of Pittsburgh provide a perfect ecosystem for this research. Key personnel have been part of the University Transportation Center (UTC) for Technologies for Safe and Efficient Transportation (TSET), where they conduct basic and applied research in safety and scene understanding, and of a National Science Foundation award on Cyber-Physical Systems with a Technology Transfer to Practice (TTP) option. Metro21 supports numerous applied research projects at Carnegie Mellon University and will serve as a liaison with the City of Pittsburgh and other deployment stakeholders. This proposal fulfills a critical need by bringing together basic and applied research efforts in a principled manner to design, develop, and deploy a system for more efficient transportation.
Timeline
• July 1, 2020 – August 31, 2020: Begin process for intersection instrumentation. Design and build enclosure for computing equipment. Stress test equipment.
• September 1, 2020 – September 30, 2020: Instrument intersection with camera and computing equipment enclosure.
• October 1, 2020 – October 31, 2020: Begin initial data collection for algorithm development.
• November 1, 2020 – January 31, 2021: Continue algorithm development, build front-end and back-end of website.
• February 1, 2021 – February 28, 2021: Meet with stakeholders to present preliminary results and obtain feedback.
• March 1, 2021 – May 31, 2021: Revise methods based on feedback.
• June 1, 2021 – June 30, 2021: Final Report
Strategic Description / RD&T
Deployment Plan
We have industrial collaborations with Bosch, Intel, and NVIDIA. We have experience instrumenting intersections based on past deployments through projects with the Heinz Endowments, Metro21, and Mobility21. Past work was deployed with the support of the City of Pittsburgh Department of Mobility and Infrastructure (DOMI). DOMI will support the deployment of the equipment on City property when needed (see Supplemental Information for email of support).
Expected Outcomes/Impacts
By the conclusion of this award, at least one system will be deployed at another intersection in the City of Pittsburgh. The system will serve as a test bed for collecting and processing data for use by researchers. Novel algorithms will be developed and published along with experimental results. Algorithms will be thoroughly tested and analyzed with real data collected at the installation site.
Datasets will be collected in a variety of weather and lighting conditions. These unique datasets will be published online for use by researchers to advance fields such as computer vision and artificial intelligence.
Results will be benchmarked on a concrete list of mission-critical tasks identified by key stakeholders. Comprehensive testing will be performed in multiple lighting and weather conditions. The following will be delivered: 1) semi-annual reports, 2) a final program report, 3) designs of the physical system, 4) databases of collected anonymized visual data, 5) experimental findings, 6) documentation on construction, installation, and usage of system components, and 7) a site visit by interested stakeholders.
Expected Outputs
TRID
Individuals Involved
Email | Name | Affiliation | Role | Position
dnarapur@andrew.cmu.edu | Narapureddy, Dinesh | Carnegie Mellon University | Other | Student - PhD
srinivas@cs.cmu.edu | Narasimhan, Srinivasa | Carnegie Mellon University | PI | Faculty - Tenured
rtamburo@cmu.edu | Tamburo, Robert | Carnegie Mellon University | Co-PI | Faculty - Research/Systems
tzhi@cs.cmu.edu | Zhi, Tiancheng | Carnegie Mellon University | Other | Student - PhD
Budget
Amount of UTC Funds Awarded
$97,867.00
Total Project Budget (from all funding sources)
$97,867.00
Documents
Type |
Name |
Uploaded |
Data Management Plan |
dmp_intersections2019.pdf |
Jan. 3, 2020, 1:58 p.m. |
Project Brief |
summary_slides.pptx |
March 17, 2020, 7:32 a.m. |
Publication |
Traffic4d: single view reconstruction of repititious activity using longitudinal self-supervision |
March 30, 2021, 10:26 a.m. |
Presentation |
Visual Understanding of Traffic at Intersections: Access to real-time information |
Oct. 1, 2020, 10:31 a.m. |
Progress Report |
335_Progress_Report_2020-09-30 |
Oct. 1, 2020, 10:41 a.m. |
Progress Report |
335_Progress_Report_2021-03-31 |
March 30, 2021, 10:52 a.m. |
Publication |
Traffic4D: Single View Reconstruction of Repetitious Activity Using Longitudinal Self-Supervision |
July 12, 2021, 11:33 a.m. |
Final Report |
Final_Report_-_335.pdf |
Aug. 1, 2021, 9:53 a.m. |
Publication |
Occlusion-net: 2d/3d occluded keypoint localization using graph networks |
April 6, 2022, 5:07 a.m. |
Publication |
Deconvolving Diffraction for Fast Imaging of Sparse Scenes |
April 6, 2022, 5:08 a.m. |
Publication |
Exploiting & Refining Depth Distributions with Triangulation Light Curtains |
April 6, 2022, 5:08 a.m. |
Match Sources
No match sources!
Partners
Name | Type
City of Pittsburgh Department of Mobility and Infrastructure | Deployment Partner