
Project

#32 Drivable Space Estimation Using Surround View Camera Systems


Principal Investigator
Vijayakumar Bhagavatula
Status
Completed
Start Date
Jan. 1, 2016
End Date
Dec. 31, 2016
Project Type
Research Advanced
Grant Program
Private Funding
Grant Cycle
Old Projects
Visibility
Public

Abstract

Seeing and being seen are the fundamental requisites for transportation safety. The rapid development of computer vision algorithms in recent years has led to an increasing variety of vision-based traffic scene understanding problems, including pedestrian detection, vehicle detection, lane marker detection, road segmentation and vanishing point estimation. Despite their seemingly diverse forms, many of these problems share a similar underlying purpose: knowing where a vehicle can or cannot go based on visual inputs. This naturally motivates us to tackle the transportation safety computer vision problem from a more general perspective: estimating drivable space with computer vision and machine learning based methods. In the proposed research, we seek to address the drivable space estimation problem by exploring discriminative information (e.g., lane markers, road borders, vehicles and other objects) in the visual inputs. We believe that a surround-view camera system can significantly enhance traffic safety by providing complete information about the surroundings. Compared with many prevailing single-camera systems, a surround-view system covers 360 degrees with four synchronized cameras, providing much wider views without blind zones. This leads to better sensing of the surrounding environment. Currently, we have set up a surround-view system on a Volkswagen Tiguan SUV. The system consists of four 180-degree fisheye wide-view cameras, each of which covers one side of the vehicle. As illustrated in the accompanying figure, two side-view cameras are placed under the wing mirrors, while the other two serve as front-view and rear-view cameras. A single video is recorded in which each frame contains the four synchronized views. To illustrate the power of advanced computer vision algorithms combined with a surround-view sensor, we propose to develop and demonstrate three applications relevant to increasing transportation safety: detecting highway borders, detecting drivable regions, and obstacle warning at low speeds.
Description
Motivation
Safety has been, and will continue to be, one of the core issues in transportation. Seeing and being seen are the fundamental requisites for transportation safety. The rapid development of computer vision research in recent years has led to an increasing variety of vision-based traffic scene understanding problems, including pedestrian detection, vehicle detection, lane marker detection, road segmentation and vanishing point estimation (examples illustrated in Fig. 1). Despite their seemingly diverse forms, many of these problems share a similar underlying purpose: knowing where a vehicle can or cannot go based on visual inputs. This naturally motivates us to tackle the transportation safety computer vision problem from a more general perspective: estimating drivable space with computer vision and machine learning based methods. Estimating drivable space is the core sensing objective in both advanced driver assistance and autonomous driving systems, where subsequent operations depend heavily on the output of the estimation. In the figure below (Fig. 1), the images from left to right correspond to pedestrian detection, vehicle detection, lane marker detection, road segmentation and vanishing point estimation.

Technical Approach
In the proposed research, we will first address the drivable space estimation problem by exploring discriminative information in the visual inputs. This is mainly motivated by two facts: (1) discriminative regions often correspond to important components that define the main structure of a road; (2) humans naturally pay more attention to discriminative information. As a result, important road components such as borders and lane markers are deliberately designed to capture visual attention. An example is shown in the figure below to illustrate the difference between discriminative and non-discriminative regions. In this highway scenario, the green circle indicates a region that attracts less attention while driving, while the red circles indicate regions that attract more visual attention.
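As a rough illustration of what "discriminative" means computationally, the minimal Python sketch below (assuming OpenCV and NumPy are available) highlights high-contrast regions such as lane markings, barriers and vehicle edges using a simple gradient threshold. It is only an illustrative baseline, not the learned model developed in this project.

    import cv2
    import numpy as np

    def discriminative_mask(frame_bgr, grad_thresh=60):
        """Highlight high-contrast regions (lane markings, barriers, vehicle edges)
        that tend to carry the discriminative cues discussed above.

        This is an illustrative gradient-based saliency sketch only.
        """
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        mask = (mag > grad_thresh).astype(np.uint8) * 255
        # Suppress isolated responses so only extended structures remain.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        return mask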

In addition to exploring discriminative information, we will design methods that exploit the structural information of a traffic scene. The majority of transportation scenarios, such as urban streets and highways, contain highly structured scene layouts. For example, road borders and lane markers are usually continuous, and drivable collision-free road surfaces are usually smooth and continuous. Utilizing such "structural" information significantly helps to improve prediction performance and therefore estimation accuracy.
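The sketch below illustrates one simple way such a continuity prior could be imposed as a post-processing step on a sequence of noisy per-column border-position estimates. The jump clamping and median smoothing here are hypothetical choices made for illustration, not the method developed in this project.

    import numpy as np

    def enforce_continuity(raw_positions, max_jump=5, window=7):
        """Illustrative post-processing that enforces a continuity prior on a
        sequence of per-column border-row estimates.

        raw_positions: 1-D array of noisy row indices, one per image column.
        max_jump: largest allowed change between neighbouring columns (pixels).
        """
        pos = np.asarray(raw_positions, dtype=float)
        # Clamp abrupt jumps between neighbouring columns.
        for i in range(1, len(pos)):
            step = np.clip(pos[i] - pos[i - 1], -max_jump, max_jump)
            pos[i] = pos[i - 1] + step
        # Light median smoothing to remove residual outliers.
        half = window // 2
        padded = np.pad(pos, half, mode="edge")
        return np.array([np.median(padded[i:i + window]) for i in range(len(pos))])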

The Surround-View System
The range of sensors is a critical issue in designing a reliable drivable space estimation system. The structure of the vehicle and the nature of the human visual system limit a driver's field of view from inside a vehicle. Many traffic accidents are caused by inadequate visibility, ranging from backup crashes involving children and pedestrians to lane-merging accidents caused by overlooking vehicles in the blind zone. Vehicle camera systems are known to be effective complements to this limitation. We believe that a surround-view camera system can significantly enhance traffic safety by providing complete information about the surroundings. Compared with many prevailing single-camera systems, a surround-view system covers 360 degrees with four synchronized cameras, providing much wider views without blind zones. This leads to better sensing of the surrounding environment. Currently, we have set up a surround-view system on a Volkswagen Tiguan SUV. The system consists of four 180-degree fisheye wide-view cameras, each of which covers one side of the vehicle. As illustrated in the figure to the right, two side-view cameras are placed under the wing mirrors, while the other two serve as front-view and rear-view cameras. A single video is recorded in which each frame contains the four synchronized views. The system covers the area surrounding the vehicle with almost no blind zones.
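Since each recorded frame packs the four synchronized views into one image, processing typically begins by splitting a frame back into per-camera views. The sketch below assumes a 2x2 tiling and a hypothetical file name; the actual layout of the recording may differ and the slicing would be adjusted accordingly.

    import cv2

    def split_surround_frame(frame):
        """Split one recorded frame into the four synchronized camera views.

        Assumes a 2x2 tiling (front, rear, left, right); this layout is an
        assumption for illustration, not the confirmed recording format.
        """
        h, w = frame.shape[:2]
        return {
            "front": frame[: h // 2, : w // 2],
            "rear":  frame[: h // 2, w // 2 :],
            "left":  frame[h // 2 :, : w // 2],
            "right": frame[h // 2 :, w // 2 :],
        }

    # Usage: read the recorded video and process each view separately.
    cap = cv2.VideoCapture("surround_view_recording.avi")  # hypothetical file name
    ok, frame = cap.read()
    if ok:
        views = split_surround_frame(frame)
    cap.release()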

Applications 
To illustrate the power of combining advanced computer vision algorithms with a surround-view camera sensor, we will focus on the following applications relevant to increasing transportation safety.

Application I: Detecting Highway Borders
Detecting the border of a highway is important because it provides road structure cues that greatly support localization in autonomous driving systems. In autonomous driving, localization is the critical task of determining the exact position of the vehicle. Knowing where the road border is can significantly reduce the possibility of the vehicle driving off the road and causing an accident. To address the problem, we start from the most basic setting, in which a side-view camera looks out to the right side of the vehicle. The detection task is posed as detecting the physical road border (e.g., guard rails, concrete barriers, etc.). We assume that the border is on the right side of the car, but the approach can easily be applied to the other three views available from our surround-view system. In addition, detections from multiple views can jointly support and improve each other.
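A toy baseline for the side-view border detector is sketched below: for each image column, the row with the strongest horizontal-edge response is taken as a crude proxy for the physical border (guard rail, concrete barrier). In the actual project a learned classifier would replace this scoring; the sketch only illustrates the per-column formulation, which pairs naturally with the continuity post-processing shown earlier.

    import cv2
    import numpy as np

    def detect_border_rows(side_view_bgr):
        """Toy per-column border localisation for a right-looking side view.

        Returns one candidate border row per image column, plus its edge score.
        This is an illustrative baseline, not the project's actual detector.
        """
        gray = cv2.cvtColor(side_view_bgr, cv2.COLOR_BGR2GRAY)
        # Vertical gradient responds to horizontal structures such as rails.
        gy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=5))
        border_rows = np.argmax(gy, axis=0)                  # one row per column
        scores = gy[border_rows, np.arange(gy.shape[1])]
        return border_rows, scores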

Application II: Detecting Drivable Regions / Highway Shoulders
Besides detecting the physical road border, we are also interested in detecting the drivable regions and highway shoulders beside the vehicle. In the United States, highways often contain a so-called "shoulder region," usually defined as the region between the outer-most solid lane marking and the physical road border. This shoulder region serves as a buffer zone before the physical limit of the road. Knowing where the highway shoulder is allows a vehicle to move into the drivable space on the shoulder in case the road ahead is obstructed.
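Given per-column estimates of the outer-most solid lane marking and of the physical border (from hypothetical detectors such as the sketches above), the shoulder region as defined here is simply the band between the two. The composition step below is illustrative only.

    import numpy as np

    def shoulder_mask(image_shape, lane_rows, border_rows):
        """Binary mask of the shoulder: the band between the outer-most solid
        lane marking and the physical border, per the definition in the text.

        lane_rows, border_rows: per-column row indices from hypothetical
        lane-marking and border detectors.
        """
        h, w = image_shape[:2]
        rows = np.arange(h).reshape(-1, 1)            # (h, 1), broadcast over columns
        top = np.minimum(lane_rows, border_rows)      # upper boundary per column
        bottom = np.maximum(lane_rows, border_rows)   # lower boundary per column
        return (rows >= top) & (rows <= bottom)       # (h, w) boolean mask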

Application III: Obstacle Warning at Low Speed
Reduced driver attention is another major cause of accidents. Even with adequate sensing information, drivers can still fail to notice critical information and react accordingly. Such mistakes are almost inevitable for humans, as they can be caused by many factors, including fatigue, alcohol, texting and a variety of other distractions. In such cases, a system that warns drivers in time is likely to significantly reduce the chance of an accident.

A smart ADAS (advanced driver assistance system) will be particularly effective in reducing the chance of accidents when comprehensive sensing is required but drivers are not paying adequate attention. These situations cover many frequent scenarios such as parking, backing up, turning and lane merging. Instead of addressing this problem by building detectors that directly detect vehicles and pedestrians in the traditional way, we propose to consider the problem in a dual way: treating all obstacles higher than the ground surface as a negative class and directly detecting the drivable, collision-free ground surface.
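A minimal sketch of how such a warning could be triggered is shown below: given a ground-surface mask from a hypothetical segmenter, everything that is not ground inside a near-field warning zone is treated as an obstacle, and a warning is raised when the zone is insufficiently drivable. The threshold is a placeholder, not a tuned value from this project.

    import numpy as np

    def low_speed_warning(ground_mask, warning_zone, min_free_ratio=0.9):
        """Raise a warning when the near-field zone is insufficiently drivable.

        ground_mask: (H, W) boolean map from a hypothetical ground-surface
        segmenter; non-ground pixels are treated as obstacles, following the
        "dual" formulation described above.
        warning_zone: (H, W) boolean map marking the area just around the vehicle.
        """
        zone_pixels = warning_zone.sum()
        if zone_pixels == 0:
            return False
        free_ratio = np.logical_and(ground_mask, warning_zone).sum() / zone_pixels
        return free_ratio < min_free_ratio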


Timeline
0-3 months: Develop the algorithms for drivable space estimation.
4-6 months: Develop the algorithm for detecting highway borders.
7-9 months: Develop the algorithm for detecting highway shoulders.
10-12 months: Develop the algorithm for obstacle warning at low speed.
Strategic Description / RD&T

    
Deployment Plan
none
Expected Outcomes/Impacts
•	Algorithm to detect highway borders.
•	Algorithm to detect highway shoulders.
•	Algorithm for obstacle warning at low speed.
•	Demonstration of the developed algorithms on real surround-view camera video.
Expected Outputs

    
TRID


    

Individuals Involved

Email Name Affiliation Role Position
kumar@ece.cmu.edu Bhagavatula, Vijayakumar ECE PI Faculty - Tenured

Budget

Amount of UTC Funds Awarded
$55,219.00
Total Project Budget (from all funding sources)
$55,219.00

Documents

Type Name Uploaded
Final Report UTC_32.pdf Dec. 10, 2018, 4:25 a.m.

Match Sources

No match sources!

Partners

No partners!