#100 Efficient 3D Accident Scene Reconstruction


Principal Investigator
Luis E. Navarro-Serment
Status
Completed
Start Date
Jan. 1, 2015
End Date
Dec. 31, 2015
Project Type
Research Advanced
Grant Program
MAP-21 TSET National (2013 - 2018)
Grant Cycle
2015 TSET UTC - National
Visibility
Public

Abstract

In this project we leverage recent developments in the field of computer vision to develop a system capable of reconstructing scenes in 3D using low-cost cameras, such as the ones embedded in cell phones and tablets. Techniques currently available can take a group of unordered images as input and produce a 3D reconstruction of the scene as output. We will use these techniques to develop a practical and efficient system for 3D accident scene reconstruction. An app will assist the user in taking a complete and adequate set of images. After data acquisition, the images are analyzed and a high-fidelity model of the accident scene is reconstructed. Such a system is much more time-efficient and cost-effective than current methods that use laser scanners. The time saved will significantly reduce delays and congestion. Improving the quality of accident investigations will give better insight into the causes of accidents and thereby inform strategies to improve safety.
Description
Introduction
In this project we plan to advance work we have done in the area of 3D reconstruction of traffic accident scenes using conventional cameras [1]. A sample reconstruction from our previous work is shown in Figure 1. Our goal is to provide accident investigators with an affordable and effective system that will allow them to collect a complete set of pictures from a scene in a short amount of time and then reconstruct a high-fidelity 3D model offline.

We anticipate that our tool will have a significant impact in reducing the time and complexity involved in documenting road accidents. More importantly, because emergency personnel are vulnerable while working close to traffic, any improvement to response and remediation times reduces their exposure to danger, effectively reducing risk for responders [2]. Additionally, shorter remediation times reduce traffic delays, congestion, and secondary accidents: it has been estimated that one minute of full highway closure can cause up to one mile of congestion [3]. Improving the quality of accident investigations will give better insight into the causes of accidents and thereby inform strategies to improve safety.

Motivation
Currently, investigators use laser scanners to obtain geometrically accurate 3D reconstructions of accident scenes. The typical cost of these scanners is in the tens of thousands of dollars, even surpassing a hundred thousand dollars for some models. This makes it prohibitively expensive to equip each response team with a device; instead, a single device serves an entire geographic area and must be specially requested. It might take a full hour until the device arrives at the accident scene. Such a long delay is unacceptable for most accident cases; as a result, a complete documentation of an accident is only conducted when fatalities have occurred.

Computer vision techniques offer solutions for producing 3D models by merging a collection of images of the object or scene captured from different locations. An example of a system built using some of these techniques is presented in Figure 1. Several advantages make the use of cameras as primary sensors for 3D scene reconstruction worthy of serious consideration. First, the cost of a camera is several orders of magnitude lower than the cost of a laser scanner. For example, the cost of a 3D scanner used by some law enforcement agencies can reach up to $130,000, while the digital camera used for our example costs less than $500. Second, the current state of the art in embedded image processing technology has produced point-and-shoot cameras that allow users to capture high-quality images under the most demanding circumstances, with a minimum of expertise in photography. Third, cameras also tend to be robust and small, which makes them ideal as portable devices, in particular for use in cluttered or confined spaces. Finally, with their integration into cell phones and tablets, cameras have become an everyday item with which most people, including accident scene investigators, are already familiar, either as a tool at work or for recreational purposes.

As part of two other projects [1] we developed a method to take pictures of an accident scene and construct a 3D model (Figure 1). The method uses open-source structure-from-motion software (Figure 2) together with our own scripts and procedures. In the proposed project we want to make this method practical end to end, from efficiently taking the pictures at the scene to producing a polished result.

Approach
There are three main tasks we want to accomplish. First, we want to study the work practices of investigators. Second, we need to assist the user in collecting the data. Finally, we need to ensure that the analysis pipeline is easy to use and that the results are in a format the inspectors can use.

Study work practices
We have already established relationships with several interested parties. Among them are the Allegheny County Emergency Services (see LOT), the Pennsylvania Turnpike Commission (PTC) and several accident inspectors. In fact, the PTC invited us to an accident reconstruction workshop where we met many inspectors and witnessed staged crashes (see Figure 1). We will use these contacts to study the work practices and get feedback on our system.

User assistance
At the core of this task is the development of an assistive image capture application to guide users in the collection of photographic evidence. The input images must satisfy a set of requirements for a scene to be reconstructed properly; for example, each area needs to be photographed at least three times from different camera positions. We want to develop an Android application that helps investigators document the scene by guiding the collection of the images that will be used to reconstruct it. We plan to leverage recent work in our lab, which produced an exploratory smartphone application. The application displays a live camera preview, and its user interface displays vertical crosshairs to ease alignment. Figure 3 presents a pair of sample screenshots with the device aligned (right) and unaligned (left). The camera continuously operates in auto-focus mode so that the images are not blurry. In the final version, the app will instruct the photographer which picture to take next so that all the requirements are satisfied, e.g. photographing each area at least three times.
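The coverage rule above (at least three photographs of each area, from distinct positions) lends itself to simple bookkeeping. The sketch below illustrates one way such an app could track coverage; the area identifiers, the position representation, and the helper function are our own illustrative assumptions, not the app's actual data model.

```python
from collections import defaultdict

MIN_VIEWS = 3  # each area must be photographed from at least 3 distinct positions

def undercovered(photos):
    """photos: iterable of (area_id, camera_position) pairs already captured.
    Returns the area_ids that still need photographs from more viewpoints."""
    views = defaultdict(set)
    for area, position in photos:
        views[area].add(position)
    return sorted(area for area, positions in views.items()
                  if len(positions) < MIN_VIEWS)
```

After every shot, the app could recompute this set and prompt the photographer toward the next under-covered area.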

Analysis pipeline
The software and scripts we have are research code; that is, they were written to test ideas and concepts, but are not optimized for efficiency, stability, or ease of use. We want to package the code so that it can be used by someone with some technical experience. The output should also be compatible with the investigative tools the inspectors are already using. Finally, we will evaluate and specify the main parameters so that the user can make informed analysis decisions, e.g. tradeoffs between speed and accuracy.
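One way to expose such tradeoffs to a user with some technical experience is a small command-line front end over the pipeline. The sketch below is hypothetical: the flag names, quality presets, and output format are illustrative choices, not the project's actual interface.

```python
import argparse

def build_parser():
    """Hypothetical command-line front end for a packaged reconstruction pipeline."""
    p = argparse.ArgumentParser(
        description="Reconstruct a 3D accident scene from a set of photographs.")
    p.add_argument("image_dir",
                   help="directory containing the scene photographs")
    p.add_argument("--quality", choices=["fast", "balanced", "accurate"],
                   default="balanced",
                   help="speed/accuracy tradeoff preset for the reconstruction")
    p.add_argument("--output", default="scene_model.ply",
                   help="output model file (a common point-cloud format)")
    return p
```

Presets bundle the low-level parameters so the user makes one informed choice (speed vs. accuracy) instead of many obscure ones.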

Conclusion
After the completion of the tasks we will have a system for 3D reconstruction of accident scenes that is ready to be tested in the field. Our team has the experience in research as well as systems design and development, and our laboratory has the necessary infrastructure to ensure the success of this project. 
Timeline
The work is planned over a 12-month time frame. The timetable is as follows:

Jan-Apr: Study Work Practices
Mar-Aug: Develop User Assist App
May-Oct: Package Code for Practical Use
Oct-Dec: Evaluation of System
Nov-Dec: Develop Deployment Plan
Strategic Description / RD&T

    
Deployment Plan
By the end of the project the system will be ready to be tested in the field. Several agencies (ACES and PTC) and individual inspectors have indicated interest in using this system and willingness to test it. We anticipate that we can run a pilot test with them after this project. We are also working with Near Earth Autonomy (a CMU startup company), which wants to commercialize this system [1].
Expected Outcomes/Impacts
Key Expected Results:
• Detailed quantitative evaluation of reconstruction quality.
• Prototype system for assisted image capture, practical analysis pipeline, and results compatible with current tools.
• Prototype system ready to be field tested.

We plan to focus our evaluation on two properties of scene reconstruction: geometric accuracy and completeness [4]. Geometric accuracy measures how close the reconstructed model R is to the ground-truth model G (reported in meters). Similarly, completeness measures how much of G is modeled by R (reported as a percentage).
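On models represented as sampled point clouds, these two metrics can be computed from nearest-neighbor distances. The sketch below assumes that representation; the brute-force search, the accuracy quantile, and the completeness tolerance are our own illustrative choices, not values specified by the project.

```python
import numpy as np

def nearest_dists(src, dst):
    """For each point in src, distance (m) to the nearest point in dst (brute force)."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return d.min(axis=1)

def accuracy(R, G, q=0.9):
    """Geometric accuracy: the q-quantile of R-to-G nearest distances, in meters."""
    return float(np.quantile(nearest_dists(R, G), q))

def completeness(G, R, tol=0.05):
    """Completeness: fraction of G points within tol meters of some point in R."""
    return float((nearest_dists(G, R) <= tol).mean())
```

A quantile is used for accuracy (rather than the maximum) so that a few outlier points do not dominate the score; the tolerance in completeness sets how close a reconstructed point must be to count as "modeling" a ground-truth point.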
Expected Outputs

    
TRID


    

Individuals Involved

Email Name Affiliation Role Position
lenscmu@ri.cmu.edu Navarro-Serment, Luis E. Robotics Institute PI Faculty - Research/Systems

Budget

Amount of UTC Funds Awarded
$99,551.00
Total Project Budget (from all funding sources)
$100,278.00

Documents

Type Name Uploaded
Final Report 100_finalReport.pdf Aug. 17, 2018, 4:03 a.m.

Match Sources

No match sources!

Partners

No partners!