Project

#85 Sensor-Based Assessment of the In-Situ Quality of Human-Computer Interaction in the Cars


Principal Investigator
SeungJun Kim
Status
Completed
Start Date
Jan. 1, 2015
End Date
Dec. 31, 2015
Project Type
Research Advanced
Grant Program
MAP-21 TSET - Tier 1 (2012 - 2016)
Grant Cycle
2015 TSET UTC
Visibility
Public

Abstract

Nowadays, technology enables us to interact with information anytime and anywhere, including in the car while driving. Appropriate in-vehicle interaction, however, depends on the current driving situation and the driver's state. Intelligent information systems and advanced driver assistance systems help drivers maintain high situational awareness during vehicle operation, yet they also increase visual distraction and cognitive load. Existing research inadequately addresses how interventions of context-sensitive information affect drivers' attention management and cognitive load, and whether they increase workload and thus hinder attentive, safe driving.

To fill this gap in human-vehicle interaction research, this project explores the design and development of a series of in-vehicle sensing prototypes, experiments with using Dedicated Short-Range Communication (DSRC) as an information stream, examines a broad range of sensor data streams to understand driver/driving states, and develops a model-based driver/driving assessment by using machine learning technology. The long-term goal of this project is to sustain and/or restore safe driving ability by reducing attentional and cognitive workload while delivering relevant information. The near-term goal is to understand driving situations and driver states, particularly when drivers engage in peripheral interactions—actions not related to the primary task of driving—as they indicate appropriate opportunities to interact with information during naturalistic driving.
    
Description
Statement of problem: Human attention is limited and its saturation results in diminished situational awareness, which in turn may result in unsafe driving situations. When drivers engage in peripheral interactions (e.g., texting or changing the radio station), their split attention may reduce their ability to fully process traffic conditions or control the vehicle.
Drivers’ situational awareness can be supported by advanced driver assistance systems, such as blind-spot information systems and parking assistance. Such context-sensitive feedback supports drivers’ in-situ decision making in situations of high uncertainty. However, scant research has explored how context-sensitive information affects drivers’ attention management and cognitive load, or whether it increases workload and thereby hinders attentive/safe driving.
Sensory feedback delivered by secondary assistance systems alerts drivers to current or impending driving situations; however, it also increases visual distraction and taxes cognitive load. Thus, it becomes increasingly important to balance a driver’s situational awareness and in-situ capability, which may also determine the in-situ quality of human-computer interaction in the car. In this project, we aim to implement a sensor-incorporated test-bed for conducting lab and field experiments in human-vehicle interaction. We also aim to build a system that adapts situation-sensitive information based on in-situ driving and cognitive load models for safe navigation.

Research objectives: The primary research goal in this project is to reduce drivers’ attentional and cognitive workload, while ensuring relevant information is delivered so that drivers can perform desired peripheral interactions. In order to better identify situations in which drivers enter high cognitive load states, we 1) examine a broad range of sensor data streams to understand driver/driving states (e.g., motion capture, peripheral interaction monitoring, psycho-physiological responses, etc.), and 2) present a model-based driver/driving assessment by using machine learning technology.
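To make the second objective concrete, the following is a minimal sketch of a model-based driver-state classifier trained on tabulated sensor features. The feature names, the synthetic data, and the two-class "interruptible vs. busy" labeling are illustrative assumptions, not the project's actual schema; a random forest stands in for whatever machine learning technique is ultimately chosen.

```python
# Sketch of a model-based driver-state classifier (assumed setup).
# Feature columns are hypothetical stand-ins for real sensor streams:
# [grip_pressure, seat_shift, gaze_off_road_ratio, heart_rate]
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))

# Toy labeling rule for demonstration: high gaze-off-road ratio plus
# high grip pressure marks the driver as "busy" (label 1).
y = ((X[:, 2] + X[:, 0]) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

The same train/evaluate loop applies whether the labels come from simulated traffic incidents in the lab or from annotated field data.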
Specifically, when context-sensitive information is presented (e.g., navigation information), we will investigate the most appropriate timings for the interruption by 1) modeling its perceived value and 2) determining its presentational cost. As a source of incoming information, we plan to simulate Dedicated Short-Range Communication (DSRC) situations that can provide drivers with necessary information, but that risk functioning as an unwanted or unsafe distraction. Data collection will be performed in both simulated and naturalistic driving environments (i.e., in the lab and the field). To investigate the interaction of driver experience with intervention manners, initial test conditions will include 'don't deliver information' vs. 'deliver information as it comes in' vs. 'delay information delivery'. In the proposed work, we will explore the following specific research questions:
- During which driver and driving states, as revealed in sensor data streams, are drivers interruptible and open to dual-task demands (e.g., respond to smartphone push information and/or situational information delivered through a DSRC channel)?
- How do those states interact with the task-relevancy of dual-task demands (i.e., relevancy to the primary task) and in-situ ambient driving contexts (e.g., outside traffic situations or in-vehicle driving conditions)?
- Which set of sensor data features best helps detect driver/driving states in the context of the preparation of peripheral interactions in the car?
- When are interruption times most appropriate for a secondary information item or a secondary cognitive task that relies on peripheral interaction in the car?
This project is expected to contribute to the field of human-vehicle interaction by 1) creating knowledge and technology that provides appropriate feedback to drivers about their abilities, thus enhancing their self-awareness and enabling them to modify their behaviors accordingly and 2) understanding how automotive interfaces impact driver cognitive capability and developing better methods for knowing how often drivers enter high cognitive load states.

Proposed tasks: This project will involve two main tasks: 
Task 1: Sensor design - The goal of this task is to design touch- and pressure-sensitive sensor prototypes to capture drivers’ peripheral interaction states. The prototypes will be installed as 1) steering wheel covers, 2) driver seat covers, and 3) car instrument panels (e.g., GPS devices, car radios, windshield controls, etc.). The sensors will measure drivers’ hand poses and postures, seating poses and postures, and their interaction states with instrument panels, including touch pressures. In addition, we will capture driver gestures using forearm-worn input devices based on muscle gesture recognition.
This raw sensor data will be processed using Arduino microprocessors and transmitted to Android tablets via Bluetooth. Two tablets will be installed in the car. One tablet will manage communications between sensors (e.g., receiving data from the wearable input devices) and monitor car motion by using on-board diagnostic sensors; the other tablet will video-record the driver’s behaviors and the front driving view. Both tablets will work as central computing and communication units for audiovisual data logging, reception, storage, and management.
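The tablet-side ingestion described above can be sketched as a small parser for sensor packets arriving over the Bluetooth serial link. The "millis,sensor_id,value" CSV framing and the sensor identifiers here are hypothetical; the actual on-wire protocol is left to Task 1's iterative design.

```python
# Sketch of tablet-side parsing for one sensor packet received over
# Bluetooth serial. The CSV framing "millis,sensor_id,value" is an
# assumed protocol, not the project's actual one.
from dataclasses import dataclass


@dataclass
class SensorReading:
    timestamp_ms: int   # Arduino millis() at sampling time
    sensor_id: str      # hypothetical id, e.g. "wheel_grip_3"
    value: float        # raw reading scaled to [0, 1]


def parse_packet(line: str) -> SensorReading:
    """Parse one newline-terminated CSV packet from the Arduino."""
    ts, sid, val = line.strip().split(",")
    return SensorReading(int(ts), sid, float(val))


reading = parse_packet("1042,wheel_grip_3,0.87\n")
print(reading)
```

In the real system each parsed reading would be timestamped against the tablet clock and appended to the audiovisual data log alongside the video streams.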
Task 2: Human-subject experiments - The goal of this task is to create sensor-based models to assess driver/driving states and validate those models in a simulated environment and/or on real roads. In the proposed work, we will build a vehicular simulation test-bed and conduct lab and field experiments equipped with the sensor prototypes developed in Task 1. The team has approved IRBs to conduct human-subjects experiments in either simulated or field driving environments. In particular, HS14-219, entitled “DriveCap: Naturalistic Driver Data”, allows the team to perform data collection by using multiple sensors and our sensor prototypes during naturalistic driving situations.
In the lab experiments, we will simulate a 3D driving environment to generate log data about real-time driving states such as steering angle, pedal controls, and drivers’ response time to simulated traffic incidents. We will collect pilot driving data and build models that predict drivers’ behavioral changes and cognitive load changes in the context of the preparation of upcoming peripheral interactions in the car.
In the field experiments, we will identify the least intrusive set of sensor devices that can be installed in actual cars, collect field data in naturalistic driving situations, and refine the prediction models obtained in the lab experiments. The primary goal of this task is to validate the practicability of the sensor data prototypes and the performance of the models in the field (i.e., the scalability test and refinement of the prediction models in the naturalistic field conditions).
Our previous UTC project (2014) demonstrated that a large number of sensor features (more than 140) enable a system to monitor driver workload states in near real-time (every second) during naturalistic driving. We also learned that derived features computed from the raw sensor data streams (see Attachment 2, Figure 1) contribute more meaningfully to the performance of a driver state detection system than the raw streams themselves. However, we left longer-term data collection from a few individuals, as well as more personalized features (e.g., route familiarity or habitual behaviors in the car cockpit) that could improve individual-model classification accuracy, to future work. In the proposed work, we will further develop new sensor prototypes, apply more rigorous analytics and machine learning techniques, and assess the in-situ quality of driver experience in DSRC situations.
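The derived features mentioned above can be illustrated with a minimal windowing sketch: a raw stream is collapsed into one feature row per one-second window. The 50 Hz sampling rate and the mean/std/range feature set are illustrative assumptions, not the project's actual 140-feature catalog.

```python
# Sketch of deriving per-second window features from a raw sensor
# stream. Sampling rate (50 Hz) and the feature set are assumptions.
import numpy as np


def window_features(raw: np.ndarray, rate_hz: int = 50) -> np.ndarray:
    """Collapse a 1-D raw stream into one feature row per 1-second window."""
    n_windows = len(raw) // rate_hz
    windows = raw[: n_windows * rate_hz].reshape(n_windows, rate_hz)
    return np.column_stack([
        windows.mean(axis=1),                        # average level
        windows.std(axis=1),                         # variability
        windows.max(axis=1) - windows.min(axis=1),   # within-window range
    ])


rng = np.random.default_rng(1)
stream = rng.normal(size=50 * 10)   # 10 seconds of synthetic raw data
feats = window_features(stream)
print(feats.shape)                  # one row of 3 features per second
```

Feeding such windowed features, rather than the raw samples, into the classifier is what the 2014 finding suggests improves detection performance.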
Timeline
Task 1 - Sensor design 
- Sensor prototypes (20% done, 4.5 months, iterative design; see Attachment 2, Figure 2)
- Pilot field data collection + pilot analysis (1 month)
- Sensor log parsing tool (30% done, 1 month)
- Machine-learning based detection and assessment systems (Stage 1, 20% done, 1.5 months)

Task 2 - Human-subject experiments
- Sensor data collection + DSRC simulation test-bed (1-month lab study + 4-month field studies)
- Visual analytic tool (20% done, 1.5 months; see Attachment 2, Figure 3)
- Sensor feature tabulating tool + ground truth annotation tool (10% done, 1.5 months)
- Machine-learning based detection and assessment systems (Stage 2, 1.5 months)
Others
- Use case scenario creation (1 month, in March 2015)
- Work dissemination events: academic conference, technical workshop (in April and May 2015)
- Pilot deployment (in August and November 2015)
Strategic Description / RD&T

    
Deployment Plan
We will deploy sensor prototypes collaboratively with three partners (see the attached letters of support). Two research-oriented institutes, KETI and UNIST, will assist in the development, system validation, and pilot deployment of sensor prototypes, especially in engineering wearable and embeddable sensing platforms to monitor in-situ driver and driving states. Also, to help us model driving behaviors and driving styles, Cindy Cohen School of Driving, LLC, a local driving school in Pittsburgh, will deploy our IRB-approved experimental apparatuses and collect data during their driving sessions.

We also have a research agreement in progress with a German branch of TAKATA Corporation, one of the world's leading suppliers of advanced automotive safety systems and products, to jointly produce in-car sensing platforms and sensor-based assessment of driver workload. In addition, we are currently discussing potential collaborations about driver experience sampling in connected vehicle environments during naturalistic driving with the corporation’s Pittsburgh branch. 

Note: The research team, led by Dr. Kim and Prof. Dey, is currently partnered with KETI in a newly awarded three-year international R&D project on wearable UI/UX assessment. The project scope includes the deployment of wearable sensing/feedback devices for usability tests in automotive user interfaces and in-vehicle applications, and we will use the project fund as a source of leverage for the proposed work. As part of our goal to generate further collaborations, Prof. Dey plans to meet with KETI this December to discuss a future MOU between CMU and the Institute, as well as a technical workshop between Dr. Kim and Prof. Oakley (UNIST) on field deployment of custom-built sensing platforms. Both will be held in South Korea; the latter event will be scheduled during the ACM CHI 2015 conference (http://chi2015.acm.org/), where the team is expected to present this year's UTC outcome and meet principal investigators from KETI.
Expected Outcomes/Impacts
In the proposed work, we intend to 1) develop sensing prototypes that are wearable by vehicle drivers and/or embeddable in vehicles, 2) produce a visual analytic tool customized for time-series sensor data collected from naturalistic driving, 3) present a sensor-based driver state detection system (e.g., detect driver interruptibility in near real-time) and a model-based driving state assessment system (e.g., discriminate driver aggressiveness), 4) generate a set of hypotheses and use case scenarios for driver-centered intelligent information systems that improve in-situ driver experience, and 5) disseminate our work. The performance metrics include:
- Sensor prototypes: data acquisition rate (> 1 sample per second); sensing accuracy (> 95%); data loss rate per individual driver (< 10% of driving time); and the proportion of participants who provide full datasets across multiple sensors (> 80% of total participants).
- Visual analytic tool: user acceptance rate on a five-point Likert scale (> 70% of study participants who have intermediate experience in time-series sensor data collection and analysis; after performing a series of classification and visual scan tasks, our tool should be evaluated as more usable for more than 5 visual analytic features, compared to their own tools).
- Machine-learning based detection and assessment systems: classification accuracy (> 90% for both population and individual models in the detection system, and > 80% for both violation-class and questionnaire-class models in the assessment system); real-time performance (updates at least every 3 seconds).
- Hypotheses and use case scenarios: the number of situational factors identically issued by more than 70% of study participants (> 10 factors); sympathetic response rate for use case scenarios (> 85% of study participants, with respect to more than 5 scenarios for the aspects of ‘value’, ‘cost’, and ‘driver capability’ when an in-car information system pushes proactive information).
- Dissemination of work: publications at premier venues in HCI (e.g., ACM CHI, UIST, UbiComp) and presentations at technical workshops, invited talks, and courses (> 10 times).
Expected Outputs

    
TRID


    

Individuals Involved

Email Name Affiliation Role Position
anind@cs.cmu.edu Dey, Anind HCII Co-PI Faculty - Research/Systems
noemail1@noemail.com Kim, SeungJun HCII PI Faculty - Research/Systems

Budget

Amount of UTC Funds Awarded
$79,927.00
Total Project Budget (from all funding sources)
$79,927.00

Documents

Type Name Uploaded
Final Report 68__85_-Sensor_BAsed_Traffic_Signals_H7sCFjG.pdf June 21, 2018, 8:39 a.m.

Match Sources

No match sources!

Partners

No partners!