Project

#68 Sensor-based Assessment of the In-Situ Quality of Human-Computer Interaction in Cars.


Principal Investigator
SeungJun Kim
Status
Completed
Start Date
Jan. 1, 2016
End Date
Dec. 31, 2016
Project Type
Research Advanced
Grant Program
MAP-21 TSET National (2013 - 2018)
Grant Cycle
2016 TSET UTC
Visibility
Public

Abstract

Today’s technologies enable us to interact with information spaces ubiquitously – anytime and anywhere, including in cars while driving. These technologies deliver information proactively and allow drivers to maintain high situation awareness; however, the same technologies interrupt attention and cognition, thereby increasing workload and potentially hindering safe driving (e.g., push notifications may distract drivers who are looking at side mirrors). Nevertheless, existing research inadequately addresses how the timing of information interventions and their presentation modes interact with the in-situ values and costs of the HCI experience in cars. To fill this gap, this project creates enabling technologies that coordinate contextual information with sensor-detected interruptible moments to adapt to drivers’ in-situ attentional and cognitive capabilities. This project also presents a driver-centered workload manager that mediates interruptions and maintains a high-quality HCI experience for drivers. The long-term goal of this project is to sustain and/or restore safe driving by reducing attentional and cognitive workload while delivering relevant information to drivers. The near-term goal is to refine our key technology, i.e., sensor-based detection of driver interruptibility, to create driver-experience assessment models based on information about the real-time mechanisms that interact with drivers’ perceived value of presented information. This refined technology will underlie our new intelligent in-car workload manager.
The project activities include the design of in-vehicle cyber-physical systems, including a range of sensing and feedback prototypes; the development of visual analytics and machine-learning tools to assess driver interruptibility, drivers’ behavioral routines, and the quality of driver experience; and the conduct of human-subject experiments in naturalistic field driving situations as well as in a connected-vehicle test bed (e.g., using Dedicated Short-Range Communication as an information stream).
Description
Statement of problem: As cyber space gets smarter, physical spaces have become more (internet-)connected. In fact, today’s technologies enable us to interact with information ubiquitously – virtually anywhere at any time – even before we request it. For example, push notifications or GPS route guidance are delivered based on the current time and location of smartphone users. However, these technologies can interrupt our attention and cognition. In particular, interruptions while driving increase driver workload, can reduce performance on the primary driving task, and can therefore be quite dangerous. Thus, accurately identifying when a driver can be interrupted (i.e., when attention can be safely split to include the proactively intervened information with minimal increase in driver workload) is critical for building intelligent in-car information systems. In our previous UTC project, we successfully developed the technology for near real-time detection of driver interruptibility based on a range of sensor data streams. In the proposed work, we aim to apply this key technology to improve the intelligence of in-car information systems.

Key technology: In our most recent UTC project (2015), we collected sensor data from 25 drivers during naturalistic driving – an average of 1.25 hours of driving per driver. In total, we extracted 152 sensor features (OBD: 72, accelerometer sensors: 40, physiological sensor: 40) and 5 manually annotated traffic-related features from videos. We used instances of drivers engaged in peripheral interactions as ground-truth moments of split attention while managing interruptions. As a result, we demonstrated that the sensor data could be used to build a machine-learning classifier that determines driver interruptibility every second with 94% accuracy. We also successfully identified the sensor features that best explained the states in which drivers performed peripheral interactions, which contributed to the high performance of our system. Based on our findings, we propose a classifier that could be used to build in-vehicle cyber-physical systems that mediate when drivers use technology to self-interrupt and when drivers are interrupted externally by technology.
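As a rough illustration of this approach, the sketch below trains a nearest-centroid classifier (a deliberately simple stand-in for the project's machine-learning model) on synthetic one-second feature windows. The three features, their distributions, and the labeling rule are invented for illustration; they are not the project's 152-feature sensor set or its actual classifier.

```python
import random

random.seed(0)

def make_sample(interruptible):
    # Synthetic stand-ins for three sensor features of the kind the project
    # describes (vehicle speed, accelerometer variance, heart rate).
    # Here, interruptible moments are low-speed, low-variance, calm states.
    if interruptible:
        return [random.gauss(10, 5), random.gauss(0.1, 0.05), random.gauss(70, 5)], 1
    return [random.gauss(80, 10), random.gauss(0.8, 0.2), random.gauss(90, 8)], 0

data = [make_sample(i % 2 == 0) for i in range(400)]
train, test = data[:300], data[300:]

def centroid(rows):
    # Per-class feature means -> a nearest-centroid decision, one prediction
    # per one-second feature window.
    cols = list(zip(*rows))
    return [sum(c) / len(c) for c in cols]

pos = centroid([x for x, y in train if y == 1])
neg = centroid([x for x, y in train if y == 0])

# Scale features so speed (km/h) does not dominate accelerometer variance.
scale = [abs(p - n) or 1.0 for p, n in zip(pos, neg)]

def dist2(a, b):
    return sum(((ai - bi) / si) ** 2 for ai, bi, si in zip(a, b, scale))

def predict(x):
    return 1 if dist2(x, pos) < dist2(x, neg) else 0

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

On this well-separated synthetic data the classifier is near-perfect; the project's 94% figure comes from real naturalistic driving data, which is far harder.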
Research objectives: This project aims to improve the intelligence of in-vehicle technologies in connected environments. We will examine a broad range of time-series data streams collected from sensors worn by drivers and/or embedded in vehicles. Based on these data, we will estimate drivers’ attention and cognition in near real-time, along with their in-situ behavioral patterns in cars. We expect to present a model-based intelligent intervention of in-vehicle information by using machine-learning technology to improve the in-situ quality of drivers’ human-computer interaction.

Proposed work: In Year 1, we propose to refine our key technology to create sensor-based models that retain information about the real-time mechanisms whereby drivers’ perceived value of the presented information interacts with the nature of the information (Work 1) and the attributes of the sensor-detected interruptible moments (Work 2). In Year 2, we will develop a workload manager that mediates interruptions (Work 3), thereby increasing driver appreciation of the quality of presented information. We plan to embed the workload manager in in-car sensory augmentation systems to validate its usability. The scope of the proposed work is as follows:

- Work 1: Build a sophisticated model of driver interruptibility that factors in the value of push information. The model will comprise features from sensor data streams; therefore, the new model will incorporate quantified information about the mechanisms whereby intervened information interrupts drivers with varying degrees of impact. This model will enable systems to weigh the value of intervened information against the cost to driver attention. For example, if the information would command a driver’s visual engagement but not require an immediate response, it would be better to have our system hold the information until the interruption would not interfere with driving ability, as detected by sensors.
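The value-versus-cost gating described for Work 1 might look like the following sketch; the factor weights, the urgency-override threshold, and all names are illustrative assumptions, not the project's actual model.

```python
from dataclasses import dataclass

@dataclass
class PushInfo:
    urgency: float      # temporal urgency, 0..1 (e.g., route hazard alert ~ 1.0)
    relevance: float    # relevance to the driving task, 0..1
    importance: float   # overall importance, 0..1

def information_value(info: PushInfo) -> float:
    # Illustrative weighting of the Work 1 information factors.
    return 0.5 * info.urgency + 0.3 * info.relevance + 0.2 * info.importance

def should_deliver_now(info: PushInfo, attention_cost: float) -> bool:
    # attention_cost: sensor-estimated cost of interrupting right now, 0..1.
    # Highly urgent information overrides; otherwise hold the message until
    # a low-cost (interruptible) moment is detected.
    return info.urgency > 0.9 or information_value(info) > attention_cost

hazard = PushInfo(urgency=1.0, relevance=1.0, importance=1.0)
advert = PushInfo(urgency=0.1, relevance=0.0, importance=0.1)

print(should_deliver_now(hazard, attention_cost=0.9))   # delivered despite load
print(should_deliver_now(advert, attention_cost=0.9))   # held during busy driving
print(should_deliver_now(advert, attention_cost=0.02))  # delivered at an idle moment
```

The point of the sketch is the asymmetry it encodes: a hazard alert interrupts even a busy driver, while a low-value advertisement waits for a sensor-detected opportune moment.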
In Work 1, we will collect sensor data from participants in a field driving session, during which we will push interruptions of varying relevance. Specifically, we will factor the information according to:
  - temporal urgency (e.g., high vs. low, such as alerts of upcoming route hazards vs. local advertisements),
  - relevance to the driving task (e.g., high vs. low, such as windshield wipers vs. car radio),
  - overall importance (e.g., high vs. low, such as low fuel vs. oil change), and/or
  - expected benefit (e.g., cognitive aid vs. memory aid vs. auxiliary knowledge support).

- Work 2: Create a model of the driver experience of interruptions associated with the attributes of sensor-determined opportune moments. Whereas our previous UTC work (2015) sought to identify interruptible moments in real time and explored the feasibility of a sensor-based approach, it did not consider pushed information or actual interruptions. Further, although our sensors accurately detected opportune moments for driver interruptions, this did not always imply that drivers would appreciate an interruption, regardless of its informative value. In Work 2, we hypothesize that driver acceptance will interact with how and in which contexts drivers perceive/receive the sensor-suggested opportune moments. Our previous work only briefly touched on the practical aspect of using sensors to determine the ideal duration of interruptible moments. Work 2 addresses this issue in greater detail and may significantly change interruption designs (e.g., adapting push notifications according to the expected duration of sensor-predicted opportune moments in order to improve the driver's experience of the interruption).
We will build the first version of the model by pushing interruptions at different timings to assess the real-world impact of being able to detect interruptible moments while driving; then, we will refine the model to incorporate information about the mechanisms whereby the attributes of opportune moments interact with driver experience. Specifically, we plan to explore the following attributes of opportune moments:
  - expected duration (e.g., 5 vs. 30 seconds),
  - driver and driving states in sensor-suggested opportune moments (e.g., on a highway vs. at a red light, at 10 km/h vs. 80 km/h, on curved vs. straight roads, at the driver's high- vs. low-variance heart rate), and/or
  - adjustable latency (e.g., interruptions occurring at 1-sec vs. 3-sec latencies from the time the opportune moment has been detected).

- Work 3: Build a workload manager that regulates the flow of information to drivers and the specific sensory feedback to limit interference with driving. We will apply our key technology and the new models from Work 1 and 2 to coordinate:
  - the level of detail of route guidance information to drivers (e.g., simple vs. complex) and
  - the proportion of sensory feedback modes (e.g., 70% visual/30% auditory feedback vs. 10% visual/90% auditory feedback vs. 100% haptic feedback only).
We propose to design the workload manager to perform coordination tasks at interruptible timings as detected by sensors. In our previous UTC work, we presented two prototypes of sensory augmentation systems for in-vehicle use: a head-up display using augmented reality technology and a vibro-tactile steering wheel using haptic technology (See Attachment 2, Figure 3a). We demonstrated that both systems successfully augment drivers' ability to perceive route guidance information during simulated driving tasks. In this work, we learned that individual differences in cognition and/or behavior can significantly reverse the effects of the presented intervention. In addition, the results of the intervention can be influenced by factors such as the flow of pushed information, the modality of the information, the driver's in-situ capability, and the driving situation. In Work 3, we plan to embed the workload manager to interoperate with our sensory augmentation systems, thereby coordinating the delivery of external interruptions in real time during sensor-detected opportune moments while driving. We will evaluate the embedded systems in a simulated driving environment by using our hybrid assessment method (i.e., task performance, self-reporting, eye-tracking, and physiological measurements). We expect this work to reveal the effects of autonomous adaptation of information intervention upon the in-situ driver experience.

Human-subjects experiments: For the proposed work, we will conduct four human-subject experiments in the two-year project period: three field experiments (Work 1, Work 2, and the Work 3 workload manager in coordination tasks) and one lab experiment (the Work 3 workload manager with sensory augmentation systems). The project team has four approved IRBs to conduct human-subject experiments:
  - HS15-244 to collect sensor data through multiple wireless and wearable sensors during naturalistic driving situations,
  - HS15-406 to apply machine learning and visual analytic tools to assess driving quality,
  - HS15-465 to deploy the driver situation-awareness aid with sensor cursors in simulated cars, and
  - HS15-397 to deploy visual, audio, and haptic presentation systems in driving situations.

Expected contributions: This project is expected to contribute to the fields of Human-Vehicle Interaction and the Internet of Things in automotive domains.
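The Work 3 coordination tasks described above (choosing the level of detail and the proportions of feedback modes from sensed driver state) can be sketched as a toy policy. The thresholds and load estimates below are invented assumptions; only the modality mixes mirror the examples given in the text.

```python
def coordinate_feedback(visual_load: float, cognitive_load: float):
    # visual_load / cognitive_load: hypothetical sensor-derived estimates, 0..1.
    # Returns the route-guidance detail level and a visual/auditory/haptic mix.
    detail = "complex" if cognitive_load < 0.3 else "simple"
    if visual_load < 0.3:
        mix = {"visual": 0.7, "auditory": 0.3, "haptic": 0.0}
    elif visual_load < 0.7:
        mix = {"visual": 0.1, "auditory": 0.9, "haptic": 0.0}
    else:
        # Eyes fully committed to the road: fall back to haptic-only feedback
        # (e.g., a vibro-tactile steering wheel).
        mix = {"visual": 0.0, "auditory": 0.0, "haptic": 1.0}
    return detail, mix

# Relaxed cruising vs. a demanding merge:
print(coordinate_feedback(visual_load=0.1, cognitive_load=0.2))
print(coordinate_feedback(visual_load=0.9, cognitive_load=0.8))
```

A real workload manager would of course drive these decisions from the sensor-based interruptibility models rather than from two hand-set load numbers, but the shape of the coordination task is the same.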
The scope of impacts is as follows: 1) this project initiates the development of an intelligent system that incorporates our interruptibility assessment technology in automotive domains; 2) the proposed workload manager will be modularized to be compatible with commercial wearable devices (e.g., smartphones, smart wristbands, or watches) that drivers may already own, and with existing Wi-Fi-based Internet of Things sensors (e.g., motion detection); and 3) project outcomes will be scalable to other application domains where computational aids support users' in-situ decision making in situations of high uncertainty and at the potential cost of additional perception or cognition.
Timeline
Year 1:
- In-car Android platform 2.0 with integrated sensing and feedback modules for Work 1-3 (50% completed, 2 months; See Attachment 2, Figure 1)
- Database of route guidance and local landmark information for Work 1-3 (10% completed, 1.5 months), and then test condition designs for Work 1 and 2 (1.5 months)
- Human-subjects experiment for Work 1 (6 months, through May): Pilot experiment → Main data collection (field) → Data analysis and model construction
- Analytic tools 2.0 for sensor log parsing, sensor feature tabulating, ground truth annotation, driving video review, visual analytics, machine learning, etc., for Work 1-3 (60% completed, 2 months; See Attachment 2, Figure 2)
- Human-subjects experiment for Work 2 (5.5 months, through November): Pilot experiment → Main data collection (field) → Data analysis and model construction
Strategic Description / RD&T

    
Deployment Plan
We will deploy a sensing and feedback test-bed collaboratively with five partners (See the attached letters of support). The test-bed in this project period will include new feedback presentation modules as well as sensing modules interoperating with a single Android platform. Two research-oriented institutes, KETI and UNIST, will continuously assist in the development, validation, and pilot-deployment of sensing prototypes. TAKATA Corporation and Hyundai Motor Group (Human Factors and Devices Research Team, R&D Division) will assist in testing the feasibility of a larger-scale deployment of our systems in their automotive experiment environments. The Cindy Cohen School of Driving, LLC (Pittsburgh, PA), will help us to collect sensor data from their student-drivers during driving sessions (IRB-approved). 

Note: The research team, led by Dr. Kim and Dr. Dey, is currently partnered with KETI in a three-year international R&D project (2015 - 2017) to deploy wearable sensing/feedback devices for user interface and user experience assessment. We plan to use these funds to match the proposed UTC funding. In addition, we are currently discussing sponsorship by Hyundai Motor Group to collaboratively develop a working version of driver interruptibility models. We are also discussing collaboration with TAKATA Corporation about driver-state estimation by using machine learning/analytics (e.g., driver workload assessment).
Expected Outcomes/Impacts
In the proposed work, we intend to 1) develop a workload manager that is embedded in an in-car Android platform and interoperable with sensing prototypes, feedback presentation modules, and an information database; 2) produce new and/or upgraded time-series analytic tools for visual inspection, behavioral routine detection, and human annotation; 3) develop a driver experience sampling system; 4) generate a set of experiment conditions (e.g., brief vs. detailed visual information); and 5) disseminate our work. The performance metrics will include:
- Workload manager: instances of video-recorded driver activities (> 20 cases per driver); sensor data acquisition rate (> 1 sample per second); feedback presentation or information intervention rate (> 1 instance per two minutes of driving); modality switching latency (< 1 second); and the proportion of participants who provide a full range of sensor datasets (> 80% of participants).
- Time-series analytic tools: user acceptance rate of upgraded tools on a five-point Likert scale (> 4.0 points on more than five analytic features from > 70% of study participants who have intermediate experience in time-series sensor data analysis); and user input error rate with the new analytic systems (< 2%).
- Driver experience (i.e., ground truth) sampling system: experience sampling rate (> 10 samples per ten minutes of driving); model accuracy (> 90% detection rate of driver interruptibility); sympathetic response rate in the quality of driver experience (90% of study participants with regard to perceived values, cognitive costs, and self-reported interruptibility); and real-time performance (at least every 2 seconds).
- Experiment conditions: number of driving sessions (> two driving sessions for a training and a test set per driver); number of test conditions (> 2 conditions per driving session; e.g., low vs. high level of detail of navigational information to drivers); and success rate of level discrimination (> 90% of participants who rate the levels of each information attribute and the proportions of feedback modes).
- Dissemination of work: publications at premier venues in HCI (e.g., ACM CHI, IUI, UbiComp) and presentations at technical workshops, invited talks, and courses (> 10 times per year).
Expected Outputs

    
TRID


    

Individuals Involved

Email Name Affiliation Role Position
anind@cs.cmu.edu Dey, Anind HCII Co-PI Faculty - Tenured
noemail1@noemail.com Kim, SeungJun HCII PI Faculty - Untenured, Tenure Track

Budget

Amount of UTC Funds Awarded
$80000.00
Total Project Budget (from all funding sources)
$80000.00

Documents

Type Name Uploaded
Final Report 68__85_-Sensor_BAsed_Traffic_Signals.pdf June 21, 2018, 8:39 a.m.
Publication Integrated driving aware system in the real-world: Sensing, computing and feedback April 19, 2021, 7:04 a.m.
Publication Exploring the value of information delivered to drivers April 19, 2021, 7:05 a.m.

Match Sources

No match sources!

Partners

No partners!