From laptops to smartphones, ubiquitous computing devices equipped with sensors generate information about our daily routines on the go. This information enables computers to estimate our in-situ states and proactively provide information services. Consequently, users can interact with cyber information anywhere and at any time, including in vehicles while driving. However, arbitrary or inopportunely timed prompts to interact with cyber information interfere with safe vehicle operation. Because human attention is finite, the ubiquity of HCI opportunities comes at a cost, most significantly while driving.

To address this problem, this project explores when to intervene (i.e., optimal timing), how to intervene (i.e., presentation methods), and what to intervene with (i.e., types of HCI demands or information). We aim to enable sensor-equipped computers to handle these issues and to design human-centered intelligent interventions. This project extends our prior UTC work to create enabling technologies that support seamless interaction between humans and ubiquitous computing spaces.

In this project, we collect big sensor data streams from a least intrusive set of wearable or Internet-of-Things sensors worn by vehicle users and/or embedded in vehicles, including everyday smart devices. Through a set of human-subject experiments in naturalistic field driving situations, we will investigate how drivers interact with proactive adjustments of information initiated by system intelligence rather than by user demand. We also consider presentation methods and types of interaction schemes across the human visual, auditory, and haptic sensory channels. The near-term goal is to create a smarter, contextually intelligent cyber-physical system that supports intelligibility of system behavior. These experiments will yield a set of sensor-based real-time models of drivers' cognitive load, user interruptibility, and user experience of proactive information services. Ultimately, these technologies will help drivers safely interact with proactive cyber-information interventions.
Email | Name | Affiliation | Role | Position
---|---|---|---|---
anind@cs.cmu.edu | Dey, Anind | HCII | PI | Faculty - Research/Systems |
noemail1@noemail.com | Kim, SeungJun | HCII | Co-PI | Faculty - Research/Systems |
Type | Name | Uploaded |
---|---|---|
Publication | Leveraging Human Routine Models to Detect and Generate Human Behaviors | April 24, 2017, 7:13 p.m. |
Publication | Making Machine Learning Applications for Time-Series Sensor Data Graphical and Interactive. | April 24, 2017, 7:13 p.m. |
Publication | Usability evaluation of smart watch UI/UX using experience sampling method | April 24, 2017, 7:13 p.m. |
Publication | Multi-modal Interruptions in a Context Based Framework | April 24, 2017, 7:13 p.m. |
Presentation | Improving User Experience in Ubiquitous HCI Situations - Sensors Know When to, How to, and What to Interact with Human | April 24, 2017, 7:13 p.m. |
Progress Report | 24_Progress_Report_2017-09-30 | Oct. 2, 2017, 11:28 a.m. |
Publication | Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback | March 21, 2021, 4:47 p.m. |