Abstract
We are developing a virtual reality driver training system with augmented reality passthrough in which 16-year-olds or patients with compromised cognitive capabilities can sit in a stationary real vehicle and use mixed reality to learn driving skills and be exposed to increasingly challenging driving scenarios. Sensors are retrofitted onto the steering wheel and brake/gas pedals of the car to capture driver inputs, which are fed back into the simulation. The windshield and windows are overlaid with a VR-generated driving scenario. As a result, the driver sits in a real vehicle with AR passthrough showing the steering wheel and the driver's hands, while the risky driving scenarios are simulated. The goal of this system is a simulator that can be retrofitted into any car so novice drivers can train at home safely and experience risky scenarios that cannot be demonstrated in real life.
Mixed reality (XR) here means VR with AR passthrough: some visual elements are virtual while others are camera passthrough of reality.
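To make the data flow concrete, below is a minimal sketch in Python of the bridge between the retrofitted sensors and the simulated scenario. The sensor interface, value ranges, and function names are placeholders assumed for illustration, not the actual system.

# Minimal sketch of the sensor-to-simulator bridge described above.
# All names (ControlFrame, read_sensors, send_to_simulator) and the value
# ranges are hypothetical placeholders, not part of the actual system.

import time
import random
from dataclasses import dataclass


@dataclass
class ControlFrame:
    steering: float  # normalized, -1.0 (full left) to 1.0 (full right)
    throttle: float  # normalized, 0.0 to 1.0
    brake: float     # normalized, 0.0 to 1.0


def read_sensors() -> ControlFrame:
    """Placeholder for the retrofit sensors on the real steering wheel and
    pedals; a real system would read an encoder / load cell here."""
    return ControlFrame(
        steering=random.uniform(-1.0, 1.0),
        throttle=random.uniform(0.0, 1.0),
        brake=random.uniform(0.0, 1.0),
    )


def send_to_simulator(frame: ControlFrame) -> None:
    """Placeholder for pushing the driver's inputs into the VR driving
    scenario rendered over the windshield and windows."""
    print(f"sim <- steer={frame.steering:+.2f} "
          f"throttle={frame.throttle:.2f} brake={frame.brake:.2f}")


if __name__ == "__main__":
    # Poll the real controls and feed them to the simulated world at ~50 Hz.
    for _ in range(5):
        send_to_simulator(read_sensors())
        time.sleep(0.02)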
The Problem: The high costs of elder care, both to the individual and the government, combined with the demographic shift toward older adults making up a growing share of the US population, are creating a major healthcare crisis. The number of senior citizens in the US in 2030 will be twice that of 2000, leading to a shortage of working-age caregivers and putting increased pressure on labor costs. Equally important is maintaining, or preferably improving, the quality of life of a growing elderly population. Elders' autonomy is correlated with quality of life, and autonomy enhancement is correlated with improved functionality. Driving is typically a symbol of autonomy, and the revocation of driving privileges is often the first step taken by families worried about cognitive decline and emerging dementia in an older adult. Dementia, including Alzheimer's disease, is a chronic, progressive syndrome characterized by a reduction in the ability to perform daily activities: a cognitive decline with increasing unpredictability and psychological symptoms. Dementia affects about 5 million people in the USA and 35 million worldwide. Meanwhile, Autonomous Vehicles (AVs) are a game-changing AI and robotics solution that can enable older people to maintain independence. For this technology to be effectively deployed, however, safety and trust are key: older people, as well as caregivers and clinicians, need to view the technology as safe and trustworthy. To realize this potential, a robust shared autonomy strategy is needed. The term shared autonomy is an oxymoron, but it embodies the tension observed as caregivers, clinicians, and patients negotiate the need to trust the autonomous system against the desire to stay in control. This research project addresses how to mediate autonomy between the participating actors: allow the human to control the system up to their level of performance, and autotune the degree of machine intervention to maintain safety.
Approach: To these ends, we propose the development of an interactive imitation learning system for safe human-autonomous systems. The system is trained by an expert at multiple levels of performance following a curriculum. When the system is deployed with a non-expert human user (e.g., an older driver), the safe-by-construction neural network controller ensures safety at every level of performance (see figure on the left). This lets the system personalize its capabilities to the human partner while guarding against mismatches between the controller's expectations and those of the human user. The AV therefore learns how the user wishes to share autonomy and ensures the system cannot reach an unsafe state under any operating conditions or inputs from the human.
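As a rough illustration of this two-stage structure, the sketch below shows an imitation-learned policy proposing an action for the current performance level and a safety layer constraining it before execution. The speed limits, safe envelope, state layout, and function names are assumptions made for the example, not the trained controller.

# Minimal sketch of the "safe by construction" idea: the learned controller
# proposes an action for the current performance level, and a safety filter
# projects it back into a safe operating envelope before it reaches the
# vehicle. All numbers and names are illustrative assumptions.

import numpy as np

SPEED_LIMITS = {"P1": 10.0, "P2": 20.0, "P3": 30.0}  # m/s, per performance level


def learned_policy(state: np.ndarray, level: str) -> np.ndarray:
    """Stand-in for the imitation-learned controller trained by experts
    following a curriculum of performance levels."""
    target_speed = SPEED_LIMITS[level]
    accel = 0.5 * (target_speed - state[1])  # crude speed tracking
    steer = -0.8 * state[0]                  # crude lane centering
    return np.array([steer, accel])


def safety_filter(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Stand-in for the safe-by-construction layer: clamp the proposed
    action so the closed loop cannot leave the safe envelope, regardless
    of what the human or the policy requests."""
    steer = np.clip(action[0], -0.3, 0.3)
    # never accelerate once the assumed 25 m/s safety bound is reached
    accel = action[1] if state[1] < 25.0 else min(action[1], 0.0)
    return np.array([steer, accel])


if __name__ == "__main__":
    state = np.array([0.4, 28.0])  # [lateral offset (m), speed (m/s)]
    raw = learned_policy(state, "P3")
    safe = safety_filter(state, raw)
    print("proposed:", raw, "-> executed:", safe)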
Description
Timeline
Strategic Description / RD&T
This project has four phases of R&D:
1. More efficient Training with XR:
How can we use the XR training system to develop an adaptive curriculum that effectively improves driver competence? How do we integrate the real controls of the car (steering wheel and pedals) with virtual ones (gear shift, windshield wipers) to provide a convincing Mixed Reality experience?
2. Create a Driver Skills Profile:
Can we concurrently identify regions of skill deficiencies to create a customized feedback report for the novice driver? Can the training system suggest scenarios that will improve the novice driver's performance?
3. Tunable Shared Autonomy:
We will then develop an autonomous driving (AD) capability based on imitation learning from experts. How can we tune this AD capability to intervene appropriately for different driver profiles?
For example, if a driver drives competently at performance levels P1 and P2 but cannot drive safely at performance level P3, which involves more aggressive/risky driving, then the shared autonomy system should intervene in P3 driving scenarios. This lets the driver retain autonomy at levels P1 and P2 while maintaining safety in more aggressive P3 scenarios (a minimal sketch of this arbitration logic follows the list below).
4. Dynamic Mixed Reality:
Mixed Reality assumes a smooth integration of the virtual content generated by a simulation with the real content observed through the headset's video passthrough. As the user's head moves, the location of the virtual content must be adjusted dynamically in real time so that the user gets a fully immersive, integrated experience. This requires robust perception of the environment (obtained through 3D scanning, detection of AprilTags, or other techniques) and optimization of the display location of the VR content, possibly anticipating the user's head position and tilt. This is an active field of research, and work on the driving simulation application will help advance the field of MR (a minimal sketch of the re-anchoring computation also follows below).
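The following sketch illustrates the phase-3 arbitration referenced above: the driver's skill profile records which performance levels the driver handles safely, and control is blended toward the autonomous system only in scenarios above that level. The profile format, blending rule, and function names are illustrative assumptions, not the project's final design.

# Minimal sketch of profile-gated shared autonomy. The profile and the
# binary blend are assumptions made for illustration.

from typing import Dict

# Example profile: competent at P1 and P2, not yet safe at P3.
driver_profile: Dict[str, bool] = {"P1": True, "P2": True, "P3": False}


def blend_ratio(scenario_level: str, profile: Dict[str, bool]) -> float:
    """Share of control given to the autonomous system:
    0.0 -> driver in full control, 1.0 -> full autonomous takeover."""
    return 0.0 if profile.get(scenario_level, False) else 1.0


def shared_control(human_cmd: float, auto_cmd: float, alpha: float) -> float:
    """Convex blend of the human and autonomous steering commands."""
    return (1.0 - alpha) * human_cmd + alpha * auto_cmd


if __name__ == "__main__":
    for level in ("P1", "P2", "P3"):
        alpha = blend_ratio(level, driver_profile)
        cmd = shared_control(human_cmd=0.2, auto_cmd=-0.1, alpha=alpha)
        print(f"{level}: alpha={alpha:.1f}, executed steering={cmd:+.2f}")

In the actual system the blend would be continuous and the scenario level would be estimated online, but the profile-gated intervention shown here is the core idea.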
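The phase-4 re-anchoring can be summarized as a change of coordinate frames computed every frame: given the headset pose from its tracking and the pose of a fiducial (e.g., an AprilTag on the dashboard), the virtual content is re-expressed in the headset frame so it stays locked to the real car as the head moves. The specific poses and calibration values below are assumed for illustration.

# Minimal sketch of per-frame re-anchoring of virtual content using
# homogeneous transforms. All poses are illustrative assumptions.

import numpy as np


def pose(yaw: float, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a yaw rotation and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = t
    return T


# World pose of the dashboard AprilTag (measured once during calibration).
world_T_tag = pose(0.0, np.array([1.2, 0.0, 0.9]))
# Virtual windshield content is authored relative to the tag.
tag_T_content = pose(0.0, np.array([0.0, 0.0, 0.3]))

# Headset pose in the world frame, updated every frame by head tracking.
world_T_head = pose(0.1, np.array([0.9, -0.3, 1.1]))

# Where to render the virtual content in the headset's frame this frame.
head_T_content = np.linalg.inv(world_T_head) @ world_T_tag @ tag_T_content
print(np.round(head_T_content, 3))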
Deployment Plan
To demonstrate the effectiveness of interactive imitation learning in a mixed-autonomy setting, we will deploy and demonstrate the system on an autonomous vehicle in this pilot phase. Co-PI Loeb has developed an immersive Mixed Reality driving simulator that is compatible with interactive imitation learning. In this setting, a driver sits in a real vehicle wearing an AR+VR headset but drives in a simulated world overlaid across the windshield and vehicle windows (see images below). PI Mangharam will extend this deployment to an autonomous EV go-kart at Pennovation (see image below), allowing for both autonomous driving and online driver feedback [Details: https://tinyurl.com/avgokart22]. Both systems will be trained by experts and evaluated across a panel of 18 non-expert older adults with different driving capabilities, 9 of whom will have mild dementia. Co-PI Loeb will recruit the older adults and assess the human partners using her custom battery of clinical assessments. The performance of the learning system will be evaluated and improved.
Expected Outcomes/Impacts
This project will have four primary outcomes, answering the following questions through human-centric experiments:
1. More efficient Training with XR:
How can we use the XR training system to develop an adaptive curriculum that effectively improves driver competence? How do we integrate the real controls of the car (steering wheel and pedals) with virtual ones (gear shift, windshield wipers) to provide a convincing Mixed Reality experience?
2. Create a Driver Skills Profile:
Can we concurrently identify regions of skill deficiencies to create a customized feedback report for the novice driver? Can the training system suggest scenarios that will improve the novice driver's performance?
3. Tunable Shared Autonomy:
We will then develop an autonomous driving (AD) capability based on imitation learning from experts. How can we tune this AD capability to intervene appropriately for different driver profiles?
For example, if a driver drives competently at performance levels P1 and P2 but cannot drive safely at performance level P3, which involves more aggressive/risky driving, then the shared autonomy system should intervene in P3 driving scenarios. This lets the driver retain autonomy at levels P1 and P2 while maintaining safety in more aggressive P3 scenarios.
4. Dynamic Mixed Reality:
Mixed Reality assumes a smooth integration of the virtual content generated by a simulation with the real content observed through the headset's video passthrough. As the user's head moves, the location of the virtual content must be adjusted dynamically in real time so that the user gets a fully immersive, integrated experience. This requires robust perception of the environment (obtained through 3D scanning, detection of AprilTags, or other techniques) and optimization of the display location of the VR content, possibly anticipating the user's head position and tilt. This is an active field of research, and work on the driving simulation application will help advance the field of MR.
Expected Outputs
This project will develop a driver evaluation and assistance system that allows citizens with compromised cognitive capabilities to safely drive vehicles. It will also help tune autonomous driving interventions so drivers retain autonomy for as long as they have the cognitive capability to drive safely. We will demonstrate this with human studies and live experiments on the safe testing setup.
TRID
Imitation learning techniques enable programming the behavior of agents through demonstrations and are more efficient than reinforcement learning approaches for specific tasks. They are, however, limited by the quality of the available demonstration data. Interactive imitation learning techniques can improve learning efficacy because human experts provide feedback while the agent executes its task. We propose an interactive learning technique that uses human feedback in state space to train and improve agent behavior (as opposed to alternative methods with feedback in action space). Our method provides guidance to the agent by 'changing its state', which is often more intuitive for a human demonstrator. Through continuous improvement via corrective feedback, agents are trained by expert demonstrators to operate at multiple levels of performance (P1, P2, P3, ...). When the agent then operates with a non-expert (e.g., an older patient with a varying level of cognition or a trainee driver), the system ensures safety with respect to the particular use case.
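A minimal sketch of this state-space feedback loop is given below: while the agent acts, the demonstrator indicates a preferred next state, the correction is converted into a target action through a simple inverse-dynamics model, aggregated into the dataset, and the policy is refit. The one-dimensional dynamics, linear policy, and correction rule are illustrative assumptions, not the project's actual learner.

# Minimal sketch of interactive imitation learning with corrections given
# in state space. Dynamics, policy class, and the expert model are toy
# assumptions chosen only to make the loop concrete and runnable.

import numpy as np

DT = 0.1  # control period (s)


def dynamics(x: float, a: float) -> float:
    """Toy 1-D dynamics: the state moves by action * DT each step."""
    return x + a * DT


def inverse_dynamics(x: float, x_desired: float) -> float:
    """Action that would move the agent from x toward x_desired in one step."""
    return (x_desired - x) / DT


def expert_state_feedback(x: float) -> float:
    """Stand-in for the human correction in state space: 'be closer to 0'."""
    return 0.5 * x


def fit_linear_policy(states: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Least-squares fit of a = w0 + w1 * x over the aggregated dataset."""
    X = np.stack([np.ones_like(states), states], axis=1)
    w, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return w


states, actions = [], []
w = np.array([0.0, 0.0])  # initial (untrained) policy
x = 5.0
for step in range(50):
    a = w[0] + w[1] * x                  # agent acts with its current policy
    x_corr = expert_state_feedback(x)    # expert corrects in state space
    states.append(x)
    actions.append(inverse_dynamics(x, x_corr))
    w = fit_linear_policy(np.array(states), np.array(actions))
    x = dynamics(x, a)

print("learned policy weights:", np.round(w, 3))

The same aggregate-and-refit loop (in the spirit of DAgger-style interactive learning, but driven by state-space rather than action-space corrections) extends to the richer driving state and the curriculum of performance levels described above.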
Individuals Involved
Email | Name | Affiliation | Role | Position
helensloeb@gmail.com | Loeb, Helen | Jitsik LLC | Co-PI | Other
rahulm@seas.upenn.edu | Mangharam, Rahul | University of Pennsylvania | PI | Faculty - Tenured
Budget
Amount of UTC Funds Awarded
$30,000.00
Total Project Budget (from all funding sources)
$30,000.00
Documents
Match Sources
No match sources!
Partners
Name | Type
Jitsik LLC | Deployment & Equity Partner