#354 Creating and Integrating Solutions to Enable the "Complete Trip"


Principal Investigator
Stephen Smith
Status
Completed
Start Date
Jan. 1, 2021
End Date
Sept. 30, 2022
Project Type
Applied Research
Grant Program
FAST Act - Mobility National (2016 - 2022)
Grant Cycle
2021 Mobility UTC "Big Idea"
Visibility
Public

Abstract

One integrating theme that has been promoted in recent years for framing the technology needs of mobility-challenged individuals is that of facilitating the “Complete Trip” [NCMM 2020]. This project proposes the development of a hands-free mobile app that provides complete trip support for such individuals. We focus specifically on complete trip support for wheelchair users but anticipate that most capabilities will be equally useful to vision-impaired individuals.     
Description
Overview

This project will develop and combine work in planning and navigation of wheelchair-friendly routes, autonomous socially-compliant wheelchair navigation and driver assist with real-time obstacle detection, safe intersection crossing, and multi-modal travel to produce a “complete trip” technology package, aimed specifically at wheelchair users but envisioned to be extensible to individuals with other disabilities in the future. The chief deliverable of the project will be a mobile app that integrates pedestrian-friendly route planning, real-time navigation, autonomous wheelchair driver-assist, safe intersection crossing, and coordination of multi-modal travel legs through cloud-based traveler-to-infrastructure (T2I) communication. To accommodate wheelchair users, the mobile app will be hosted on the Apple Watch in addition to the iPhone, and the two devices will be interoperable. In addition to visual, audio, and haptic modalities for conveying information to the user, speech will be integrated as a principal means of human interaction. We will create speech-based interfaces to select routes, to command wheelchair navigation and driver-assist subsystems, to select intersection crossing directions, to acquire real-time bus or ride-hailing information, and to communicate synchronization requests for traveler pickups (and drop-offs).

The second component of the proposed complete trip technology package is the intelligent wheelchair. This robotic technology will enable real-time obstacle detection, which will be used both to dynamically update the sidewalk database used for future pedestrian-friendly route planning and to autonomously navigate safe passage along the currently planned route in a socially compliant way.
Finally, pedestrian location and intersection crossing intent will be used to signal the presence of disabled pedestrians to autonomous and connected vehicles approaching each intersection along the route as the trip progresses, increasing vehicle awareness and allowing vehicles to take evasive action if necessary.

The development and validation of this technology vision will be supported by a number of industrial partners. Mechanisms for communicating the presence and routes of pedestrians to approaching vehicles will be established jointly with Argo AI and demonstrated using their autonomous vehicle fleet. PathVu has committed to providing both the API to their wheelchair route planning mobile app and access to their sidewalk mapping database, with the expectation of assimilating and transitioning to an Apple Watch app that integrates pedestrian-friendly route planning with the intersection crossing capabilities provided by the current PedPal app. Finally, Rapid Flow Technologies will support development and validation of T2I capabilities for multi-modal synchronization of arrival times with buses.

Technical Basis

As indicated above, the proposed complete-trip mobile app will build on a number of prior (and ongoing) component technology development efforts: 

- Safe Intersection Crossing - The PedPal smartphone app [Smith et al. 2019, Smith 2020b], originally developed under the FHWA Accessible Transportation Technology Research Initiative (ATTRI), provides a number of capabilities for safe intersection crossing by pedestrians with disabilities. Essentially, the app knows how fast its user travels and communicates how much time is needed to cross (along with intended crossing direction) to the traffic signal system upon arrival at the intersection. This eliminates the challenge of locating and pushing pedestrian call buttons, and ensures that pedestrians will have sufficient time to cross when they arrive. Inexpensive Bluetooth beacons at each corner of the intersection enable the app to automatically detect pedestrian arrival at a corner. The app then combines this information with MAP data and Signal Phase and Timing (SPaT) information obtained from the intersection to determine which crossing directions to present as options and to indicate to the pedestrian when it is time to cross. Current work, being conducted under a 2020 Mobility21 UTC sponsored project [Smith 2020a], is extending the use of Bluetooth beacons to overcome localization inaccuracies of the mobile app, enabling real-time monitoring of pedestrian crossing progress and dynamic extension of the green time when needed. This current work is also extending PedPal’s mechanisms to more complex intersections with protected left-turn phases and (collaboratively with Rapid Flow Technologies) is developing a modular, fully scalable, cloud-based T2I communication framework.

- Real-time Traffic Signal Control - The functionality provided by PedPal derives from its T2I integration with Surtrac [Xie et al. 2012, Smith et al. 2013, Smith 2020b], a decentralized adaptive traffic signal control system originally developed at CMU and now provided commercially by Rapid Flow Technologies.

- Pedestrian-Friendly Route Planning - The PathVu SafeSidewalk Toolbox [PathVu 2020] includes a smartphone app for generating and following wheelchair-friendly routes from point A to point B. This app, which was also developed under the FHWA ATTRI program, draws on PathVu’s underlying sidewalk mapping database to produce routes that avoid obstacles such as crumbling (or missing) pavement, lack of curb cut access, and temporary construction sites. The app allows users to communicate any changes they detect in the status of sidewalks (e.g., the presence of a new obstacle) back to the PathVu server so that the sidewalk database can be kept up to date between periodic mapping sweeps.

- Autonomous obstacle detection and classification - To enable real-time detection, tracking, and classification of obstacles, we will build on previously developed computer vision and machine learning algorithms for extracting and classifying objects from video data, under the assumption that a camera is mounted on the wheelchair and continuously captures visual data [Tamburo et al. 2014; Reddy et al. 2018; Choe et al. 2018]. The distance to detected obstacles can also be estimated with a standard stereo system or a trained monocular system [Zhi et al. 2018]. These are low-latency methods designed to detect dynamically moving objects from moving cameras in the presence of occlusion. The location, trajectory, and distance of moving obstacles (e.g., pedestrians and bicyclists) and stationary obstacles (e.g., potholes and barriers) can be made immediately available both to on-board path planning and collision avoidance subsystems and to a global sidewalk mapping database for dynamic updates.

- Autonomous wheelchair navigation - Our previous work has produced a self-driving wheelchair system consisting of a state estimation and mapping module, a terrain traversability analysis module, and a planning and collision avoidance module. In this system, state estimation and mapping leverage range, vision, and inertial sensing [Zhang et al. 2018], and planning and collision avoidance use a trajectory library to maximize the likelihood of reaching the goal point, finding collision-free paths in a short amount of time (<1 ms) with small system latency [Zhang et al. 2020]. The system can operate either in a fully autonomous mode, where it navigates to a given goal point, or in an assist mode, where guidance from a human operator through a joystick controller can be interjected to prevent collisions.

- Socially-compliant wheelchair navigation - Wheelchair users will need to navigate among other pedestrians. We will use state-of-the-art algorithms for navigation among human crowds developed under the Robotics Collaborative Technology Alliance (RCTA) program and an ongoing program with GVSC [Vemula et al. 2017; Vemula et al. 2018; Yao et al. 2019; Tsai & Oh 2020].

- High-level (speech-driven) commanding of robotic systems - The natural language communication support for wheelchair navigation will leverage our prior work developed under the Robotics Collaborative Technology Alliance (RCTA) program [Oh et al. 2015; Oh et al. 2016; Tian & Oh 2019]. This intelligence architecture supports the use of natural language for users to command a robotic vehicle to navigate, to specify path or terrain preferences, or to receive updates from the vehicle.

- Natural and Accessible Mobile User-Interfaces - In addition to developing a speech-based interface, we will design complementary non-speech interactions that build on the design of user-appropriate physical interfaces for wheelchair users. In this regard, we will leverage prior and ongoing work [Carrington et al. 2014; Carrington et al. 2016] to design and develop natural interfaces for people with motor disabilities to work effectively with the robotic systems.
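To make the safe-intersection-crossing mechanism described in the first item above concrete, the sketch below computes the walk time a PedPal-style app would request from the signal and infers corner arrival from Bluetooth beacon signal strength. PedPal's actual message formats and APIs are not public, so every name, threshold, and safety buffer here is invented for illustration.

```python
# Hypothetical sketch of a PedPal-style crossing-time request; all names,
# thresholds, and the safety buffer are illustrative, not the real app's API.

def crossing_time_needed_s(crosswalk_length_m: float, walking_speed_mps: float,
                           buffer_s: float = 3.0) -> float:
    """Seconds of walk phase to request from the signal, plus a safety buffer."""
    return crosswalk_length_m / walking_speed_mps + buffer_s

def nearest_corner(beacon_rssi_dbm: dict, threshold_dbm: float = -60.0):
    """Infer arrival at a corner from the strongest beacon signal (RSSI in dBm);
    return None if no beacon is heard strongly enough."""
    corner, rssi = max(beacon_rssi_dbm.items(), key=lambda kv: kv[1])
    return corner if rssi >= threshold_dbm else None
```

In use, the app would call `nearest_corner` on the beacon scan results, and on a positive match send `crossing_time_needed_s(...)` together with the intended crossing direction to the signal system.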

Proposed Research

We propose to focus specifically on developing and demonstrating a complete-trip solution for wheelchair users, but our expectation is that many capabilities provided by our solution will be directly applicable to pedestrians with other disabilities as well (most notably to vision-impaired individuals). As stated previously, our technology vision is a hands-free mobile app, hosted on an Apple Watch, that integrates and provides functionality for pedestrian-friendly route generation, real-time navigation and obstacle detection, autonomous wheelchair driver-assist, safe intersection crossing, and coordination of multi-modal travel legs, all through cloud-based traveler-to-infrastructure (T2I) communication.

To achieve this vision our research will focus on the following technical challenges:
- An accessible, hands-free, mobile app interface for route planning, navigation and intersection crossing
- A wheelchair platform for autonomous obstacle detection and driver assist
- Use of cloud-based T2I communication to inform approaching vehicles of pedestrian presence and intentions at signalized intersections 
- Use of T2I communication together with real-time signal control to coordinate pedestrian arrival with target transit vehicles at near-side bus stops
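The first challenge above, pedestrian-friendly route planning, can be viewed as weighted shortest-path search over a sidewalk network whose edge costs grow with surface hazards. The toy sketch below illustrates the idea (PathVu's actual routing service is proprietary and not shown here); segments that are impassable for a wheelchair, e.g., corners without curb cuts, are simply omitted from the graph.

```python
import heapq

# Toy sidewalk router: graph maps each node to a list of (neighbor, cost)
# pairs, where cost combines distance with hazard penalties. Impassable
# segments are left out of the adjacency lists entirely.

def plan_route(graph, start, goal):
    """Dijkstra search; returns the node list of a cheapest route, or None."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + weight, nxt, path + [nxt]))
    return None
```

Dynamic obstacle reports (see the obstacle detection discussion below) would simply raise edge costs, or remove edges, before the next query.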

In the paragraphs below, we discuss our approach to each of these challenges in more detail and summarize our testing and evaluation plans.

A first area of focus will be the development of an accessible, hands-free (wearable) mobile app interface hosted on an Apple Watch. Both PedPal and PathVu’s smartphone apps rely heavily on user interaction through visual interfaces, while (at least in PedPal’s case) providing voiceover and haptic modalities for individuals with disabilities such as vision impairment. However, both visual and voiceover-based interaction on a physical device as small as an Apple Watch is challenging. Instead, we propose to utilize speech as a principal mode of user interaction, and to develop a speech-based interface to the route planning, navigation, and intersection crossing capabilities of the mobile app. We also realize that all requisite functionality may not be possible via the Apple Watch alone, and accordingly will establish interoperability between the Apple Watch and iPhone app platforms, enabling the user to exploit both as the situation warrants. We will continue to seek to exploit multiple modalities to enhance interaction. For example, we anticipate using haptic patterns to inform the user of certain anomalous situations (e.g., veering outside of the crosswalk).
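As a minimal illustration of the speech-based interaction model described above, the sketch below maps recognized utterances to navigation intents by keyword matching. The real interface would build on the grounded language understanding work cited earlier; the phrases, intent names, and parsing strategy here are all invented.

```python
# Hypothetical keyword-based intent parser, illustrative only; the deployed
# system would use the richer natural-language architecture cited in the text.

INTENTS = {
    "take me to": "NAVIGATE_TO",      # checked in order; first match wins
    "cross": "REQUEST_CROSSING",
    "when is the bus": "QUERY_BUS_ETA",
    "stop": "STOP",
}

def parse_command(utterance: str):
    """Return (intent, argument) for a recognized utterance."""
    text = utterance.lower().strip()
    for phrase, intent in INTENTS.items():
        if phrase in text:
            argument = text.split(phrase, 1)[1].strip() or None
            return intent, argument
    return "UNKNOWN", None
```

A dispatcher on the watch or phone would then route each intent to the corresponding subsystem (route planner, crossing manager, transit query, or wheelchair controller).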

A second area of technical focus will concern development of an autonomous wheelchair platform for (1) real-time obstacle detection and characterization, and (2) autonomous navigation. We discuss each of these aspects in turn. First, our approach to obstacle detection will exploit visual data continuously captured from one or more cameras (and possibly additional sensors) physically mounted on the wheelchair. Our approach uses geometric and rigidity constraints on structured and unstructured points in the collected images within a deep learning framework to determine final relevant key point locations. Trajectories are also computed from frame to frame [Reddy et al. 2018]. These methods have proven to provide state-of-the-art results for vehicle detection, tracking, and classification, and can be easily retrained and adapted for other objects (i.e., obstacles). We will also explore acquisition of obstacle distance information, which can be computed from either a stereoscopic vision system or a trained monocular vision system [Zhi et al. 2018; Reddy et al. 2018], and which provides robust data for path planning, collision warning, and estimates of speed. Finally, we will develop a method for producing probabilistic predictions of where a detected obstacle will move in the future. This will require a hybrid model that integrates instantaneous information (e.g., location, speed) with longitudinal data describing typical motion paths for different types of obstacles at specific locations. We will develop mechanisms for communicating detected obstacles to autonomous navigation subsystems onboard the wheelchair (see below). More complex obstacles that exhibit some measure of permanence, such as a closed sidewalk, a pothole, or construction, will be tracked over time, with this information being used to keep PathVu’s global sidewalk database current.
As in past work, we will emphasize algorithms, software, and hardware that minimize latency [Tamburo et al. 2014; Reddy et al. 2018]. Field tests will be performed to validate the robustness (accuracy and execution time) of the algorithms and to verify that transmission of information to the PathVu database is timely, secure, and uncorrupted.
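The hybrid prediction idea above can be sketched as a blend of constant-velocity extrapolation with a location-specific prior path for the obstacle's class. The weighting scheme below is invented purely for illustration; the actual model would be learned from longitudinal data.

```python
# Illustrative hybrid obstacle predictor: blend a constant-velocity
# extrapolation with a prior "typical next position" for this obstacle class
# at this location. The blend weight is a made-up stand-in for a learned model.

def predict_position(pos, vel, dt, prior_next=None, prior_weight=0.3):
    """pos, vel, prior_next are (x, y) tuples; returns the blended (x, y)."""
    cv = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)  # constant-velocity guess
    if prior_next is None:
        return cv
    w = prior_weight
    return ((1 - w) * cv[0] + w * prior_next[0],
            (1 - w) * cv[1] + w * prior_next[1])
```

A full implementation would return a distribution over positions (e.g., a mixture over typical paths) rather than a single point, so that downstream planning can reason about risk.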

For autonomous navigation and support of wheelchair driver assist, we propose to incorporate a prior map that allows the autonomous wheelchair system to navigate safely on sidewalks, through curb cuts, and across intersections within crosswalk lines. A prior map can be initially generated using our prior work [Zhang et al. 2018], and then further processed to associate sidewalk/crosswalk boundaries, bus stops, and various other points of interest. One goal will be to understand how to unify such maps with those used by the PathVu sidewalk database to perform pedestrian-friendly routing. The autonomous navigation algorithm will also take dynamically moving pedestrians into account to ensure both safety and comfort. Whereas current work assumes that the set of moving objects is homogeneous [Vemula et al. 2018; Tsai & Oh 2020], i.e., consisting of teams of like robots, the proposed work will address the challenges of navigating around different types of agents on the sidewalk, e.g., pedestrians, wheelchairs, or small electric vehicles. The learned models must also be highly adaptive to changing social contexts and types of pedestrians. Toward this end, we will use simulation to train for a wide variety of scenarios and contexts before moving on to physical robot experiments. Finally, we will develop a mixed-initiative system that enables a user to interact more intuitively with the autonomous wheelchair commanding system: the user specifies a general, high-level goal, and the system reasons about low-level constraints to achieve it. When human inputs are received by the system, e.g., through a joystick controller or speech or touchscreen inputs on a tablet or smartphone, the system adapts its plan to leverage the human intention and navigation task, taking into account obstacles in the environment. The overall plan fulfills the navigation task and is locally biased by the human inputs it receives.
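The mixed-initiative behavior described above, in which the plan is locally biased by human inputs subject to safety constraints, can be sketched as a simple velocity-blending rule. The blend weight and clearance value below are invented for illustration; the actual system plans over a trajectory library [Zhang et al. 2020] rather than blending raw velocity commands.

```python
# Illustrative mixed-initiative rule: bias the planner's velocity command
# toward the human's joystick input, but stop outright if clearance to the
# nearest obstacle falls below a safety threshold. All constants are made up.

def blend_command(planner_v, human_v, alpha=0.4,
                  min_clearance_m=0.5, obstacle_dist_m=10.0):
    """planner_v and human_v are (vx, vy); alpha weights the human input."""
    if obstacle_dist_m < min_clearance_m:
        return (0.0, 0.0)  # safety override: stop rather than risk collision
    vx = (1 - alpha) * planner_v[0] + alpha * human_v[0]
    vy = (1 - alpha) * planner_v[1] + alpha * human_v[1]
    return (vx, vy)
```

The same rule applies whether the human input arrives from a joystick, a speech command mapped to a direction, or a touchscreen gesture.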

A third area of research will center on the use of real-time communication between the mobile app and approaching vehicles to provide greater visibility of pedestrian presence at the intersection. With the emergence of connected vehicle technologies, there are clear opportunities for improving the situational awareness of vehicles as they approach intersections by simply broadcasting the current location and crossing intent of mobile app users at the intersection. In the case of connected autonomous vehicles (CAVs), extended pedestrian route information across intersections may also be useful for longer-term predictive purposes. Working with Argo AI, we will focus on capabilities in two areas. The first concerns use of real-time information on the location and crossing direction of connected pedestrians (communicated by the proposed mobile app to an Argo vehicle) to update the vehicle’s world model and adjust course if necessary. The second concerns reciprocal communication and use of information by the mobile app, including the vehicle’s approach direction, its estimated time of arrival at the intersection, and an indication of other approaching traffic that the vehicle is currently sensing in its proximity, to increase the pedestrian’s awareness of the timing and volume of approaching traffic. In both cases, field tests will be performed to demonstrate benefits.
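A minimal sketch of the pedestrian-presence broadcast follows. Production connected-vehicle deployments would likely use a standardized message such as the SAE J2735 Personal Safety Message; the JSON field names here are purely illustrative of the information content described above.

```python
import json

# Hypothetical T2I pedestrian-presence message; field names are illustrative
# and only loosely mimic the kind of content a standardized message carries.

def pedestrian_presence_msg(ped_id, lat, lon, crossing_direction, eta_s):
    """Serialize the pedestrian's location and crossing intent for broadcast
    to vehicles approaching the intersection."""
    return json.dumps({
        "type": "PED_PRESENCE",
        "id": ped_id,
        "lat": lat,
        "lon": lon,
        "crossing": crossing_direction,  # e.g., which leg of the intersection
        "eta_to_curb_s": eta_s,
    })
```

A receiving vehicle would fold the decoded position and intent into its world model; the reciprocal vehicle-to-pedestrian message would carry approach direction and estimated arrival time in the other direction.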

A final area of research will focus on extending the app’s real-time T2I communication capabilities to support coordination of pedestrian travel legs with trip segments that require connection with public transit service. We will integrate with Transit data streams (https://transitapp.com) to provide pedestrians with real-time information on bus arrival times at target bus stops. As both bus and pedestrian approach the target bus stop, the viability of two capabilities for promoting synchronization will be explored. First, we will develop the capability to directly broadcast the pedestrian’s expected arrival time to the target bus, alerting the driver of the pedestrian’s intention to catch it. Second, when the target is a near-side bus stop across the street from the pedestrian at a signalized intersection, we will work with Rapid Flow Technologies to develop a mechanism that extends the green crossing phase in order to hold the bus at the stop until the pedestrian is able to cross the intersection and reach it. We will design and conduct field experiments aimed at demonstrating the benefits of this capability on overall trip travel time.
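The bus-hold decision above can be sketched as a comparison between the pedestrian's estimated time to reach the stop and the bus's expected departure, capped by a maximum allowable green extension. All thresholds below are invented for illustration; the real mechanism would be negotiated with the Surtrac controller's scheduling logic.

```python
# Toy decision rule for holding a bus at a near-side stop via green extension.
# The 15-second cap is an invented policy limit, not a real Surtrac parameter.

def green_extension_s(ped_eta_s, bus_departure_s, max_extension_s=15.0):
    """Seconds of extra green to request; 0.0 if none is needed or allowed."""
    shortfall = ped_eta_s - bus_departure_s
    if shortfall <= 0:
        return 0.0  # pedestrian makes the bus under current signal timing
    if shortfall > max_extension_s:
        return 0.0  # holding the bus this long would be too disruptive
    return shortfall
```

Because extending the pedestrian crossing phase also holds cross traffic, the cap trades pedestrian connection reliability against delay to other road users, which is exactly what the proposed field experiments would measure.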

References

[Carrington et al., 2014] Patrick Carrington, Amy Hurst, and Shaun K. Kane. 2014. Wearables and chairables: inclusive design of mobile input and output techniques for power wheelchair users. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). Association for Computing Machinery, New York, NY, USA, 3103–3112. DOI:https://doi.org/10.1145/2556288.2557237
[Carrington et al., 2016] Patrick Carrington, Jian-Ming Chang, Kevin Chang, Catherine Hornback, Amy Hurst, and Shaun K. Kane. 2016. The Gest-Rest Family: Exploring Input Possibilities for Wheelchair Armrests. ACM Trans. Access. Comput. 8, 3, Article 12 (May 2016), 24 pages. DOI:https://doi.org/10.1145/2873062
[Choe et al 2018] Gyeongmin Choe, Seong-Heum Kim, Sunghoon Im, Joon-Young Lee, Srinivasa G. Narasimhan, and In So Kweon. RANUS: RGB and NIR urban scene dataset for deep scene parsing. IEEE Robotics and Automation Letters, 3(3):1808–1815, 2018. 
[NCMM 2020] The Complete Trip: Helping Customers Make a Seamless Journey, National Center for Mobility Management, 2020.
[Oh et al., 2016] J. Oh, M. Zhu, S. Park, T.M. Howard, M.R. Walter, D. Barber, O. Romero, A. Suppe, L. Navarro-Serment, F. Duvallet, A. Boularias, J. Vinokurov, T. Keegan, R. Dean, C. Lennon, B. Bodt, M. Childers, J. Shi, K. Daniilidis, N. Roy, C. Lebiere, M. Hebert, and A. Stentz. Integrated intelligence for human-robot teams. In Proc. of International Symposium on Experimental Robotics (ISER) 2016
[Oh et al., 2015] J. Oh, A. Suppe, F. Duvallet, A. Boularias, J. Vinokurov, L. Navarro-Serment, O. Romero, R. Dean, C. Lebiere, M. Hebert, and A. Stentz. Toward mobile robots reasoning like humans. In Proc. of AAAI Conference on Artificial Intelligence (AAAI), 2015.
[PathVu 2020]  http://www.pathvu.com
[Reddy et al 2018] N. Dinesh Reddy, Minh Vo, and Srinivasa G. Narasimhan.  Carfusion: Combining point tracking and part detection for dynamic 3d reconstruction of vehicles. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[Smith 2020a] Smith, S.F., Mobility21 Project # 333: Safe Intersection Crossing for Pedestrians With Disabilities. July 2020 - June 2021.
[Smith 2020b] Smith, S.F., “Smart Infrastructure for Future Urban Mobility”, AI Magazine, 41(1), Spring 2020.
[Smith et al. 2019] Smith, S.F., Rubinstein, Z.B., Marusek, J., Dias, B., and Radewick, H., “Connecting Pedestrians with Disabilities to Adaptive Signal Control for Safe Intersection Crossing and Enhanced Mobility: Final Report”, Technical Report FHWA-JPO-19-754, September 2019.
[Smith et al. 2013] Smith, S.F., G.J. Barlow, X-F. Xie, and Z.B. Rubinstein, “Smart Urban Signal Networks: Initial Application of the SURTRAC Adaptive Traffic Signal Control System”, Proceedings 23rd International Conference on Automated Planning and Scheduling, Rome, Italy, June 2013.
[Tamburo et al 2014] Robert Tamburo, Eriko Nurvitadhi, Abhishek Chugh, Mei Chen, Anthony Rowe, Takeo Kanade, and Srinivasa G. Narasimhan.  Programmable automotive headlights.  In Computer Vision - ECCV 2014 -13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV, pages 750–765, 2014.
[Tsai & Oh 2020] T.-E. Tsai and J. Oh. A Generative Approach for Socially Compliant Navigation. In Proc. of IEEE Conference on Robotics and Automation (ICRA), 2020.
[Tian & Oh 2019] J. Tian and J. Oh. Image Captioning with Compositional Neural Module Networks. In Proc. of International Joint Conference on Artificial Intelligence (IJCAI), 2019.
[Vemula et al., 2018] A. Vemula, K. Muelling, J. Oh. Social Attention: Modeling Attention in Human Crowds. In Proc. of IEEE Conference on Robotics and Automation (ICRA), 2018
[Vemula et al., 2017] A. Vemula, K. Muelling, J. Oh. Modeling cooperative navigation in dense human crowds. In Proc. of IEEE Conference on Robotics and Automation (ICRA), 2017
[Vo et al 2020] Minh Vo, Ersin Yumer, Kalyan Sunkavalli, Sunil Hadap, Yaser Sheikh, and Srinivasa Narasimhan. Automatic Adaptation of Person Association for Multiview Tracking in Group Activities. IEEE TPAMI 2020
[Xie et al. 2012] Xie, X-F., S.F. Smith, L. Lu, and G.J. Barlow, “Schedule-Driven Intersection Control”, Transportation Research Part C: Emerging Technologies, 24: 168-189, October 2012.
[Yao et al., 2019] X. Yao, J. Zhang, and J. Oh. Following Social Groups: Socially-Compliant Autonomous Navigation in Dense Crowds. In Cognitive Vehicles Workshop at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019.
[Zhang et al. 2018] J. Zhang and S. Singh. Laser-visual-inertial Odometry and Mapping with High Robustness and Low Drift. Journal of Field Robotics. vol. 35, no. 8, pp. 1242–1264, 2018.
[Zhang et al. 2020] J. Zhang, C. Hu, R. Gupta Chadha, and S. Singh. Falco: Fast Likelihood-based Collision Avoidance with Extension to Human-guided Navigation. Journal of Field Robotics. 2020.
[Zhi et al 2018] Tiancheng Zhi, Bernardo R. Pires, Martial Hebert, and Srinivasa G. Narasimhan. Deep material-aware cross-spectral stereo matching. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
Timeline
The above research agenda will be carried out according to the following timeline:

[January 2021 - June 2021] - During the first 6 months of the project, work will focus on:

- Design of a multi-modal user interface for hosting the mobile app on an Apple Watch and demonstration of the viability of a robust, speech-based user interaction model.
- Configuration of a camera-based wheelchair sensing platform, collection of video stream data capturing various types of obstacles, and preliminary analysis of relevant features
- Design and development of a navigation system for pedestrians of different mobility types
- Integration of PedPal safe intersection crossing functionality with PathVu sidewalk database, route planning and navigation capabilities 
- (Validation and delivery of scalable, cloud-based traveler-to-infrastructure (T2I) communication mechanism, produced as a deliverable of parallel 2020-21 Mobility21 project # 333 effort)

[July 2021 - December 2021] - During the second 6 months of the project, work will focus on:

- Development, testing, and refinement of an Apple Watch interface that integrates route planning, real-time navigation, and safe intersection crossing
- Development (via machine learning) of classifiers for real-time obstacle detection, together with mechanisms for communicating detected objects both to real-time wheelchair path planning and navigation subsystems, and to update global sidewalk mapping data.
- Mechanisms for autonomous wheelchair driver assist and interface from high-level, speech-driven commanding
- Design and development of protocols for communicating pedestrian location and crossing intent information to approaching autonomous and connected vehicles (along with mechanisms for incorporating this information and taking evasive action as necessary), and for reciprocal communication of vehicle approach and expected intersection arrival time to the pedestrian app to increase awareness of oncoming traffic.
- Design and development of mechanisms for acquiring transit vehicle arrival times, and utilizing extended green time at the intersection to hold a target bus for one or more pedestrians crossing the intersection to catch it.

[January 2022 - June 2022] - During the last 6 months of the project, work will focus on:

- Integration of the Apple Watch app with the respective PedPal and PathVu smartphone apps to enable substitutability of constituent functionality across apps.
- Field testing and refinement of autonomous wheelchair technology for speech-driven driver assist.
- Field testing, refinement, and demonstration of the safety benefits of pedestrian to vehicle communication 
- Field testing, refinement, and demonstration of the benefits of coordinating pedestrian crossing times with target bus arrivals at near-side bus stops.
- Integrated demonstration of complete trip scenario.
Deployment Plan
The most likely paths to deployment of the complete trip technologies we propose to develop will be through our deployment partners, and to promote this eventuality, all technology results produced under this effort will be provided as open source. This will allow PathVu to directly exploit the mobile app’s integration with their sidewalk mapping database, to further develop its integrated path planning and safe intersection crossing functionality, and to integrate the app into its current product offerings. It will also give PathVu the option to further automate the collection of sidewalk data through adoption of the robotic wheelchair sensing and obstacle detection technologies that we develop.

Similarly, Rapid Flow Technologies is incentivized to incorporate the extended component capabilities to be developed for safe intersection crossing, such as support for synchronizing pedestrian crossings with target bus arrivals and real-time pedestrian communication with approaching vehicles. Rapid Flow’s basic approach to deployment of the current PedPal technology (which we expect to be the same for extended-capability versions of the app) is to provide it free of charge to municipalities that purchase the Surtrac traffic control system, on the assumption that it gives municipalities added incentive by providing increased safety and ease of travel to members of their communities with disabilities.

Finally, Argo AI will be ideally positioned to adopt capabilities of the developed app related to pedestrian-to-vehicle communication of location and crossing intent, and vehicle-to-pedestrian communication of approach and expected arrival times of approaching vehicles. One goal of the proposed research will be to modularize these P2V capabilities so that they are operable with any traffic control system in place at the intersection.

At the same time, there are additional considerations that must be addressed to enable practical deployment of the proposed mobile app technology to prospective users. First, the fact that the proposed mobile app (like PedPal) will provide the ability to set the user’s speed and, in return, get the crossing time that is implied is a feature that is appropriate for pedestrians with disabilities but not necessarily for any pedestrian that might gain access to the app. Thus, it seems unrealistic that the mobile app should be disseminated by simply making it available at the Apple App Store (or through the Google Play Store, once the technology has been ported). Instead it seems more reasonable to assume that access to the mobile app will be mediated by some sort of user registration site, where eligibility can be determined in advance of access to download it. Second, it seems necessary to provide some sort of registry of intersections that have been enabled to support the app in a given municipality or geographic area where the app has been made available. We will develop appropriate mechanisms for addressing both of these practical deployment issues.
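The registry of app-enabled intersections proposed above might take the following minimal form: a per-municipality lookup reporting which signalized intersections support the app's T2I features. The schema, field names, and entries below are hypothetical (the intersection names are drawn from the Surtrac-controlled test sites discussed elsewhere in this proposal).

```python
# Hypothetical enabled-intersection registry; the schema and entries are
# illustrative only. Intersection names match this project's test corridor.

REGISTRY = {
    "pittsburgh": {
        "Centre/Highland": {"spat": True, "ped_call": True},
        "Baum/Aiken": {"spat": True, "ped_call": True},
    },
}

def enabled_intersections(city: str) -> list:
    """List the intersections in a municipality that support the app."""
    return sorted(REGISTRY.get(city.lower(), {}))
```

In deployment, this lookup would sit behind the same registration site that gates app downloads, so eligible users could see at a glance where the app's crossing features will work.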
Expected Outcomes/Impacts
We expect the proposed research to produce the following high-level results:

(1) A deployable version of the mobile app that supports pedestrian-friendly route planning and navigation, safe intersection crossing, pedestrian-to-vehicle connectivity around the intersection, and multi-leg coordination, through an interface that includes speech, haptic, and visual/voiceover interaction modalities.
(2) A second prototype version of the app that additionally incorporates more advanced, automated wheelchair obstacle detection and driver assist capabilities.
(3) A capstone field demonstration of an individual utilizing the app to accomplish a multi-modal trip.

To validate our technology results and to demonstrate the benefits of various complete trip solution components, we will periodically carry out field experiments at and around the complex of Surtrac-controlled intersections at Centre/Cypress, Baum/Cypress, Aiken/Centre, Baum/Liberty, and Baum/Aiken, and, in some cases, at the isolated intersection at Centre/Highland (these intersections are all currently equipped with the capability to broadcast MAP and SPaT messages to approaching vehicles and pedestrians). With respect to the Apple Watch interface, we will collect data to measure ease of use, particularly for pedestrians who have difficulty operating a smartphone while traversing a route or crossing an intersection. For the new capabilities provided for pedestrian-to-vehicle communication, multi-leg coordination with transit vehicles, and autonomous wheelchair obstacle detection and driver assist, we will develop and execute various scenarios in the field aimed at demonstrating both safety and efficiency benefits.

Individuals Involved

Email Name Affiliation Role Position
sancha@andrew.cmu.edu Ancha, Sid Carnegie Mellon University Other Student - PhD
pcarrington@cmu.edu Carrington, Patrick Carnegie Mellon University, SCS - HCII Co-PI Faculty - Untenured, Tenure Track
awd@cs.cmu.edu Dubrawski, Artur Carnegie Mellon University Co-PI Faculty - Tenured
rhata23@colby.edu Hata, Rayna Colby Other Student - Undergrad
samiulh@andrew.cmu.edu Hoque, Samiul Carnegie Mellon University Other Student - Undergrad
amisra@andrew.cmu.edu Misra, Ashwin Carnegie Mellon University Other Student - Masters
dnarapur@andrew.cmu.edu Narapureddy, Dinesh Carnegie Mellon University Other Student - PhD
srinivas@andrew.cmu.edu Narasimhan, Srinivasa Carnegie Mellon University Co-PI Faculty - Tenured
jeanoh@nrec.ri.cmu.edu Oh, Jean Carnegie Mellon University Co-PI Faculty - Untenured, Tenure Track
gauravp@andrew.cmu.edu Pathak, Gaurav Carnegie Mellon University Other Student - Masters
zbr@cs.cmu.edu Rubinstein, Zachary Carnegie Mellon University Co-PI Faculty - Research/Systems
sfs@cs.cmu.edu Smith, Stephen Carnegie Mellon University PI Faculty - Tenured
robert.tamburo@gmail.com Tamburo, Robert Carnegie Mellon University Co-PI Faculty - Research/Systems
zhanxinwu@link.cuhk.edu.cn Wu, Zhanxin CUHK Other Student - Undergrad
xinjieya@andrew.cmu.edu Yao, Abby (Xinjie) Carnegie Mellon University Other Student - Masters
zhangji@andrew.cmu.edu Zhang, Ji Carnegie Mellon University Co-PI Faculty - Research/Systems

Budget

Amount of UTC Funds Awarded
$571,323.00
Total Project Budget (from all funding sources)
$1,142,648.00

Documents

Type Name Uploaded
Data Management Plan data-management-plan_aHLvpVQ.pdf Nov. 24, 2020, 8:04 p.m.
Project Brief BigIdeasSlides.pptx Nov. 25, 2020, 4:32 a.m.
Publication Snow Plow Route Optimization: A Constraint Programming Approach April 8, 2021, 8:35 a.m.
Publication Technology to Make Signalized Intersections Safer for Pedestrians with Disabilities April 8, 2021, 8:35 a.m.
Presentation Creating and Integrating Solutions to Enable the "Complete Trip" April 8, 2021, 8:35 a.m.
Progress Report 354_Progress_Report_2021-03-31 April 8, 2021, 8:35 a.m.
Publication Traffic4D: Single View Reconstruction of Repetitious Activity Using Longitudinal Self-Supervision Sept. 30, 2021, 8:13 p.m.
Publication TesseTrack: End-to-End Learnable Multi-Person Articulated 3D Pose Tracking Sept. 30, 2021, 8:13 p.m.
Progress Report 354_Progress_Report_2021-09-30 Sept. 30, 2021, 8:52 p.m.
Publication Safe Intersection Crossing for Pedestrians With Disabilities Feb. 2, 2022, 6:13 a.m.
Publication Integration of Automated Vehicle Sensing with Adaptive Signal Control for Enhanced Mobility Feb. 9, 2022, 5:43 a.m.
Publication Technology to Make Signalized Intersections Safer for Pedestrians with Disabilities Feb. 9, 2022, 5:44 a.m.
Publication Multiagent sensor fusion for connected & autonomous vehicles to enhance navigation safety Feb. 9, 2022, 5:46 a.m.
Publication Connecting Pedestrians with Disabilities to Adaptive Signal Control for Safe Intersection Crossing and Enhanced Mobility: Final Report [2019] Feb. 9, 2022, 5:48 a.m.
Final Report 354_-_Final_Report.pdf Jan. 18, 2023, 6:31 a.m.
Publication High Resolution Diffuse Optical Tomography Using Short Range Indirect Imaging March 30, 2023, 5:40 a.m.
Publication The Digital Twin Landscape at the Crossroads of Predictive Maintenance, Machine Learning and Physics Based Modeling March 30, 2023, 5:41 a.m.
Publication Robot Synesthesia: A Sound and Emotion Guided AI Painter March 30, 2023, 6:23 a.m.
Publication Knowledge-driven Scene Priors for Semantic Audio-Visual Embodied Navigation March 30, 2023, 6:23 a.m.
Publication Distribution-aware Goal Prediction and Conformant Model-based Planning for Safe Autonomous Driving March 30, 2023, 6:24 a.m.
Publication Challenges in Close-Proximity Safe and Seamless Operation of Manned and Unmanned Aircraft in Shared Airspace March 30, 2023, 6:25 a.m.
Publication FAR planner: Fast, attemptable route planner using dynamic visibility update March 30, 2023, 6:26 a.m.
Publication Social-PatteRNN: Socially-Aware Trajectory Prediction Guided by Motion Patterns March 30, 2023, 6:26 a.m.
Publication Rca: Ride comfort-aware visual navigation via self-supervised learning March 30, 2023, 6:27 a.m.
Publication Core challenges in embodied vision-language planning March 30, 2023, 6:28 a.m.
Publication T2FPV: Constructing High-Fidelity First-Person View Datasets From Real-World Pedestrian Trajectories March 30, 2023, 6:28 a.m.
Publication UGV-UAV Object Geolocation in Unstructured Environments March 30, 2023, 6:29 a.m.
Publication An Intelligence Architecture for Grounded Language Communication with Field Robots March 30, 2023, 6:29 a.m.
Publication Noticing motion patterns: A temporal cnn with a novel convolution operator for human trajectory prediction March 30, 2023, 6:30 a.m.
Publication Trajformer: Trajectory prediction with local self-attentive contexts for autonomous driving March 30, 2023, 6:30 a.m.
Publication iSimLoc: Visual Global Localization for Previously Unseen Environments with Simulated Images March 30, 2023, 6:44 a.m.
Publication Laser Scanner with Real-Time, Online Ego-motion Estimation March 30, 2023, 6:46 a.m.
Publication TARE: A Hierarchical Framework for Efficiently Exploring Complex 3D Environments March 30, 2023, 6:47 a.m.
Publication Methods and systems for adaptive traffic control April 10, 2023, 8:16 p.m.
Publication Connection-Based Scheduling for Real-Time Intersection Control April 10, 2023, 8:18 p.m.
Publication Distributed, multi-domain option generation across legacy planners April 10, 2023, 8:20 p.m.
Publication Scheduling for multi-robot routing with blocking and enabling constraints April 10, 2023, 8:22 p.m.

Match Sources

No match sources!

Partners

Name Type
pathVu - Pathway Accessibility Solutions, Inc. Deployment Partner
Rapid Flow Technologies Deployment Partner
Argo AI Deployment Partner