The course focuses on creating a meaningful and challenging design experience for graduate and senior undergraduate students in electrical engineering, computer science, mechanical engineering, robotics and embedded systems. The course involves designing, building and testing an autonomous 1/10-scale model F-1 racecar (with 10 times the fun!) that uses the NVIDIA Jetson platform for real-time perception, control and planning. In addition to providing ready-to-use material as a Teaching Kit, the course will introduce an autonomous racing competition at conferences, at Embedded Systems Week 2016 and Cyber-Physical Systems Week, with challenges testing the speed, agility and tracking performance of the on-board vision and control algorithms. Modern robots tend to operate at slow speeds in complex environments, limiting their utility in high-tempo applications. In this course, students are tasked with pushing the boundaries of unmanned-vehicle speed, decision control and response to fast changes in the environment. Students work in teams to develop autonomy software to race a converted 1/10-scale RC car, equipped with sensors and embedded processing, around a large-scale, "real-world" F-1 course. Our goal is to teach embedded GPGPU programming in the fun context of high-speed autonomous racing, but with the serious constraints of real-time processing, challenging control and fast robot planning on the NVIDIA Jetson TK1 and TX1 platforms.
The course is divided into the following segments consisting of lectures and lab exercises. During the early part of the course, lectures help the students get started: an overview clarifies the challenges and discusses some potentially successful approaches, and laboratory exercises give hands-on experience with the essential functional blocks and their associated hardware and software implementations. The course requires no prior knowledge of the NVIDIA Jetson platform or the Robot Operating System (ROS).

1. Lecture 1: Course Introduction
   1.1. Principles of autonomous driving: Toyota's self-driving car, the Google car, etc.
   1.2. DARPA Grand/Urban Challenges: what worked and what failed
   1.3. The autonomous racing car platform: components, coding, testing and performance measurement
   1.4. Challenges of driving fast: making perception, control and planning execute within a deadline
2. Lecture 2: Introduction to ROS
   2.1. ROS installation on the NVIDIA Jetson
   2.2. Lab exercises with ROS
        2.2.1. Intra-process communication
        2.2.2. Data visualization tools
        2.2.3. Robotics perception and planning algorithms
3. Lab Exercise 1: Getting familiar with ROS
   3.1. Design software that moves the car backwards whenever there is an object directly in front of the car, closer than 2 meters.
   3.2. Design and implement a ROS topic:
        3.2.1. alarm_mode: with variables object_presence (true/false) and object_distance (in meters)
   3.3. Design and implement two ROS nodes:
        3.3.1. object_detect: detects whether there is an object in front of the vehicle (20-degree field of view) and less than 2 meters away, and publishes a message on the alarm_mode topic accordingly.
        3.3.2. runaway_control: moves the car backwards depending on the message received from object_detect.
4. Lab Exercise 2: Getting familiar with NVIDIA's Jetson embedded platform
5. Lecture 3: Moving the car
   5.1. Overview of perception sensors and the perception chain
   5.2. Overview of PID control basics and description of tracking problems
   5.3. Hands-on tutorial: acquire data from the sensors on the car
   5.4. Hands-on tutorial and CUDA programming exercises
6. Lab Exercise 3: Race car platform integration with multiple sensors and the Jetson
7. Lecture 4: Sensing the environment
   7.1. Basics of odometry
   7.2. IMU data
   7.3. LIDAR data
   7.4. Camera data
8. Lab Exercise 4: Centerline racing
   8.1. Design two processes: (i) a wall detector and (ii) a wall follower. Run these processes in a straight corridor in the tunnels.
        8.1.1. Wall detector: a perception algorithm that detects the walls on both sides.
        8.1.2. Wall follower: a control algorithm that, given the wall detections, drives down the corridor.
9. Lectures 5-6: GPGPU programming with CUDA
   9.1. NVIDIA CUDA for parallel computation on the GPU
   9.2. ArrayFire for GPU-accelerated computing
   9.3. VisionWorks toolkit for image acquisition and analysis
   9.4. OpenCV toolkit for computer vision
10. Lab Exercise 5: OpenCV/VisionWorks installation and tutorial, and CUDA programming exercises
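The decision logic of Lab Exercise 1 can be sketched independently of the ROS plumbing. The sketch below assumes the LIDAR scan is available as (angle, range) pairs, a simplification of the real sensor message; the function and constant names are illustrative, not from the course materials:

```python
import math

FOV_DEG = 20.0        # forward field of view for object_detect
ALARM_RANGE_M = 2.0   # alarm when an object is closer than 2 meters

def detect_object(scan):
    """Return (object_presence, object_distance) for an alarm_mode message.

    `scan` is an iterable of (angle_rad, range_m) pairs, with angle 0
    pointing straight ahead. Reports a hit only inside the 20-degree
    forward cone and closer than 2 meters.
    """
    half_fov = math.radians(FOV_DEG) / 2.0
    in_cone = [r for a, r in scan if abs(a) <= half_fov]
    if not in_cone:
        return False, float("inf")
    nearest = min(in_cone)
    return nearest < ALARM_RANGE_M, nearest

def runaway_speed(object_presence, reverse_speed=-0.5):
    """runaway_control side: command a negative (reverse) speed on alarm."""
    return reverse_speed if object_presence else 0.0
```

In the actual lab these two functions would live in the two ROS nodes: detect_object feeding a publisher on the alarm_mode topic inside object_detect, and runaway_speed consumed in a subscriber callback inside runaway_control.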
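At its core, the wall follower of Lab Exercise 4 is a PID loop on the car's lateral offset from the corridor centerline. A minimal sketch of that loop follows; the gains, the sign convention and the helper names are illustrative assumptions, not the course's reference implementation:

```python
class PID:
    """Textbook PID controller used here to steer toward the centerline."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """Advance one control period and return the steering command."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def centerline_error(left_dist, right_dist):
    """Signed offset from the centerline, from the wall detector's distances.

    Positive when the car is closer to the right wall (so: steer left).
    """
    return (left_dist - right_dist) / 2.0
```

A quick sanity check is to simulate a 2-meter-wide corridor where the steering command directly moves the car laterally: starting 0.4 m off-center, the loop should pull the offset toward zero within a few seconds of simulated time.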
Planned Lectures and Labs starting January 2018
Weeks 1-3: Course Introduction, ROS Overview, Labs 1 & 2
Week 4: Moving the car + Lab 3
Week 5: Sensing the environment
Week 6: Lab 4
Weeks 7-8: GPGPU programming with CUDA + Labs 5 & 6
Week 9: Navigation and planning, and platform testing
Week 10: Advanced topics + Lab 7
Week 11: Computational considerations
Weeks 12-13: Time trials and racing practice
Week 14: Final demo and race
More details at http://f1tenth.org
The course instructors will develop 10 reference platform vehicles and demonstrate them operating at high speeds of 8-15 mph in corridors. We will host an international F1/10 Autonomous Racing Competition at CPSWeek in Porto, Portugal, around April 10-11, 2018.
|firstname.lastname@example.org||Maheshwari, Jalaj||University of Pennsylvania||Other||Student - Masters|
|email@example.com||Mangharam, Rahul||University of Pennsylvania||PI||Other|
|Publication||Computer_Aided_Design_for_Safe_Autonomous_Vehicles.pdf||April 17, 2018, 8:38 a.m.|
|Presentation||F1-10_Overview.pdf||April 17, 2018, 8:38 a.m.|
|Progress Report||96_Progress_Report_2018-03-30||April 17, 2018, 8:38 a.m.|
|Publication||F1_Tenth_ICCPS.pdf||Nov. 30, 2018, 7:58 p.m.|
|Progress Report||96_Progress_Report_2018-09-30||Nov. 30, 2018, 7:59 p.m.|
|Data Management Plan||DataManagement_yeHkqKt.pdf||Feb. 12, 2019, 12:05 p.m.|
|Presentation||F1-10_Autonomous_Racing_slides.pdf||Feb. 12, 2019, 12:13 p.m.|
|Progress Report||96_Progress_Report_2020-09-30||Oct. 5, 2020, 6:35 a.m.|