Smart transportation solutions are only as good as the computer control that is supposed to provide their smart functionality, so that control must be reliably smart. For example, a "smart" vehicular safety controller that tries to avoid rear-ending the car ahead by changing into an open spot in the middle lane would itself cause a crash if all other cars decided to avoid their collisions the same way. In such situations, a car control strategy that looks smart locally turns out to be fatal globally. We therefore need good ways of ensuring that bugs in smart car control solutions never cause serious problems. For connected cars and infrastructure broadcasts, V2V and V2I communication influences the behavior of each car, so this phenomenon of local information with global impact must also be taken into account. Validation of such complex systems has primarily been limited to simulation, which can cover only a minuscule fraction of the relevant state space. To answer crucial safety and correctness questions about smart transportation systems more thoroughly, we propose to extend and develop verification techniques based on logic. Our approach manages system complexity by modularly reducing complex physical systems to simpler elements. It is unique in providing proofs as evidence of correctness, which are also useful for certification purposes. We have so far successfully applied our techniques to verify collision-freedom properties of an automotive car controller, a distributed adaptive cruise control system, and a CICAS-style intersection collision avoidance control system.
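The safety statements we prove are expressed in differential dynamic logic. As a much-simplified sketch (the models in our case studies are substantially more detailed), collision freedom for a follower car at position x_f behind a leader at x_l might be stated as:

```latex
% Simplified sketch of a dL collision-freedom statement. Here ctrl is a
% placeholder for the discrete controller and the ODE for the continuous
% plant; b is the follower's braking capability. The real verified models
% include control cycles, acceleration bounds, and communication.
\[
  x_l - x_f > \frac{v_f^2}{2b}
  \;\rightarrow\;
  \bigl[\, \bigl( \mathit{ctrl};\ \{\, x_f' = v_f,\ v_f' = a,\ x_l' = v_l \,\} \bigr)^{*} \,\bigr]\;
  x_l - x_f > 0
\]
```

Read: if the follower starts at a safe distance, then after any number of control-then-drive rounds, it never collides with the leader.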
Safety-critical traffic and autonomous systems depend on increasingly complex interactions with computers. Safety systems such as adaptive cruise control, emergency braking, and collision mitigation are becoming household terms as family cars are equipped and sold with these devices. With this increased complexity, however, it is far more challenging to ensure that these devices function safely and correctly, especially as many of them begin to interact on our roadways in a wide range of situations, including through remote communication. Our research focuses on ensuring that these and other systems operate safely in all situations, even those not directly anticipated by the system's designers. To tackle this issue, we apply formal verification techniques, which either produce a conclusive proof that the system is safe in all situations or provide a counterexample. These methods are only useful, however, if they are powerful and robust enough to verify the computers that actually control our cars. To this end, we collaborate with researchers in the Engineering department at CMU and other experts to investigate how to create the right abstractions that translate systems currently too complex to prove directly into provably safe systems, without weakening the safety guarantees on the original system (published at the 2012 American Control Conference). In another collaboration, we used formal methods to determine the minimum resolution required for an onboard camera to detect road signs early enough to make appropriate control choices (published at the 2012 International Conference on Cyber-Physical Systems). Our effort in modeling and collaborative verification of hybrid systems resulted in a collaboration with researchers at Cambridge and LFCS Edinburgh (published at the 2013 Workshop on Enabling Domain Experts to use Formalised Reasoning).
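The reasoning behind the camera-resolution result can be illustrated with a back-of-the-envelope sketch. This is not the analysis from the paper; the function, its parameters, and all numeric values below are illustrative assumptions (a pinhole-camera model, a fixed reaction time, and a detector that needs the sign to span a minimum number of pixels):

```python
import math

def required_horizontal_pixels(v, b, t_react, sign_width, fov_deg, min_pixels):
    """Illustrative sketch: minimum horizontal camera resolution so a road
    sign is recognizable no later than the distance at which the car must
    start reacting to it. All parameters and the pinhole-camera model are
    assumptions for illustration, not taken from the published analysis.
    v: speed (m/s), b: braking deceleration (m/s^2), t_react: reaction
    time (s), sign_width: physical sign width (m), fov_deg: horizontal
    field of view (degrees), min_pixels: pixels the detector needs."""
    # Worst-case distance the car travels before it can stop: reaction
    # distance plus braking distance.
    d_stop = v * t_react + v**2 / (2 * b)
    # Width of the scene visible at that distance for the given field of view.
    scene_width = 2 * d_stop * math.tan(math.radians(fov_deg) / 2)
    # The sign occupies sign_width / scene_width of the image; require that
    # fraction of the horizontal pixels to be at least min_pixels.
    return math.ceil(min_pixels * scene_width / sign_width)

# Example with assumed numbers: 25 m/s, 6 m/s^2 braking, 1 s reaction,
# a 0.6 m wide sign, a 60-degree field of view, 20 pixels needed to detect.
print(required_horizontal_pixels(25.0, 6.0, 1.0, 0.6, 60.0, 20))
```

The point of the formal treatment is to make such a chain of requirements rigorous, so the camera specification provably suffices for the control choices that depend on it.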
Results

We formally studied intelligent speed adaptation and incident management systems for freeway traffic control. We studied how cyber-physical system technology can help improve freeway traffic by combining local car GPS positioning, traffic center control decisions, and communication to achieve more tightly coupled feedback control in intelligent speed adaptation. We developed models of intelligent speed adaptation that respect variable speed limit control and incident management. We identified safe ranges for crucial design parameters in these systems and, using the theorem prover KeYmaera, formally verified safety of the resulting cyber-physical system models. Finally, we showed how those parameter ranges can be used to decide trade-offs for practical system implementations, even for design parameters that are not modeled formally.

We also presented a new approach for leveraging the power of theorem provers for formal verification to provide sufficient conditions that can be checked on embedded control designs. Theorem provers are often most efficient when using generic models that abstract away many of the controller details, yet with these abstract models very general conditions can be verified under which desirable properties such as safety are guaranteed for the closed-loop system. In our approach, these sufficient conditions are static conditions that can be checked for the specific controller design, without having to include the dynamics of the plant. We demonstrated this approach using the KeYmaera theorem prover for differential dynamic logic on two examples: an intelligent cruise controller and a cooperative intersection collision avoidance system (CICAS) for left-turn assist. In each case, the closed-loop safety proof obtained with KeYmaera provides static sufficient conditions that are checked for the controller design.
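To make the idea of a static sufficient condition concrete, here is a sketch of the shape such a condition takes for a cruise controller. The exact inequality comes out of the KeYmaera proof; the version below (follower may accelerate only if, even when the leader brakes hard, the follower can still stop behind it after one control cycle) uses illustrative constants and is a simplification, not the verified condition itself:

```python
def may_accelerate(gap, v_follow, v_lead, A, b, eps):
    """Sketch of a static sufficient condition for safe acceleration
    (simplified; illustrative constants). gap: current distance to the
    leader (m), v_follow/v_lead: speeds (m/s), A: max acceleration,
    b: max braking, eps: upper bound on one control cycle (s)."""
    return gap > (v_follow**2 - v_lead**2) / (2 * b) \
               + (A / b + 1) * (A * eps**2 / 2 + eps * v_follow)

def control(gap, v_follow, v_lead, A=2.0, b=5.0, eps=0.1):
    # The controller checks only this static condition -- no plant
    # dynamics are simulated at run time. Accelerate when provably safe,
    # otherwise brake.
    return A if may_accelerate(gap, v_follow, v_lead, A, b, eps) else -b
```

The point is that the hard work (verifying that any controller respecting the condition keeps the closed-loop system collision-free) is done once in the theorem prover; the condition itself is then a cheap check on the design.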
We studied general operating conditions, including inaccurate sensing, actuator disturbance, and sensor failure, for an obstacle avoidance system for autonomous robotic ground vehicles. We modeled the behavior of autonomous vehicles equipped with the Dynamic Window Approach for obstacle and collision avoidance and formally studied several safety properties in the presence of stationary and moving obstacles, verifying safety with the theorem prover KeYmaera. Furthermore, we formally verified safety of the algorithm even in the presence of bounded sensor inaccuracy (i.e., inaccurate position measurement, such as GPS), actuator disturbance (i.e., braking and acceleration may be damped), and partial failure of the position sensors (e.g., GPS fails, but wheel encoders still allow dead reckoning).

We are also developing "Foundations of Cyber-Physical Systems," an undergraduate course on the formal verification of cyber-physical systems. The course introduces formal verification techniques and hybrid system modeling, with a special focus on autonomous robots as a running example throughout. The course will be held in Fall 2013.
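The robustness conditions verified for the obstacle avoidance system have roughly the following shape: safety margins are widened to absorb the worst case of each disturbance. The sketch below is a simplified illustration with assumed parameter names, not the condition from the verified Dynamic Window Approach model:

```python
def safe_to_move(measured_dist, v, b, eps, pos_err, brake_damp):
    """Illustrative robust safety check (simplified shape, assumed names):
    measured_dist: sensed distance to nearest obstacle (m), v: speed (m/s),
    b: nominal max braking (m/s^2), eps: control cycle bound (s),
    pos_err: bound on position-sensing error (m),
    brake_damp: fraction of b actually achievable under actuator disturbance."""
    # Sensor inaccuracy: the true distance may be up to pos_err shorter
    # than measured.
    worst_dist = measured_dist - pos_err
    # Actuator disturbance: braking may be damped to brake_damp * b.
    b_eff = brake_damp * b
    # Distance covered while driving one more cycle at speed v, then
    # braking at the damped rate until standstill.
    stopping = v * eps + v**2 / (2 * b_eff)
    return worst_dist > stopping
```

Verifying the model against such widened margins is what lets the safety proof survive bounded sensing and actuation errors rather than holding only for idealized dynamics.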
Final Report: 270.pdf (Dec. 12, 2018)
Publication: 270_-_Logical_Analysis_of_Hybrid_Systems_journal_of_automata.pdf (Jan. 10, 2019)