Talk 4

Certified Models and Controllers for Trustworthy HRI
by Changliu Liu

Abstract: This talk will share some of our recent work on certified models and controllers that enable autonomous robotic systems to operate safely in uncertain, human-involved environments. The safety specification can be written as constraints on the system’s state space. To ensure that these constraints are satisfied at all times, the robot needs to correctly anticipate human behavior and select only actions that do not lead to constraint-violating states. This talk will focus on two aspects of the problem: 1) how to perform provably safe control in real time with learned models, and 2) how to obtain a certified human model from a limited amount of data. For the first aspect, I will introduce a safe control method that ensures forward invariance of the safe set defined by the constraints, even when the dynamic model is a black box (e.g., a deep neural network). For the second aspect, I will introduce a verification-guided learning method that concentrates learning on the most vulnerable parts of the model. The computations that involve deep neural networks are handled by our toolbox NeuralVerification.jl, a sound verification toolbox that checks input-output properties of deep neural networks. I will conclude the talk with visions for future work.
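To make the forward-invariance notion concrete, here is a minimal sketch in standard safety-index notation; the symbols (a safety index phi and a rate lambda) are illustrative assumptions, not details from the talk:

    \[
      \mathcal{X}_S = \{\, x : \phi(x) \le 0 \,\}
    \]
    Forward invariance: if $x(0) \in \mathcal{X}_S$, then $x(t) \in \mathcal{X}_S$ for all $t \ge 0$.
    A standard sufficient condition is to select, at every instant, a control $u$ satisfying
    \[
      \dot{\phi}(x, u) \le -\lambda\, \phi(x), \qquad \lambda > 0,
    \]
    so that $\phi$ can never cross zero from below, and the safety constraint holds for all time
    even when the dynamics entering $\dot{\phi}$ come from a learned, black-box model.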
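As a concrete illustration of the kind of input-output check such a toolbox performs, below is a minimal usage sketch of NeuralVerification.jl following its documented Problem/solve interface; the network file path, the set bounds, and the solver choice (BaB) are illustrative placeholders, not details from the talk:

    using NeuralVerification  # Hyperrectangle is re-exported from LazySets

    # Illustrative input-output property: for every input in input_set,
    # the network's output must land inside output_set.
    network    = read_nnet("examples/networks/small_nnet.nnet")  # placeholder path
    input_set  = Hyperrectangle(low = [-1.0], high = [1.0])      # assumed input bounds
    output_set = Hyperrectangle(low = [-1.0], high = [70.0])     # assumed safe outputs

    problem = Problem(network, input_set, output_set)
    result  = solve(BaB(), problem)   # BaB: one of the toolbox's sound solvers
    println(result.status)            # :holds, :violated, or :unknown

Because the solver is sound, a reported :holds is a guarantee, which is what lets a verification-guided learning loop trust the regions it certifies and spend further training data on the parts it cannot.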

Bio: Changliu Liu is an assistant professor at the Robotics Institute, School of Computer Science, Carnegie Mellon University (CMU), where she leads the Intelligent Control Lab. Prior to joining CMU, Dr. Liu was a postdoctoral researcher at the Stanford Intelligent Systems Laboratory. She received her Ph.D. from the University of California, Berkeley, and her bachelor’s degrees from Tsinghua University. Her research interests lie in the design and verification of intelligent systems, with applications to manufacturing and transportation. She published the book “Designing Robot Behavior in Human-Robot Interactions” with CRC Press in 2019. Her work has been recognized by an NSF CAREER Award, an Amazon Research Award, and a Ford URP Award.