Certifiable Neural Control for Safe Autonomy

Thursday, March 21, 2024
9:00 am - 10:00 am
Wei Xiao

Safety is central to autonomous systems, since a single failure can lead to catastrophic consequences. In complex, unstructured environments where system states and environment information are not readily available, the safety-critical control problem becomes much more challenging. In this talk, I will first discuss safety from a control-theoretic perspective using Control Barrier Functions (CBFs). CBFs capture the evolution of safety requirements during the execution of a control system and can be used to guarantee safety for all time through their forward-invariance property. Next, the talk will introduce an approach for extending CBFs to machine learning-based control, using differentiable CBFs that are end-to-end trainable and adaptively guarantee safety based on environmental dependencies. These safety layers give rise to new neural network (NN) architectures, such as the one we have termed BarrierNet. In machine learning, and especially in safety-critical control applications, the interpretability of an NN is crucial. The talk will further introduce a novel method, invariance set propagation through the NN. This approach enables causal manipulation of the NN's parameters or inputs with respect to output specifications, and provides guarantees on the outputs. Finally, the talk will show how we may achieve safety-critical planning and control using more powerful generative AI, such as diffusion models, for generalizable autonomy. These techniques have been applied to a variety of robots, including autonomous ground vehicles, vessels, aerial vehicles, legged robots, robot swarms, soft robots, and manipulators.
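As background, a standard textbook formulation of the forward-invariance condition mentioned above (not specific to the methods presented in this talk) is the following. For a control-affine system with a safe set defined by a function h,

\dot{x} = f(x) + g(x)\,u, \qquad \mathcal{C} = \{ x : h(x) \ge 0 \},

h is a control barrier function if there exists an extended class-\mathcal{K} function \alpha such that

\sup_{u} \big[ L_f h(x) + L_g h(x)\,u + \alpha(h(x)) \big] \ge 0.

Any Lipschitz-continuous controller u(x) satisfying L_f h(x) + L_g h(x)\,u(x) \ge -\alpha(h(x)) renders \mathcal{C} forward invariant: trajectories that start in the safe set remain in it for all time.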

Contact: Glenda Hester