From Gambits to Assurances: Game-Theoretic Integration of Safety and Learning for Human-Centered Robotics

From autonomous vehicles navigating busy intersections to quadrupeds deployed in household environments, robots must operate safely and efficiently around people in uncertain and unstructured situations. However, today's robots still struggle to handle low-probability events robustly without becoming overly conservative. In this talk, I will discuss how planning in the joint space of physical and information states (e.g., beliefs) allows robots to make safe, adaptive decisions in human-centered scenarios. I will begin by introducing a unified safety filter framework that combines robust safety analysis with probabilistic reasoning to enable trustworthy human-robot interaction. In particular, I will show how closing the robot's interaction-learning loop reduces conservativeness without compromising safety. Next, I will present how game-theoretic reinforcement learning tractably synthesizes safety filters for high-dimensional systems, guarantees training convergence, and reduces the policy's exploitability. Finally, I will introduce a scalable game-theoretic algorithm for optimizing social welfare in multi-agent coordination scenarios. I will conclude with a vision for next-generation human-centered robotic systems that actively align with their human peers and enjoy verifiable safety assurances.