Title: Compositional Reasoning in Robot Learning
Host: Steven Zucker
To carry out diverse tasks in everyday human environments, future robots must generalize beyond the knowledge they are equipped with. However, despite recent advances in "end-to-end" deep learning, today's robot learning methods are still limited to specializing in one task at a time. Humans, in contrast, perform everyday tasks with ease: instead of learning each task in isolation, we distill reusable abstractions from our daily experiences and solve new tasks by composing known building blocks. Such compositional reasoning capability is crucial for developing future robots that are both competent and versatile.
In this talk, I will present my work on building compositional reasoning capabilities into robot learning systems. I will start by showing that imposing strong compositional structures (e.g., programs, graphs) on end-to-end robot learning approaches enables systematic generalization across long-horizon object manipulation tasks. Then I will present our recent efforts to relax these structural assumptions in order to bring compositional reasoning closer to real-world settings: learning from unstructured human demonstrations and learning through trial and error.
Danfei Xu is a final-year Ph.D. student at Stanford University advised by Fei-Fei Li and Silvio Savarese. His research lies at the intersection of robotics, computer vision, and machine learning. His research goal is to build autonomous agents that can operate in everyday human environments. He obtained a B.S. from Columbia University in 2015 and has spent time at the CMU Robotics Institute, the Columbia Robotics Lab, Autodesk Research, Zoox, and DeepMind.