Host: Dragomir Radev
Language Grounding with Robots
We use language to refer to objects like “toast”, “plate”, and “table” and to communicate requests such as “Could you make breakfast?” In this talk, I will present work on computational methods that tie language to physical, grounded meaning. Robots are an ideal platform for such work because they can perceive and interact with the world. I will discuss dialog and learning strategies I have developed to enable robots to learn from their human partners, much as people learn from one another through interaction. I will present methods that enable robots to understand referring expressions like “the heavy, metallic mug”: the first work showing that it is possible to learn to connect words to their perceptual properties across the visual, tactile, and auditory senses of a physical robot. I will also present benchmarks and models for translating high-level human language like “put the toast on the table”, which implies latent, intermediate goals, into executable sequences of agent actions with the help of low-level, step-by-step language instructions. Finally, I will discuss how my work in grounded language contributes to NLP, robotics, and the broader goals of the AI community.
Jesse Thomason is a postdoctoral researcher at the University of Washington, working with Luke Zettlemoyer. He received his PhD from the University of Texas at Austin, advised by Raymond Mooney. His research focuses on language grounding and natural language processing applications for robotics (RoboNLP). Key to this work is using dialog with humans both to facilitate robot task execution and to drive learning that enables lifelong improvement of robots’ language understanding capabilities. He has worked to encourage and promote research in RoboNLP by organizing workshops at both NLP and robotics conference venues.