ECE Guest Lecturer Series


Natural Language Learning for Human-Robot Collaboration

Professor Matthew Walter, Toyota Technological Institute at Chicago

Wednesday, March 27, 2019
Noon–1 p.m.
Wegman Hall 1400

Abstract: Natural language promises an efficient and flexible means for humans to communicate with robots, whether they are assisting the physically or cognitively impaired, or performing disaster mitigation tasks as our surrogates. Recent advancements have given rise to robots that are able to interpret natural language commands that direct object manipulation and spatial navigation. However, most methods require prior knowledge of the metric and semantic properties of the objects and places that comprise the robot's environment.

In this talk, I will present our work that enables robots to successfully follow natural language navigation instructions within novel, unknown environments. I will first describe a method that treats language as a sensor, exploiting information implicit and explicit in the user's command to learn distributions over the latent spatial and semantic properties of the environment and over the robot's intended behavior. The method then learns a belief space policy that reasons over these distributions to identify suitable navigation actions. In the second part of the talk, I will present an alternative formulation that represents language understanding as a multi-view sequence-to-sequence learning problem. I will introduce an alignment-based neural encoder-decoder architecture that translates free-form instructions to action sequences based on images of the observable world. Unlike previous methods, this architecture uses no specialized linguistic resources and can be trained in a weakly supervised, end-to-end fashion, which allows for generalization to new domains. Time permitting, I will then describe how we can effectively invert this model to enable robots to generate natural language utterances. I will evaluate the efficacy of these methods on a combination of benchmark navigation datasets and through demonstrations on a voice-commandable wheelchair.
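The alignment step at the heart of such an encoder-decoder can be sketched in a few lines: each decoder state scores the encoder states for the instruction's words, and a softmax over those scores yields attention weights used to form a context vector. This is a minimal dot-product-attention illustration, not Professor Walter's actual architecture; the vectors and function names below are purely hypothetical.

```python
import math

def softmax(scores):
    """Normalize alignment scores into attention weights."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(decoder_state, encoder_states):
    """Dot-product alignment: score each encoder state against the
    current decoder state, then build a weighted context vector."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    weights = softmax(scores)
    dim = len(encoder_states[0])
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context

# Toy example: encoder states for the three words of "go left now";
# the decoder state is most similar to the state for "left",
# so attention concentrates there.
encoder_states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
decoder_state = [0.0, 2.0]
weights, context = attend(decoder_state, encoder_states)
```

In a full model the context vector would condition the next predicted action, letting the decoder ground each step of the action sequence in the relevant span of the instruction.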

Bio: Matthew Walter is an assistant professor at the Toyota Technological Institute at Chicago. His interests revolve around the realization of intelligent, perceptually aware robots that are able to act robustly and effectively in unstructured environments, particularly with and alongside people. His research focuses on machine learning-based solutions that allow robots to learn to understand and interact with the people, places, and objects in their surroundings. Matthew has investigated these areas in the context of various robotic platforms, including autonomous underwater vehicles, self-driving cars, voice-commandable wheelchairs, mobile manipulators, and autonomous cars for (rubber) ducks. Matthew obtained his Ph.D. from the Massachusetts Institute of Technology, where his thesis focused on improving the efficiency of inference for simultaneous localization and mapping.