Host: Marynel Vazquez
Coffee available at 10:15
Title: Data-driven Computational Studio
Human life is composed of moments and experiences. Imagine going back in time and revisiting crucial moments of your life, such as your wedding ceremony or your child's first birthday, as if you were present in the moment; or picture Abraham Lincoln delivering the Gettysburg Address and telling you the story of the American Civil War. These experiences are currently limited to human imagination alone. Computational machinery that can understand and create a four-dimensional audio-visual world could enable humans to realize such visions and share them with others. In this talk, I will introduce my research on the Computational Studio, which allows everyday people to relive old memories as a form of virtual time travel, automatically create new experiences, and express themselves audio-visually using everyday computational devices. In the first part of the talk, I will describe my work on capturing and browsing the 4D audio-visual world, along with efforts on building a multi-robot capture system. The applications of this work transcend virtualized reality: digitizing intangible cultural heritage, capturing tribal dances and wildlife in natural environments, and understanding the social behavior of human beings. I will then present my research on synthesizing the audio-visual world in an unsupervised manner. Finally, I will demonstrate the importance of considering both the human user and the computational device when designing content creation applications.
Aayush Bansal is a Ph.D. candidate at the Robotics Institute of Carnegie Mellon University, where he is advised by Prof. Deva Ramanan and Prof. Yaser Sheikh. He is a Presidential Fellow at CMU and a recipient of the Uber Presidential Fellowship (2016-17), the Qualcomm Fellowship (2017-18), and the Snap Fellowship (2019-20). Various national and international media outlets, such as NBC, CBS, France TV, and The Journalist, have covered his work extensively. More details are available at http://www.cs.cmu.edu/~aayushb.