Prof. Stephan Mandt
Day/Time: Tuesday and Thursday, 11:00am–12:20pm
Location: PCB 1200
Course Code: 34875
Generative models are an important class of machine learning models that aim to learn the data distribution. Deep generative models build on recent advances in deep learning and make it possible to sample data that closely resembles the data on which these models were trained. Recent success stories of deep generative models include Google’s WaveNet, which set a new state of the art in speech synthesis; Transformer networks for highly accurate machine translation; CycleGAN for weakly supervised style transfer between images or videos; neural compression algorithms that outperform their classical counterparts; and deep generative models for molecular design.

This course will introduce students to the probabilistic foundations of deep generative models, with an emphasis on variational autoencoders (VAEs), generative adversarial networks (GANs), autoregressive models, and normalizing flows. Advanced topics will include black-box variational inference, variational dropout, disentangled representations, deep sequential models, alternative variational bounds, and information-theoretic perspectives on VAEs. We will discuss diverse applications from the domains of computer vision, speech, NLP, and compression.