Prof. Stephan Mandt
Day/Time: Tuesday and Thursday 11:00-12:20pm
Location: PCB 1200
Course Code: 34875
Generative models are an important class of machine learning models due to their ability to produce artificial data. Deep generative models build on recent advances in deep learning and approximate inference, making it possible to create structured data that closely resembles the data on which they were trained, such as images, audio, text, or video. This course will introduce students to the probabilistic foundations of deep generative models, with an emphasis on variational autoencoders (VAEs), generative adversarial networks (GANs), and the training paradigm of black box variational inference. Advanced topics covered include normalizing flows, variational dropout, disentangled representations, deep sequential models, alternative variational bounds, and information-theoretic perspectives on VAEs. We will discuss diverse applications from the domains of computer vision, speech, NLP, and compression.
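As a taste of the VAE material, the snippet below is a minimal, framework-free sketch of two ingredients of VAE training: the reparameterization trick and the analytic KL divergence term of the evidence lower bound. The encoder outputs `mu` and `log_var` are hypothetical placeholder values, not from a trained model; this is illustrative, not part of the official course materials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for a 2-dimensional latent variable
mu = np.array([0.5, -0.2])        # posterior mean
log_var = np.array([-1.0, 0.3])   # posterior log-variance

# Reparameterization trick: write z = mu + sigma * eps with eps ~ N(0, I),
# so the sample is a differentiable function of (mu, log_var).
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian:
#   0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 )
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

In a full VAE, `kl` would be combined with a reconstruction log-likelihood term to form the evidence lower bound, which is maximized by stochastic gradient methods.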