Introduction to Deep Generative Modeling
Dates
This course is held as part of the 2021 Spring School on Models and Data, University of South Carolina. I thank the organizers for the kind invitation.
Description
Deep generative models (DGMs) are neural networks with many hidden layers trained to approximate complicated, high-dimensional probability distributions from a finite number of samples. When trained successfully, we can use DGMs to estimate the likelihood of each observation and to create new samples from the underlying distribution. Developing DGMs has become one of the most hotly researched fields in artificial intelligence in recent years, and the literature on DGMs is vast and growing rapidly.
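To make these two uses concrete, here is a minimal toy sketch (not part of the course materials): a one-dimensional affine "flow" x = mu + sigma * z with z drawn from a standard normal, fit by maximum likelihood using only NumPy. The variable names and setup are illustrative assumptions, not the course's notation.

```python
# Toy sketch: a 1-D affine generative model fit by maximum likelihood.
# It illustrates the two uses of a trained DGM mentioned above:
# scoring the likelihood of observations and drawing new samples.
import numpy as np

rng = np.random.default_rng(0)

# "Unknown" data distribution that we only observe through samples.
data = rng.normal(loc=3.0, scale=2.0, size=1000)

# Maximum-likelihood estimates of the affine model's parameters.
mu, sigma = data.mean(), data.std()

def log_likelihood(x, mu, sigma):
    """Change-of-variables formula: log p(x) = log N(z; 0, 1) - log sigma,
    where z = (x - mu) / sigma inverts the map z -> mu + sigma * z."""
    z = (x - mu) / sigma
    return -0.5 * (z**2 + np.log(2 * np.pi)) - np.log(sigma)

# Score (held-out) observations ...
print(log_likelihood(np.array([2.5, 10.0]), mu, sigma))

# ... and generate new samples by pushing latent noise through the model.
new_samples = mu + sigma * rng.normal(size=5)
print(new_samples)
```

Real DGMs replace the affine map by a deep neural network and fit its parameters by stochastic optimization, but the workflow of fitting, scoring, and sampling is the same.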
Some advances have even reached the public sphere, for example, the recent successes in generating realistic-looking images, voices, and movies known as deep fakes.
Despite these successes, several mathematical and practical issues limit the broader use of DGMs: given a specific dataset, it remains challenging to design and train a DGM, and even more challenging to understand why a particular model is or is not effective. To help students contribute to this field, this mini-course introduces DGMs within a concise mathematical framework.
The course will consist of three 45-minute blocks, each devoted to one of the three most popular approaches: normalizing flows (NF), variational autoencoders (VAE), and generative adversarial networks (GAN). In addition to a mathematical formulation, each block illustrates the advantages and disadvantages of its approach with numerical experiments that students can use to deepen their understanding after the course. Our goal is to enable and motivate participants to contribute to this proliferating research area.
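For orientation, the following sketch summarizes the standard textbook training objectives behind these three approaches; the notation is generic and not necessarily the one used in the course.

```latex
% Standard objectives for the three model classes (textbook formulations).
\begin{align*}
% Normalizing flow: exact log-likelihood via change of variables,
% with f an invertible map from data x to latent z.
\log p_X(x) &= \log p_Z\bigl(f(x)\bigr) + \log \bigl|\det \nabla f(x)\bigr| \\
% Variational autoencoder: maximize the evidence lower bound (ELBO).
\log p_\theta(x) &\ge \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]
  - \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr) \\
% Generative adversarial network: two-player minimax game between
% a generator G and a discriminator D.
\min_G \max_D \; & \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
  + \mathbb{E}_{z \sim p_Z}[\log(1 - D(G(z)))]
\end{align*}
```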
Prerequisites
In order to perform their own numerical experiments and fully benefit from this workshop, participants should have some programming experience in Python and be familiar with GitHub or Google Colab.
Materials
Recordings:
- Part 1: Introduction and Normalizing Flows
- Part 2: Variational Autoencoders
- Part 3: Generative Adversarial Networks
Interactive IPython notebooks: