An Optimal Transportation View of Generative Adversarial Networks


The Generative Adversarial Network (GAN) is a powerful machine learning model that has become extremely successful in recent years. The generator and the discriminator in a GAN compete with each other until they reach a Nash equilibrium. A GAN can generate samples automatically, thereby reducing the need for large amounts of training data, and it can model distributions directly from data samples. In spite of its popularity, the GAN model lacks a theoretical foundation.

In this talk, we give a geometric interpretation of optimal mass transportation theory, explain its relation to the Monge-Ampère equation, and apply the theory to the GAN model. In more detail, we will discuss the following points:

  • Real data satisfies the manifold distribution hypothesis
  • In GANs, the generator G computes an optimal transportation map, which is given by the gradient of a Brenier potential
  • According to the regularity theory of the Monge-Ampère equation, transportation maps are in general discontinuous (for example, when the support of the target distribution is non-convex)
  • Based on this theoretical interpretation, we propose an Autoencoder-Optimal Transportation map (AE-OT) framework, which is partially transparent and outperforms the state of the art.
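To make the optimal-transport viewpoint concrete, here is a minimal sketch (an illustration only, not the speaker's AE-OT implementation) of computing a discrete optimal transportation map between two point clouds under squared Euclidean cost. In the continuous setting, Brenier's theorem says this optimal map is the gradient of a convex potential satisfying a Monge-Ampère equation; in the discrete, equal-weight case it reduces to an assignment problem, solved here with SciPy:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_transport_map(source, target):
    """Solve the discrete Monge problem between two equal-size point
    clouds under squared Euclidean cost.  The optimal assignment is the
    discrete analogue of the Brenier map (gradient of a convex potential)."""
    # Pairwise squared-distance cost matrix, shape (n, n).
    cost = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian-type solver
    return cols, cost[rows, cols].sum()

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 2))        # samples from a "latent" distribution
tgt = rng.normal(size=(64, 2)) + 3.0  # samples from a shifted "data" distribution
plan, total_cost = optimal_transport_map(src, tgt)
# plan[i] is the index of the target point that source point i is mapped to.
```

The assignment `plan` plays the role of the generator's transport map in the talk's interpretation: it pushes the source (latent) samples onto the target (data) samples at minimal total quadratic cost.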

    David Gu

Dr. Gu is a tenured associate professor in the Computer Science Department, and is also affiliated with the Applied Mathematics Department, at the State University of New York at Stony Brook. Dr. Gu is also an affiliated professor in the Center of Mathematical Sciences and Applications at Harvard University.
    • Date: Feb 28, 18:00 PST
    • Fee: Free
    • Available Seats: 48