Everything About GANs

Sunil Kumar
5 min read · Apr 29, 2021

What is a GAN?

A GAN (Generative Adversarial Network) is a pair of neural networks trained in an adversarial manner so that the generated data distribution comes to resemble the original data distribution. It comprises two neural networks: a Generator (G) and a Discriminator (D).

The Discriminator (D) is used to discriminate between two classes of data, whereas the Generator (G) is used to generate artificial samples that are close to the real samples, with the aim of fooling the discriminator so that it fails to distinguish the fake data distribution from the original one.

The Discriminator (D) predicts a value in the range [0, 1], interpreted as the probability that its input is real.
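
As a concrete illustration, here is a minimal PyTorch sketch of the two networks. The MLP architecture, layer sizes, and data dimension are illustrative assumptions, not something the article specifies:

```python
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 64, 784   # illustrative sizes, e.g. flattened 28x28 images

# Generator G: maps a random noise vector z to a fake sample G(z).
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, DATA_DIM),
    nn.Tanh(),                  # fake samples scaled to [-1, 1]
)

# Discriminator D: maps a sample to a single value in [0, 1],
# interpreted as the probability that the sample is real.
D = nn.Sequential(
    nn.Linear(DATA_DIM, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
    nn.Sigmoid(),               # prediction lies in [0, 1]
)

z = torch.randn(16, NOISE_DIM)  # batch of random noise samples
fake = G(z)                     # G(z): generated samples
print(D(fake).shape)            # torch.Size([16, 1]), values in [0, 1]
```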

Figure: GAN model

Learning Mechanism:

The Generator (G) takes a random sample as input and produces a fake data distribution that is close to the original data. These fake samples, along with the original samples, are fed into the Discriminator (D), which distinguishes the fake samples from the real ones by predicting a value in [0, 1].

While the discriminator still fails to distinguish fake samples from real ones, backpropagation updates the weights and biases of the discriminator while the generator (G) parameters are held constant. Once the discriminator is trained well enough to distinguish fake images from real ones, the generator (G) parameters are updated (with the discriminator parameters held constant) until the fake images resemble the original images closely enough to fool the discriminator.

At the optimum the discriminator outputs 0.5 for every input, meaning it can no longer distinguish between real and fake samples.
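
Below is a minimal sketch of this alternating scheme, reusing the G and D defined above; the random `real` batch is only a stand-in so the loop runs end to end (in practice the two updates are typically alternated every iteration on real training batches):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)

for step in range(100):
    real = torch.rand(16, DATA_DIM) * 2 - 1        # stand-in for a real batch
    fake = G(torch.randn(16, NOISE_DIM)).detach()  # detach: G held constant

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    loss_D = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: D's weights stay fixed (only opt_G steps);
    # G is updated so that D predicts its samples as real (label 1).
    fake = G(torch.randn(16, NOISE_DIM))
    loss_G = bce(D(fake), torch.ones(16, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```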

Loss Function of a GAN:

Figure: GAN architecture flow diagram

p_data(x): probability distribution of the original dataset.

p_z(z): probability distribution of the random noise sample z.

G(z): generated sample.

Let’s work through the binary cross-entropy function, written here without the usual leading minus sign so that the discriminator’s goal is to maximize it: L(ŷ, y) = y·log(ŷ) + (1 − y)·log(1 − ŷ).

y: the true label of the input (1 for a real sample, 0 for a generated one).

ŷ (y hat): the discriminator’s predicted probability that the input is real.

a) The label for data coming from the original dataset p_data(x) is y = 1, and ŷ = D(x). Substituting these values into the loss function gives log D(x).

b) The label for data coming from the generator (a sample G(z), with z drawn from p_z) is y = 0, and ŷ = D(G(z)). Substituting these values into the loss function gives log(1 − D(G(z))).
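
Writing the substitutions out explicitly:

```latex
L(\hat{y}, y) = y \log \hat{y} + (1 - y)\log(1 - \hat{y})

\text{(a) } y = 1,\; \hat{y} = D(x):
    \quad L = 1 \cdot \log D(x) + 0 = \log D(x)

\text{(b) } y = 0,\; \hat{y} = D(G(z)):
    \quad L = 0 + 1 \cdot \log\bigl(1 - D(G(z))\bigr) = \log\bigl(1 - D(G(z))\bigr)
```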

The objective of the discriminator is to correctly distinguish the fake dataset from the real one. Therefore we need to maximize both terms, (a) and (b), with respect to D.

Graphically, log D(x) increases as D(x) → 1, while log(1 − D(G(z))) increases as D(G(z)) → 0, so maximizing both terms pushes the discriminator toward correct predictions on real and fake samples.

The objective of the generator is to fool the discriminator:

The data points coming from the generated images should be predicted as real by the discriminator, i.e., D(G(z)) = 1. Therefore the generator needs to minimize the loss term log(1 − D(G(z))).

The overall loss function is therefore maximized with respect to the discriminator D and minimized with respect to the generator G.

The loss function above is written for a single instance x. When we consider all instances of x, i.e., take expectations over the data and noise distributions, the cost function appears as follows.
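
In the notation of the original GAN paper (Goodfellow et al., 2014), this is the minimax value function:

```latex
\min_G \max_D V(D, G) =
    \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```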

The training criterion for the discriminator D, given any generator G, is to maximize V(D, G). Hence the optimal discriminator for a given G is denoted D*_G.
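
From the original GAN paper, it has the closed form:

```latex
D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}
```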

Now, rewrite the cost function as an integral over x: with the change of variables x = G(z), the generated samples have density p_g(x), and the cost function appears to be:

V(D, G) = ∫ [ p_data(x)·log D(x) + p_g(x)·log(1 − D(x)) ] dx

The optimal D* for a given G is obtained by maximizing this integrand pointwise, i.e., by setting its derivative with respect to D(x) to zero.
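
The pointwise maximization works out as follows: for a, b > 0, the function f(y) = a·log y + b·log(1 − y) attains its maximum on (0, 1) where its derivative vanishes.

```latex
f'(y) = \frac{a}{y} - \frac{b}{1 - y} = 0
    \;\Rightarrow\; y = \frac{a}{a + b},
\qquad
a = p_{\text{data}}(x),\; b = p_g(x)
    \;\Rightarrow\; D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}
```

which recovers the closed form of D*_G stated above.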

Now, substitute the optimal value of the discriminator (D*) for the given generator (G) back into V(D, G), and call the resulting criterion C(G).

The global minimum of C(G) is achieved by G* if and only if p_g = p_data,

i.e., the data distribution of the generated samples matches the original data distribution.
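
Carrying out the substitution gives the standard result from the original GAN paper:

```latex
C(G) = \max_D V(G, D)
     = \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}\right]
     + \mathbb{E}_{x \sim p_g}\!\left[\log \frac{p_g(x)}{p_{\text{data}}(x) + p_g(x)}\right]
     = -\log 4 + 2 \cdot \mathrm{JSD}\bigl(p_{\text{data}} \,\|\, p_g\bigr)
```

Since the Jensen–Shannon divergence is non-negative and vanishes only when the two distributions coincide, C(G) attains its global minimum of −log 4 exactly when p_g = p_data.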

Let’s understand the adversarial loss:

In image-to-image translation settings such as CycleGAN, we call one generator G and have it convert images from the X domain to the Y domain.

Each generator has a corresponding discriminator, which attempts to tell its synthesized images apart from real ones.

Formulation of the adversarial loss:
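
Following the CycleGAN paper (Zhu et al., 2017), for the mapping G: X → Y with discriminator D_Y, the adversarial loss reads:

```latex
\mathcal{L}_{\text{GAN}}(G, D_Y, X, Y) =
    \mathbb{E}_{y \sim p_{\text{data}}(y)}\bigl[\log D_Y(y)\bigr]
  + \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log\bigl(1 - D_Y(G(x))\bigr)\bigr]
```

G tries to minimize this objective while D_Y tries to maximize it.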

Here, the target label is 1 (y = 1) for samples coming from the original data distribution Y.

Problems with GANs:

  1. Vanishing gradient : the derivatives of the loss with respect to the model parameters (weights W and biases B) become close to zero, especially early in training when the discriminator can easily reject the generator's samples, so learning stalls.
  2. Hard to achieve Nash equilibrium : in a GAN, each network updates its own cost with no regard for the other; the discriminator updates its cost independently of the generator, and the generator independently of the discriminator. As a result, the gradient updates of the two models cannot jointly guarantee convergence, as the toy sketch below illustrates.
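
A classic illustration of this failure mode (a hypothetical sketch, not from the article): in the two-player game min_x max_y x·y, the Nash equilibrium is at (0, 0), yet simultaneous gradient updates spiral away from it.

```python
import math

# Two players play f(x, y) = x * y: x descends (minimizes), y ascends (maximizes).
# The Nash equilibrium is (0, 0), but simultaneous updates orbit outward.
x, y, lr = 1.0, 1.0, 0.1
for step in range(201):
    gx, gy = y, x                      # df/dx = y, df/dy = x
    x, y = x - lr * gx, y + lr * gy    # simultaneous gradient step
    if step % 50 == 0:
        print(f"step {step:3d}: x={x:+.3f}, y={y:+.3f}, norm={math.hypot(x, y):.3f}")
# The norm grows every step because each update ignores the other player's
# move, mirroring how G and D can fail to converge jointly.
```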
