← ML Research Wiki / 2203.00667

Generative Adversarial Networks

Gilad Cohen, Raja Giryes (2021)

Paper Information
arXiv ID
2203.00667
Venue
International Conference on Computing Communication and Networking Technologies
Domain
computer vision, machine learning, artificial intelligence
SOTA Claim
Yes

Abstract

Generative Adversarial Networks (GANs) are very popular frameworks for generating high-quality data, and are widely used in both academia and industry across many domains. Arguably, their most substantial impact has been in computer vision, where they achieve state-of-the-art image generation. This chapter gives an introduction to GANs, discussing their principal mechanism and presenting some of the problems inherent in their training and evaluation. We focus on three issues: (1) mode collapse, (2) vanishing gradients, and (3) generation of low-quality images. We then list some architecture-variant and loss-variant GANs that remedy these challenges. Lastly, we present two examples of GANs applied to real-world problems: data augmentation and face image generation.

Summary

This paper discusses Generative Adversarial Networks (GANs), providing an overview of their mechanism, advantages, and inherent challenges. It outlines issues such as mode collapse, vanishing gradients, and the generation of low-quality images. The paper also presents variants and improvements to the original GAN architecture and loss functions that address these problems. Additionally, it explores real-world applications of GANs, particularly in data augmentation and the generation of face images.
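The vanishing-gradient issue mentioned above can be illustrated numerically. In the original minimax formulation, the generator minimizes log(1 − D(G(z))); when the discriminator confidently rejects a fake sample (D(G(z)) ≈ 0), the gradient of this loss with respect to the discriminator's logit vanishes, whereas the commonly used non-saturating alternative, −log D(G(z)), keeps a strong gradient. A minimal sketch (function names are our own, not from the paper):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def saturating_grad(a):
    # d/da log(1 - sigmoid(a)) = -sigmoid(a): vanishes as D(G(z)) -> 0
    return -sigmoid(a)

def non_saturating_grad(a):
    # d/da [-log sigmoid(a)] = sigmoid(a) - 1: stays near -1 as D(G(z)) -> 0
    return sigmoid(a) - 1.0

a = -6.0                       # discriminator logit for a bad fake: D ~ 0.0025
print(sigmoid(a))              # D(G(z)), close to 0
print(saturating_grad(a))      # ~ -0.0025: almost no learning signal
print(non_saturating_grad(a))  # ~ -0.9975: strong learning signal
```

Early in training, fakes are easy to reject, so the saturating loss gives the generator almost nothing to learn from; this is one motivation for the loss-variant GANs listed below.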

Methods

This paper employs the following methods:

  • GAN
  • Semi-supervised GAN (SGAN)
  • Conditional GAN (CGAN)
  • Deep Convolutional GAN (DCGAN)
  • Progressive GAN (ProGAN)
  • BigGAN
  • StyleGAN
  • Wasserstein GAN (WGAN)
  • Self-Supervised GAN (SSGAN)
  • Spectral Normalization GAN (SNGAN)
  • SphereGAN
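Several of the variants above differ mainly in the loss they optimize. As a hedged sketch (toy score values, not from the paper), the standard GAN discriminator loss operates on probabilities, while the WGAN critic operates on unbounded scores and estimates a Wasserstein distance:

```python
import numpy as np

def gan_d_loss(d_real, d_fake):
    # Standard GAN discriminator loss on probabilities in (0, 1):
    # maximize log D(x) + log(1 - D(G(z))), written here as a loss to minimize.
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def wgan_critic_loss(f_real, f_fake):
    # WGAN critic loss on unbounded scores: the critic maximizes
    # E[f(x)] - E[f(G(z))]; we minimize its negation.
    return -(np.mean(f_real) - np.mean(f_fake))

d_real = np.array([0.9, 0.8]); d_fake = np.array([0.1, 0.2])
print(gan_d_loss(d_real, d_fake))        # small: D separates real from fake well

f_real = np.array([2.0, 3.0]); f_fake = np.array([-1.0, 0.0])
print(wgan_critic_loss(f_real, f_fake))  # -3.0: large real/fake score gap
```

In the full WGAN the critic must additionally be kept (approximately) 1-Lipschitz, e.g. via weight clipping or a gradient penalty, which this sketch omits.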

Models Used

  • GAN
  • SGAN
  • CGAN
  • DCGAN
  • ProGAN
  • BigGAN
  • StyleGAN
  • WGAN
  • SSGAN
  • SNGAN
  • SphereGAN

Datasets

The following datasets were used in this research:

  • MNIST
  • ImageNet
  • CelebA

Evaluation Metrics

  • KL divergence
  • Jensen-Shannon divergence
  • Wasserstein distance
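The three quantities above measure how far the generated distribution is from the real one. A minimal sketch for discrete 1-D distributions (the distributions here are toy examples, not from the paper):

```python
import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions: asymmetric, >= 0
    return float(np.sum(p * np.log(p / q)))

def js(p, q):
    # Jensen-Shannon divergence: symmetric, smoothed version of KL
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def wasserstein_1d(p, q):
    # 1-D Wasserstein (earth mover's) distance between distributions on the
    # same unit-spaced support: area between the two CDFs
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))))

p = np.array([0.1, 0.4, 0.5])   # toy "real" distribution over 3 bins
q = np.array([0.2, 0.3, 0.5])   # toy "generated" distribution

print(kl(p, q), js(p, q), wasserstein_1d(p, q))
```

SciPy offers equivalents (`scipy.special.rel_entr`, `scipy.spatial.distance.jensenshannon`, which returns the square root of the JS divergence, and `scipy.stats.wasserstein_distance`). The Wasserstein distance underlies the WGAN training objective listed among the methods above.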

Results

  • Improved stability in GAN training
  • High-quality image generation
  • Addressed issues of mode collapse and vanishing gradients

Limitations

The authors identified the following limitations:

  • Instability in training
  • Mode collapse
  • Low-quality image generation

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
