Venue
International Conference on Computing Communication and Networking Technologies
Domain
computer vision, machine learning, artificial intelligence
Generative Adversarial Networks (GANs) are popular frameworks for generating high-quality data and are widely used in both academia and industry across many domains. Arguably, their most substantial impact has been in computer vision, where they achieve state-of-the-art image generation. This chapter introduces GANs by discussing their principal mechanism and presenting some of the problems inherent to their training and evaluation. We focus on three issues: (1) mode collapse, (2) vanishing gradients, and (3) generation of low-quality images. We then list architecture-variant and loss-variant GANs that remedy these challenges. Lastly, we present two examples of GANs applied to real-world problems: data augmentation and face image generation.
This paper discusses Generative Adversarial Networks (GANs), providing an overview of their mechanism, advantages, and inherent challenges. It outlines issues such as mode collapse, vanishing gradients, and the generation of low-quality images. The paper also presents variants and improvements to the original GAN architecture and loss functions that address these problems. Additionally, it explores real-world applications of GANs, particularly in data augmentation and the generation of face images.
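To make the adversarial mechanism concrete, below is a minimal training-loop sketch (assuming PyTorch; the network sizes, optimizer settings, and non-saturating BCE losses are illustrative choices, not taken from the paper):

```python
# Minimal GAN training sketch (assumed PyTorch API); all sizes are illustrative.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical latent and data dimensions

# Simple MLP stand-ins for the generator and discriminator.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: non-saturating loss, push D(G(z)) toward 1.
    z = torch.randn(batch, latent_dim)
    loss_g = bce(D(G(z)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Alternating these two steps is where the difficulties discussed in the paper arise: if the discriminator becomes too strong, the generator's gradients vanish; if the generator concentrates on a few outputs that fool the discriminator, mode collapse occurs.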
This paper employs the following methods:
- GAN
- Semi-supervised GAN (SGAN)
- Conditional GAN (CGAN)
- Deep Convolutional GAN (DCGAN) (a minimal generator sketch follows this list)
- Progressive GAN (PROGAN)
- BigGAN
- StyleGAN
- Wasserstein GAN (WGAN)
- Self Supervised GAN (SSGAN)
- Spectral Normalization GAN (SNGAN)
- SphereGAN
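Several of the listed methods (DCGAN, PROGAN, StyleGAN, BigGAN) are architecture-variant GANs. As a concrete illustration, here is a minimal DCGAN-style generator sketch (assuming PyTorch; the layer count and feature sizes are illustrative assumptions, not the paper's exact configuration):

```python
# Minimal DCGAN-style generator sketch (assumed PyTorch API); sizes are illustrative.
import torch.nn as nn

def dcgan_generator(latent_dim=100, feat=64, channels=3):
    # Transposed convolutions upsample a latent vector of shape (N, latent_dim, 1, 1)
    # to a 64x64 image, with BatchNorm and ReLU per the DCGAN guidelines.
    return nn.Sequential(
        nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
        nn.BatchNorm2d(feat * 8), nn.ReLU(True),
        nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat * 4), nn.ReLU(True),
        nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat * 2), nn.ReLU(True),
        nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat), nn.ReLU(True),
        nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
        nn.Tanh(),  # outputs scaled to [-1, 1]
    )
```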
The paper discusses the following divergence measures used in GAN objectives (standard definitions are sketched after the lists below):
- KL divergence
- Jensen-Shannon divergence
- Wasserstein distance
The paper reports the following outcomes:
- Improved stability in GAN training
- High-quality image generation
- Addressed issues of mode collapse and vanishing gradients
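For reference, standard definitions of the listed divergences (these are well-known formulas, not reproduced from the paper):

```latex
% KL divergence (asymmetric):
D_{\mathrm{KL}}(P \,\|\, Q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx
% Jensen-Shannon divergence, with M = (P + Q)/2 (underlies the original GAN objective):
D_{\mathrm{JS}}(P \,\|\, Q) = \tfrac{1}{2} D_{\mathrm{KL}}(P \,\|\, M) + \tfrac{1}{2} D_{\mathrm{KL}}(Q \,\|\, M)
% Wasserstein-1 distance in its Kantorovich-Rubinstein dual form (used by WGAN):
W_1(P, Q) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)]
```

Because the Wasserstein-1 distance stays finite and yields usable gradients even when the real and generated distributions have little overlap, loss-variant GANs such as WGAN use it in place of the Jensen-Shannon-based objective to mitigate vanishing gradients and improve training stability.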
The authors identified the following limitations:
- Instability in training
- Mode collapse
- Low-quality image generation
- Number of GPUs: None specified
- GPU Type: None specified