
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe and Christian Szegedy, Google Inc (2015)

Paper Information
  • arXiv ID: 1502.03167
  • Venue: International Conference on Machine Learning
  • Domain: Artificial Intelligence / Machine Learning / Computer Vision
  • SOTA Claim: Yes
  • Reproducibility: 7/10

Abstract

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
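
For reference, the per-mini-batch normalization described above is the batch normalizing transform (Algorithm 1 in the paper): for the values $x_{1 \ldots m}$ of an activation over a mini-batch $\mathcal{B}$, with learnable scale and shift parameters $\gamma$ and $\beta$ and a small constant $\epsilon$ added for numerical stability,

$$\mu_\mathcal{B} = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_\mathcal{B}^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \mu_\mathcal{B}\right)^2,$$

$$\hat{x}_i = \frac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}}, \qquad y_i = \mathrm{BN}_{\gamma,\beta}(x_i) = \gamma \hat{x}_i + \beta.$$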

Summary

The paper introduces Batch Normalization (BN), a technique designed to accelerate the training of deep neural networks by addressing internal covariate shift, the change in the distribution of each layer's inputs as the parameters of preceding layers are updated during training. By normalizing layer inputs over each mini-batch, BN permits higher learning rates and faster training while acting as a regularizer that, in some cases, removes the need for Dropout. The authors show that applying BN to a state-of-the-art image classification model greatly reduces the number of training steps needed to reach the same accuracy, and that an ensemble of batch-normalized networks reaches a 4.9% top-5 validation error on ImageNet. The mechanism of BN is detailed, including the learnable scale and shift parameters (γ and β) applied to the normalized values, which preserve the network's representational capacity and help keep activations out of the saturated regimes of sigmoid-like nonlinearities. The approach is validated empirically on MNIST and ImageNet, showing improved convergence and final performance.
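
Below is a minimal NumPy sketch of the training-time BN forward pass described above, assuming inputs of shape (batch, features); the function name is illustrative rather than the paper's code, and the running statistics the paper accumulates for inference are omitted.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalize a mini-batch x of shape (batch, features).

    gamma and beta are the learnable per-feature scale and shift
    parameters that preserve the layer's representational capacity.
    """
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # scale and shift: y = gamma * x_hat + beta

# Usage: a random mini-batch of 32 examples with 4 features.
x = np.random.randn(32, 4) * 5.0 + 3.0
gamma, beta = np.ones(4), np.zeros(4)
y = batch_norm_forward(x, gamma, beta)
print(y.mean(axis=0), y.std(axis=0))  # roughly 0 and 1 per feature
```

At inference time the paper replaces the mini-batch statistics with population estimates collected during training, so the output becomes a deterministic function of the input.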

Methods

This paper employs the following methods:

  • Batch Normalization

Models Used

  • Inception

Datasets

The following datasets were used in this research:

  • ImageNet
  • MNIST

Evaluation Metrics

  • Top-5 validation error
  • Accuracy

Results

  • Achieved 4.9% top-5 validation error on ImageNet (ensemble of batch-normalized networks)
  • Matched the baseline Inception accuracy using roughly 1/14 (about 7%) of the training steps
  • Reduced or, in some cases, eliminated the need for Dropout

Limitations

The authors identified the following limitations:

  • Increased number of parameters by 25% and computational cost by 30%

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

batch normalization, deep learning, neural networks, internal covariate shift, training acceleration
