Sigmoid Loss for Language Image Pre-Training

Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer (Google DeepMind, Zürich, Switzerland), 2023

Paper Information
arXiv ID
2303.15343
Venue
IEEE International Conference on Computer Vision (ICCV)
Domain
computer vision, natural language processing
Code
https://github.com/google-research/big_vision
Reproducibility
8/10

Abstract

We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that achieves 84.5% ImageNet zero-shot accuracy in two days. The disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and the negative-to-positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being sufficient. We release our models at https://github.com/google-research/big_vision and hope our research motivates further explorations in improving the quality and efficiency of language-image pre-training.

Summary

The paper introduces a pairwise sigmoid loss for language-image pre-training (SigLIP) that is more efficient than the standard softmax-based contrastive loss. Key findings include improved performance at smaller batch sizes and the ability to scale to much larger batch sizes, with a SigLiT model achieving 84.5% zero-shot accuracy on ImageNet after training on only four TPUv4 chips for two days. The study emphasizes the practical benefits of the sigmoid loss, such as reduced memory usage and a simple implementation that needs no global normalization over the batch. Comparing the sigmoid loss against the traditional softmax loss across various setups, the authors find that performance saturates at a moderate batch size of around 32k while still allowing significant efficiency gains in training. The models are released to encourage further research in language-image pre-training.
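
The core of the method is the pairwise sigmoid loss itself. The sketch below, written in JAX, follows the pseudocode given in the paper: every image-text pair in the batch contributes an independent binary term, with the matching pairs on the diagonal as positives and all other pairs as negatives, so no softmax normalization over the batch is needed. Function and variable names (sigmoid_loss, img_emb, txt_emb) are illustrative rather than taken from the released big_vision code.

```python
import jax
import jax.numpy as jnp

def sigmoid_loss(img_emb, txt_emb, t, b):
    """Pairwise sigmoid loss for a batch of n image-text pairs.

    img_emb, txt_emb: [n, d] L2-normalized embeddings.
    t: learnable log-temperature (scalar), b: learnable bias (scalar);
    the paper initializes these to log(10) and -10 respectively.
    """
    n = img_emb.shape[0]
    logits = img_emb @ txt_emb.T * jnp.exp(t) + b   # [n, n] pairwise similarities
    labels = 2.0 * jnp.eye(n) - 1.0                 # +1 on the diagonal, -1 off-diagonal
    # Each pair is an independent binary classification problem,
    # so the loss is a sum of log-sigmoids of signed logits, normalized by n.
    return -jnp.sum(jax.nn.log_sigmoid(labels * logits)) / n
```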

Methods

This paper employs the following methods:

  • Sigmoid Loss
  • Contrastive Learning
  • Locked-image Tuning (see the sketch after this list)
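
As a rough illustration of how Locked-image Tuning combines with the sigmoid loss, the sketch below freezes a pretrained image tower and trains only the text tower plus the temperature and bias, reusing sigmoid_loss from the sketch above. The linear "towers", parameter names, and toy shapes are placeholder assumptions for illustration; the actual models are a pretrained ViT image encoder and a transformer text encoder.

```python
import jax
import jax.numpy as jnp

def l2norm(x):
    return x / jnp.linalg.norm(x, axis=-1, keepdims=True)

def lit_loss(trainable, w_img_frozen, images, texts):
    """Locked-image Tuning: only `trainable` = (text weights, t, b) gets gradients."""
    w_txt, t, b = trainable
    # stop_gradient keeps the locked (frozen) image tower out of the backward pass.
    img_emb = jax.lax.stop_gradient(l2norm(images @ w_img_frozen))
    txt_emb = l2norm(texts @ w_txt)
    return sigmoid_loss(img_emb, txt_emb, t, b)   # sigmoid_loss from the sketch above

# Toy usage with random "features" standing in for real image and text inputs.
key = jax.random.PRNGKey(0)
images, texts = jax.random.normal(key, (8, 64)), jax.random.normal(key, (8, 32))
w_img_frozen = jax.random.normal(key, (64, 16))
trainable = (jax.random.normal(key, (32, 16)), jnp.log(10.0), jnp.array(-10.0))
grads = jax.grad(lit_loss)(trainable, w_img_frozen, images, texts)
```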

Models Used

  • SigLiT
  • SigLIP

Datasets

The following datasets were used in this research:

  • ImageNet
  • LiT
  • WebLI
  • XM3600

Evaluation Metrics

  • Zero-shot accuracy (see the sketch below)
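
Zero-shot accuracy is computed by embedding a text prompt for every class name, matching each image to its nearest class prompt, and scoring against the ground-truth labels. The sketch below illustrates this under the assumption of precomputed L2-normalized embeddings; the names zero_shot_accuracy, img_emb, class_txt_emb, and labels are hypothetical.

```python
import jax.numpy as jnp

def zero_shot_accuracy(img_emb, class_txt_emb, labels):
    """img_emb: [n, d] image embeddings; class_txt_emb: [k, d] embeddings of
    class-name prompts (e.g. "a photo of a {class}"); labels: [n] integer ids.
    Both embedding sets are assumed L2-normalized."""
    sims = img_emb @ class_txt_emb.T        # [n, k] cosine similarities
    preds = jnp.argmax(sims, axis=-1)       # nearest class prompt per image
    return jnp.mean(preds == labels)        # fraction of correct predictions
```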

Results

  • SigLiT achieves 84.5% ImageNet zero-shot accuracy with four TPUv4 chips in two days
  • 79.7% zero-shot accuracy on ImageNet using SigLiT
  • SigLIP achieves 71.0% zero-shot accuracy on ImageNet

Limitations

The authors identified the following limitations:

  • Performance saturates around a batch size of 32k, with diminishing returns observed beyond that point

Technical Requirements

  • Number of accelerators: 4
  • Accelerator type: TPUv4 chip

Keywords

sigmoid loss, contrastive learning, language-image pre-training, large batch training

Papers Using Similar Methods

External Resources