Venue
IEEE/CVF International Conference on Computer Vision (ICCV)
Domain
computer vision, natural language processing
We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that achieves 84.5% ImageNet zero-shot accuracy in two days. The disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and negative to positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being sufficient. We release our models at https://github.com/google-research/big_vision and hope our research motivates further explorations in improving the quality and efficiency of language-image pre-training.
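To make the abstract's claim that the loss "operates solely on image-text pairs" concrete, one way to write the per-batch objective is sketched below; the symbols are ours: $x_i$ and $y_j$ denote L2-normalized image and text embeddings, $t$ and $b$ a temperature and bias, and $z_{ij} = 1$ for matching pairs and $-1$ otherwise.

$$
\mathcal{L} = -\frac{1}{|\mathcal{B}|} \sum_{i=1}^{|\mathcal{B}|} \sum_{j=1}^{|\mathcal{B}|} \log \frac{1}{1 + e^{\,z_{ij}\,(-t\, x_i \cdot y_j \, - \, b)}}
$$

Each term depends only on a single image-text pair, so no batch-wide softmax normalization is required, which is what allows the batch size to be decoupled from the loss.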
The paper introduces a pairwise sigmoid loss for Language-Image Pre-training (SigLIP) that is more efficient than standard softmax-normalized contrastive losses. Key findings include better performance at smaller batch sizes and the ability to scale to much larger ones; a SigLiT model trained on only four TPUv4 chips for two days reaches 84.5% zero-shot accuracy on ImageNet. The study highlights practical benefits of the sigmoid loss, such as lower memory usage and a simpler implementation that requires no global normalization. Comparing the sigmoid loss against the traditional softmax loss across various setups, the authors show that performance saturates at a moderate batch size (32k) while the sigmoid loss still delivers significant training-efficiency gains. The authors released their models to encourage further research in language-image pre-training methodologies.
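As a concrete illustration of the loss described above, here is a minimal NumPy sketch of the pairwise sigmoid objective. The function name, fixed temperature/bias values, and toy shapes are our own illustrative choices (the paper treats the temperature and bias as learned scalars); this is not the authors' released implementation.

```python
import numpy as np

def pairwise_sigmoid_loss(img_emb, txt_emb, t=10.0, b=-10.0):
    """Sketch of the pairwise sigmoid loss for n aligned image-text pairs.

    img_emb, txt_emb: arrays of shape (n, d), assumed L2-normalized.
    t, b: temperature and bias (learnable scalars in the paper; fixed here).
    """
    n = img_emb.shape[0]
    logits = img_emb @ txt_emb.T * t + b        # (n, n) pairwise similarities
    labels = 2.0 * np.eye(n) - 1.0              # +1 on the diagonal, -1 off-diagonal
    # -log sigmoid(z) = log(1 + exp(-z)), computed stably with logaddexp.
    # Every term involves only one image-text pair, so no softmax over the batch.
    return np.logaddexp(0.0, -labels * logits).sum() / n

# Toy usage: 4 random, L2-normalized embedding pairs of dimension 8.
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(4, 8)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
print(pairwise_sigmoid_loss(img, txt))
```

Because each pair contributes its own independent term, the loss can be computed and accumulated chunk by chunk across devices, which is what makes very large batch sizes feasible.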
This paper employs the following methods:
- Sigmoid Loss
- Contrastive Learning
- Locked-image Tuning
The following datasets were used in this research:
- ImageNet
- LiT
- WebLI
- XM3600
The paper reports the following results:
- 84.5% ImageNet zero-shot accuracy with SigLiT, trained on four TPUv4 chips in two days
- 79.7% zero-shot accuracy on ImageNet using SigLiT
- 71.0% zero-shot accuracy on ImageNet using SigLIP
The authors identified the following limitations:
- Performance saturates around a 32k batch size, with quickly diminishing returns from further scaling
- Number of accelerators: 4
- Accelerator type: TPUv4
sigmoid loss
contrastive learning
language-image pre-training
large batch training