Venue
IEEE International Conference on Computer Vision
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures. The code and models are publicly available at https://github.com/microsoft/Swin-Transformer.
This paper introduces the Swin Transformer, a new vision Transformer architecture designed to address the challenges of adapting Transformer models to computer vision tasks. The Swin Transformer features a hierarchical design with a shifted windowing approach that gains efficiency by computing self-attention within non-overlapping local windows while still allowing connections across windows. The architecture is flexible, can model at various scales, and has linear computational complexity with respect to image size. It achieves state-of-the-art performance in image classification (87.3% top-1 accuracy on ImageNet-1K), object detection (58.7 box AP and 51.1 mask AP on COCO test-dev), and semantic segmentation (53.5 mIoU on ADE20K val). The consistent improvements across these tasks demonstrate the potential of Transformer-based models as vision backbones and suggest a path toward unified modeling across vision and language domains.
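To make the window-based attention concrete, the following is a minimal sketch, not the released implementation: self-attention is computed independently inside non-overlapping M x M windows, so the cost grows linearly with image size rather than quadratically. The function names (window_partition, window_reverse, window_self_attention), the randomly initialized toy projection, and the example sizes (7x7 windows on a 56x56x96 feature map with 3 heads) are illustrative assumptions; relative position bias, LayerNorm, and the MLP block of a full Swin block are omitted.

```python
# Minimal sketch of window-local self-attention (not the authors' code).
import torch
import torch.nn.functional as F


def window_partition(x, window_size):
    """Split a (B, H, W, C) feature map into (num_windows*B, M*M, C) windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)


def window_reverse(windows, window_size, H, W):
    """Inverse of window_partition: (num_windows*B, M*M, C) -> (B, H, W, C)."""
    B = windows.shape[0] // ((H // window_size) * (W // window_size))
    C = windows.shape[-1]
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)


def window_self_attention(x, window_size, num_heads):
    """Multi-head self-attention computed independently inside each window."""
    B, H, W, C = x.shape
    windows = window_partition(x, window_size)        # (B*nW, M*M, C)
    qkv = torch.nn.Linear(C, 3 * C)(windows)          # toy, randomly initialized projection
    q, k, v = qkv.chunk(3, dim=-1)
    head_dim = C // num_heads

    def split_heads(t):
        return t.view(t.shape[0], t.shape[1], num_heads, head_dim).transpose(1, 2)

    q, k, v = (split_heads(t) for t in (q, k, v))
    # Attention cost is O(M^4 * C) per window; the number of windows grows
    # linearly with image size, hence linear overall complexity.
    attn = F.softmax(q @ k.transpose(-2, -1) / head_dim ** 0.5, dim=-1)
    out = (attn @ v).transpose(1, 2).reshape(windows.shape[0], window_size * window_size, C)
    return window_reverse(out, window_size, H, W)      # back to (B, H, W, C)


if __name__ == "__main__":
    x = torch.randn(1, 56, 56, 96)   # stage-1-like resolution and channel count
    y = window_self_attention(x, window_size=7, num_heads=3)
    print(y.shape)                    # torch.Size([1, 56, 56, 96])
```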
This paper employs the following methods:
- Hierarchical Transformer
- Shifted Window Self-Attention (a minimal sketch follows this list)
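The sketch below illustrates the shifted-window step, assuming the cyclic-shift (torch.roll) formulation described in the paper: displacing the feature map by half a window before the next window-attention layer lets that layer mix tokens that lay in different windows in the previous layer. The function name and the identity stand-in for the window-attention layer are illustrative assumptions, and the attention masking the paper applies to wrapped-around regions is omitted.

```python
# Minimal sketch of the shifted-window (cyclic shift) step; not the authors' code.
import torch


def shifted_window_step(x, window_size, window_attention):
    """Apply a window-local attention layer on a cyclically shifted map.

    x: (B, H, W, C) feature map. window_attention: any callable mapping a
    (B, H, W, C) tensor to a (B, H, W, C) tensor using M x M windows.
    """
    shift = window_size // 2
    # Displace the map by half a window so the new windows straddle the
    # borders of the previous layer's windows (cross-window connection).
    x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    x = window_attention(x)
    # Undo the shift so spatial positions line up again for the residual branch.
    return torch.roll(x, shifts=(shift, shift), dims=(1, 2))


if __name__ == "__main__":
    x = torch.randn(1, 56, 56, 96)
    # identity stand-in for an actual window-attention layer
    y = shifted_window_step(x, window_size=7, window_attention=lambda t: t)
    print(y.shape)  # torch.Size([1, 56, 56, 96])
```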
The following datasets were used in this research:
- ImageNet-1K
- COCO
- ADE20K
The following metrics were reported:
- Top-1 accuracy
- box AP
- mask AP
- mIoU
The main results are:
- 87.3% top-1 accuracy on ImageNet-1K
- 58.7 box AP on COCO test-dev
- 51.1 mask AP on COCO test-dev
- 53.5 mIoU on ADE20K val
The authors identified the following limitations:
- None specified
Compute resources:
- Number of GPUs: None specified
- GPU Type: None specified
Keywords:
- Transformers
- hierarchical architecture
- shifted windows
- local self-attention