Jie Hu [email protected] Momenta, Li Shen [email protected] University of Oxford, Gang Sun [email protected] Momenta (2017)
This paper presents Squeeze-and-Excitation Networks (SENets), built from a novel architectural unit, the Squeeze-and-Excitation (SE) block, that enhances convolutional neural networks (CNNs) by adaptively recalibrating channel-wise feature responses. The SE block improves representational power by explicitly modeling interdependencies among channels, yielding networks that generalize well across datasets. The authors show that stacking SE blocks delivers consistent performance improvements at minimal additional computational cost, as demonstrated by their winning submission to the ILSVRC 2017 classification competition, which achieved a top-5 error of 2.251%, a roughly 25% relative improvement over the previous year's winning entry. Extensive evaluations on the ImageNet 2012 dataset highlight the effectiveness of SENets, showing consistent gains across architectures such as SE-ResNet, SE-Inception, and SE-ResNeXt, confirming their broad applicability in CNN design.
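To make the squeeze-and-excitation mechanism concrete, the following is a minimal numpy sketch of an SE block: global average pooling squeezes each channel to a scalar, a two-layer bottleneck with reduction ratio r (ReLU then sigmoid) produces per-channel weights, and the input feature maps are rescaled by those weights. The weight matrices `w1` and `w2` here are hypothetical random values for illustration; in a real network they are learned.

```python
import numpy as np

def se_block(x, w1, w2):
    """Minimal Squeeze-and-Excitation block on an input of shape (C, H, W).

    Squeeze: global average pooling collapses each channel to one scalar.
    Excitation: a two-layer bottleneck (FC -> ReLU -> FC -> sigmoid)
    yields a weight in (0, 1) per channel, used to rescale the input.
    """
    z = x.mean(axis=(1, 2))                # squeeze: (C,)
    s = np.maximum(0.0, w1 @ z)            # FC reduce + ReLU: (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # FC expand + sigmoid: (C,)
    return x * s[:, None, None]            # channel-wise recalibration

# Example with hypothetical random weights (reduction ratio r = 16)
rng = np.random.default_rng(0)
C, H, W, r = 32, 8, 8, 16
x = rng.standard_normal((C, H, W))
w1 = 0.1 * rng.standard_normal((C // r, C))   # reduce C -> C//r
w2 = 0.1 * rng.standard_normal((C, C // r))   # expand C//r -> C
y = se_block(x, w1, w2)
```

Because the sigmoid output lies in (0, 1), each channel of `y` is a shrunken copy of the corresponding channel of `x`; the block changes relative channel importance, not spatial structure.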