Fully Convolutional Networks for Semantic Segmentation

Jonathan Long, Evan Shelhamer, Trevor Darrell (UC Berkeley, 2014)

Paper Information

  • arXiv ID: 1411.4038
  • Venue: Computer Vision and Pattern Recognition
  • Domain: Computer vision
  • SOTA Claim: Yes
  • Code: Available
  • Reproducibility: 8/10

Abstract

Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [17], the VGG net [28], and GoogLeNet [29]) into fully convolutional networks and transfer their learned representations by fine-tuning [2] to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.

Summary

This paper presents Fully Convolutional Networks (FCNs) for semantic segmentation, showing that they exceed state-of-the-art performance without further machinery. The authors explain how FCNs can be adapted from existing classification networks (AlexNet, VGG Net, and GoogLeNet) and define a novel architecture that combines deep, coarse semantic information with shallow, fine appearance information. Their FCNs yield efficient pixel-wise predictions and improve segmentation results across several datasets, including PASCAL VOC, NYUDv2, and SIFT Flow. The paper details the training methodology, including in-network upsampling layers and a skip architecture that sharpens output detail by merging predictions from multiple layers. Experimental results confirm significant improvements in segmentation quality and a reduction in inference time compared to previous models.
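
Adapting a classification network into an FCN amounts to recasting its fully connected layers as convolutions, so the net outputs a coarse spatial map of class scores for inputs of any size. Below is a minimal sketch of this "convolutionalization" for VGG-16, assuming PyTorch and torchvision rather than the authors' original Caffe implementation; the module layout is illustrative, not the authors' exact code.

```python
# A minimal sketch of "convolutionalization", assuming PyTorch/torchvision
# (the paper's reference implementation is in Caffe).
import torch
import torch.nn as nn
from torchvision.models import vgg16

NUM_CLASSES = 21  # PASCAL VOC: 20 object classes + background

backbone = vgg16(weights=None)  # in practice, load ImageNet weights here

# Keep the convolutional feature extractor; it already accepts any input size.
features = backbone.features

# Recast the fully connected layers as convolutions: fc6 (which acted on a
# 7x7 feature map) becomes a 7x7 convolution, fc7 becomes a 1x1 convolution,
# and the classifier becomes a 1x1 convolution with one channel per class.
head = nn.Sequential(
    nn.Conv2d(512, 4096, kernel_size=7),
    nn.ReLU(inplace=True),
    nn.Dropout2d(),
    nn.Conv2d(4096, 4096, kernel_size=1),
    nn.ReLU(inplace=True),
    nn.Dropout2d(),
    nn.Conv2d(4096, NUM_CLASSES, kernel_size=1),
)

fcn = nn.Sequential(features, head)

# Arbitrary input size in, correspondingly sized (coarse) score map out.
x = torch.randn(1, 3, 500, 500)
coarse_scores = fcn(x)
print(coarse_scores.shape)  # torch.Size([1, 21, 9, 9])
```

Because every layer is now convolutional, one forward pass scores all pixels at once, which is what makes dense inference and whole-image training efficient.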

Methods

This paper employs the following methods:

  • Fully Convolutional Networks
  • Fine-tuning
  • Transfer Learning
  • Skip Architecture (see the sketch after this list)

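The skip architecture fuses, by summation, class scores computed from a shallow, fine layer (e.g. pool4) with a 2x in-network upsampling of the deeper, coarser prediction; the paper initializes upsampling layers to bilinear interpolation realized as a transposed convolution. Below is a minimal sketch of this fusion under those assumptions (PyTorch; variable names are illustrative, and the cropping/offset alignment of the reference implementation is omitted).

```python
# A minimal sketch of FCN-16s-style skip fusion, assuming PyTorch.
# `pool4` and `coarse` stand in for shallow features and the coarse
# stride-32 class scores produced by the network above.
import torch
import torch.nn as nn

def bilinear_kernel(channels: int, kernel_size: int) -> torch.Tensor:
    """Weights that make a transposed convolution perform bilinear
    upsampling; the paper initializes its upsampling layers this way."""
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = torch.arange(kernel_size, dtype=torch.float32)
    filt = 1 - (og - center).abs() / factor
    weight = torch.zeros(channels, channels, kernel_size, kernel_size)
    for c in range(channels):
        weight[c, c] = filt[:, None] * filt[None, :]
    return weight

NUM_CLASSES = 21
# 1x1 convolution scoring the shallower, finer pool4 features.
score_pool4 = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
# 2x in-network upsampling of the coarse prediction, bilinear-initialized.
upsample2x = nn.ConvTranspose2d(NUM_CLASSES, NUM_CLASSES, kernel_size=4,
                                stride=2, padding=1, bias=False)
upsample2x.weight.data.copy_(bilinear_kernel(NUM_CLASSES, 4))

pool4 = torch.randn(1, 512, 32, 32)           # fine, shallow features
coarse = torch.randn(1, NUM_CLASSES, 16, 16)  # coarse, deep scores

# Fuse by summation, as in the paper; a further upsampling stage
# (not shown) brings the fused scores back to input resolution.
fused = score_pool4(pool4) + upsample2x(coarse)
print(fused.shape)  # torch.Size([1, 21, 32, 32])
```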
Models Used

  • AlexNet
  • VGG Net
  • GoogLeNet

Datasets

The following datasets were used in this research:

  • PASCAL VOC
  • NYUDv2
  • SIFT Flow

Evaluation Metrics

  • Mean Intersection over Union (mean IU)
  • Pixel Accuracy
  • Mean Accuracy
  • Frequency Weighted IU
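
All four metrics derive from a per-class confusion matrix. With n_ij the number of pixels of true class i predicted as class j, t_i = Σ_j n_ij, and n_cl classes, the paper defines pixel accuracy as Σ_i n_ii / Σ_i t_i, mean accuracy as (1/n_cl) Σ_i n_ii / t_i, mean IU as (1/n_cl) Σ_i n_ii / (t_i + Σ_j n_ji − n_ii), and frequency weighted IU as (Σ_k t_k)⁻¹ Σ_i t_i n_ii / (t_i + Σ_j n_ji − n_ii). A minimal NumPy sketch of these definitions (assuming every class appears in the ground truth, so no division by zero):

```python
# The paper's four metrics computed from a confusion matrix C,
# where C[i, j] counts pixels of true class i predicted as class j.
import numpy as np

def segmentation_metrics(conf: np.ndarray) -> dict:
    n_ii = np.diag(conf).astype(np.float64)       # correctly labeled pixels
    t_i = conf.sum(axis=1).astype(np.float64)     # pixels of each true class
    pred_i = conf.sum(axis=0).astype(np.float64)  # pixels predicted per class
    iu = n_ii / (t_i + pred_i - n_ii)             # per-class intersection/union
    return {
        "pixel_accuracy": n_ii.sum() / t_i.sum(),
        "mean_accuracy": (n_ii / t_i).mean(),
        "mean_iu": iu.mean(),
        "freq_weighted_iu": (t_i * iu).sum() / t_i.sum(),
    }

# Toy example with 3 classes:
conf = np.array([[50,  2,  3],
                 [ 4, 40,  1],
                 [ 2,  3, 45]])
print(segmentation_metrics(conf))
```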

Results

  • State-of-the-art segmentation on PASCAL VOC, reaching 62.2% mean IU on the 2012 test set
  • A 20% relative improvement over the previous state of the art
  • Inference takes roughly one third of a second for a typical image

Limitations

The authors identified the following limitations:

  • Adding further skip connections yields diminishing returns in segmentation quality
  • Fine-tuning requires significant computational resources and time due to the large base networks

Technical Requirements

  • Number of GPUs: 1
  • GPU Type: NVIDIA Tesla K40c

Keywords

fully convolutional networks, semantic segmentation, deep neural networks, layer fusion, end-to-end learning
