Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation

Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, Zehuan Yuan (The University of Hong Kong; ByteDance), 2024

Paper Information

arXiv ID: 2406.06525
Venue: arXiv.org
Domain: computer vision
SOTA Claim: Yes
Code: https://github.com/FoundationVision/LlamaGen
Reproducibility: 8/10

Abstract

Codes and models: https://github.com/FoundationVision/LlamaGen

Figure 1: Image generation with vanilla autoregressive models. We show samples from our class-conditional image (top row) and text-conditional image (bottom row) generation models.

Summary

This paper presents LlamaGen, a family of Llama-style autoregressive models for scalable image generation that outperforms popular diffusion models. The authors argue that vanilla autoregressive models based on next-token prediction, without vision-specific inductive biases, can achieve state-of-the-art results in both class-conditional and text-conditional image generation. Key contributions include a high-quality image tokenizer with strong reconstruction quality, a scalable series of class-conditional image generation models built on the Llama architecture, and inference-speed optimizations obtained by serving the models with vLLM. Competitive performance is achieved by training on large datasets such as ImageNet and LAION-COCO with staged training strategies. The paper describes the architecture, training setup, and optimization techniques, reports experimental results, and includes qualitative assessments through generated samples.
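
The image tokenizer converts an image into a sequence of discrete codebook indices that the autoregressive model can then predict one by one. The following is a minimal, illustrative sketch of the vector-quantization step such a tokenizer relies on; the codebook size, code dimension, and downsampling ratio are placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ToyVectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup, the core of a VQ image tokenizer.

    Illustrative only: a real tokenizer wraps this with a convolutional
    encoder/decoder and trains with reconstruction, codebook, and
    commitment losses.
    """
    def __init__(self, codebook_size=16384, dim=8):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, z):
        # z: (B, H, W, dim) continuous encoder features
        flat = z.reshape(-1, z.shape[-1])                 # (B*H*W, dim)
        d = torch.cdist(flat, self.codebook.weight)       # distances to all codes
        indices = d.argmin(dim=1)                         # discrete token ids
        z_q = self.codebook(indices).reshape(z.shape)     # quantized features
        # Straight-through estimator so gradients flow back to the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, indices.reshape(z.shape[:-1])         # tokens: (B, H, W)

vq = ToyVectorQuantizer()
features = torch.randn(1, 16, 16, 8)   # e.g. a 256x256 image at 16x downsampling
_, tokens = vq(features)
print(tokens.shape)                    # torch.Size([1, 16, 16]) -> 256 tokens
```

Each image thus becomes a short grid of integer tokens, which is exactly the kind of sequence a language-model-style Transformer can be trained on.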

Methods

This paper employs the following methods (illustrated by a sketch after the list):

  • Autoregressive Model
  • Transformer
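
These two methods come together as a decoder-only Transformer that predicts image tokens left to right, conditioned on a class (or text) prefix. Below is a minimal, hypothetical sketch of that sampling loop; the model dimensions, vocabulary size, and conditioning scheme are illustrative stand-ins, not the released LlamaGen configuration.

```python
import torch
import torch.nn as nn

class TinyARImageModel(nn.Module):
    """Decoder-only Transformer over discrete image tokens (illustrative)."""
    def __init__(self, vocab_size=16384, num_classes=1000, dim=256, seq_len=256):
        super().__init__()
        # Class ids are appended to the token vocabulary and used as the first token.
        self.tok_emb = nn.Embedding(vocab_size + num_classes, dim)
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, vocab_size)
        self.vocab_size, self.seq_len = vocab_size, seq_len

    def forward(self, tokens):
        # tokens: (B, T); position 0 holds the class condition.
        T = tokens.size(1)
        x = self.tok_emb(tokens) + self.pos_emb[:, :T]
        # Additive causal mask: -inf above the diagonal blocks future positions.
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1).to(tokens.device)
        return self.head(self.blocks(x, mask=mask))        # (B, T, vocab_size)

@torch.no_grad()
def sample(model, class_id, temperature=1.0):
    tokens = torch.tensor([[model.vocab_size + class_id]])  # class token as prefix
    for _ in range(model.seq_len):
        logits = model(tokens)[:, -1] / temperature          # next-token distribution
        next_tok = torch.multinomial(logits.softmax(dim=-1), 1)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens[:, 1:]                                     # image tokens for the VQ decoder

model = TinyARImageModel()
print(sample(model, class_id=207).shape)                     # torch.Size([1, 256])
```

In the paper's setting the sampled token sequence is mapped back to pixels by the tokenizer's decoder; details such as classifier-free guidance and KV caching are omitted here for brevity.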

Models Used

  • Llama

Datasets

The following datasets were used in this research:

  • ImageNet
  • LAION-COCO

Evaluation Metrics

  • FID
  • Inception Score
  • sFID
  • Precision
  • Recall
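
Of these, FID is the headline metric: it measures the distance between Gaussian fits of Inception-v3 features of real and generated images (lower is better). As a reference, its standard definition is:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```

where (μ_r, Σ_r) and (μ_g, Σ_g) are the feature mean and covariance of the real and generated image sets, respectively.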

Results

  • Outperformed popular diffusion models, reaching 2.18 FID on class-conditional ImageNet 256×256
  • 326%–414% inference speedup when served with the vLLM framework
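
The speedup figure refers to serving the autoregressive models with the vLLM framework rather than a naive generation loop. For orientation only, this is how vLLM's offline generation API is typically invoked for a standard Hugging Face checkpoint; the paper's integration of its own image-token models into vLLM is custom and not reproduced here, and the model name below is just a placeholder.

```python
# Generic vLLM offline-inference usage (not the paper's custom image-token setup).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")                  # placeholder checkpoint
params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=256)
outputs = llm.generate(["A prompt describing the desired image"], params)
print(outputs[0].outputs[0].text)
```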

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: NVIDIA A100 80GB

Keywords

autoregressive models, image generation, diffusion models, image tokenizer, large language models

Papers Using Similar Methods

External Resources