Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, Zehuan Yuan
The University of Hong Kong; ByteDance (2024)
This paper presents LlamaGen, a family of image generation models that applies the next-token prediction paradigm of Llama-style large language models to visual generation, demonstrating that plain autoregressive models can outperform diffusion models. The authors argue that autoregressive next-token prediction can achieve state-of-the-art results in image generation, specifically in class-conditional and text-conditional settings. Key contributions include a high-quality image tokenizer with strong reconstruction capability, a scalable family of class-conditional image generation models built on the Llama architecture, and inference-speed optimizations based on the vLLM serving framework. Competitive performance is achieved by leveraging large-scale datasets and staged training strategies. The paper details the architecture, training setup, and optimization techniques, and reports quantitative results alongside qualitative assessments of generated samples.
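To make the next-token prediction paradigm concrete, below is a minimal PyTorch sketch of class-conditional autoregressive image generation: a VQ tokenizer (not shown) maps an image to a sequence of discrete codes, and a decoder-only transformer is trained to predict each code from its predecessors, exactly as in language modeling. The module name `ToyImageGPT`, the dimensions, and the vocabulary sizes are illustrative assumptions, not the paper's actual LlamaGen implementation, which follows the Llama architecture rather than the generic transformer used here.

```python
import torch
import torch.nn as nn

class ToyImageGPT(nn.Module):
    """Toy decoder-only transformer over discrete image codes (illustrative only)."""
    def __init__(self, vocab_size=16384, num_classes=1000, dim=256,
                 depth=4, heads=4, seq_len=256):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, dim)  # class condition as a prefix token
        self.token_embed = nn.Embedding(vocab_size, dim)   # embeddings for VQ codebook indices
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, class_ids, codes):
        # codes: (B, L) discrete indices produced by the image tokenizer
        x = torch.cat([self.class_embed(class_ids)[:, None],
                       self.token_embed(codes)], dim=1)
        x = x + self.pos_embed[:, : x.size(1)]
        # Causal mask: each position may only attend to earlier positions.
        L = x.size(1)
        mask = torch.triu(torch.full((L, L), float("-inf"), device=x.device), diagonal=1)
        return self.head(self.blocks(x, mask=mask))  # logits over the codebook

# Training step: shift-by-one cross-entropy, as in language modeling.
# The class prefix predicts the first code; position t predicts code t.
model = ToyImageGPT()
class_ids = torch.randint(0, 1000, (2,))
codes = torch.randint(0, 16384, (2, 256))
logits = model(class_ids, codes[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 16384), codes.reshape(-1))
```

At sampling time, codes are drawn one at a time from the logits and the completed sequence is decoded back to pixels by the tokenizer's decoder.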
This paper employs the following methods:
- Image tokenization via vector quantization, enabling high-quality reconstruction
- Autoregressive next-token prediction with Llama-based transformer models
- Serving-framework optimization with vLLM to speed up inference (a reference sketch follows this list)
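For the inference-speed item above, the snippet below shows vLLM's standard offline-generation API as a point of reference only; the paper's actual integration serves image tokens through a customized path that is not reproduced here, and the checkpoint name is a placeholder, not the paper's model.

```python
from vllm import LLM, SamplingParams

# Standard vLLM offline-inference API (text generation shown for reference).
# The model name below is a placeholder, not the paper's released checkpoint.
llm = LLM(model="meta-llama/Llama-2-7b-hf")
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)
outputs = llm.generate(["A photo of a corgi on the beach"], sampling)
print(outputs[0].outputs[0].text)
```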
The following datasets were used in this research:
- ImageNet (class-conditional image generation)
- LAION-COCO (text-conditional image generation)
The authors identified the following limitations: