SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis

Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, Robin Rombach (Stability AI Applied Research, 2023)

Paper Information
arXiv ID: 2307.01952
Venue: International Conference on Learning Representations
Domain: Artificial Intelligence, Computer Vision, Deep Learning, Generative Models
SOTA Claim: Yes
Code: Released (code and model weights are publicly available)
Reproducibility: 6/10

Abstract

We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: the increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights.

[Figure: samples conditioned on c_size = (64, 64), (128, 128), (256, 256), and (512, 512) for the prompts 'A robot painted as graffiti on a brick wall. A sidewalk is in front of the wall, and grass is growing out of cracks in the concrete.' and 'Panda mad scientist mixing sparkling chemicals, artstation.']

Summary

This paper presents SDXL, an enhanced latent diffusion model for text-to-image synthesis that improves on previous versions of Stable Diffusion with a larger UNet backbone, novel conditioning schemes, and a refinement model for better visual fidelity. In the spirit of transparency in large-model training and evaluation, the authors release code and model weights. They report substantial performance improvements over previous models and results competitive with state-of-the-art black-box image generators. Concretely, the changes are a roughly three-times-larger UNet, several new conditioning techniques (image size, cropping parameters, multi-aspect training), and a two-stage base-plus-refiner generation pipeline. User studies indicate that SDXL consistently outperforms prior versions, and the paper outlines future work to further enhance the model, address limitations such as text rendering, and reduce inference costs.
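As an illustration of the two-stage base-plus-refiner pipeline described above, here is a minimal sketch using the Hugging Face diffusers library; this is not code from the paper, and the checkpoint names refer to the publicly released SDXL 1.0 weights:

```python
# Minimal sketch of SDXL's two-stage pipeline via Hugging Face diffusers.
# Assumes a CUDA GPU; checkpoint names are the public Hub identifiers.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "Panda mad scientist mixing sparkling chemicals, artstation"

# Stage 1: the base model produces a latent image.
latents = base(prompt=prompt, output_type="latent").images

# Stage 2: the refiner reruns denoising on the same latents
# (post-hoc image-to-image), sharpening high-frequency detail.
image = refiner(prompt=prompt, image=latents).images[0]
image.save("sdxl_sample.png")
```

In the released pipelines the refiner is typically chained directly on the base model's latents, as above, which avoids an extra VAE decode/encode round trip between the two stages.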

Methods

This paper employs the following methods:

  • Latent Diffusion Model (LDM)
  • Diffusion-based Refinement Model
  • Multi-aspect Training
  • Conditioning on Image Size
  • Conditioning on Cropping Parameters (the size/crop/aspect conditioning is sketched after this list)
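The size, crop, and multi-aspect conditioning schemes all work the same way: each conditioning scalar (original height/width, crop top/left offset, target height/width) is encoded with a Fourier (sinusoidal) feature embedding, the embeddings are concatenated, and a projection of the result is added to the diffusion timestep embedding. The sketch below follows that recipe; the helper names and layer sizes are illustrative, not taken from the released code.

```python
# Illustrative sketch of SDXL-style micro-conditioning: each conditioning
# scalar gets a sinusoidal embedding; the concatenation is projected and
# added to the timestep embedding. Dimensions here are assumptions.
import math
import torch
import torch.nn as nn

def sinusoidal_embedding(x: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """Fixed sinusoidal (Fourier) features for a batch of scalars."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half).float() / half)
    angles = x.float().unsqueeze(-1) * freqs                           # (n, half)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (n, dim)

def micro_conditioning(scalars: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """scalars: (batch, 6) = (h_orig, w_orig, c_top, c_left, h_tgt, w_tgt)."""
    emb = sinusoidal_embedding(scalars.reshape(-1), dim)  # (batch*6, dim)
    return emb.reshape(scalars.shape[0], -1)              # (batch, 6*dim)

time_embed_dim = 1280  # assumption; a typical UNet time-embedding width
proj = nn.Sequential(
    nn.Linear(6 * 256, time_embed_dim),
    nn.SiLU(),
    nn.Linear(time_embed_dim, time_embed_dim),
)

# Example: a 512x512 training image, uncropped, with a 1024x1024 target size.
cond = micro_conditioning(torch.tensor([[512.0, 512.0, 0.0, 0.0, 1024.0, 1024.0]]))
t_emb = torch.randn(1, time_embed_dim)  # stand-in for the timestep embedding
t_emb = t_emb + proj(cond)              # the conditioning enters the UNet here
```

At inference time these conditionings let the user ask for a large, uncropped-looking image by setting the size inputs to the desired resolution and the crop offsets to (0, 0).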

Models Used

  • SDXL
  • Stable Diffusion

Datasets

The following datasets were used in this research:

  • ImageNet
  • COCO

Evaluation Metrics

  • FID
  • IS
  • CLIP Score (a minimal computation is sketched below)
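FID and IS are computed over sets of samples against reference statistics, while CLIP score is per image-prompt pair: the cosine similarity between CLIP embeddings of the image and its caption. Below is a minimal sketch using the transformers library; the checkpoint choice and the common x100 scaling convention are assumptions, not details taken from this paper.

```python
# Hedged sketch of CLIP score: cosine similarity between CLIP image and
# text embeddings. The checkpoint choice is an assumption.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)  # unit-normalize embeddings
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img * txt).sum().item()  # in [-1, 1]; often reported scaled by 100

score = clip_score(Image.open("sdxl_sample.png"),
                   "Panda mad scientist mixing sparkling chemicals, artstation")
```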

Results

  • SDXL consistently outperforms previous versions of Stable Diffusion in user-preference studies
  • Performance competitive with state-of-the-art black-box image generators
  • The refinement stage further improves image quality and local detail

Limitations

The authors identified the following limitations:

  • Challenges in synthesizing intricate structures like human hands
  • Falls short of perfect photorealism, e.g. in subtle lighting effects
  • Possibility of introducing social and racial biases from training data
  • Concept bleeding in generated images
  • Difficulty rendering long text accurately

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

SDXL, latent diffusion, text-to-image, high-resolution synthesis, conditioning techniques, refinement model
