Latte: Latent Diffusion Transformer for Video Generation

Xin Ma (Department of Data Science & AI, Faculty of Information Technology, Monash University, Australia; Shanghai Artificial Intelligence Laboratory, China), Yaohui Wang ([email protected]; Shanghai Artificial Intelligence Laboratory, China), Gengyun Jia (Nanjing University of Posts and Telecommunications, China), Xinyuan Chen (Shanghai Artificial Intelligence Laboratory, China), Ziwei Liu (S-Lab, Nanyang Technological University, Singapore), Yuan-Fang Li (Department of Data Science & AI, Faculty of Information Technology, Monash University, Australia), Cunjian Chen (Department of Data Science & AI, Faculty of Information Technology, Monash University, Australia), Yu Qiao (Shanghai Artificial Intelligence Laboratory, China) (2024)

Paper Information

  • arXiv ID: 2401.03048
  • Venue: arXiv.org
  • Domain: computer vision, machine learning
  • SOTA Claim: Yes
  • Reproducibility: 7/10

Abstract

We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model the video distribution in the latent space. In order to model the substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where it achieves results comparable to those of recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation. Project page: https://maxin-cn.github.io/latteproject.
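To make the decomposition concrete, below is a minimal PyTorch sketch of one way to interleave a spatial Transformer block (patches within a frame attend to each other) with a temporal one (tokens at the same spatial location attend across frames), in the spirit of Latte's factorized variants. The class name and tensor layout are our illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class FactorizedBlock(nn.Module):
    """Interleaved spatial/temporal attention over video tokens (illustrative)."""
    def __init__(self, dim, heads):
        super().__init__()
        self.spatial = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.temporal = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, frames, patches, dim) latent video tokens
        b, f, p, d = x.shape
        # Spatial block: patches within each frame attend to one another.
        x = self.spatial(x.reshape(b * f, p, d)).reshape(b, f, p, d)
        # Temporal block: tokens at one spatial location attend across frames.
        x = x.permute(0, 2, 1, 3).reshape(b * p, f, d)
        x = self.temporal(x).reshape(b, p, f, d).permute(0, 2, 1, 3)
        return x

tokens = torch.randn(2, 16, 256, 512)   # 16 frames, 256 patches, dim 512
out = FactorizedBlock(512, 8)(tokens)   # same shape: (2, 16, 256, 512)
```

Stacking such pairs replaces full attention over all f·p tokens, whose cost grows with (f·p)^2, with two cheaper passes that scale with p^2 and f^2 respectively, which is the motivation for the decomposed variants.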

Summary

The paper introduces Latte, a Latent Diffusion Transformer designed for video generation, which extracts spatio-temporal tokens from input videos and uses Transformer blocks to model the video distribution in latent space. It presents four efficient model variants that decompose the spatial and temporal dimensions of the input video to keep attention over the large token set tractable. A thorough evaluation demonstrates that Latte achieves state-of-the-art performance across four video generation datasets: FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. Additionally, Latte is adapted for text-to-video generation, yielding results competitive with existing T2V models. The paper also distills best practices for improving video generation quality, covering video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies.
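One of the best practices listed above concerns video clip patch embedding. As a hedged illustration of the simpler "uniform" scheme, the sketch below embeds each latent frame independently into non-overlapping patch tokens; the module name, channel counts, and patch size are assumptions for exposition, not values from the paper.

```python
import torch
import torch.nn as nn

class FramePatchEmbed(nn.Module):
    """Per-frame ("uniform") patch embedding of VAE latents (illustrative)."""
    def __init__(self, in_ch=4, dim=512, patch=2):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, z):
        # z: (batch, frames, channels, height, width) latent video clip
        b, f, c, h, w = z.shape
        t = self.proj(z.reshape(b * f, c, h, w))        # (b*f, dim, h/patch, w/patch)
        t = t.flatten(2).transpose(1, 2)                # (b*f, patches, dim)
        return t.reshape(b, f, t.shape[1], t.shape[2])  # (batch, frames, patches, dim)
```

The paper additionally studies an embedding that compresses along the temporal axis as well; the trade-off is fewer tokens versus per-frame detail.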

Methods

This paper employs the following methods; the sketch after the list shows how they fit together:

  • Latent Diffusion Model
  • Transformer
  • Variational Autoencoder
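For orientation, these three pieces typically combine as follows in a latent-diffusion training step: a frozen VAE compresses the clip into latents, noise is added at a randomly sampled timestep, and the Transformer is trained to predict that noise. The sketch below illustrates this under assumed interfaces; `vae`, `latte`, and the cosine schedule are hypothetical stand-ins, not the paper's training code.

```python
import torch
import torch.nn.functional as F

def training_step(video, vae, latte, num_steps=1000):
    """One latent-diffusion training step. `vae` and `latte` are hypothetical
    stand-ins for the pretrained autoencoder and the denoising Transformer."""
    b = video.shape[0]
    with torch.no_grad():
        z = vae.encode(video)                        # video -> latents (frozen VAE)

    t = torch.randint(0, num_steps, (b,), device=video.device)  # random timesteps
    alpha_bar = torch.cos(0.5 * torch.pi * t / num_steps) ** 2  # toy cosine schedule
    a = alpha_bar.view(b, *([1] * (z.dim() - 1)))               # broadcast to z's shape
    noise = torch.randn_like(z)
    z_t = a.sqrt() * z + (1 - a).sqrt() * noise                 # forward (noising) process

    pred = latte(z_t, t)                             # Transformer predicts the added noise
    return F.mse_loss(pred, noise)                   # standard epsilon-prediction objective
```

Sampling runs the loop in reverse: starting from pure noise, the Transformer iteratively denoises the latents, which the VAE decoder then maps back to pixel frames.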

Models Used

  • Latte

Datasets

The following datasets were used in this research:

  • FaceForensics
  • SkyTimelapse
  • UCF101
  • Taichi-HD

Evaluation Metrics

  • Fréchet Video Distance
  • Fréchet Inception Distance
  • Inception Score
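All three metrics are feature-space statistics; in particular, FID and FVD share one formula: fit a Gaussian to real and to generated feature sets (2D Inception features for FID, I3D video features for FVD) and compute the Fréchet distance between the two. A minimal NumPy/SciPy sketch of that computation:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two (n_samples, dim) feature arrays."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):       # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```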

Results

  • Achieves state-of-the-art performance on four standard video generation benchmarks: FaceForensics, SkyTimelapse, UCF101, and Taichi-HD
  • Introduces four efficient Transformer variants that decompose the spatial and temporal dimensions of input videos

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

Latent Diffusion Transformer, Video Generation, Diffusion Model, Transformers
