IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models

Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, Wei Yang (Tencent AI Lab, 2023)

Paper Information

  • arXiv ID: 2308.06721
  • Venue: arXiv.org
  • Domain: Not specified

Abstract

Recent years have witnessed the strong power of large text-to-image diffusion models, with their impressive generative capability to create high-fidelity images. However, it is very tricky to generate desired images using only a text prompt, as it often involves complex prompt engineering. An alternative to the text prompt is the image prompt; as the saying goes: "an image is worth a thousand words". Although existing methods of direct fine-tuning from pretrained models are effective, they require large computing resources and are not compatible with other base models, text prompts, and structural controls. In this paper, we present IP-Adapter, an effective and lightweight adapter to achieve image prompt capability for pretrained text-to-image diffusion models. The key design of our IP-Adapter is a decoupled cross-attention mechanism that separates cross-attention layers for text features and image features. Despite the simplicity of our method, an IP-Adapter with only 22M parameters can achieve comparable or even better performance than a fully fine-tuned image prompt model. As we freeze the pretrained diffusion model, the proposed IP-Adapter can be generalized not only to other custom models fine-tuned from the same base model, but also to controllable generation using existing controllable tools. With the benefit of the decoupled cross-attention strategy, the image prompt can also work well with the text prompt to achieve multimodal image generation. The project page is available at https://ip-adapter.github.io.

Summary

The paper presents IP-Adapter, a lightweight and effective image prompt adapter for pretrained text-to-image diffusion models. It uses a decoupled cross-attention mechanism to handle text and image features separately, enabling high-quality image generation from image prompts without the drawbacks of traditional fine-tuning. With only 22M trainable parameters, IP-Adapter achieves performance comparable to fully fine-tuned models, supports generation conditioned on both text and image prompts, and integrates easily with existing controllable-generation tools. The method was evaluated quantitatively and qualitatively, showing strong image quality and alignment with both image and multimodal prompts. The authors highlight that the adapter generalizes to other models fine-tuned from the same base, is reusable, and is compatible with additional control methods such as ControlNet.

Methods

This paper employs the following methods:

  • Decoupled cross-attention (see the sketch below)
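
The decoupled cross-attention adds a second pair of key/value projections for image features to each cross-attention layer of the frozen U-Net and sums the two attention outputs: Z_new = Attention(Q, K, V) + λ · Attention(Q, K', V'), where K', V' come from the image-prompt tokens. Below is a minimal single-head PyTorch sketch of this idea; the class name, shapes, and single-head simplification are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledCrossAttention(nn.Module):
    """Single-head sketch: separate K/V projections for text and image
    context; the two attention outputs are summed (Z = Z_text + scale * Z_img)."""

    def __init__(self, dim: int, ctx_dim: int, scale: float = 1.0):
        super().__init__()
        self.scale = scale  # lambda: weight of the image-prompt branch
        self.to_q = nn.Linear(dim, dim, bias=False)
        # text branch (frozen, reused from the base diffusion model)
        self.to_k_text = nn.Linear(ctx_dim, dim, bias=False)
        self.to_v_text = nn.Linear(ctx_dim, dim, bias=False)
        # image branch (the only newly trained projections)
        self.to_k_img = nn.Linear(ctx_dim, dim, bias=False)
        self.to_v_img = nn.Linear(ctx_dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, text_ctx, img_ctx):
        # x: (B, N, dim); text_ctx: (B, T, ctx_dim); img_ctx: (B, I, ctx_dim)
        q = self.to_q(x)
        # attend over text tokens with the shared query
        z_text = F.scaled_dot_product_attention(
            q, self.to_k_text(text_ctx), self.to_v_text(text_ctx))
        # attend over image-prompt tokens with the same query
        z_img = F.scaled_dot_product_attention(
            q, self.to_k_img(img_ctx), self.to_v_img(img_ctx))
        return self.to_out(z_text + self.scale * z_img)

# usage with toy shapes
x = torch.randn(2, 64, 320)          # latent tokens
text_ctx = torch.randn(2, 77, 768)   # CLIP text features
img_ctx = torch.randn(2, 4, 768)     # projected image-prompt tokens
attn = DecoupledCrossAttention(dim=320, ctx_dim=768)
out = attn(x, text_ctx, img_ctx)     # (2, 64, 320)
```

Because only the image-branch projections are trained while everything else stays frozen, setting scale to 0 recovers the original text-only model.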

Models Used

  • GLIDE
  • DALL-E 2
  • Imagen
  • Stable Diffusion
  • eDiff-I
  • RAPHAEL

Datasets

The following datasets were used in this research:

  • LAION-2B
  • COYO-700M

Evaluation Metrics

  • CLIP-I: cosine similarity between CLIP image embeddings of the generated image and the image prompt
  • CLIP-T: CLIPScore between the generated image and the text caption
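
Both metrics reduce to cosine similarities in CLIP embedding space. The following is a minimal sketch using the Hugging Face transformers CLIP wrappers; the checkpoint choice and function names are assumptions for illustration, not the paper's exact evaluation code.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# hypothetical checkpoint; the paper's exact CLIP variant may differ
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def clip_i(generated: Image.Image, prompt_image: Image.Image) -> float:
    """Cosine similarity of CLIP image embeddings (image-prompt fidelity)."""
    inputs = processor(images=[generated, prompt_image], return_tensors="pt")
    emb = model.get_image_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    return float(emb[0] @ emb[1])

@torch.no_grad()
def clip_t(generated: Image.Image, caption: str) -> float:
    """Cosine similarity of CLIP image and text embeddings (text fidelity)."""
    img = processor(images=generated, return_tensors="pt")
    txt = processor(text=[caption], return_tensors="pt", padding=True)
    ie = model.get_image_features(**img)
    te = model.get_text_features(**txt)
    ie = ie / ie.norm(dim=-1, keepdim=True)
    te = te / te.norm(dim=-1, keepdim=True)
    return float(ie[0] @ te[0])
```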

Results

  • IP-Adapter achieves performance comparable to fine-tuned models with only 22M parameters.
  • IP-Adapter allows for multimodal image generation using both text and image prompts (see the usage sketch below).
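
As a usage illustration, the released adapter can be attached to a frozen Stable Diffusion pipeline. The sketch below assumes the diffusers library's IP-Adapter integration (load_ip_adapter, set_ip_adapter_scale, ip_adapter_image), which may vary across library versions; the checkpoint names and reference-image URL are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# attach the pretrained adapter to the frozen base model
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # weight of the image-prompt branch

reference = load_image("https://example.com/reference.png")  # hypothetical image
result = pipe(
    prompt="a dog wearing sunglasses, best quality",  # text prompt
    ip_adapter_image=reference,                       # image prompt
    num_inference_steps=50,
).images[0]
result.save("multimodal_prompt.png")
```

Lowering the adapter scale shifts the output toward the text prompt; raising it shifts the output toward the reference image.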

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: 8
  • GPU Type: V100

External Resources

  • Project page: https://ip-adapter.github.io
  • arXiv: https://arxiv.org/abs/2308.06721