← ML Research Wiki / 2402.19473

Retrieval-Augmented Generation for AI-Generated Content: A Survey

Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, Bin Cui. Peking University (2024)

Paper Information
arXiv ID: 2402.19473
Venue: arXiv.org
Domain: Not specified

Abstract

Advancements in model algorithms, the growth of foundational models, and access to high-quality datasets have propelled the evolution of Artificial Intelligence Generated Content (AIGC). Despite its notable successes, AIGC still faces hurdles such as updating knowledge, handling long-tail data, mitigating data leakage, and managing high training and inference costs. Retrieval-Augmented Generation (RAG) has recently emerged as a paradigm to address such challenges. In particular, RAG introduces the information retrieval process, which enhances the generation process by retrieving relevant objects from available data stores, leading to higher accuracy and better robustness. In this paper, we comprehensively review existing efforts that integrate RAG techniques into AIGC scenarios. We first classify RAG foundations according to how the retriever augments the generator, distilling the fundamental abstractions of the augmentation methodologies for various retrievers and generators. This unified perspective encompasses all RAG scenarios, illuminating advancements and pivotal technologies that help with potential future progress. We also summarize additional enhancement methods for RAG, facilitating effective engineering and implementation of RAG systems. Then, from another view, we survey practical applications of RAG across different modalities and tasks, offering valuable references for researchers and practitioners. Furthermore, we introduce the benchmarks for RAG, discuss the limitations of current RAG systems, and suggest potential directions for future research. Github: https://github.com/PKU-DAIR/RAG-Survey.

Summary

This paper provides a comprehensive survey of Retrieval-Augmented Generation (RAG) techniques in AI-generated content (AIGC). It addresses advancements in model algorithms, foundational models, and the use of high-quality datasets contributing to AIGC evolution. The survey discusses the challenges faced by AIGC, including knowledge updating, long-tail data handling, data leakage, and high costs associated with training and inference. To overcome these hurdles, RAG is proposed as an effective paradigm, enhancing generative processes through information retrieval. The authors classify various RAG methodologies based on how retrievers augment generators, summarizing enhancements and practical applications across different modalities. The paper also discusses benchmarks for RAG, limitations of current systems, and suggests directions for future research.
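The retrieve-then-generate loop the survey describes can be illustrated with a minimal, self-contained sketch. The corpus, the term-overlap retriever, and the `generate` stub below are all illustrative stand-ins (a real system would use a learned retriever and an actual language model), not anything prescribed by the paper:

```python
import math
from collections import Counter

# Toy corpus standing in for an external data store (illustrative only).
CORPUS = [
    "RAG retrieves relevant documents to ground generation",
    "Diffusion models generate images from noise",
    "Dense retrievers embed queries and documents into vectors",
]

def bow(text):
    """Bag-of-words term frequencies for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    """Sparse retrieval: rank documents by term overlap with the query."""
    scored = sorted(corpus, key=lambda d: cosine(bow(query), bow(d)), reverse=True)
    return scored[:k]

def generate(prompt):
    """Stand-in for the generator (e.g. an LLM call); here it just echoes."""
    return f"[generated from]: {prompt}"

def rag(query):
    """Retrieve relevant context, then condition generation on it."""
    context = " ".join(retrieve(query, CORPUS, k=1))
    return generate(f"context: {context} question: {query}")

print(rag("how does RAG ground generation?"))
```

The key point the sketch captures is the interface: the retriever's output is injected into the generator's input, so the generator can draw on knowledge outside its parameters.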

Methods

This paper employs the following methods:

  • RAG
  • Transformer
  • LSTM
  • Diffusion Model
  • GAN
  • Sparse Retriever
  • Dense Retriever
  • kNN
  • Recursive Retrieval
  • Hybrid Retrieval
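Of the retrieval methods listed, kNN over dense embeddings can be sketched concisely. The hand-made 2-D vectors and document names below are assumptions for illustration; in practice the embeddings come from a learned dense encoder, and large-scale systems replace exact search with an approximate nearest-neighbour index:

```python
import math

# Toy "embeddings": in practice these come from a learned dense encoder;
# here they are hand-made 2-D vectors with made-up document names.
DOC_VECS = {
    "doc_about_rag": (0.9, 0.1),
    "doc_about_gans": (0.1, 0.9),
    "doc_about_retrieval": (0.8, 0.3),
}

def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn(query_vec, doc_vecs, k=2):
    """Exact k-nearest-neighbour search over the document embeddings."""
    return sorted(doc_vecs, key=lambda d: dist(query_vec, doc_vecs[d]))[:k]

# A query embedded near the RAG/retrieval documents.
print(knn((1.0, 0.0), DOC_VECS))  # → ['doc_about_rag', 'doc_about_retrieval']
```

Hybrid retrieval, also listed above, typically merges the ranked lists of a sparse retriever and a dense retriever of this kind.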

Models Used

  • GPT
  • LLAMA
  • DALL-E
  • Stable Diffusion
  • VisualGPT
  • Codex

Datasets

The following datasets were used in this research:

  • None specified

Evaluation Metrics

  • None specified

Results

  • Improvement in AIGC performance through RAG implementation
  • Enhanced accuracy and robustness in content generation
  • Broad applicability of RAG across multiple domains and tasks

Limitations

The authors identified the following limitations:

  • Noise in retrieval results
  • Extra overhead on retrieval processes
  • Alignment issues between retrievers and generators
  • Increased system complexity
  • Challenges with lengthy context updates

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
