Advancements in model algorithms, the growth of foundational models, and access to high-quality datasets have propelled the evolution of Artificial Intelligence Generated Content (AIGC). Despite its notable successes, AIGC still faces hurdles such as updating knowledge, handling long-tail data, mitigating data leakage, and managing high training and inference costs. Retrieval-Augmented Generation (RAG) has recently emerged as a paradigm to address these challenges. In particular, RAG introduces an information retrieval step that enhances generation by retrieving relevant objects from available data stores, leading to higher accuracy and better robustness. In this paper, we comprehensively review existing efforts that integrate RAG techniques into AIGC scenarios. We first classify RAG foundations according to how the retriever augments the generator, distilling the fundamental abstractions of the augmentation methodologies for various retrievers and generators. This unified perspective covers all RAG scenarios, illuminating advancements and pivotal technologies that can guide future progress. We also summarize additional enhancement methods for RAG, facilitating effective engineering and implementation of RAG systems. From another view, we then survey practical applications of RAG across different modalities and tasks, offering valuable references for researchers and practitioners. Furthermore, we introduce benchmarks for RAG, discuss the limitations of current RAG systems, and suggest potential directions for future research. Github: https://github.com/PKU-DAIR/RAG-Survey.
This paper provides a comprehensive survey of Retrieval-Augmented Generation (RAG) techniques in AI-generated content (AIGC). It notes that advances in model algorithms, foundational models, and access to high-quality datasets have driven the evolution of AIGC. The survey discusses the challenges faced by AIGC, including knowledge updating, long-tail data handling, data leakage, and the high costs associated with training and inference. To overcome these hurdles, RAG is proposed as an effective paradigm that enhances generative processes through information retrieval. The authors classify RAG methodologies based on how retrievers augment generators, and summarize enhancement methods and practical applications across different modalities and tasks. The paper also covers benchmarks for RAG, discusses limitations of current systems, and suggests directions for future research.
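The retrieve-then-generate pattern the survey describes can be sketched in a few lines. This is a toy illustration only: the `retrieve` function is a word-overlap (sparse-style) retriever, and `generate` merely stitches retrieved context into a prompt; the corpus, function names, and scoring are illustrative assumptions, not taken from any system in the survey.

```python
def retrieve(query, corpus, k=2):
    """Toy sparse retriever: rank documents by word overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query, context_docs):
    """Stand-in for a generator: condition the prompt on retrieved context."""
    context = " ".join(context_docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG retrieves relevant documents before generation.",
    "GANs pit a generator against a discriminator.",
    "Dense retrievers embed queries and documents into vectors.",
]
query = "How does RAG use retrieval?"
prompt = generate(query, retrieve(query, corpus))
```

In a real RAG system the retriever would query an indexed data store and the generator would be a trained model; the point here is only the control flow of augmenting generation with retrieved objects.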
This paper employs the following methods:
- RAG
- Transformer
- LSTM
- Diffusion Model
- GAN
- Sparse Retriever
- Dense Retriever
- kNN
- Recursive Retrieval
- Hybrid Retrieval
- GPT
- LLaMA
- DALL-E
- Stable Diffusion
- VisualGPT
- Codex
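Two of the techniques named above, dense retrieval and kNN, can be combined into a minimal nearest-neighbor lookup over embeddings. The 3-dimensional vectors and document ids below are hand-made placeholders (real dense retrievers use learned, high-dimensional embeddings); `knn_retrieve` is a hypothetical helper, not an API from the surveyed works.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def knn_retrieve(query_vec, index, k=1):
    """Return the ids of the k documents whose embeddings are closest to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Placeholder embedding index mapping document ids to vectors.
index = {
    "doc_rag": [0.9, 0.1, 0.0],
    "doc_gan": [0.0, 0.8, 0.2],
    "doc_llm": [0.1, 0.1, 0.9],
}
top = knn_retrieve([1.0, 0.0, 0.1], index, k=1)
```

Sparse retrievers score by term statistics instead of vector similarity; hybrid retrieval, also listed above, combines both signals.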
The paper reports the following findings:
- Improvement in AIGC performance through RAG implementation
- Enhanced accuracy and robustness in content generation
- Broad applicability of RAG across multiple domains and tasks
The authors identified the following limitations:
- Noise in retrieval results
- Extra overhead on retrieval processes
- Alignment issues between retrievers and generators
- Increased system complexity
- Challenges with lengthy context updates