Domain
Natural Language Processing, Information Retrieval, Data Summarization
The use of retrieval-augmented generation (RAG) to retrieve relevant information from an external knowledge source enables large language models (LLMs) to answer questions over private and/or previously unseen document collections. However, RAG fails on global questions directed at an entire text corpus, such as "What are the main themes in the dataset?", since this is inherently a query-focused summarization (QFS) task, rather than an explicit retrieval task. Prior QFS methods, meanwhile, fail to scale to the quantities of text indexed by typical RAG systems. To combine the strengths of these contrasting methods, we propose a Graph RAG approach to question answering over private text corpora that scales with both the generality of user questions and the quantity of source text to be indexed. Our approach uses an LLM to build a graph-based text index in two stages: first to derive an entity knowledge graph from the source documents, then to pre-generate community summaries for all groups of closely-related entities. Given a question, each community summary is used to generate a partial response, before all partial responses are again summarized in a final response to the user. For a class of global sensemaking questions over datasets in the 1 million token range, we show that Graph RAG leads to substantial improvements over a naïve RAG baseline for both the comprehensiveness and diversity of generated answers. An open-source, Python-based implementation of both global and local Graph RAG approaches is forthcoming at https://aka.ms/graphrag.

Preprint. Under review.
This paper introduces a novel Graph RAG approach to query-focused summarization (QFS), addressing the limitations of existing retrieval-augmented generation (RAG) methods on global questions that require comprehensive understanding of large text corpora. Using large language models (LLMs), the proposed method constructs a graph-based text index from source documents and pre-generates community summaries, which are then used to produce partial responses that are combined into a final answer to each user query. The authors demonstrate that their approach significantly outperforms traditional RAG methods in the comprehensiveness and diversity of generated answers, while also reducing token costs. They evaluate their method on two datasets, podcast transcripts and news articles, finding that it effectively supports human sensemaking over large collections of documents. The implementation is forthcoming as an open-source tool.
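The query-time map-reduce step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `ask_llm` is a placeholder for a real LLM call, and the prompts and demo summaries are invented.

```python
# Minimal sketch of Graph RAG's query-time map-reduce step.
# `ask_llm` is a placeholder for a real LLM call; in the full system the
# community summaries come from the pre-built graph index.
def answer_global_question(question, community_summaries, ask_llm):
    # Map: each community summary independently yields a partial answer.
    partials = [
        ask_llm(f"Summary:\n{s}\n\nQuestion: {question}\nPartial answer:")
        for s in community_summaries
    ]
    # Reduce: all partial answers are summarized into one final response.
    joined = "\n\n".join(partials)
    return ask_llm(f"Combine these partial answers to '{question}':\n{joined}")

# Trivial stand-in LLM so the sketch runs without any model: it just
# echoes the last line of the prompt, truncated to 60 characters.
demo_llm = lambda prompt: prompt.splitlines()[-1][:60]

final = answer_global_question(
    "What are the main themes?",
    ["Community A covers mergers.", "Community B covers lawsuits."],
    demo_llm,
)
print(final)
```

In the real system the map calls are independent, so they can run in parallel across community summaries, which is what lets the approach scale with corpus size.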
This paper employs the following methods:
- Graph RAG
- community detection
- map-reduce summarization
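The community-detection step in the methods above can be illustrated with a toy label-propagation routine. This is only a pure-Python stand-in for the hierarchical community detection used in Graph RAG, and the entity and edge names below are invented:

```python
# Toy community detection via label propagation: each entity repeatedly
# adopts the most common label among its neighbors, so densely connected
# groups converge to shared labels. Illustrative only; not the algorithm
# used in the paper's implementation.
import random
from collections import Counter, defaultdict

def label_propagation(edges, rounds=10, seed=0):
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    labels = {node: node for node in adj}  # each node starts in its own community
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        for node in nodes:
            counts = Counter(labels[n] for n in adj[node])
            labels[node] = counts.most_common(1)[0][0]  # adopt the majority label
    groups = defaultdict(set)
    for node, label in labels.items():
        groups[label].add(node)
    return list(groups.values())

# Invented entity graph: two tight clusters joined by one weak bridge.
edges = [
    ("Alice", "Bob"), ("Bob", "Acme"), ("Alice", "Acme"),    # cluster 1
    ("Carol", "Dave"), ("Dave", "Beta"), ("Carol", "Beta"),  # cluster 2
    ("Acme", "Beta"),                                        # bridge
]
communities = label_propagation(edges)
print(communities)
```

In Graph RAG, each detected community of closely-related entities would then be summarized by the LLM to produce the pre-generated community summaries used at query time.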
The following datasets were used in this research:
- Podcast transcripts
- News articles (from the MultiHop-RAG benchmark)
The following evaluation metrics were used:
- Comprehensiveness
- Diversity
- Empowerment
- Directness
The following results were reported:
- Graph RAG leads to substantial improvements over a naïve RAG baseline for both the comprehensiveness and diversity of answers
- Requires fewer context tokens compared to traditional summarization methods
The authors identified the following limitations:
- Evaluation limited to specific datasets and types of queries
- Need for further validation of sensemaking questions and target metrics with end users
- Consideration of costs and practicalities in building graph indices for real-world applications
The following compute resources were reported:
- Number of GPUs: None specified
- GPU Type: None specified
The paper covers the following topics:
- Graph RAG
- Query-focused Summarization
- Knowledge Graphs
- Large Language Models
- Community Detection
- Map-Reduce Summarization