
When to use Graphs in RAG: A Comprehensive Analysis for Graph Retrieval-Augmented Generation

Zhishang Xiang, Chuan-Yu Wu, Qinggang Zhang, Shengyuan Chen, Zijin Hong, Xiao Huang, Jinsong Su
arXiv.org | 2025
GraphRAG does not universally outperform vanilla RAG, and its benefits are task-dependent; this paper introduces GraphRAG-Bench to systematically identify when and why graph-structured retrieval provides measurable advantages over traditional RAG.

Problem Statement

GraphRAG has gained significant attention as a way to enhance LLMs with structured external knowledge, yet empirical studies increasingly show it underperforms vanilla RAG on many real-world tasks. There is no principled benchmark or systematic analysis to determine which task types and conditions actually benefit from graph structures. Practitioners lack actionable guidelines for deciding when to invest in the overhead of graph construction and retrieval.

Key Novelty

  • Introduction of GraphRAG-Bench, a comprehensive benchmark spanning tasks of increasing difficulty: fact retrieval, complex reasoning, contextual summarization, and creative generation
  • Systematic end-to-end evaluation covering the full GraphRAG pipeline—graph construction, knowledge retrieval, and final generation—rather than isolated components
  • Empirical guidelines identifying the specific conditions (task types, knowledge structure) under which GraphRAG surpasses traditional RAG and the underlying reasons for its success or failure

Evaluation Highlights

  • GraphRAG surpasses vanilla RAG on tasks requiring hierarchical knowledge retrieval and multi-hop reasoning over deep context, but underperforms on simpler fact retrieval and creative generation tasks
  • Benchmark covers a diverse, multi-difficulty dataset enabling fine-grained comparison across graph construction strategies and retrieval methods within a unified evaluation framework

Breakthrough Assessment

6/10 The paper makes a solid, practically valuable contribution by providing the first systematic benchmark and conditional analysis of GraphRAG, directly challenging overgeneralized claims about its effectiveness. However, it is primarily an empirical evaluation study rather than a new algorithmic advance.

Methodology

  1. Construct GraphRAG-Bench with a multi-difficulty dataset covering four task categories (fact retrieval, complex reasoning, contextual summarization, creative generation) to stress-test graph-based retrieval across diverse scenarios
  2. Implement and evaluate multiple GraphRAG systems end-to-end, comparing graph construction approaches, retrieval strategies, and generation quality against vanilla RAG baselines on the same tasks
  3. Analyze results to identify the conditions—such as hierarchical knowledge structure and multi-hop reasoning requirements—that predict GraphRAG superiority, and synthesize practical deployment guidelines
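The comparison loop implied by the methodology can be sketched as a small evaluation harness. This is an illustrative sketch, not the benchmark's actual code: the `Example` record, the pipeline/scorer interfaces, and the task names are assumptions modeled on the four task categories described above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Task categories from the benchmark; interfaces below are illustrative.
TASKS = ["fact_retrieval", "complex_reasoning",
         "contextual_summarization", "creative_generation"]

@dataclass
class Example:
    question: str
    reference: str
    task: str

def evaluate(pipeline: Callable[[str], str],
             dataset: List[Example],
             score: Callable[[str, str], float]) -> Dict[str, float]:
    """Average score per task category for one RAG pipeline."""
    totals: Dict[str, List[float]] = {t: [] for t in TASKS}
    for ex in dataset:
        answer = pipeline(ex.question)
        totals[ex.task].append(score(answer, ex.reference))
    return {t: sum(v) / len(v) for t, v in totals.items() if v}

def delta(graph_scores: Dict[str, float],
          vanilla_scores: Dict[str, float]) -> Dict[str, float]:
    """Per-task difference: positive means GraphRAG wins on that task type."""
    return {t: graph_scores[t] - vanilla_scores[t]
            for t in graph_scores if t in vanilla_scores}
```

Running `evaluate` once per pipeline and then `delta` on the two score maps produces exactly the kind of per-task comparison shown in the Results section.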

System Components

GraphRAG-Bench Dataset

A multi-difficulty benchmark dataset with tasks spanning fact retrieval, complex reasoning, contextual summarization, and creative generation to comprehensively stress-test GraphRAG systems

Graph Construction Evaluation Module

Assesses the quality and structure of knowledge graphs built from source documents as the first stage of the pipeline
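In a typical GraphRAG pipeline, this stage turns extracted (subject, relation, object) triples into a knowledge graph and reports structural statistics. A minimal sketch, assuming triples have already been extracted (in practice an LLM-based extractor produces them); the adjacency representation and metrics are illustrative, not the paper's implementation.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def build_graph(triples: List[Triple]) -> Dict[str, List[Tuple[str, str]]]:
    """Adjacency list mapping each entity to its (relation, neighbor) pairs."""
    graph: Dict[str, List[Tuple[str, str]]] = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
        graph[obj].append((rel, subj))  # treat edges as undirected for retrieval
    return graph

def graph_stats(graph: Dict[str, List[Tuple[str, str]]]) -> Dict[str, float]:
    """Simple structural metrics a construction-quality module might report."""
    n = len(graph)
    m = sum(len(v) for v in graph.values()) // 2
    return {"entities": n, "edges": m, "avg_degree": 2 * m / n if n else 0.0}
```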

Knowledge Retrieval Evaluation Module

Measures how effectively graph-based retrieval surfaces relevant, hierarchically structured knowledge compared to dense/sparse vector retrieval in vanilla RAG
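The core retrieval difference can be illustrated with a breadth-first expansion from seed entities, which is the operation dense top-k retrieval cannot replicate: a second-hop entity never has to be lexically or semantically similar to the query. A hedged sketch over the adjacency-list representation assumed above; the fact formatting and hop limit are illustrative choices.

```python
from collections import deque
from typing import Dict, List, Set, Tuple

def multi_hop_retrieve(graph: Dict[str, List[Tuple[str, str]]],
                       seeds: List[str], max_hops: int = 2) -> Set[str]:
    """Collect relational facts within max_hops of the seed entities."""
    seen: Set[str] = set(seeds)
    facts: Set[str] = set()
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # do not expand past the hop budget
        for rel, nbr in graph.get(node, []):
            facts.add(f"{node} --{rel}--> {nbr}")
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return facts
```

With `max_hops=1` this degenerates to neighborhood lookup, roughly comparable to vector retrieval's reach; raising the hop count is what surfaces the chained, hierarchical knowledge the benchmark's hardest tasks require.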

End-to-End Generation Assessment

Evaluates final LLM output quality across all task types, enabling attribution of performance differences to specific pipeline stages

Conditional Analysis Framework

Systematic comparison logic that identifies task-type and knowledge-structure conditions under which GraphRAG outperforms or underperforms vanilla RAG
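The conditional logic can be sketched as a simple routing rule. The feature names and decision order below are a hypothetical encoding of the guidelines summarized in this page (hierarchical knowledge plus multi-hop reasoning favors GraphRAG; creative or simple factoid tasks favor vanilla RAG), not the paper's actual framework.

```python
from typing import NamedTuple

class TaskProfile(NamedTuple):
    hierarchical_knowledge: bool  # corpus organized by taxonomy or part/whole?
    multi_hop: bool               # answering requires chaining relations?
    creative: bool                # open-ended generation, no single answer?

def choose_pipeline(p: TaskProfile) -> str:
    """Route a task profile to a retrieval architecture (illustrative rule)."""
    if p.creative:
        return "vanilla_rag"  # graph structure adds overhead, little signal
    if p.multi_hop:
        return "graph_rag"    # relational chaining favors graph retrieval
    return "vanilla_rag"      # simple fact lookup: dense retrieval suffices
```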

Results

| Task Type | Vanilla RAG | GraphRAG | Delta |
|---|---|---|---|
| Hierarchical Knowledge Retrieval | Lower accuracy | Higher accuracy | GraphRAG wins |
| Multi-hop Complex Reasoning | Lower coherence | Higher coherence | GraphRAG wins |
| Simple Fact Retrieval | Competitive/better | Underperforms | Vanilla RAG wins |
| Creative Generation | Competitive/better | Underperforms | Vanilla RAG wins |
| Contextual Summarization | Moderate | Moderate-to-better | Marginal GraphRAG advantage |

Key Takeaways

  • Do not default to GraphRAG for all tasks: it provides clear benefits only when the underlying knowledge is hierarchically structured and the task demands multi-hop or relational reasoning
  • For simple fact retrieval and creative generation tasks, vanilla RAG is likely to be more efficient and equally or more effective, avoiding unnecessary graph construction overhead
  • Use GraphRAG-Bench as a diagnostic tool to profile your specific use case before committing to a graph-based pipeline, and consult the paper's conditional guidelines to match retrieval architecture to task requirements

Abstract

Graph retrieval-augmented generation (GraphRAG) has emerged as a powerful paradigm for enhancing large language models (LLMs) with external knowledge. It leverages graphs to model the hierarchical structure between specific concepts, enabling more coherent and effective knowledge retrieval for accurate reasoning. Despite its conceptual promise, recent studies report that GraphRAG frequently underperforms vanilla RAG on many real-world tasks. This raises a critical question: Is GraphRAG really effective, and in which scenarios do graph structures provide measurable benefits for RAG systems? To address this, we propose GraphRAG-Bench, a comprehensive benchmark designed to evaluate GraphRAG models on both hierarchical knowledge retrieval and deep contextual reasoning. GraphRAG-Bench features a comprehensive dataset with tasks of increasing difficulty, covering fact retrieval, complex reasoning, contextual summarization, and creative generation, and a systematic evaluation across the entire pipeline, from graph construction and knowledge retrieval to final generation. Leveraging this novel benchmark, we systematically investigate the conditions when GraphRAG surpasses traditional RAG and the underlying reasons for its success, offering guidelines for its practical application. All related resources and analyses are collected for the community at https://github.com/GraphRAG-Bench/GraphRAG-Benchmark.
