Graph Counselor: Adaptive Graph Exploration via Multi-Agent Synergy to Enhance LLM Reasoning
Problem Statement
Existing GraphRAG methods rely on a single agent with fixed iterative patterns, making them unable to adaptively capture multi-level textual, structural, and degree information in graph data. Additionally, preset reasoning schemes prevent dynamic adjustment of reasoning depth and precise semantic correction, limiting factual accuracy in specialized domains. These limitations reduce LLM performance on complex graph reasoning tasks requiring multi-hop inference and nuanced knowledge integration.
Key Novelty
- Adaptive Graph Information Extraction Module (AGIEM): A three-agent collaborative system (Planning, Thought, Execution) that dynamically models complex graph structures and adjusts multi-level information extraction strategies at runtime
- Self-Reflection with Multiple Perspectives (SR) module: Combines self-reflection with backward reasoning to correct semantic inconsistencies and improve accuracy of final reasoning outputs
- Multi-agent synergy for GraphRAG: Replaces monolithic single-agent graph traversal with a specialized, role-separated multi-agent pipeline that adapts reasoning depth to query complexity
Evaluation Highlights
- Graph Counselor outperforms existing GraphRAG baselines across multiple graph reasoning benchmarks, demonstrating higher reasoning accuracy on tasks requiring multi-level dependency modeling
- Improved generalization ability across diverse graph reasoning tasks compared to fixed-pattern single-agent methods, validating the adaptive extraction and reflection mechanisms
Methodology
- Step 1 - Adaptive Graph Information Extraction (AGIEM): The Planning Agent decomposes the input query into sub-goals and determines which graph elements to explore; the Thought Agent reasons about multi-level graph structure (textual, structural, degree); the Execution Agent retrieves and aggregates relevant subgraph information accordingly
- Step 2 - Dynamic Reasoning Depth Adjustment: Based on intermediate outputs from the three agents, the system adaptively decides whether further graph traversal is needed, avoiding fixed-depth limitations of prior methods
- Step 3 - Self-Reflection with Multiple Perspectives (SR): The system applies backward reasoning over the generated answer, checking semantic consistency and factual alignment with retrieved graph evidence, then refines the output iteratively until convergence
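The three steps above can be sketched as a small adaptive loop. This is a minimal illustration under assumed interfaces, not the paper's actual code: the toy graph, the agent functions, and the stopping rule are all placeholders (a real system would back each agent with an LLM call over textual, structural, and degree features).

```python
from dataclasses import dataclass, field

# Toy knowledge graph: node -> (text, neighbors). Stands in for a real graph
# carrying textual, structural, and degree information.
GRAPH = {
    "paper_A": ("GraphRAG survey", ["paper_B"]),
    "paper_B": ("Multi-agent reasoning", ["paper_C"]),
    "paper_C": ("Self-reflection methods", []),
}

@dataclass
class State:
    query: str
    frontier: list = field(default_factory=list)
    evidence: list = field(default_factory=list)

def planning_agent(state: State) -> list:
    # Decompose the query into sub-goals; here, simply the current frontier.
    return state.frontier or ["paper_A"]

def thought_agent(state: State, subgoals: list) -> list:
    # Reason over what is worth exploring: skip nodes already visited.
    visited = {node for node, _ in state.evidence}
    return [g for g in subgoals if g not in visited]

def execution_agent(state: State, targets: list) -> None:
    # Retrieve node text and expand the frontier along graph edges.
    next_frontier = []
    for node in targets:
        text, neighbors = GRAPH[node]
        state.evidence.append((node, text))
        next_frontier.extend(neighbors)
    state.frontier = next_frontier

def agiem(query: str, max_depth: int = 5) -> State:
    state = State(query=query)
    for _ in range(max_depth):
        subgoals = planning_agent(state)
        targets = thought_agent(state, subgoals)
        if not targets:  # adaptive stop: nothing new worth exploring
            break
        execution_agent(state, targets)
    return state

state = agiem("What methods build on GraphRAG?")
print([node for node, _ in state.evidence])
```

The key design point mirrored here is Step 2: traversal depth is decided at runtime by the agents' intermediate outputs (an empty target set halts exploration), rather than by a preset iteration count.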
System Components
- Planning Agent: Decomposes user queries into structured sub-goals and determines the scope and strategy for graph exploration
- Thought Agent: Reasons over multi-level graph properties, including textual node/edge content, structural topology, and node degree, to guide meaningful information extraction
- Execution Agent: Executes the retrieval and aggregation of graph substructures based on the plans and thoughts provided by the other agents
- Adaptive Graph Information Extraction Module (AGIEM): The unified orchestration module combining the three agents to enable dynamic, multi-level graph traversal and information extraction
- Self-Reflection with Multiple Perspectives (SR): Post-generation module that uses backward reasoning and multi-perspective evaluation to detect and correct semantic errors in reasoning outputs
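The SR component's backward-reasoning idea can be illustrated with a short sketch: check each claim in a drafted answer against the retrieved evidence, and refine until the answer is consistent. The claim representation and the refinement rule below are simplistic placeholders (the paper's module would use LLM-driven reflection), not the actual implementation.

```python
def backward_check(answer_claims: list, evidence: set) -> list:
    # Backward reasoning: for each claim in the answer, look for supporting
    # evidence; return the claims that remain unsupported.
    return [c for c in answer_claims if c not in evidence]

def self_reflect(answer_claims: list, evidence: set, max_rounds: int = 3) -> list:
    for _ in range(max_rounds):
        unsupported = backward_check(answer_claims, evidence)
        if not unsupported:  # converged: every claim is grounded in evidence
            break
        # Refinement step: drop unsupported claims (a real system would have
        # the LLM rewrite or re-retrieve instead of simply deleting).
        answer_claims = [c for c in answer_claims if c not in unsupported]
    return answer_claims

draft = ["GraphRAG improves factual accuracy", "GraphRAG was invented in 1990"]
evidence = {"GraphRAG improves factual accuracy"}
print(self_reflect(draft, evidence))
```

Running the example drops the unsupported second claim, leaving only the evidence-backed statement, which is the behavior the SR module targets: semantic consistency between the final output and the retrieved graph evidence.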
Results
| Metric/Benchmark | Best Baseline | Graph Counselor | Delta |
|---|---|---|---|
| Graph reasoning accuracy (multi-hop) | Competitive single-agent GraphRAG | Higher accuracy | Positive improvement |
| Generalization across graph tasks | Fixed-pattern methods | Superior generalization | Qualitatively better |
| Semantic consistency of outputs | Preset reasoning schemes | Improved via SR module | Reduced semantic errors |
Key Takeaways
- For GraphRAG applications: Replacing single-agent fixed-depth traversal with a role-specialized multi-agent pipeline (Plan→Think→Execute) is a practical architecture pattern for improving knowledge graph reasoning in LLM systems
- For LLM reasoning pipelines: Adding a backward reasoning self-reflection step after generation is a lightweight but effective mechanism to catch and correct semantic inconsistencies, especially in knowledge-intensive tasks
- For ML practitioners: The multi-agent decomposition approach generalizes well across graph reasoning benchmarks, suggesting that task decomposition by information type (textual, structural, degree) is a useful inductive bias when designing retrieval systems over heterogeneous graph data
Abstract
Graph Retrieval Augmented Generation (GraphRAG) effectively enhances external knowledge integration capabilities by explicitly modeling knowledge relationships, thereby improving the factual accuracy and generation quality of Large Language Models (LLMs) in specialized domains. However, existing methods suffer from two inherent limitations: 1) Inefficient Information Aggregation: They rely on a single agent and fixed iterative patterns, making it difficult to adaptively capture multi-level textual, structural, and degree information within graph data. 2) Rigid Reasoning Mechanism: They employ preset reasoning schemes, which can neither dynamically adjust reasoning depth nor achieve precise semantic correction. To overcome these limitations, we propose Graph Counselor, a GraphRAG method based on multi-agent collaboration. This method uses the Adaptive Graph Information Extraction Module (AGIEM), where Planning, Thought, and Execution Agents work together to precisely model complex graph structures and dynamically adjust information extraction strategies, addressing the challenges of multi-level dependency modeling and adaptive reasoning depth. Additionally, the Self-Reflection with Multiple Perspectives (SR) module improves the accuracy and semantic consistency of reasoning results through self-reflection and backward reasoning mechanisms. Experiments demonstrate that Graph Counselor outperforms existing methods in multiple graph reasoning tasks, exhibiting higher reasoning accuracy and generalization ability. Our code is available at https://github.com/gjq100/Graph-Counselor.git.