
Difficulty-Aware Agentic Orchestration for Query-Specific Multi-Agent Workflows

Jinwei Su, Qizhen Lan, Yinghui Xia, Lifan Sun, Weiyou Tian, Tianyu Shi, Xinyuan Song, Lewei He, JingSong Yang
2025
DAAO (Difficulty-Aware Agentic Orchestration) dynamically generates query-specific multi-agent workflows by predicting query difficulty, enabling adaptive resource allocation across heterogeneous LLMs to balance performance and efficiency.

Problem Statement

Existing multi-agent frameworks rely on static or task-level workflows that inefficiently allocate compute — wasting resources on simple queries while underperforming on complex ones. They also fail to account for efficiency-performance trade-offs when routing across heterogeneous LLMs of varying capability and cost. This results in suboptimal systems that are neither cost-effective nor maximally accurate.

Key Novelty

  • VAE-based difficulty estimation that predicts query complexity before workflow generation, enabling pre-emptive resource allocation decisions
  • A self-adjusting policy that updates difficulty estimates based on observed workflow success/failure, creating a feedback loop for continuous calibration
  • A unified framework integrating difficulty estimation, modular operator allocation, and cost/performance-aware LLM routing into a single interdependent system

Evaluation Highlights

  • DAAO surpasses prior multi-agent systems in accuracy across six diverse benchmarks, demonstrating broad generalization
  • DAAO achieves superior inference efficiency compared to baselines, validating that difficulty-aware routing reduces unnecessary computation on simpler queries

Breakthrough Assessment

6/10. DAAO presents a well-integrated and practically motivated system combining difficulty estimation with dynamic workflow generation, a meaningful advance over static multi-agent pipelines. However, the individual components (VAEs, LLM routing, modular operators) are not novel in themselves, and the contribution is primarily architectural/systems-level rather than a fundamental methodological breakthrough.

Methodology

  1. Difficulty Estimation: A VAE encodes the incoming query into a latent representation and predicts a difficulty score, which determines the complexity of the workflow to be assembled.
  2. Workflow Assembly: A modular operator allocator selects and composes the appropriate reasoning operators (e.g., decomposition, verification, reflection) based on the predicted difficulty, constructing a query-specific multi-agent workflow.
  3. LLM Routing & Feedback: A cost- and performance-aware LLM router assigns heterogeneous LLMs to workflow stages; after execution, the self-adjusting policy updates difficulty estimates based on whether the workflow succeeded, refining future orchestration decisions.
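The three steps above can be sketched as a single orchestration loop. Every class, threshold, and model name below is an illustrative stand-in, not the paper's actual implementation (which uses a trained VAE and learned routing policies):

```python
# Toy end-to-end sketch of DAAO's three-step loop, with placeholder
# components standing in for the learned modules described in the paper.

class Estimator:
    """Stand-in for the VAE: maps a query to a difficulty score in [0, 1]."""
    def estimate(self, query):
        # Crude placeholder heuristic: treat longer queries as harder.
        return min(len(query.split()) / 20.0, 1.0)

class Allocator:
    """Composes more reasoning operators as predicted difficulty rises."""
    def compose(self, difficulty):
        ops = ["answer"]
        if difficulty > 0.3:
            ops = ["decompose"] + ops + ["verify"]
        if difficulty > 0.7:
            ops.append("reflect")
        return ops

class Router:
    """Assigns a cheap or strong model to each stage based on difficulty."""
    def route(self, op, difficulty):
        return "strong-llm" if difficulty > 0.5 or op == "verify" else "cheap-llm"

def orchestrate(query, estimator, allocator, router):
    difficulty = estimator.estimate(query)           # Step 1: estimate
    workflow = allocator.compose(difficulty)         # Step 2: assemble
    return [(op, router.route(op, difficulty))       # Step 3: route
            for op in workflow]

# An easy query gets a single lightweight stage on the cheap model,
# while a long query gets a multi-operator workflow on the strong model.
easy_plan = orchestrate("What is 2 + 2?", Estimator(), Allocator(), Router())
```

The key property this sketch preserves is that workflow depth and model choice are both functions of the difficulty estimate, so compute scales with query complexity rather than being fixed per task.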

System Components

VAE Difficulty Estimator

Uses a variational autoencoder to encode query semantics and predict a continuous difficulty score, enabling pre-workflow resource planning
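A shape-only sketch of this data flow, with hand-set weights on a toy two-dimensional feature vector. The paper's real encoder, training objective (reconstruction plus KL plus difficulty supervision), and head weights are not reproduced here; the sketch only shows features → latent distribution → sampled latent → difficulty score in (0, 1):

```python
import math
import random

def encode(features):
    """Toy encoder producing a latent mean and log-variance."""
    mu = [0.5 * x for x in features]
    log_var = [-1.0] * len(features)      # fixed variance, for illustration
    return mu, log_var

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps: the standard VAE reparameterization trick."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def difficulty_head(z, weights=(0.7, 0.3), bias=0.0):
    """Map the latent vector to a difficulty score via a sigmoid."""
    logit = sum(w * zi for w, zi in zip(weights, z)) + bias
    return 1.0 / (1.0 + math.exp(-logit))

rng = random.Random(0)
score = difficulty_head(reparameterize(*encode([1.2, -0.4]), rng))
```

The sigmoid output is what makes the score directly usable as a continuous difficulty signal for the downstream allocator and router.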

Modular Operator Allocator

Selects and composes modular reasoning operators (e.g., decomposition, critique, verification) into a query-specific workflow based on the difficulty estimate

Cost- and Performance-Aware LLM Router

Assigns the most appropriate LLM (balancing capability and cost) to each stage of the workflow, optimizing the efficiency-performance trade-off across heterogeneous models
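One common way to realize such a router is to score each candidate model by expected quality minus a price penalty and pick the argmax. The capability/cost table, the quality model, and the 0.08 cost weight below are all invented for illustration; the paper's learned router is not reproduced:

```python
# Hypothetical cost/performance-aware routing rule: each model is scored by
# its expected quality on a query of the given difficulty, minus a penalty
# proportional to its price, and the highest-utility model is selected.

MODELS = {                 # name: (capability in [0, 1], relative cost)
    "small":  (0.60, 1.0),
    "medium": (0.80, 4.0),
    "large":  (0.95, 8.0),
}

def route(difficulty, cost_weight=0.08):
    """Pick the model with the best quality/cost utility for this query."""
    def utility(name):
        capability, cost = MODELS[name]
        # A model's quality degrades once the query's difficulty exceeds
        # its capability; stronger models tolerate harder queries.
        quality = capability - 2.0 * max(0.0, difficulty - capability)
        return quality - cost_weight * cost
    return max(MODELS, key=utility)
```

With these toy numbers the router escalates from the small to the medium to the large model as difficulty rises, which is the qualitative behavior the framework targets: powerful models are paid for only when the query warrants them.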

Self-Adjusting Policy

A feedback mechanism that updates query difficulty estimates based on workflow execution outcomes, enabling the system to calibrate over time and correct initial mispredictions
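One lightweight way to implement such feedback (the paper's exact update rule is not specified here) is an exponential moving average that nudges the stored estimate toward 1.0 after a failure and toward 0.0 after a success:

```python
# Hypothetical self-adjusting update: recalibrate a query's difficulty
# estimate after each workflow execution. The 0/1 targets and the 0.2
# learning rate are illustrative choices, not the paper's values.

def update_difficulty(estimate, succeeded, lr=0.2):
    """Return a recalibrated difficulty estimate after one execution."""
    target = 0.0 if succeeded else 1.0
    return (1.0 - lr) * estimate + lr * target

# A query initially misjudged as easy drifts toward "hard" as it keeps
# failing, so future runs get a richer workflow and stronger models.
d = 0.2
for _ in range(5):
    d = update_difficulty(d, succeeded=False)
```

Because the update touches only a stored scalar per query, a mechanism like this can be bolted onto an existing pipeline without retraining the estimator itself.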

Results

| Metric/Benchmark | Best Prior Multi-Agent Baseline | DAAO | Delta |
| --- | --- | --- | --- |
| Accuracy (avg. across 6 benchmarks) | Competitive prior SOTA | Higher accuracy | Positive improvement |
| Inference Efficiency | Static/task-level workflow cost | Reduced compute on easy queries | Lower cost, fewer tokens |
| Simple Query Handling | Over-processed (full pipeline) | Lightweight workflow assigned | Efficiency gain |
| Complex Query Handling | Underpowered (fixed shallow pipeline) | Rich multi-step workflow | Accuracy gain |

Key Takeaways

  • Query difficulty estimation before workflow construction is a powerful primitive for multi-agent systems — routing compute proportionally to query complexity is more effective than uniform pipeline application
  • Feedback-driven difficulty calibration (updating estimates based on success/failure) is a lightweight but impactful mechanism that practitioners can add to existing agentic pipelines without major architectural overhaul
  • Heterogeneous LLM routing guided by both cost and performance is practically important for production deployments — using powerful models only when warranted by query difficulty can significantly reduce inference costs without sacrificing accuracy

Abstract

Large Language Model (LLM)-based agentic systems have shown strong capabilities across various tasks. However, existing multi-agent frameworks often rely on static or task-level workflows, which either over-process simple queries or underperform on complex ones, while also neglecting the efficiency-performance trade-offs across heterogeneous LLMs. To address these limitations, we propose Difficulty-Aware Agentic Orchestration (DAAO), which can dynamically generate query-specific multi-agent workflows guided by predicted query difficulty. DAAO comprises three interdependent modules: a variational autoencoder (VAE) for difficulty estimation, a modular operator allocator, and a cost- and performance-aware LLM router. A self-adjusting policy updates difficulty estimates based on workflow success, enabling simpler workflows for easy queries and more complex strategies for harder ones. Experiments on six benchmarks demonstrate that DAAO surpasses prior multi-agent systems in both accuracy and inference efficiency, validating its effectiveness for adaptive, difficulty-aware reasoning.

Generated on 2026-03-02 using Claude