
LEAD: Iterative Data Selection for Efficient LLM Instruction Tuning

Xiaotian Lin, Yanlin Qi, Yizhang Zhu, Themis Palpanas, Chengliang Chai, Nan Tang, Yuyu Luo
arXiv.org | 2025
LEAD is an efficient iterative data selection framework for LLM instruction tuning that estimates sample utility entirely within the standard training loop, eliminating costly full-dataset inference passes between iterations. It combines a novel uncertainty metric with a coarse-to-fine bandit-based selection strategy to achieve state-of-the-art performance using only 2.5% of training data.

Problem Statement

Instruction tuning quality depends heavily on data selection, but existing iterative model-aware selection methods require repeated full-dataset inference passes to estimate sample utility, creating prohibitive computational overhead that scales poorly with dataset size. This bottleneck makes iterative data-centric approaches impractical for large-scale LLM training despite their superior performance. What is needed is a method that retains the benefits of adaptive, model-aware selection without the inference overhead.

Key Novelty

  • Instance-Level Dynamic Uncertainty (IDU): a theoretically grounded utility function that combines instantaneous training loss, gradient-based loss change approximation, and exponential smoothing of historical loss signals—computed entirely within the training loop at zero additional inference cost
  • Two-stage coarse-to-fine selection strategy that uses a multi-armed bandit mechanism to adaptively prioritize informative data clusters before applying fine-grained IDU-based selection, enabling scalability to large datasets
  • Elimination of the post-iteration full-dataset inference bottleneck by reformulating utility estimation as an online, in-loop computation, achieving 5-10x training time reduction while improving model performance

Evaluation Highlights

  • LEAD improves average model performance by 6.1%–10.8% over state-of-the-art baselines across four diverse benchmarks while using only 2.5% of the training data
  • End-to-end training time is reduced by 5–10x compared to existing iterative model-aware data selection methods, demonstrating both quality and efficiency gains simultaneously

Breakthrough Assessment

7/10 LEAD addresses a real and significant bottleneck in practical LLM instruction tuning by reformulating costly iterative data selection into an in-loop computation, achieving substantial efficiency and performance gains. While the individual components (uncertainty estimation, bandits, curriculum learning) are not entirely novel, their combination and the framing of the problem as an in-loop estimation challenge represent a meaningful and practically impactful advance.

Methodology

  1. During training, compute IDU for each sample using instantaneous loss, a gradient-based approximation of how the loss will change after the current update, and an exponential moving average of historical losses—all derived from standard forward/backward passes without extra inference
  2. At the cluster level, apply a multi-armed bandit algorithm that treats data clusters as arms and adaptively allocates selection budget toward clusters with higher estimated informativeness, enabling coarse-grained filtering of the large dataset
  3. Within selected clusters, apply fine-grained IDU-based ranking to identify the highest-utility individual samples for the next training iteration, iteratively repeating this process throughout training to maintain an adaptive, model-aware data curriculum
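The three steps above can be sketched as a single selection round. This is a minimal illustration, not the paper's implementation: the function name, the data structures, and the assumption that IDU scores are already available from the training loop are all mine.

```python
import heapq

def select_next_iteration(clusters, scores, bandit_pick, budget):
    """One coarse-to-fine selection round (sketch).

    clusters    : dict cluster_id -> list of sample ids
    scores      : dict sample_id -> IDU score, harvested in-loop
    bandit_pick : cluster ids chosen by the bandit (coarse stage)
    budget      : number of samples to keep (fine stage)
    """
    # Coarse stage: restrict the pool to bandit-selected clusters.
    pool = [s for c in bandit_pick for s in clusters[c]]
    # Fine stage: keep the highest-IDU samples from that pool.
    return heapq.nlargest(budget, pool, key=lambda s: scores[s])
```

Repeating this round between training iterations yields the adaptive, model-aware curriculum described above, with no extra inference passes.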

System Components

Instance-Level Dynamic Uncertainty (IDU)

A utility function computed in-loop that combines current training loss, a gradient-based first-order approximation of expected loss change, and an exponential smoothing term over historical losses to capture both immediate and longitudinal sample informativeness
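A rough sketch of such a score, under assumptions of my own: the paper does not publish the exact formula here, so the first-order estimate `loss - lr * ||grad||^2` (one SGD step on the sample), the EMA update, and the equal default weights `w` are all illustrative choices.

```python
def idu_score(loss_t, grad_sq_norm, ema_prev,
              lr=1e-4, beta=0.9, w=(1.0, 1.0, 1.0)):
    """Instance-Level Dynamic Uncertainty (illustrative sketch).

    loss_t       : current training loss for the sample
    grad_sq_norm : squared gradient norm from the backward pass
    ema_prev     : exponential moving average of past losses
    """
    # First-order estimate of the loss after one SGD update on this sample.
    predicted_next = loss_t - lr * grad_sq_norm
    # Exponential smoothing of the historical loss signal.
    ema = beta * ema_prev + (1.0 - beta) * loss_t
    # Weighted combination of the three signals (weights are assumptions).
    score = w[0] * loss_t + w[1] * predicted_next + w[2] * ema
    return score, ema
```

Every input is a byproduct of the standard forward/backward pass, which is what makes the score free of extra inference cost.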

Multi-Armed Bandit Cluster Selector

A coarse-grained selection mechanism that models data clusters as bandit arms, using accumulated reward signals to adaptively prioritize clusters likely to contain high-utility samples, reducing the search space for fine-grained selection
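As a concrete instance of such a mechanism, here is a standard UCB1 bandit over clusters. The paper may use a different bandit algorithm and reward definition; this sketch only shows the clusters-as-arms framing.

```python
import math

class ClusterBandit:
    """UCB1 over data clusters (sketch; reward design is assumed)."""

    def __init__(self, n_clusters, c=1.0):
        self.counts = [0] * n_clusters     # pulls per cluster
        self.values = [0.0] * n_clusters   # running mean reward
        self.c = c                         # exploration strength
        self.t = 0

    def select(self):
        self.t += 1
        for k, n in enumerate(self.counts):
            if n == 0:                     # try each cluster once first
                return k
        # Exploit high-reward clusters, with an exploration bonus.
        return max(range(len(self.counts)),
                   key=lambda k: self.values[k]
                   + self.c * math.sqrt(math.log(self.t) / self.counts[k]))

    def update(self, k, reward):
        self.counts[k] += 1
        self.values[k] += (reward - self.values[k]) / self.counts[k]
```

A natural reward signal (again an assumption) would be the average IDU of samples drawn from the cluster, so that informative clusters accumulate selection budget over iterations.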

Coarse-to-Fine Selection Pipeline

A two-stage framework that first filters at the cluster level via the bandit mechanism, then applies IDU at the instance level within selected clusters, balancing computational efficiency with selection precision at scale

In-Loop Utility Estimation

The architectural principle of LEAD that reformulates sample utility computation to occur entirely within standard training forward/backward passes, eliminating the need for separate full-dataset inference rounds between training iterations
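The principle can be illustrated with a toy training loop. `train_step` here is a stand-in that returns made-up per-sample losses; in real code these would be the losses already computed for the backward pass, so the utility statistics come for free.

```python
def train_step(batch):
    # Placeholder for a real forward/backward pass: a genuine
    # implementation would return the per-sample losses the model
    # already computed (e.g. with an unreduced loss).
    return [1.0 / (1 + x) for x in batch]   # toy per-sample losses

def run_epoch(batches, ema, beta=0.9):
    """Harvest per-sample loss EMAs in-loop, with no extra inference."""
    for batch in batches:
        losses = train_step(batch)          # reuse in-loop losses
        for sample_id, loss in zip(batch, losses):
            prev = ema.get(sample_id, loss)
            ema[sample_id] = beta * prev + (1.0 - beta) * loss
    return ema
```

The key point is that the selection statistics are a side effect of training itself, which is what removes the per-iteration full-dataset inference round.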

Results

| Metric/Benchmark | Prior SOTA (iterative) | LEAD (2.5% data) | Delta |
|---|---|---|---|
| Avg. performance (4 benchmarks) | Baseline SOTA | +6.1% to +10.8% | +6.1–10.8% |
| Training data used | 10–20% (typical) | 2.5% | ~4–8x less data |
| Overall training time | Full iterative cost | 5–10x faster | 80–90% reduction |
| Additional inference cost | Full dataset per iteration | Zero | Eliminated entirely |

Key Takeaways

  • Practitioners can dramatically reduce instruction tuning costs (5–10x speedup, 2.5% data) without sacrificing—and in fact improving—model quality by replacing full-dataset inference-based selection with in-loop IDU estimation
  • The coarse-to-fine bandit + IDU architecture provides a scalable blueprint for data-efficient training on large corpora: cluster-level bandit exploration handles scale, while instance-level gradient-informed scoring ensures quality
  • IDU's combination of instantaneous loss, gradient-predicted loss change, and historical smoothing is a generalizable signal design that could be adapted to other continual learning, curriculum learning, or active learning settings beyond instruction tuning

Abstract

Instruction tuning has emerged as a critical paradigm for improving the capabilities and alignment of large language models (LLMs). However, existing iterative model-aware data selection methods incur significant computational overhead, as they rely on repeatedly performing full-dataset model inference to estimate sample utility for subsequent training iterations, creating a fundamental efficiency bottleneck. In this paper, we propose LEAD, an efficient iterative data selection framework that accurately estimates sample utility entirely within the standard training loop, eliminating the need for costly additional model inference. At its core, LEAD introduces Instance-Level Dynamic Uncertainty (IDU), a theoretically grounded utility function combining instantaneous training loss, gradient-based approximation of loss changes, and exponential smoothing of historical loss signals. To further scale efficiently to large datasets, LEAD employs a two-stage, coarse-to-fine selection strategy, adaptively prioritizing informative clusters through a multi-armed bandit mechanism, followed by precise fine-grained selection of high-utility samples using IDU. Extensive experiments across four diverse benchmarks show that LEAD significantly outperforms state-of-the-art methods, improving average model performance by 6.1%-10.8% while using only 2.5% of the training data and reducing overall training time by 5-10x.

Generated on 2026-03-03 using Claude