
TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate

Amir Zandieh, Majid Daliri, Majid Hadian, Vahab Mirrokni
arXiv.org | 2025
TurboQuant is a data-oblivious online vector quantization algorithm that achieves near-optimal distortion rates (within ~2.7x of information-theoretic lower bounds) for both MSE and inner product metrics across all bit-widths and dimensions.

Problem Statement

Existing vector quantization methods fail to achieve optimal distortion rates, particularly for inner product preservation which is critical for LLM KV cache compression and nearest neighbor search. Current approaches either require expensive data-dependent training (like product quantization) or introduce bias in inner product estimation. There is also a lack of formal proofs connecting practical algorithms to information-theoretic lower bounds.

Key Novelty

  • Data-oblivious random rotation that induces a concentrated Beta distribution on coordinates, enabling near-independence of coordinates and allowing optimal scalar quantization per coordinate
  • Two-stage unbiased inner product quantizer combining an MSE quantizer with a 1-bit Quantized Johnson-Lindenstrauss (QJL) transform applied to the residual to eliminate bias
  • Formal information-theoretic lower bounds proof for best achievable distortion rate by any vector quantizer, with TurboQuant shown to be within a constant factor (~2.7) of these bounds
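The first bullet can be sanity-checked numerically. The sketch below (my own illustration, not the paper's code) draws a Haar-random rotation via QR decomposition and applies it to the worst-case input, a unit vector with all its mass on one coordinate. After rotation the vector is uniform on the sphere, so each squared coordinate follows Beta(1/2, (d-1)/2), tightly concentrated around 1/d: no coordinate dominates, which is what makes independent per-coordinate scalar quantization work.

```python
import numpy as np

# Illustrative sketch: a Haar-random rotation spreads any unit vector's
# mass evenly across coordinates (squared coords ~ Beta(1/2, (d-1)/2)).
rng = np.random.default_rng(0)
d = 1024

G = rng.standard_normal((d, d))
Q, R = np.linalg.qr(G)
Q = Q * np.sign(np.diag(R))           # sign fix makes Q Haar-distributed

v = np.zeros(d)
v[0] = 1.0                            # worst case: all mass on one axis
x = Q @ v

print(round(float(x @ x), 6))         # rotation preserves the norm: 1.0
print(float(np.abs(x).max()) < 0.25)  # mass is spread out after rotation
```

In practice a structured fast rotation (e.g. randomized Hadamard) would replace the dense QR-based one used here for clarity.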

Evaluation Highlights

  • KV cache quantization achieves absolute quality neutrality at 3.5 bits per channel and only marginal quality degradation at 2.5 bits per channel
  • Nearest neighbor search outperforms existing product quantization techniques in recall while reducing indexing time to virtually zero due to the data-oblivious nature of the algorithm

Breakthrough Assessment

7/10: TurboQuant provides a theoretically grounded, near-optimal, and practically efficient quantization method with formal lower-bound proofs, a significant advance over heuristic product quantization approaches. However, it is an improvement within a well-established field rather than a paradigm shift.

Methodology

  1. Apply a random rotation matrix to the input high-dimensional vector, inducing a concentrated Beta distribution on its coordinates and making distinct coordinates nearly independent
  2. Apply optimal scalar quantizers independently per coordinate to minimize MSE, leveraging the near-independence property to treat each coordinate separately without significant cross-coordinate distortion
  3. For unbiased inner product estimation, apply a 1-bit Quantized JL (QJL) transform to the quantization residual from step 2, combining the two outputs into a two-stage unbiased inner product quantizer
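The three steps above can be sketched end to end. This is a minimal illustration under simplifying assumptions: all function names are mine, a plain uniform grid stands in for the paper's optimal scalar quantizers, and a dense Gaussian matrix stands in for the QJL transform. The point it demonstrates is the structure: quantize the rotated vector for MSE, keep 1-bit signs of random projections of the residual, and add back an unbiased sign-based correction when estimating inner products.

```python
import numpy as np

# Sketch of the two-stage pipeline (illustrative, not the paper's code).
rng = np.random.default_rng(1)
d, m, bits = 512, 512, 4

# Step 1: shared Haar-random rotation via QR of a Gaussian matrix.
G, R_ = np.linalg.qr(rng.standard_normal((d, d)))
Rot = G * np.sign(np.diag(R_))

def mse_quantize(x, bits):
    # Stand-in per-coordinate quantizer: uniform grid on [-c, c].
    c = np.abs(x).max()
    step = 2 * c / (2 ** bits - 1)
    return np.round((x + c) / step) * step - c

S = rng.standard_normal((m, d))     # QJL-style projection directions

def encode(v):
    x = Rot @ v
    q = mse_quantize(x, bits)       # step 2: MSE quantizer
    res = x - q
    signs = np.sign(S @ res)        # step 3: 1 bit per projection
    return q, signs, np.linalg.norm(res)

def inner_product_estimate(code, y_rot):
    q, signs, res_norm = code
    # <x,y> = <q,y> + <res,y>; the residual term is recovered unbiasedly
    # from signs, since E[sign(<s,res>) <s,y>] = sqrt(2/pi) <res,y>/||res||.
    corr = res_norm * np.sqrt(np.pi / 2) / m * (signs @ (S @ y_rot))
    return q @ y_rot + corr

v, w = rng.standard_normal(d), rng.standard_normal(d)
est = inner_product_estimate(encode(v), Rot @ w)
rel_err = abs(est - v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(rel_err)                      # small relative error
```

Here one side is quantized and the query side is kept exact, the common setup in KV-cache and retrieval workloads.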

System Components

Random Rotation Module

Transforms input vectors via a random rotation to induce a concentrated Beta distribution on coordinates, enabling near-independence of coordinates in high dimensions

Optimal Scalar Quantizer

Applies per-coordinate scalar quantization optimized for MSE, exploiting the near-independence of rotated coordinates to achieve near-optimal vector quantization

1-bit QJL Residual Stage

Applies a 1-bit Quantized Johnson-Lindenstrauss transform to the MSE quantization residual to correct bias and produce an unbiased inner product estimator
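The unbiasedness of the sign-based stage rests on the Gaussian identity E[sign(⟨s, x⟩)⟨s, y⟩] = √(2/π)·⟨x, y⟩/‖x‖. A quick Monte Carlo check of that identity (my own sketch; the actual QJL transform uses a more structured construction) shows that storing only one sign per projection, plus the norm of x, still recovers the inner product in expectation:

```python
import numpy as np

# Illustrative check: a 1-bit sign sketch with the sqrt(pi/2) rescaling
# gives an (approximately) unbiased inner product estimate.
rng = np.random.default_rng(2)
d, m = 64, 100_000

x = rng.standard_normal(d)
y = rng.standard_normal(d)
S = rng.standard_normal((m, d))          # Gaussian projection directions

est = np.linalg.norm(x) * np.sqrt(np.pi / 2) / m * (np.sign(S @ x) @ (S @ y))
rel_err = abs(est - x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(rel_err)                           # shrinks as m grows
```

In TurboQuant this estimator is applied to the small MSE residual rather than to the vector itself, so the variance of the correction is proportionally small.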

Information-Theoretic Lower Bound Proof

Formal derivation of the best achievable distortion rate for any vector quantizer, providing a theoretical benchmark showing TurboQuant is within ~2.7x of optimal

Results

| Metric/Benchmark | Baseline | This Paper | Delta |
| --- | --- | --- | --- |
| KV cache quality neutrality (bits/channel) | ~4+ bits | 3.5 bits (quality neutral) | ~0.5+ bit reduction |
| KV cache quality (aggressive compression) | Significant degradation at ~3 bits | Marginal degradation at 2.5 bits | ~0.5 bit improvement |
| ANN recall vs. product quantization | Lower recall, with indexing overhead | Higher recall | Improved recall |
| ANN indexing time | Non-trivial (data-dependent training) | ~Zero (data-oblivious) | Near-zero indexing cost |
| Distortion vs. theoretical optimum | Varies, often far from optimal | Within ~2.7x of lower bound | Near-optimal across all bit-widths |

Key Takeaways

  • TurboQuant enables KV cache compression to 2.5-3.5 bits per channel with minimal quality loss, making it highly practical for memory-efficient LLM inference without any data-dependent calibration
  • The data-oblivious nature (no training or dataset required) means TurboQuant can be dropped into any pipeline as an online quantizer with virtually zero indexing overhead, unlike product quantization which requires expensive offline training
  • The two-stage MSE + QJL residual approach is a key design pattern for practitioners: use MSE quantization for storage efficiency, then apply a lightweight residual correction to recover unbiased inner product estimates critical for attention and retrieval tasks

Abstract

Vector quantization, a problem rooted in Shannon's source coding theory, aims to quantize high-dimensional Euclidean vectors while minimizing distortion in their geometric structure. We propose TurboQuant to address both mean-squared error (MSE) and inner product distortion, overcoming limitations of existing methods that fail to achieve optimal distortion rates. Our data-oblivious algorithms, suitable for online applications, achieve near-optimal distortion rates (within a small constant factor) across all bit-widths and dimensions. TurboQuant achieves this by randomly rotating input vectors, inducing a concentrated Beta distribution on coordinates, and leveraging the near-independence property of distinct coordinates in high dimensions to simply apply optimal scalar quantizers per each coordinate. Recognizing that MSE-optimal quantizers introduce bias in inner product estimation, we propose a two-stage approach: applying an MSE quantizer followed by a 1-bit Quantized JL (QJL) transform on the residual, resulting in an unbiased inner product quantizer. We also provide a formal proof of the information-theoretic lower bounds on best achievable distortion rate by any vector quantizer, demonstrating that TurboQuant closely matches these bounds, differing only by a small constant ($\approx 2.7$) factor. Experimental results validate our theoretical findings, showing that for KV cache quantization, we achieve absolute quality neutrality with 3.5 bits per channel and marginal quality degradation with 2.5 bits per channel. Furthermore, in nearest neighbor search tasks, our method outperforms existing product quantization techniques in recall while reducing indexing time to virtually zero.

Generated on 2026-04-01 using Claude