How a 2021 Quantization Algorithm Quietly Outperforms Its 2026 Successor
A 2021 quantization algorithm called EDEN, developed by Amit Portnoy and colleagues, has been shown to outperform TurboQuant, a newer method presented at ICLR 2026. Although TurboQuant-mse has drawn significant attention, it is effectively a degenerate version of EDEN that omits an optimized scale factor, leading to higher reconstruction error. EDEN's analytically derived scaling provides consistent improvements in both biased and unbiased quantization settings, especially at practical bit-widths and dimensions.
- EDEN, introduced in 2021, uses random rotation, scalar quantization, optimized scaling, and inverse rotation to compress vectors efficiently.
- TurboQuant-mse, despite being published in 2026, skips the optimized scaling step, resulting in higher MSE compared to EDEN-biased.
- EDEN-unbiased ensures the decompressed vector is correct on average, making it suitable for applications that average quantized vectors, such as distributed training.
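The pipeline in the bullets above (rotate, quantize to signs, scale, rotate back) can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' EDEN/DRIVE code: real implementations use fast randomized Hadamard transforms instead of a dense QR-based rotation, and the exact scale formulas here (least-squares for the biased variant, a norm-preserving ratio for the unbiased one) are simplified stand-ins for EDEN's analytically derived scales.

```python
import numpy as np

def quantize_1bit(x, unbiased=False, seed=0):
    """Toy rotation-based 1-bit quantize/dequantize round trip.

    Sketch only: a dense random rotation plus simple scale rules,
    standing in for EDEN/DRIVE's Hadamard rotations and derived scales.
    """
    rng = np.random.default_rng(seed)
    d = x.size
    # Random rotation: orthogonal Q from a QR decomposition of Gaussian noise.
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    r = Q @ x
    # 1-bit scalar quantization: keep only the signs of the rotated coordinates.
    s = np.where(r >= 0, 1.0, -1.0)
    if unbiased:
        # Scale chosen so <x, decoded> = ||x||^2 (unbiased-style correction).
        scale = (x @ x) / (r @ s)
    else:
        # Least-squares scale: minimizes ||r - scale * s||^2 (biased, lower MSE).
        scale = (r @ s) / d
    # Decode: rescale the sign vector, then apply the inverse rotation.
    return Q.T @ (scale * s)

x = np.array([1.0, -2.0, 3.0, 0.5])
err_biased = np.linalg.norm(x - quantize_1bit(x, unbiased=False))
err_unbiased = np.linalg.norm(x - quantize_1bit(x, unbiased=True))
```

Because the rotation is orthogonal, reconstruction error in the rotated domain equals error in the original domain, so for a fixed rotation the least-squares scale can never do worse than the unbiased one; this is the single scale parameter the article argues determines accuracy.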
Opening excerpt (first ~120 words)
How a 2021 Quantization Algorithm Quietly Outperforms Its 2026 Successor
One scale parameter determines accuracy in rotation-based vector quantization.
Amit Portnoy · May 2, 2026 · 7 min read
[Image by author, post-processed with ChatGPT]

TurboQuant [3], an online vector quantization method, drew wide public attention at ICLR 2026. For me, it looked very familiar: it overlaps heavily with EDEN, a quantization method first introduced as the 1-bit method DRIVE at NeurIPS 2021 [1] and generalized to arbitrary bit-widths at ICML 2022 [2]. Co-authored by myself, with Ran Ben-Basat, Yaniv Ben-Itzhak, Gal Mendelson, Michael Mitzenmacher, and Shay Vargaftik. The TurboQuant paper presents two variants: TurboQuant-mse and TurboQuant-prod.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at Towards Data Science.