Learned Interpolation for Better Streaming Quantile Approximation with Worst-Case Guarantees

Nicholas Schiefer, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal, Tal Wagner

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Abstract

An ɛ-approximate quantile sketch over a stream of n inputs approximates the rank of any query point q (that is, the number of input points less than q) up to an additive error of ɛn, typically with probability at least 1 − 1/poly(n), while consuming o(n) space. While the celebrated KLL sketch of Karnin, Lang, and Liberty achieves provably optimal quantile approximation over worst-case streams, the approximations it achieves in practice are often far from optimal. Indeed, the most commonly used technique in practice is Dunning's t-digest, which often achieves much better approximations than KLL on real-world data but is known to have arbitrarily large errors in the worst case. We apply interpolation techniques to the streaming quantiles problem, aiming to achieve better approximations on real-world data sets than KLL while maintaining similar guarantees in the worst case.
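To make the ɛn additive-error guarantee concrete, here is a minimal, self-contained Python sketch. It is not the authors' learned-interpolation method, KLL, or t-digest; it is only a toy baseline that estimates ranks from a uniform reservoir sample, with the sample size 4/ɛ² chosen purely for illustration.

```python
import random

def exact_rank(points, q):
    """Rank of q: number of stream elements strictly less than q."""
    return sum(1 for x in points if x < q)

class SamplingRankSketch:
    """Toy streaming rank estimator via reservoir sampling.

    Keeps a uniform sample of m = 4/eps^2 points (an illustrative,
    hypothetical choice) and answers rank queries from the sample,
    so the additive error is roughly eps * n with high probability."""

    def __init__(self, eps, seed=0):
        self.m = max(1, int(4 / (eps * eps)))
        self.n = 0
        self.sample = []               # reservoir of at most m points
        self.rng = random.Random(seed)

    def update(self, x):
        """Process one stream element (Algorithm R reservoir sampling)."""
        self.n += 1
        if len(self.sample) < self.m:
            self.sample.append(x)
        else:
            j = self.rng.randrange(self.n)
            if j < self.m:
                self.sample[j] = x

    def rank(self, q):
        """Estimate of |{x in stream : x < q}|."""
        if not self.sample:
            return 0.0
        frac = sum(1 for x in self.sample if x < q) / len(self.sample)
        return frac * self.n

# Usage: on a synthetic stream, the estimated rank should stay
# within eps * n of the exact rank for a typical query point.
eps = 0.05
gen = random.Random(1)
stream = [gen.gauss(0.0, 1.0) for _ in range(20000)]
sk = SamplingRankSketch(eps)
for x in stream:
    sk.update(x)
q = 0.5
err = abs(sk.rank(q) - exact_rank(stream, q))
print(err, eps * len(stream), err <= eps * len(stream))
```

Sketches such as KLL achieve the same kind of guarantee with much smaller space than this sampling baseline; the example is only meant to pin down what "rank approximated to within ɛn" means.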
Original language: English
Title of host publication: SIAM Conference on Applied and Computational Discrete Algorithms (ACDA23)
Number of pages: 11
ISBN (Electronic): 978-1-61197-771-4
State: Published - 2023

