Regularized Classification-Aware Quantization

Daniel Severo*, Elad Domanovitz, Ashish Khisti

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Traditionally, quantization is designed to minimize the reconstruction error of a data source. When considering downstream classification tasks, other measures of distortion can be of interest, such as the 0-1 classification loss. Furthermore, it is desirable that the performance of these quantizers does not deteriorate once they are deployed into production, as re-learning the scheme online is not always possible. In this chapter, we present a class of algorithms that learn distributed quantization schemes for binary classification tasks. Our method performs well on unseen data and is faster than previous methods by a factor that grows quadratically with the dataset size. It works by regularizing the 0-1 loss with the reconstruction error. We present experiments on synthetic mixture and bivariate Gaussian data and compare training, testing, and generalization errors with a family of benchmark quantization schemes from the literature. Our method is called Regularized Classification-Aware Quantization.
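
To make the regularized objective concrete, below is a minimal, self-contained sketch, not the chapter's actual algorithm: it fits a scalar quantizer whose cells decode to the majority label for the 0-1 term and to the cell mean for the reconstruction term, with a weight `lam` trading off the two. The function names (`regularized_loss`, `fit_quantizer`), the coordinate-descent search, and the variance normalization are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def regularized_loss(boundaries, x, y, lam):
    """Illustrative objective: (1 - lam) * 0-1 loss + lam * normalized MSE.

    Each quantizer cell decodes to the majority label of the training
    points it contains (0-1 term) and to the cell mean (reconstruction
    term). `lam` trades off the two losses.
    """
    cells = np.digitize(x, boundaries)       # assign each sample to a cell
    zero_one, mse = 0.0, 0.0
    for c in np.unique(cells):
        mask = cells == c
        majority = np.round(y[mask].mean())  # cell predicts its majority label
        zero_one += np.sum(y[mask] != majority)
        mse += np.sum((x[mask] - x[mask].mean()) ** 2)
    zero_one /= len(x)
    mse /= len(x) * x.var()                  # normalize so both terms are comparable
    return (1 - lam) * zero_one + lam * mse

def fit_quantizer(x, y, n_cells=4, lam=0.1, n_candidates=64):
    """Greedy coordinate search over cell boundaries (illustration only)."""
    boundaries = np.quantile(x, np.linspace(0, 1, n_cells + 1)[1:-1])
    grid = np.quantile(x, np.linspace(0.01, 0.99, n_candidates))
    for _ in range(10):                      # a few coordinate-descent sweeps
        for i in range(len(boundaries)):
            best = boundaries[i]
            best_loss = regularized_loss(boundaries, x, y, lam)
            for cand in grid:
                trial = boundaries.copy()
                trial[i] = cand
                trial.sort()
                loss = regularized_loss(trial, x, y, lam)
                if loss < best_loss:
                    best, best_loss = cand, loss
            boundaries[i] = best
            boundaries.sort()
    return boundaries

# Toy example: two-component Gaussian mixture, labels given by component.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
x = rng.normal(loc=2.0 * y, scale=1.0)
print(fit_quantizer(x, y, n_cells=4, lam=0.1))
```

In this sketch, `lam = 0` yields a pure 0-1 objective and `lam = 1` yields standard MSE-optimal quantization; intermediate values play the role of the regularizer described in the abstract.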

Original language: English
Title of host publication: Signals and Communication Technology
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 61-73
Number of pages: 13
DOIs
State: Published - 2022
Externally published: Yes

Publication series

Name: Signals and Communication Technology
ISSN (Print): 1860-4862
ISSN (Electronic): 1860-4870

Keywords

  • Classification
  • Distributed quantization
  • Generalization
  • Regularization
