Deep Individual Active Learning: Safeguarding against Out-of-Distribution Challenges in Neural Networks

Shachar Shayovitz*, Koby Bibas, Meir Feder

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Active learning (AL) is a paradigm focused on purposefully selecting training data to enhance a model's performance while minimizing the need for annotated samples. Typically, AL strategies assume that the training pool shares the same distribution as the test set, an assumption that does not always hold in privacy-sensitive applications where annotating user data is challenging. In this study, we operate within an individual setting and leverage an active learning criterion that selects data points for labeling by minimizing the min-max regret on a small unlabeled sample of the test set. Our key contribution lies in the development of an efficient algorithm that addresses the challenging computational complexity of approximating this criterion for neural networks. Notably, our results show that, especially in the presence of out-of-distribution data, the proposed algorithm substantially reduces the required training set size by up to 15.4%, 11%, and 35.1% for the CIFAR10, EMNIST, and MNIST datasets, respectively.
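To make the selection criterion concrete, below is a minimal, illustrative sketch of a min-max (pNML-style) regret selection loop. It is not the paper's efficient algorithm: it refits a model by brute force for every hypothetical label, substitutes a scikit-learn logistic regression for a neural network, and the function names (`pnml_regret`, `select_next`) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def pnml_regret(make_model, X_train, y_train, x_test, n_classes):
    """pNML-style regret for one unlabeled test point: log of the
    normalization (Shtarkov) sum, where each term is the probability that a
    model refit with the test point under a hypothetical label assigns to
    that same label."""
    terms = []
    for y_hyp in range(n_classes):
        m = make_model().fit(np.vstack([X_train, x_test[None]]),
                             np.append(y_train, y_hyp))
        col = np.where(m.classes_ == y_hyp)[0][0]
        terms.append(m.predict_proba(x_test[None])[0, col])
    return np.log(np.sum(terms))


def select_next(make_model, X_lab, y_lab, X_pool, X_test_unlab, n_classes):
    """Pick the pool index whose acquisition minimizes the worst-case
    pNML-style regret over the small unlabeled test sample: min over
    candidates of max over the candidate's (unknown) label and test points."""
    best_i, best_score = None, np.inf
    for i, x_cand in enumerate(X_pool):
        worst = -np.inf
        for y_hyp in range(n_classes):  # candidate label unknown before querying
            X_aug = np.vstack([X_lab, x_cand[None]])
            y_aug = np.append(y_lab, y_hyp)
            for x_t in X_test_unlab:
                worst = max(worst, pnml_regret(make_model, X_aug, y_aug,
                                               x_t, n_classes))
        if worst < best_score:
            best_i, best_score = i, worst
    return best_i


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(20, 5))
    y_lab = rng.integers(0, 3, size=20)
    X_pool = rng.normal(size=(10, 5))
    X_test_unlab = rng.normal(size=(4, 5))
    make_model = lambda: LogisticRegression(max_iter=500)
    print(select_next(make_model, X_lab, y_lab, X_pool, X_test_unlab, 3))
```

Even in this toy form, each candidate evaluation requires on the order of (number of classes)² × (test-sample size) model refits, which illustrates why a direct computation is infeasible for neural networks and why the paper's contribution is an efficient approximation of the criterion.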

Original language: English
Article number: 129
Journal: Entropy
Volume: 26
Issue number: 2
DOIs
State: Published - Feb 2024

Keywords

  • active learning
  • deep active learning
  • individual sequences
  • normalized maximum likelihood
  • out-of-distribution
  • universal prediction
