Training-Time Attacks against k-Nearest Neighbors

Ara Vartanian, Will Rosenbaum, Scott Alfeld

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Nearest neighbor-based methods are commonly used for classification tasks and as subroutines of other data-analysis methods. An attacker with the capability of inserting their own data points into the training set can manipulate the inferred nearest neighbor structure. We distill this goal to the task of performing a training-set data insertion attack against k-Nearest Neighbor classification (kNN). We prove that computing an optimal training-time (a.k.a. poisoning) attack against kNN classification is NP-Hard, even when k = 1 and the attacker can insert only a single data point. We provide an anytime algorithm to perform such an attack, and a greedy algorithm for general k and attacker budget. We provide theoretical bounds and empirically demonstrate the effectiveness and practicality of our methods on synthetic and real-world datasets. Empirically, we find that kNN is vulnerable in practice and that dimensionality reduction is an effective defense. We conclude with a discussion of open problems illuminated by our analysis.
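To illustrate the kind of attack the abstract describes, the following is a minimal, hypothetical sketch (not the paper's algorithm) of a single-point training-set insertion attack against 1-NN: candidate poison points are searched on rings around a target test point, and the first insertion that flips the target's predicted label is returned. The dataset, candidate grid, and search order are all illustrative assumptions.

```python
import math

def knn_predict(X, y, query, k=1):
    # Majority vote among the k nearest training points (Euclidean distance).
    order = sorted(range(len(X)), key=lambda i: math.dist(X[i], query))
    votes = [y[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

def greedy_insert(X, y, target, wanted_label, k=1):
    # Illustrative candidate search: try poison points on rings of growing
    # radius around the target; return the first insertion that makes kNN
    # assign `wanted_label` to the target.
    for r in (0.01 * s for s in range(1, 101)):
        for a in (2 * math.pi * t / 36 for t in range(36)):
            cand = (target[0] + r * math.cos(a), target[1] + r * math.sin(a))
            if knn_predict(X + [cand], y + [wanted_label], target, k) == wanted_label:
                return cand
    return None  # no candidate on the grid succeeded

# Toy training set: class 0 on the left, class 1 on the right.
X = [(0.0, 0.0), (0.0, 1.0), (3.0, 0.0), (3.0, 1.0)]
y = [0, 0, 1, 1]
target = (2.0, 0.5)  # clean 1-NN prediction: class 1

poison = greedy_insert(X, y, target, wanted_label=0)
```

A poison point placed close enough to the target becomes its nearest neighbor and dictates the prediction; finding an *optimal* such point (e.g., one that flips many targets at once) is what the paper proves NP-Hard.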

Original language: English
Title of host publication: AAAI-23 Technical Tracks 8
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Publisher: AAAI Press
Number of pages: 8
ISBN (Electronic): 9781577358800
State: Published - 27 Jun 2023
Externally published: Yes
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: 7 Feb 2023 - 14 Feb 2023

Publication series

Name: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023


Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States

