Strategic Classification in the Dark

Ganesh Ghalme*, Vineet Nair*, Itay Eilat, Inbal Talgam-Cohen, Nir Rosenfeld

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Strategic classification studies the interaction between a classification rule and the strategic agents it governs. Under the assumption that the classifier is known, rational agents respond to it by manipulating their features. However, in many real-life scenarios of high-stakes classification (e.g., credit scoring), the classifier is not revealed to the agents, which leads agents to attempt to learn the classifier and game it as well. In this paper we generalize the strategic classification model to such scenarios. We define the "price of opacity" as the difference in prediction error between opaque and transparent strategy-robust classifiers, characterize it, and give a sufficient condition for this price to be strictly positive, in which case transparency is the recommended policy. Our experiments show how the robust classifier of Hardt et al. is affected by keeping agents in the dark.
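As a rough sketch of the quantity defined in the abstract (the notation below is illustrative, not taken from the paper): writing $\mathrm{err}(f)$ for the prediction error of a deployed classifier $f$ under the agents' strategic responses, the price of opacity compares the error of a strategy-robust classifier kept in the dark with that of one revealed to the agents.

```latex
% Hedged sketch; symbol names are assumptions, not the paper's notation.
% f_dark : strategy-robust classifier deployed without being revealed (opaque)
% f_full : strategy-robust classifier deployed with full transparency
\mathrm{PoO} \;=\; \mathrm{err}\!\left(f_{\text{dark}}\right) \;-\; \mathrm{err}\!\left(f_{\text{full}}\right)
```

The paper's sufficient condition identifies when $\mathrm{PoO} > 0$, i.e., when opacity strictly hurts prediction accuracy and transparency is the recommended policy.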

Original language: English
Title of host publication: Proceedings of the 38th International Conference on Machine Learning, ICML 2021
Publisher: ML Research Press
Number of pages: 10
ISBN (Electronic): 9781713845065
State: Published - 2021
Externally published: Yes
Event: 38th International Conference on Machine Learning, ICML 2021 - Virtual, Online
Duration: 18 Jul 2021 - 24 Jul 2021

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: 38th International Conference on Machine Learning, ICML 2021
City: Virtual, Online


