Learnability Gaps of Strategic Classification

Lee Cohen, Yishay Mansour, Shay Moran, Han Shao

Research output: Contribution to journal › Conference article › peer-review

Abstract

In contrast with standard classification tasks, strategic classification involves agents strategically modifying their features in an effort to receive favorable predictions. For instance, given a classifier determining loan approval based on credit scores, applicants may open or close credit cards and bank accounts to fool the classifier. The learning goal is to find a classifier that is robust against such strategic manipulations. Various settings, differing in what information is known and when, have been explored in strategic classification. In this work, we address a fundamental question: the learnability gaps between strategic classification and standard learning. We essentially show that any learnable class is also strategically learnable. We first consider a fully informative setting, where the manipulation structure (modeled by a manipulation graph G*) is known and, during training, the learner has access to both the pre-manipulation and post-manipulation data. We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results. We then relax the fully informative setting by introducing two natural types of uncertainty. First, following Ahmadi et al. (2023), we consider the setting in which the learner only has access to the post-manipulation data. We improve the results of Ahmadi et al. (2023) and close the gap between the mistake upper and lower bounds they raised. Our second relaxation introduces uncertainty into the manipulation structure: we assume that the manipulation graph is unknown but belongs to a known class of graphs. We provide nearly tight bounds on the learning complexity in various unknown-manipulation-graph settings. Notably, our algorithm in this setting is of independent interest and can be applied to other problems, such as multi-label learning.
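The manipulation model sketched in the abstract can be illustrated concretely. The following is a minimal sketch, not code from the paper: a manipulation graph has an edge (u, v) when an agent with feature vector u can present itself as v, and a negatively classified agent best-responds by moving to any neighbor the classifier accepts. All names (`best_response`, the graph encoding) are illustrative assumptions.

```python
def best_response(x, h, graph):
    """Feature vector the agent at x presents to classifier h.

    graph: dict mapping each feature vector to the neighbors reachable
    via the manipulation graph's out-edges (an illustrative encoding).
    """
    if h(x) == 1:             # already accepted: no reason to manipulate
        return x
    for v in graph.get(x, []):
        if h(v) == 1:         # manipulate to any accepted neighbor
            return v
    return x                  # no beneficial manipulation exists

# Toy instance: credit-score buckets 0..3, the classifier accepts
# scores >= 2, and each agent can move one bucket up.
graph = {s: [s + 1] for s in range(3)}
h = lambda s: 1 if s >= 2 else 0
print(best_response(1, h, graph))  # agent at 1 manipulates to 2
print(best_response(0, h, graph))  # agent at 0 cannot reach acceptance
```

A robust classifier must account for this best response: here, accepting scores >= 2 effectively accepts every agent that starts at score 1 as well.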

Original language: English
Pages (from-to): 1223-1259
Number of pages: 37
Journal: Proceedings of Machine Learning Research
Volume: 247
State: Published - 2024
Event: 37th Annual Conference on Learning Theory, COLT 2024 - Edmonton, Canada
Duration: 30 Jun 2024 – 3 Jul 2024

Funding

Funder – Funder number

• European Research Council Executive Agency
• Yandex Initiative for Machine Learning
• Israel Science Foundation
• Technion Center for Machine Learning and Intelligent Systems (MLIS)
• European Commission
• Tel Aviv University
• European Research Council
• Simons Foundation
• Horizon 2020 – 882396
• GENERALIZATION – 101039692
• Iowa Science Foundation – 1225/20
• National Science Foundation – CCF-2212968, ECCS-2216899
• Bloom's Syndrome Foundation – 2018385
• Alfred P. Sloan Foundation – 2020-13941, 689988
• Defense Advanced Research Projects Agency – HR00112020003

Keywords

• Littlestone dimension
• mistake bound in online learning
• PAC learning
• strategic classification
• VC dimension
