From soft classifiers to hard decisions: How fair can we be?

Ran Canetti, Govind Ramnarayan, Aloni Cohen, Sarah Scheffler, Nishanth Dikkala, Adam Smith

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

29 Scopus citations

Abstract

A popular methodology for building binary decision-making classifiers in the presence of imperfect information is to first construct a calibrated non-binary "scoring" classifier, and then to post-process this score to obtain a binary decision. We study various fairness (or, error-balance) properties of this methodology, when the non-binary scores are calibrated over all protected groups, and with a variety of post-processing algorithms. Specifically, we show: First, there does not exist a general way to post-process a calibrated classifier to equalize protected groups' positive or negative predictive value (PPV or NPV). For certain "nice" calibrated classifiers, either PPV or NPV can be equalized when the post-processor uses different thresholds across protected groups. Still, when the post-processing consists of a single global threshold across all groups, natural fairness properties, such as equalizing PPV in a nontrivial way, do not hold even for "nice" classifiers. Second, when the post-processing stage is allowed to defer on some decisions (that is, to avoid making a decision by handing off some examples to a separate process), then for the non-deferred decisions, the resulting classifier can be made to equalize PPV, NPV, false positive rate (FPR) and false negative rate (FNR) across the protected groups. This suggests a way to partially evade the impossibility results of Chouldechova and Kleinberg et al., which preclude equalizing all of these measures simultaneously. We also present different deferring strategies and show how they affect the fairness properties of the overall system. We evaluate our post-processing techniques using the COMPAS data set from 2016.
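The deferral-based post-processing described above can be illustrated with a minimal sketch. This is not the authors' code: the two-threshold "deferral band" and the toy function names (`decide`, `rates`) are assumptions for illustration only. Scores inside the band are deferred; PPV, NPV, FPR, and FNR are then computed over the non-deferred decisions, which is the regime in which the abstract says all four can be equalized across groups (e.g., by choosing per-group bands).

```python
def decide(score, low, high):
    """Return 1 (positive), 0 (negative), or None (defer) for one calibrated score."""
    if score >= high:
        return 1
    if score <= low:
        return 0
    return None  # defer: hand this example off to a separate process


def rates(scores, labels, low, high):
    """PPV, NPV, FPR, FNR over the non-deferred decisions only."""
    tp = fp = tn = fn = 0
    for s, y in zip(scores, labels):
        d = decide(s, low, high)
        if d is None:
            continue  # deferred examples are excluded from the error measures
        if d == 1 and y == 1:
            tp += 1
        elif d == 1 and y == 0:
            fp += 1
        elif d == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    fpr = fp / (fp + tn) if fp + tn else float("nan")
    fnr = fn / (fn + tp) if fn + tp else float("nan")
    return ppv, npv, fpr, fnr


# Toy example: the score 0.5 falls in the band (0.3, 0.7) and is deferred.
ppv, npv, fpr, fnr = rates([0.9, 0.8, 0.2, 0.1, 0.5], [1, 0, 0, 0, 1], 0.3, 0.7)
```

Running this per protected group with group-specific `(low, high)` bands mirrors the paper's setting: widening a group's band shifts its PPV/NPV/FPR/FNR on the remaining decisions, at the cost of deferring more of that group's examples.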

Original language: English
Title of host publication: FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
Publisher: Association for Computing Machinery, Inc
Pages: 309-318
Number of pages: 10
ISBN (Electronic): 9781450361255
DOIs
State: Published - 29 Jan 2019
Externally published: Yes
Event: 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019 - Atlanta, United States
Duration: 29 Jan 2019 - 31 Jan 2019

Publication series

Name: FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency

Conference

Conference: 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019
Country/Territory: United States
City: Atlanta
Period: 29/01/19 - 31/01/19

Funding

Funders and funder numbers:
Clare Boothe Luce
Alfred P. Sloan Foundation
Office of Naval Research: CCF-1665252, DMS-1737944, N00014-12-1-0999
National Science Foundation: 1413920, 1801564, 1763786, CCF-1617730, CNS-1413920, IIS-1447700, AF-1763786, CCF-1650733
Israel Science Foundation: 1523/14
Intelligence Community Postdoctoral Research Fellowship Program: 1414119

Keywords

• Algorithmic fairness
• Classification
• Post-processing
