Model selection via the AUC

Saharon Rosset*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We present a statistical analysis of the AUC as an evaluation criterion for classification scoring models. First, we consider significance tests for the difference between AUC scores of two algorithms on the same test set. We derive exact moments under simplifying assumptions and use them to examine approximate practical methods from the literature. We then compare AUC to empirical misclassification error when the prediction goal is to minimize future error rate. We show that the AUC may be preferable to empirical error even in this case and discuss the tradeoff between approximation error and estimation error underlying this phenomenon.
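The AUC analyzed in the abstract can be computed directly as the Mann-Whitney statistic: the fraction of (positive, negative) test-set pairs that a scoring model ranks correctly, with ties counted as one half. A minimal sketch comparing two scoring models on the same test set, as in the paper's setting; the data and model names here are hypothetical, for illustration only:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counted 1/2."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # All pairwise comparisons; fine for a small illustrative test set.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Two hypothetical scoring models evaluated on the same labeled test set.
labels  = np.array([1, 1, 1, 0, 0, 0, 1, 0])
model_a = np.array([0.9, 0.8, 0.4, 0.35, 0.2, 0.1, 0.7, 0.6])
model_b = np.array([0.6, 0.9, 0.5, 0.40, 0.3, 0.2, 0.8, 0.7])

print(auc(model_a, labels))  # 0.9375
print(auc(model_b, labels))  # 0.875
```

The observed difference in AUC is the quantity whose significance the paper's tests assess; the exact-moment derivations apply to this pairwise-comparison form of the statistic.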

Original language: English
Title of host publication: Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004
Editors: R. Greiner, D. Schuurmans
Pages: 703-710
Number of pages: 8
State: Published - 2004
Externally published: Yes
Event: Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004 - Banff, Alta, Canada
Duration: 4 Jul 2004 - 8 Jul 2004

Publication series

Name: Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004

Conference

Conference: Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004
Country/Territory: Canada
City: Banff, Alta
Period: 4/07/04 - 8/07/04
