Label efficient learning by exploiting multi-class output codes

Maria Florina Balcan, Travis Dick, Yishay Mansour

Research output: Contribution to conference › Paper › peer-review


Abstract

We present a new perspective on the popular multi-class algorithmic techniques of one-vs-all and error correcting output codes. Rather than studying the behavior of these techniques for supervised learning, we establish a connection between the success of these methods and the existence of label-efficient learning procedures. We show that in both the realizable and agnostic cases, if output codes are successful at learning from labeled data, they implicitly assume structure on how the classes are related. By making that structure explicit, we design learning algorithms to recover the classes with low label complexity. We provide results for the commonly studied cases of one-vs-all learning and when the codewords of the classes are well separated. We additionally consider the more challenging case where the codewords are not well separated, but satisfy a boundary features condition that captures the natural intuition that every bit of the codewords should be significant.
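To make the abstract's setup concrete: in an error correcting output code scheme, each class is assigned a binary codeword, one binary classifier is trained per bit, and a test point is labeled with the class whose codeword is nearest in Hamming distance to the vector of bit predictions. The sketch below is illustrative only and is not taken from the paper; the codeword matrix and function names are hypothetical.

```python
import numpy as np

# Hypothetical codeword matrix: each row is one class's binary codeword.
# In ECOC, a separate binary classifier is trained for each column (bit).
CODEWORDS = np.array([
    [1, 1, 0, 0],  # class 0
    [0, 1, 1, 0],  # class 1
    [0, 0, 1, 1],  # class 2
])

def ecoc_decode(bit_predictions, codewords=CODEWORDS):
    """Return the index of the class whose codeword has the smallest
    Hamming distance to the vector of per-bit binary predictions."""
    bits = np.asarray(bit_predictions)
    distances = np.sum(codewords != bits, axis=1)  # Hamming distance per class
    return int(np.argmin(distances))
```

One-vs-all is the special case where the codeword matrix is the identity: each class's codeword has a single 1 in its own column. Well-separated codewords (large pairwise Hamming distance) let decoding tolerate some erroneous bit predictions, which is the separation condition the abstract refers to.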

Original language: English
Pages: 1735-1741
Number of pages: 7
State: Published - 2017
Event: 31st AAAI Conference on Artificial Intelligence, AAAI 2017 - San Francisco, United States
Duration: 4 Feb 2017 - 10 Feb 2017

Conference

Conference: 31st AAAI Conference on Artificial Intelligence, AAAI 2017
Country/Territory: United States
City: San Francisco
Period: 4/02/17 - 10/02/17

Funding

Funders and funder numbers:
National Science Foundation: IIS-1618714, CCF-1535967, CCF-1422910
Microsoft Research
Google
