It is well-known that in many applications erroneous predictions of one type or another must be avoided. In some applications, like spam detection, false positive errors are serious problems. In other applications, like medical diagnosis, abstaining from making a prediction may be more desirable than making an incorrect prediction. In this paper we consider different types of reliable classifiers suited for such situations. We formalize and study properties of reliable classifiers in the spirit of agnostic learning (Haussler, 1992; Kearns, Schapire, and Sellie, 1994), a PAC-like model where no assumption is made on the function being learned. We then give two algorithms for reliable agnostic learning under natural distributions. The first reliably learns DNF formulas with no false positives using membership queries. The second reliably learns halfspaces from random examples with no false positives or false negatives, but the classifier sometimes abstains from making predictions.
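To make the abstention idea concrete, here is a minimal sketch (not the paper's algorithm) of a halfspace classifier that abstains instead of guessing: it predicts the sign of w·x, but returns no label when the score falls inside an assumed margin band around the decision boundary.

```python
import numpy as np

def abstaining_halfspace(w, margin):
    """Build a classifier over the halfspace defined by weight vector w.

    Predicts +1 or -1 from the sign of w.x, but abstains (returns None)
    when |w.x| < margin. The fixed margin threshold is an illustrative
    assumption, not the learning rule from the paper.
    """
    def classify(x):
        score = np.dot(w, x)
        if abs(score) < margin:
            return None  # abstain rather than risk an erroneous prediction
        return 1 if score > 0 else -1
    return classify

clf = abstaining_halfspace(np.array([1.0, -2.0]), margin=0.5)
print(clf(np.array([2.0, 0.0])))   # score 2.0: confident, predicts 1
print(clf(np.array([0.1, 0.0])))   # score 0.1: inside the band, abstains
```

The point of the sketch is only the three-valued output (+1, -1, abstain); the paper's actual guarantees concern learning such classifiers agnostically with no false positives or false negatives.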
State: Published - 2009
Event: 22nd Conference on Learning Theory, COLT 2009 - Montreal, QC, Canada
Duration: 18 Jun 2009 → 21 Jun 2009