DART: Dropouts meet multiple additive regression trees

K. V. Rashmi, Ran Gilad-Bachrach

Research output: Contribution to journal › Conference article › peer-review

Abstract

MART (Friedman, 2001, 2002), an ensemble model of boosted regression trees, is known to deliver high prediction accuracy for diverse tasks, and it is widely used in practice. However, it suffers from an issue which we call over-specialization, wherein trees added at later iterations tend to impact the prediction of only a few instances and make a negligible contribution towards the remaining instances. This negatively affects the performance of the model on unseen data, and also makes the model over-sensitive to the contributions of the few trees added initially. We show that shrinkage, the tool commonly used to address this issue, alleviates the problem only to a certain extent, and the fundamental issue of over-specialization remains. In this work, we explore a different approach to address the problem: employing dropouts, a tool recently proposed in the context of learning deep neural networks (Hinton et al., 2012). We propose a novel way of employing dropouts in MART, resulting in the DART algorithm. We evaluate DART on ranking, regression, and classification tasks using large-scale, publicly available datasets, and show that DART outperforms MART on each of the tasks by a significant margin. We also show that DART overcomes the issue of over-specialization to a considerable extent.
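As a rough illustration of the idea described in the abstract, the sketch below applies DART-style dropout inside a plain gradient-boosting loop for squared-error regression. This is a minimal sketch and not the authors' implementation: the function names, the drop_rate parameter, and the use of scikit-learn decision trees are illustrative assumptions; the rescaling of the new tree by 1/(k+1) and of the k dropped trees by k/(k+1) follows the normalization described in the paper.

```python
# Illustrative DART-style boosting sketch (not the authors' code).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def dart_fit(X, y, n_trees=100, drop_rate=0.1, max_depth=3, seed=0):
    """Fit a DART-style ensemble for squared-error regression."""
    rng = np.random.default_rng(seed)
    trees, weights = [], []
    for _ in range(n_trees):
        # Dropout step: temporarily drop a random subset of existing trees.
        dropped = [i for i in range(len(trees)) if rng.random() < drop_rate]
        kept = [i for i in range(len(trees)) if i not in dropped]
        # Fit the next tree to the residual of the *non-dropped* ensemble,
        # so later trees cannot specialize against every earlier tree.
        partial = np.zeros(len(y))
        for i in kept:
            partial += weights[i] * trees[i].predict(X)
        tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
        tree.fit(X, y - partial)
        # Normalization: the new tree plus the k dropped trees together
        # replace what the dropped trees contributed before, so scale the
        # new tree by 1/(k+1) and shrink each dropped tree by k/(k+1).
        k = len(dropped)
        for i in dropped:
            weights[i] *= k / (k + 1)
        trees.append(tree)
        weights.append(1.0 / (k + 1))
    return trees, weights

def dart_predict(X, trees, weights):
    """Predict with the weighted ensemble."""
    return sum(w * t.predict(X) for t, w in zip(trees, weights))
```

With drop_rate set to 0 this reduces to ordinary MART-style boosting with unit learning rate; usage would be trees, weights = dart_fit(X_train, y_train) followed by dart_predict(X_test, trees, weights).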

Original language: English
Pages (from-to): 489-497
Number of pages: 9
Journal: Journal of Machine Learning Research
Volume: 38
State: Published - 2015
Externally published: Yes
Event: 18th International Conference on Artificial Intelligence and Statistics, AISTATS 2015 - San Diego, United States
Duration: 9 May 2015 - 12 May 2015
