Large ensemble averaging

David Horn*, Ury Naftaly, Nathan Intrator

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Abstract

Averaging over many predictors leads to a reduction of the variance portion of the error. We present a method for evaluating the mean squared error of an infinite ensemble of predictors from finite (small-size) ensemble information. We demonstrate it on ensembles of networks with different initial choices of synaptic weights. We find that the optimal stopping criterion for large ensembles occurs later in training time than for single networks. We test our method on the sunspots data set and obtain excellent results.
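The variance-reduction idea behind the abstract can be illustrated with a small numerical sketch. Under the simplifying assumption that the individual predictors share a common bias and carry independent zero-mean noise, the MSE of a size-k ensemble behaves like bias² + variance/k, i.e. it is linear in 1/k, so the intercept of a fit in 1/k extrapolates to the infinite-ensemble error. This is a hypothetical toy construction, not the authors' procedure; the synthetic data, bias, and noise levels are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "network" predicts the target plus a shared bias and
# independent zero-mean noise (standing in for different initial weights).
n_points, n_predictors = 200, 30
target = np.sin(np.linspace(0.0, 2.0 * np.pi, n_points))
bias, noise_std = 0.1, 0.5
preds = target + bias + rng.normal(0.0, noise_std, (n_predictors, n_points))

def ensemble_mse(preds, k):
    """MSE of the average of the first k predictors."""
    avg = preds[:k].mean(axis=0)
    return float(np.mean((avg - target) ** 2))

# Empirical MSE for each finite ensemble size.
ks = np.arange(1, n_predictors + 1)
mses = np.array([ensemble_mse(preds, k) for k in ks])

# For independent errors, MSE(k) ~ bias^2 + variance / k (linear in 1/k);
# the intercept of the fit estimates the infinite-ensemble MSE.
slope, intercept = np.polyfit(1.0 / ks, mses, 1)
print(f"single-network MSE: {mses[0]:.4f}")
print(f"size-{n_predictors} ensemble MSE: {mses[-1]:.4f}")
print(f"extrapolated infinite-ensemble MSE: {intercept:.4f}")
```

In this synthetic setting the extrapolated value approaches the squared bias, while the single-network error also carries the full noise variance, which is why the ensemble error keeps shrinking as k grows.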

Original language: English
Title of host publication: Neural Networks
Subtitle of host publication: Tricks of the Trade
Publisher: Springer Verlag
Pages: 131-137
Number of pages: 7
ISBN (Print): 9783642352881
DOIs
State: Published - 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7700
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
