Computational sample complexity

Scott Decatur*, Oded Goldreich, Dana Ron

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


Abstract

In a variety of PAC learning models, a tradeoff between time and information appears to exist: with unlimited time, a small amount of information suffices, but under time restrictions, more information sometimes seems to be required. In addition, it has long been known that there are concept classes that can be learned in the absence of computational restrictions, but (under standard cryptographic assumptions) cannot be learned in polynomial time regardless of sample size. Yet, these results do not answer the question of whether there are classes for which learning from a small set of examples is infeasible, but becomes feasible when the learner has access to (polynomially) more examples. To address this question, we introduce a new measure of learning complexity, called computational sample complexity, which is the number of examples sufficient for polynomial-time learning with respect to a fixed distribution. We then exhibit concept classes that (under similar cryptographic assumptions) possess arbitrarily sized gaps between their standard (information-theoretic) sample complexity and their computational sample complexity. We also demonstrate such gaps for learning from membership queries and learning from noisy examples.
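The standard (information-theoretic) sample complexity the abstract contrasts against can be illustrated with the textbook Occam bound for a finite hypothesis class: roughly (1/ε)(ln|H| + ln(1/δ)) examples suffice for a consistent learner to achieve error at most ε with confidence 1 − δ. The sketch below is not from this paper; it is the classical bound, included only to make the baseline quantity concrete:

```python
import math

def pac_sample_bound(epsilon: float, delta: float, hypothesis_count: int) -> int:
    """Classical Occam bound for a finite hypothesis class.

    Returns a number of i.i.d. examples sufficient for any consistent
    learner over a class of `hypothesis_count` hypotheses to have true
    error at most `epsilon` with probability at least 1 - `delta`.
    (Illustrative textbook bound; the paper's point is that achieving
    this bound may be computationally infeasible.)
    """
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Example: a class of 2^10 hypotheses, 10% error, 95% confidence.
print(pac_sample_bound(0.1, 0.05, 2**10))  # → 100
```

The paper's gap results say that for some classes, polynomial-time learners provably need polynomially more examples than a bound of this information-theoretic form would suggest.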

Original language: English
Pages: 130-142
Number of pages: 13
DOIs
State: Published - 1997
Externally published: Yes
Event: Proceedings of the 1997 10th Annual Conference on Computational Learning Theory - Nashville, TN, USA
Duration: 6 Jul 1997 - 9 Jul 1997

