TY - JOUR
T1 - Computational sample complexity
AU - Decatur, Scott E.
AU - Goldreich, Oded
AU - Ron, Dana
PY - 2000
Y1 - 2000
N2 - In a variety of PAC learning models, a trade-off between time and information seems to exist: with unlimited time, a small amount of information suffices, but with time restrictions, more information sometimes seems to be required. In addition, it has long been known that there are concept classes that can be learned in the absence of computational restrictions, but (under standard cryptographic assumptions) cannot be learned in polynomial time (regardless of sample size). Yet, these results do not answer the question of whether there are classes for which learning from a small set of examples is computationally infeasible, but becomes feasible when the learner has access to (polynomially) more examples. To address this question, we introduce a new measure of learning complexity called computational sample complexity that represents the number of examples sufficient for polynomial time learning with respect to a fixed distribution. We then show concept classes that (under similar cryptographic assumptions) possess arbitrarily sized gaps between their standard (information-theoretic) sample complexity and their computational sample complexity. We also demonstrate such gaps for learning from membership queries and learning from noisy examples.
KW - Computational learning theory
KW - Error correcting codes
KW - Information vs. efficient computation
KW - Pseudorandom functions
KW - Wire-tap channel
UR - http://www.scopus.com/inward/record.url?scp=0033296083&partnerID=8YFLogxK
DO - 10.1137/S0097539797325648
M3 - Article
AN - SCOPUS:0033296083
SN - 0097-5397
VL - 29
SP - 854
EP - 879
JO - SIAM Journal on Computing
JF - SIAM Journal on Computing
IS - 3
ER -