TY - JOUR
T1 - A Theory of Interpretable Approximations
AU - Bressan, Marco
AU - Cesa-Bianchi, Nicolò
AU - Esposito, Emmanuel
AU - Mansour, Yishay
AU - Moran, Shay
AU - Thiessen, Maximilian
N1 - Publisher Copyright:
© 2024 M. Bressan, N. Cesa-Bianchi, E. Esposito, Y. Mansour, S. Moran & M. Thiessen.
PY - 2024
Y1 - 2024
AB - Can a deep neural network be approximated by a small decision tree based on simple features? This question and its variants are behind the growing demand for machine learning models that are interpretable by humans. In this work we study such questions by introducing interpretable approximations, a notion that captures the idea of approximating a target concept c by a small aggregation of concepts from some base class H. In particular, we consider the approximation of a binary concept c by decision trees based on a simple class H (e.g., of bounded VC dimension), and use the tree depth as a measure of complexity. Our primary contribution is the following remarkable trichotomy. For any given pair of H and c, exactly one of these cases holds: (i) c cannot be approximated by H with arbitrary accuracy; (ii) c can be approximated by H with arbitrary accuracy, but there exists no universal rate that bounds the complexity of the approximations as a function of the accuracy; or (iii) there exists a constant κ that depends only on H and c such that, for any data distribution and any desired accuracy level, c can be approximated by H with a complexity not exceeding κ. This taxonomy stands in stark contrast to the landscape of supervised classification, which offers a complex array of distribution-free and universally learnable scenarios. We show that, in the case of interpretable approximations, even a slightly nontrivial a-priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-free) complexity. We extend our trichotomy to classes H of unbounded VC dimension and give characterizations of interpretability based on the algebra generated by H.
KW - boosting
KW - interpretability
KW - learning theory
UR - http://www.scopus.com/inward/record.url?scp=85203673556&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85203673556
SN - 2640-3498
VL - 247
SP - 648
EP - 668
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 37th Annual Conference on Learning Theory, COLT 2024
Y2 - 30 June 2024 through 3 July 2024
ER -