Abstract
Can a deep neural network be approximated by a small decision tree based on simple features? This question and its variants are behind the growing demand for machine learning models that are interpretable by humans. In this work we study such questions by introducing interpretable approximations, a notion that captures the idea of approximating a target concept c by a small aggregation of concepts from some base class H. In particular, we consider the approximation of a binary concept c by decision trees based on a simple class H (e.g., of bounded VC dimension), and use the tree depth as a measure of complexity. Our primary contribution is the following remarkable trichotomy. For any given pair of H and c, exactly one of these cases holds: (i) c cannot be approximated by H with arbitrary accuracy; (ii) c can be approximated by H with arbitrary accuracy, but there exists no universal rate that bounds the complexity of the approximations as a function of the accuracy; or (iii) there exists a constant κ that depends only on H and c such that, for any data distribution and any desired accuracy level, c can be approximated by H with a complexity not exceeding κ. This taxonomy stands in stark contrast to the landscape of supervised classification, which offers a complex array of distribution-free and universally learnable scenarios. We show that, in the case of interpretable approximations, even a slightly nontrivial a priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-free) complexity. We extend our trichotomy to classes H of unbounded VC dimension and give characterizations of interpretability based on the algebra generated by H.
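For readers who prefer symbols, the following is a minimal LaTeX sketch of the trichotomy as stated in the abstract. The notation is assumed rather than taken from the source: D ranges over data distributions, T over decision trees whose internal nodes query concepts from H, depth(T) denotes tree depth, and err_D(T, c) denotes the approximation error of T with respect to c under D.

```latex
\documentclass{article}
\begin{document}
% Sketch of the trichotomy (assumed notation, not fixed by the abstract):
% err_D(T, c) = Pr_{x ~ D}[T(x) != c(x)] is the approximation error of an
% H-based decision tree T under distribution D.
For every pair $(H, c)$, exactly one of the following holds:
\begin{itemize}
  \item[(i)] there exist a distribution $D$ and an $\varepsilon > 0$ such
    that every $H$-based decision tree $T$ has
    $\mathrm{err}_D(T, c) \ge \varepsilon$;
  \item[(ii)] for every $D$ and every $\varepsilon > 0$ some $H$-based tree
    $T$ achieves $\mathrm{err}_D(T, c) \le \varepsilon$, but no function
    $f(\varepsilon)$ bounds the required $\mathrm{depth}(T)$ uniformly
    over all $D$;
  \item[(iii)] there is a constant $\kappa = \kappa(H, c)$ such that for
    every $D$ and every $\varepsilon > 0$ some $H$-based tree $T$ satisfies
    $\mathrm{depth}(T) \le \kappa$ and $\mathrm{err}_D(T, c) \le \varepsilon$.
\end{itemize}
\end{document}
```

Case (iii) is the distribution-free regime: the depth bound $\kappa$ depends on neither the data distribution nor the target accuracy.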
| Original language | English |
| --- | --- |
| Pages (from-to) | 648-668 |
| Number of pages | 21 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 247 |
| State | Published - 2024 |
| Event | 37th Annual Conference on Learning Theory, COLT 2024, Edmonton, Canada. Duration: 30 Jun 2024 → 3 Jul 2024 |
Funding
| Funders | Funder number |
| --- | --- |
| European Research Council Executive Agency | |
| Yandex Initiative for Machine Learning | |
| Technion Center for Machine Learning and Intelligent Systems | |
| Future Artificial Intelligence Research | |
| MLIS | |
| Israel PBC-VATAT | |
| European Commission | |
| Österreichischen Akademie der Wissenschaften | |
| Tel Aviv University | |
| European Research Council | |
| EU Horizon CL4-2022-HUMAN-02 research and innovation action | 101120237 |
| GENERALIZATION | 101039692 |
| Horizon 2020 | 882396 |
| United States-Israel Binational Science Foundation | 2018385 |
| Israel Science Foundation | 1225/20 |
Keywords
- boosting
- interpretability
- learning theory