Constraints-based explanations of classifications

Daniel Deutch, Nave Frost

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Scopus citations


A main component of many Data Science applications is the invocation of Machine Learning (ML) classifiers. The typical complexity of these classification models makes it difficult to understand the reason for a result, and consequently to assess its trustworthiness and detect errors. We propose a simple generic approach for explaining classifications, by identifying relevant parts of the input whose perturbation would significantly affect the classification. In contrast to previous work, our solution makes use of constraints over the data to guide the search for meaningful explanations in the application domain. Constraints may either be derived from the schema or specified by a domain expert for the purpose of computing explanations. We have implemented the approach for prominent ML models such as Random Forests and Neural Networks. We demonstrate, through examples and experiments, the effectiveness of our solution, and in particular of its novel use of constraints.
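The abstract describes searching for input perturbations that flip a classifier's prediction, with domain constraints pruning perturbations that are meaningless in the application domain. A minimal sketch of that idea is below; the `explain` function, the toy data, and the range constraint are illustrative assumptions, not the paper's actual algorithm or API.

```python
# Hedged sketch: constraint-guided perturbation explanations.
# `explain`, the toy dataset, and the [0, 1] range constraint are
# illustrative choices, not the method or interface from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy data: class is 1 exactly when feature 0 + feature 1 > 1.
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 1] > 1).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def explain(clf, x, candidate_values, constraint):
    """Return (feature index, new value, cost) of the cheapest
    single-feature perturbation that satisfies `constraint` and
    flips the classifier's prediction, or None if none exists."""
    original = clf.predict([x])[0]
    best = None
    for i in range(len(x)):
        for v in candidate_values:
            perturbed = x.copy()
            perturbed[i] = v
            # The constraint prunes perturbations that leave the domain.
            if not constraint(perturbed):
                continue
            if clf.predict([perturbed])[0] != original:
                cost = abs(v - x[i])
                if best is None or cost < best[2]:
                    best = (i, v, cost)
    return best

x = np.array([0.9, 0.9, 0.5])  # predicted class 1 on the toy model
def in_unit_box(z):
    return bool(np.all((z >= 0) & (z <= 1)))  # values must stay in [0, 1]

print(explain(clf, x, np.linspace(0, 1, 11), in_unit_box))
```

The constraint here is a simple range check; the paper's point is that schema-derived or expert-provided constraints restrict the search to perturbations that are realizable in the domain, so the returned explanation is meaningful rather than an arbitrary adversarial change.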

Original language: English
Title of host publication: Proceedings - 2019 IEEE 35th International Conference on Data Engineering, ICDE 2019
Publisher: IEEE Computer Society
Number of pages: 12
ISBN (Electronic): 9781538674741
State: Published - Apr 2019
Event: 35th IEEE International Conference on Data Engineering, ICDE 2019 - Macau, China
Duration: 8 Apr 2019 – 11 Apr 2019

Publication series

Name: Proceedings - International Conference on Data Engineering
ISSN (Print): 1084-4627

Conference: 35th IEEE International Conference on Data Engineering, ICDE 2019


Funders: Intel Corporation, Israel Science Foundation


Keywords

    • Data provenance
    • Database constraints theory
    • Supervised learning by classification


