A core component of many Data Science applications is the invocation of Machine Learning (ML) classifiers. The complexity typical of these classification models makes it difficult to understand the reason for a given result, and consequently to assess its trustworthiness and to detect errors. We propose a simple, generic approach for explaining classifications: identifying relevant parts of the input whose perturbation would significantly affect the classification. In contrast to previous work, our solution uses constraints over the data to guide the search toward explanations that are meaningful in the application domain. Constraints may either be derived from the schema or be specified by a domain expert specifically for the purpose of computing explanations. We have implemented the approach for prominent ML models such as Random Forests and Neural Networks. We demonstrate, through examples and experiments, the effectiveness of our solution, and in particular of its novel use of constraints.
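The idea can be illustrated with a minimal sketch, not the paper's implementation: given a classifier, candidate perturbations, and a constraint predicate, report the input features whose constrained perturbation flips the predicted label. The toy classifier, the constraint, and all feature names (`income`, `debt`) are hypothetical, standing in for a trained model and schema-derived constraints.

```python
def classify(record):
    # Toy stand-in for a trained classifier (e.g. a loan-approval model).
    return "approve" if record["income"] > 50 and record["debt"] < 20 else "reject"

def satisfies_constraints(record):
    # Hypothetical domain constraints: debt is non-negative, income is bounded.
    return record["debt"] >= 0 and 0 <= record["income"] <= 200

def explain(record, perturbations):
    """Return features whose constraint-respecting perturbation changes the label."""
    base = classify(record)
    relevant = []
    for feature, delta in perturbations:
        perturbed = dict(record)
        perturbed[feature] += delta
        # Only perturbations that stay within the constrained domain count
        # as evidence of the feature's relevance.
        if satisfies_constraints(perturbed) and classify(perturbed) != base:
            relevant.append(feature)
    return relevant

record = {"income": 60, "debt": 10}
# Lowering income or raising debt flips the label; lowering debt below
# zero is ruled out by the constraints and never even tested on the model.
print(explain(record, [("income", -20), ("debt", 15), ("debt", -15)]))
# → ['income', 'debt']
```

The constraint check is what distinguishes this from unconstrained perturbation-based explainers: perturbed inputs that could not occur in the application domain are discarded, so the reported features reflect only realistic changes.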