Interpreting neural-network results: A simulation study

Orna Intrator, Nathan Intrator

Research output: Contribution to journal › Article › peer-review


Artificial neural networks (ANNs) seem very promising for regression and classification, especially for large covariate spaces. Yet, their usefulness for medical and social research is limited because they present only prediction results and do not reveal features of the underlying process relating the inputs to the output. ANNs approximate a non-linear function by a composition of low-dimensional ridge functions, and therefore appear to be less sensitive to the dimensionality of the covariate space. However, due to the non-uniqueness of the global minimum and the existence of (possibly) many local minima, the model revealed by the network is unstable. We introduce a method that demonstrates the effects of inputs on the output of an ANN by using novel robustification techniques. Simulated data from known models are used to demonstrate the interpretability results of the ANNs. Graphical tools are used for studying the interpretation results and for detecting interactions between covariates. The effects of different regularization methods on the robustness of the interpretation are discussed; in particular, we note that ANNs must include skip-layer connections. An application to an ANN model predicting 5-year mortality following breast cancer diagnosis is presented. We conclude that neural networks estimated with sufficient regularization can be reliably interpreted using the method presented in this paper.
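The architecture the abstract describes can be sketched in a few lines: a single hidden layer of ridge functions (each hidden unit depends on the inputs only through one linear projection) plus skip-layer connections that carry a direct linear effect of the inputs to the output. This is a minimal illustrative sketch, not the authors' exact model; all sizes and weight values below are hypothetical.

```python
import numpy as np

def forward(x, W1, b1, w2, b2, w_skip):
    """One-hidden-layer network with skip-layer connections:
        output = w2 . tanh(W1 @ x + b1) + w_skip . x + b2
    Each tanh unit is a ridge function of x; w_skip adds the
    direct linear input-to-output term noted in the abstract."""
    hidden = np.tanh(W1 @ x + b1)          # ridge-function hidden layer
    return w2 @ hidden + w_skip @ x + b2   # skip connections add linear part

# Hypothetical sizes: 4 covariates, 3 hidden units, scalar output.
rng = np.random.default_rng(0)
d, h = 4, 3
W1 = rng.normal(size=(h, d))
b1 = rng.normal(size=h)
w2 = rng.normal(size=h)
b2 = 0.0
w_skip = rng.normal(size=d)

x = rng.normal(size=d)
y = forward(x, W1, b1, w2, b2, w_skip)
```

With the hidden-layer weights `w2` set to zero, the model collapses to an ordinary linear model in the covariates, which is one reason skip-layer connections make the fitted network easier to interpret against a linear baseline.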

Original language: English
Pages (from-to): 373-393
Number of pages: 21
Journal: Computational Statistics and Data Analysis
Issue number: 3
State: Published - 28 Sep 2001


Keywords:
  • Data mining tools
  • Interaction effects
  • Logistic regression
  • Nonlinear models
  • Split-level plots


